Basic Filters for Convolutional Neural Networks: Training or Design?

When convolutional neural networks are used to tackle learning problems based on time series, e.g., audio data, raw one-dimensional data are commonly preprocessed to obtain spectrogram or mel-spectrogram coefficients, which are then used as input to the actual neural network. In this contribution, we investigate, both theoretically and experimentally, the influence of this preprocessing step on the network's performance and pose the question whether replacing it by applying adaptive or learned filters directly to the raw data can improve learning success. The theoretical results show that approximately reproducing mel-spectrogram coefficients by applying adaptive filters and subsequent time-averaging is in principle possible. On the other hand, extensive experimental work leads to the conclusion that the invariance induced by mel-spectrogram coefficients is both desirable and hard to infer by the learning process. Thus, the results achieved by adaptive end-to-end learning approaches are close to, but slightly worse than, results achieved by state-of-the-art reference architectures using standard input coefficients derived from the spectrogram.
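The preprocessing step the abstract refers to, turning raw one-dimensional audio into mel-spectrogram coefficients, can be sketched as follows. This is a minimal numpy-only illustration, not the authors' implementation: the frame length, hop size, number of mel bands, and log compression chosen here are illustrative assumptions.

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale mapping (assumed here; variants exist).
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters with centers spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):          # rising slope
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):          # falling slope
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def mel_spectrogram(x, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Frame the raw signal, apply a Hann window, FFT -> power spectrogram.
    n_frames = 1 + (len(x) - n_fft) // hop
    win = np.hanning(n_fft)
    frames = np.stack([x[i * hop:i * hop + n_fft] * win
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    # Project onto the mel filterbank and compress with a log.
    return np.log1p(power @ mel_filterbank(n_mels, sr=sr, n_fft=n_fft).T)
```

The output is a (frames x mel bands) coefficient matrix, i.e., the fixed, designed input representation that the paper compares against filters learned directly from the raw waveform.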
by Monika Doerfler, Thomas Grill, Roswitha Bammer, Arthur Flexer
https://arxiv.org/pdf/1709.02291v1.pdf