Large scale musical instrument identification
In this paper, automatic musical instrument identification using a variety of classifiers is addressed. Experiments are performed on a large set of recordings drawn from 20 instrument classes. Several features from general audio data classification applications, as well as MPEG-7 descriptors, are measured for 1000 recordings. Branch-and-bound feature selection is applied in order to select the most discriminating features for instrument classification. The first classifier is based on non-negative matrix factorization (NMF) techniques, where training is performed for each audio class individually. A novel NMF testing method is proposed, in which each recording is projected onto several training matrices that have been Gram-Schmidt orthogonalized. Several NMF variants are utilized besides the standard NMF method, such as local NMF and sparse NMF. In addition, 3-layered multilayer perceptrons, normalized Gaussian radial basis function networks, and support vector machines employing a polynomial kernel have also been tested as classifiers. The classification accuracy is high, ranging from 88.7% to 95.3%, outperforming the state-of-the-art techniques tested in the same experiments.
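The projection-based NMF test stage described above can be pictured roughly as follows. This is a minimal sketch, assuming per-class NMF basis matrices are already trained; the function names, the use of QR in place of an explicit Gram-Schmidt loop (both yield an orthonormal basis for the same column space), and the minimum-residual decision rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def orthonormalize(W):
    """Orthonormalize the columns of a per-class NMF basis matrix W.
    QR is used here as a stand-in for Gram-Schmidt (same column space)."""
    Q, _ = np.linalg.qr(W)
    return Q

def classify(x, class_bases):
    """Assign feature vector x to the class whose orthonormalized basis
    reconstructs it with the smallest residual.

    x           : 1-D feature vector of a test recording
    class_bases : dict mapping class label -> NMF basis matrix W
                  (features x components) learned on that class alone
    """
    best_label, best_err = None, np.inf
    for label, W in class_bases.items():
        Q = orthonormalize(W)
        x_hat = Q @ (Q.T @ x)              # projection of x onto the span of W
        err = np.linalg.norm(x - x_hat)    # reconstruction error for this class
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```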
Invariance of Weight Distributions in Rectified MLPs
An interesting approach to analyzing neural networks that has received
renewed attention is to examine the equivalent kernel of the neural network.
This is based on the fact that a fully connected feedforward network with one
hidden layer, a certain weight distribution, an activation function, and an
infinite number of neurons can be viewed as a mapping into a Hilbert space. We
derive the equivalent kernels of MLPs with ReLU or Leaky ReLU activations for
all rotationally-invariant weight distributions, generalizing a previous result
that required Gaussian weight distributions. Additionally, the Central Limit
Theorem is used to show that for certain activation functions, kernels
corresponding to layers with weight distributions having zero mean and finite
absolute third moment are asymptotically universal, and are well approximated
by the kernel corresponding to layers with spherical Gaussian weights. In deep
networks, as depth increases the equivalent kernel approaches a pathological
fixed point, which can be used to argue why training randomly initialized
networks can be difficult. Our results also have implications for weight
initialization.
Comment: ICML 2018
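For the Gaussian special case that this result generalizes, the equivalent kernel of a single ReLU layer is the degree-1 arc-cosine kernel of Cho & Saul. The sketch below, written under that assumption (the function names and the Monte Carlo check are ours, not the paper's code), shows how a wide random layer with spherical Gaussian weights approximates the analytic kernel.

```python
import numpy as np

def relu_kernel_analytic(x, y):
    """Equivalent kernel of an infinitely wide ReLU layer with standard
    Gaussian weights: (1/2pi) * |x||y| * (sin t + (pi - t) cos t), t = angle(x, y)."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return nx * ny * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def relu_kernel_monte_carlo(x, y, width=200_000, seed=0):
    """Finite-width estimate: average relu(w.x) * relu(w.y) over random units."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((width, x.size))   # spherical Gaussian weights
    return np.mean(np.maximum(W @ x, 0.0) * np.maximum(W @ y, 0.0))

x = np.array([1.0, 0.5, -0.2])
y = np.array([0.3, -1.0, 0.8])
print(relu_kernel_analytic(x, y), relu_kernel_monte_carlo(x, y))  # should agree closely
```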
Frivolous Units: Wider Networks Are Not Really That Wide
A remarkable characteristic of overparameterized deep neural networks (DNNs)
is that their accuracy does not degrade when the network's width is increased.
Recent evidence suggests that developing compressible representations is key
for adjusting the complexity of large networks to the learning task at hand.
However, these compressible representations are poorly understood. A promising
strand of research, inspired by biology, is to study representations at
the unit level, as this offers a more granular and intuitive interpretation of the
neural mechanisms. In order to better understand what facilitates increases in
width without decreases in accuracy, we ask: Are there mechanisms at the unit
level by which networks control their effective complexity as their width is
increased? If so, how do these depend on the architecture, dataset, and
training parameters? We identify two distinct types of "frivolous" units that
proliferate when the network's width is increased: prunable units, which can be
dropped from the network without significantly changing its output, and
redundant units, whose activities can be expressed as a linear combination of
others. These units imply complexity constraints, as the function the network
represents could be expressed by a network without them. We also identify how
the development of these units can be influenced by architecture and a number
of training factors. Together, these results help to explain why the accuracy
of DNNs does not degrade when width is increased, and they highlight the importance
of frivolous units for understanding implicit regularization in DNNs.
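The two unit types can be illustrated with a rough heuristic on a layer's recorded activations. This is only a sketch under simplifying assumptions: the function name, the variance threshold for prunability, and the R-squared threshold for redundancy are illustrative stand-ins, not the paper's precise criteria.

```python
import numpy as np

def find_frivolous_units(A, prune_var_tol=1e-6, redundancy_r2=0.999):
    """Flag candidate frivolous units from an activation matrix A
    (n_examples x n_units) recorded at one layer.

    Prunable  : near-constant activations, so dropping the unit barely moves the output.
    Redundant : activations almost perfectly predicted by a linear combination of the others.
    """
    n_examples, n_units = A.shape
    variances = A.var(axis=0)
    prunable = [j for j in range(n_units) if variances[j] < prune_var_tol]

    redundant = []
    for j in range(n_units):
        others = np.delete(A, j, axis=1)                  # activities of all other units
        coef, *_ = np.linalg.lstsq(others, A[:, j], rcond=None)
        residual = A[:, j] - others @ coef
        ss_res = np.sum(residual ** 2)
        ss_tot = np.sum((A[:, j] - A[:, j].mean()) ** 2) + 1e-12
        if 1.0 - ss_res / ss_tot > redundancy_r2:         # high R^2: linearly redundant
            redundant.append(j)
    return prunable, redundant
```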