Phase Harmonic Correlations and Convolutional Neural Networks
A major issue in harmonic analysis is to capture the phase dependence of
frequency representations, which carries important signal properties. It seems
that convolutional neural networks have found a way. Over time-series and
images, convolutional networks often learn a first layer of filters which are
well localized in the frequency domain, with different phases. We show that a
rectifier then acts as a filter on the phase of the resulting coefficients. It
computes signal descriptors which are local in space, frequency and phase. The
non-linear phase filter becomes a multiplicative operator over phase harmonics
computed with a Fourier transform along the phase. We prove that it defines a
bi-Lipschitz and invertible representation. The correlations of phase harmonics
coefficients characterise coherent structures from their phase dependence
across frequencies. For wavelet filters, we show numerically that signals
having sparse wavelet coefficients can be recovered from few phase harmonic
correlations, which provide a compressive representation.
Comment: 26 pages, 8 figures
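The phase harmonic operation described above can be sketched in a few lines: it keeps the modulus of a complex (e.g. wavelet) coefficient and multiplies its phase by an integer harmonic, which is what the Fourier transform along the phase computes. This is a minimal illustration, not the authors' implementation; the variable names and the toy correlation at the end are assumptions.

```python
import numpy as np

def phase_harmonic(z, k):
    """Phase harmonic [z]^k = |z| * exp(i * k * angle(z)).

    Keeps the modulus of the complex coefficient z and multiplies
    its phase by the integer harmonic k.
    """
    return np.abs(z) * np.exp(1j * k * np.angle(z))

# Toy example (illustrative only): a harmonic aligns the phases of
# coefficients across frequencies so their correlation is non-trivial.
rng = np.random.default_rng(0)
z1 = rng.standard_normal(256) + 1j * rng.standard_normal(256)
z2 = phase_harmonic(z1, 2)          # doubled phase, same modulus
corr = np.mean(z1 * np.conj(z2))    # one phase-harmonic correlation
```

Note that the operation acts only on the phase: `np.abs(z2)` equals `np.abs(z1)` exactly, which is why the representation can remain invertible.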
Discriminative Recurrent Sparse Auto-Encoders
We present the discriminative recurrent sparse auto-encoder model, comprising
a recurrent encoder of rectified linear units, unrolled for a fixed number of
iterations, and connected to two linear decoders that reconstruct the input and
predict its supervised classification. Training via
backpropagation-through-time initially minimizes an unsupervised sparse
reconstruction error; the loss function is then augmented with a discriminative
term on the supervised classification. The depth implicit in the
temporally-unrolled form allows the system to exhibit all the power of deep
networks, while substantially reducing the number of trainable parameters.
From an initially unstructured network the hidden units differentiate into
categorical-units, each of which represents an input prototype with a
well-defined class; and part-units representing deformations of these
prototypes. The learned organization of the recurrent encoder is hierarchical:
part-units are driven directly by the input, whereas the activity of
categorical-units builds up over time through interactions with the part-units.
Even using a small number of hidden units per layer, discriminative recurrent
sparse auto-encoders achieve excellent performance on MNIST.
Comment: Added clarifications suggested by reviewers. 15 pages, 10 figures
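The forward pass described above can be sketched as follows: a recurrent ReLU encoder with shared weights unrolled for a fixed number of steps, feeding two linear decoders, one for reconstruction and one for classification. This is a shape-level sketch under assumed parameter names (`W`, `S`, `b`, `D`, `C`), not the paper's exact parameterization.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def drsae_forward(x, W, S, b, D, C, T=5):
    """Unrolled recurrent sparse encoder with two linear decoders.

    h_{t+1} = ReLU(W @ x + S @ h_t - b)  # same weights at every step
    x_hat   = D @ h_T                    # linear reconstruction decoder
    logits  = C @ h_T                    # linear classification decoder
    """
    h = np.zeros(W.shape[0])
    for _ in range(T):
        h = relu(W @ x + S @ h - b)
    return D @ h, C @ h

# Tiny random instance just to show the shapes involved.
rng = np.random.default_rng(0)
n_in, n_hid, n_cls = 8, 16, 3
x = rng.standard_normal(n_in)
W = 0.1 * rng.standard_normal((n_hid, n_in))
S = 0.1 * rng.standard_normal((n_hid, n_hid))
b = np.full(n_hid, 0.05)
D = 0.1 * rng.standard_normal((n_in, n_hid))
C = 0.1 * rng.standard_normal((n_cls, n_hid))
x_hat, logits = drsae_forward(x, W, S, b, D, C)
```

Because the same `W` and `S` are reused at every unrolled step, the depth of the network grows with `T` while the parameter count stays fixed, which is the parameter saving the abstract refers to.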
Domain Adaptive Neural Networks for Object Recognition
We propose a simple neural network model to deal with the domain adaptation
problem in object recognition. Our model incorporates the Maximum Mean
Discrepancy (MMD) measure as a regularization in the supervised learning to
reduce the distribution mismatch between the source and target domains in the
latent space. Our experiments demonstrate that the MMD regularization is
an effective tool for building good domain adaptation models on both SURF
features and raw image pixels of a particular image data set. We also show that
our proposed model, preceded by the denoising auto-encoder pretraining,
achieves better performance than recent benchmark models on the same data sets.
This work represents the first study of the MMD measure in the context of
neural networks.
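The MMD regularizer described above can be sketched with a standard biased kernel estimator: the squared Maximum Mean Discrepancy between source and target features is the sum of the within-domain mean kernel values minus twice the cross-domain mean. The RBF kernel and bandwidth choice here are illustrative assumptions, not necessarily the paper's configuration.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Pairwise squared distances between the rows of A and B.
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y.

    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    Added to the supervised loss, this penalizes mismatch between
    source and target feature distributions in the latent space.
    """
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())

# Illustrative source/target samples: the target is shifted, so the
# estimated discrepancy is clearly positive.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 2))        # "source" features
Y = rng.standard_normal((64, 2)) + 1.5  # shifted "target" features
```

In training, `mmd2` would be evaluated on the hidden-layer activations of source and target batches and weighted into the classification loss; identical distributions give a value near zero.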