A Deep Representation for Invariance And Music Classification
Representations in the auditory cortex might be based on mechanisms similar
to those of the visual ventral stream: modules for building invariance to
transformations and multiple layers for compositionality and selectivity. In
this paper we propose the use of such computational modules for extracting
invariant and discriminative audio representations. Building on a theory of
invariance in hierarchical architectures, we propose a novel, mid-level
representation for acoustical signals, using the empirical distributions of
projections on a set of templates and their transformations. Under the
assumption that, by construction, this dictionary of templates is composed from
similar classes, and samples the orbit of variance-inducing signal
transformations (such as shift and scale), the resulting signature is
theoretically guaranteed to be unique, invariant to transformations and stable
to deformations. Modules of projection and pooling can then constitute layers
of deep networks, for learning composite representations. We present the main
theoretical and computational aspects of a framework for unsupervised learning
of invariant audio representations, empirically evaluated on music genre
classification.

Comment: 5 pages, CBMM Memo No. 002, (to appear) IEEE 2014 International
Conference on Acoustics, Speech, and Signal Processing (ICASSP 2014)
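The projection-and-pooling module described in this abstract can be sketched roughly as follows: project the signal onto circular shifts of a few templates (the shift orbit) and pool the projections into an empirical distribution. All names, sizes, and bin counts below are illustrative assumptions, not the paper's exact construction; the point is that pooling over the full orbit makes the histogram shift-invariant.

```python
import numpy as np

rng = np.random.default_rng(0)

# unit-norm signal and templates so that dot products lie in [-1, 1]
templates = rng.standard_normal((4, 32))
templates /= np.linalg.norm(templates, axis=1, keepdims=True)
x = rng.standard_normal(32)
x /= np.linalg.norm(x)

def signature(signal, temps, n_bins=8):
    """For each template, pool the projections of the signal onto the
    template's full shift orbit into an empirical distribution."""
    feats = []
    for t in temps:
        # dot products with every circular shift of the template
        dots = np.array([signal @ np.roll(t, s) for s in range(len(t))])
        hist, _ = np.histogram(dots, bins=n_bins, range=(-1, 1))
        feats.append(hist / len(dots))  # normalised empirical distribution
    return np.concatenate(feats)

s_orig = signature(x, templates)
s_shift = signature(np.roll(x, 5), templates)  # shifted input, same signature
```

Because a shift of the input only permutes the set of projections onto the orbit, the pooled histogram is unchanged, which is the invariance property the abstract refers to.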
Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers
Scene parsing, or semantic segmentation, consists in labeling each pixel in
an image with the category of the object it belongs to. It is a challenging
task that involves the simultaneous detection, segmentation and recognition of
all the objects in the image.
The scene parsing method proposed here starts by computing a tree of segments
from a graph of pixel dissimilarities. Simultaneously, a set of dense feature
vectors is computed which encodes regions of multiple sizes centered on each
pixel. The feature extractor is a multiscale convolutional network trained from
raw pixels. The feature vectors associated with the segments covered by each
node in the tree are aggregated and fed to a classifier which produces an
estimate of the distribution of object categories contained in the segment. A
subset of tree nodes that cover the image are then selected so as to maximize
the average "purity" of the class distributions, hence maximizing the overall
likelihood that each segment will contain a single object. The convolutional
network feature extractor is trained end-to-end from raw pixels, alleviating
the need for engineered features. After training, the system is parameter free.
The system yields record accuracies on the Stanford Background Dataset (8
classes), the Sift Flow Dataset (33 classes) and the Barcelona Dataset (170
classes) while being an order of magnitude faster than competing approaches,
producing a 320 × 240 image labeling in less than 1 second.

Comment: 9 pages, 4 figures - Published in 29th International Conference on
Machine Learning (ICML 2012), Jun 2012, Edinburgh, United Kingdom
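The node-selection step this abstract describes can be sketched as a bottom-up pass over the segment tree: keep a node if its area-weighted purity beats the combined best covers of its children. The purity definition (max class probability), the toy tree, and all numbers below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class Node:
    def __init__(self, dist, area, children=()):
        self.dist = np.asarray(dist, float)  # class distribution in segment
        self.area = area                     # number of pixels covered
        self.children = list(children)

def purity(node):
    # "purity" taken here as the max class probability (an assumption)
    return node.dist.max()

def optimal_cover(node):
    """Keep this node, or replace it by the best covers of its children,
    whichever yields the higher area-weighted purity."""
    here = purity(node) * node.area
    if not node.children:
        return here, [node]
    child_score, child_nodes = 0.0, []
    for c in node.children:
        s, ns = optimal_cover(c)
        child_score += s
        child_nodes += ns
    if here >= child_score:
        return here, [node]
    return child_score, child_nodes

# toy tree: the root mixes two classes, its children are pure segments
left = Node([0.9, 0.1], area=60)
right = Node([0.1, 0.9], area=40)
root = Node([0.58, 0.42], area=100, children=[left, right])

score, cover = optimal_cover(root)  # prefers the two pure children
```

Here the impure root scores 0.58 × 100 = 58, while its pure children jointly score 0.9 × 60 + 0.9 × 40 = 90, so the cover selects the children, matching the intuition that each chosen segment should contain a single object.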
Unsupervised Feature Learning by Deep Sparse Coding
In this paper, we propose a new unsupervised feature learning framework,
namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer
architecture for visual object recognition tasks. The main innovation of the
framework is that it connects the sparse-encoders from different layers by a
sparse-to-dense module. The sparse-to-dense module is a composition of a local
spatial pooling step and a low-dimensional embedding process, which takes
advantage of the spatial smoothness information in the image. As a result, the
new method is able to learn several levels of sparse representation of the
image which capture features at a variety of abstraction levels and
simultaneously preserve the spatial smoothness between the neighboring image
patches. Combining the feature representations from multiple layers, DeepSC
achieves state-of-the-art performance on multiple object recognition tasks.

Comment: 9 pages, submitted to ICL
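A minimal sketch of the sparse-to-dense idea: spatial max pooling over local cells followed by a low-dimensional embedding. A crude soft-threshold encoder stands in for real sparse coding, and PCA (via SVD) stands in for the embedding; every shape, name, and threshold below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_encode(patches, dictionary, thresh=0.5):
    # crude sparse codes: project onto dictionary atoms and soft-threshold
    # (a stand-in for the paper's sparse coding step)
    z = patches @ dictionary.T
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def sparse_to_dense(codes, grid, pool=2, out_dim=8):
    """Max-pool sparse codes over pool x pool spatial cells, then embed
    the pooled vectors into a low-dimensional dense space with PCA."""
    h, w, k = grid
    c = codes.reshape(h, w, k)
    ph, pw = h // pool, w // pool
    pooled = c.reshape(ph, pool, pw, pool, k).max(axis=(1, 3)).reshape(-1, k)
    centred = pooled - pooled.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:out_dim].T  # dense inputs for the next sparse layer

dictionary = rng.standard_normal((32, 16))  # 32 atoms over 16-dim patches
patches = rng.standard_normal((8 * 8, 16))  # an 8x8 spatial grid of patches
codes = sparse_encode(patches, dictionary)
dense = sparse_to_dense(codes, grid=(8, 8, 32))
```

The pooling step exploits spatial smoothness between neighbouring patches, and the dense low-dimensional output is what the next layer's sparse encoder would consume.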
Provably scale-covariant networks from oriented quasi quadrature measures in cascade
This article presents a continuous model for hierarchical networks based on a
combination of mathematically derived models of receptive fields and
biologically inspired computations. Based on a functional model of complex
cells in terms of an oriented quasi quadrature combination of first- and
second-order directional Gaussian derivatives, we couple such primitive
computations in cascade over combinatorial expansions over image orientations.
Scale-space properties of the computational primitives are analysed and it is
shown that the resulting representation allows for provable scale and rotation
covariance. A prototype application to texture analysis is developed and it is
demonstrated that a simplified mean-reduced representation of the resulting
QuasiQuadNet leads to promising experimental results on three texture datasets.

Comment: 12 pages, 3 figures, 1 table
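The quasi quadrature primitive can be illustrated along a single orientation: combine the first- and second-order directional Gaussian derivatives as Q = sqrt(L_x² + C·L_xx²). The paper couples such measures over many orientations in cascade; the sketch below uses one horizontal direction only, and the kernel radius and constant C are assumptions.

```python
import numpy as np

def gauss_kernels(sigma, radius=None):
    """1-D Gaussian plus its first- and second-derivative kernels."""
    r = radius or int(4 * sigma)
    t = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    g1 = -t / sigma**2 * g                 # d/dx of the Gaussian
    g2 = (t**2 - sigma**2) / sigma**4 * g  # d^2/dx^2 of the Gaussian
    return g, g1, g2

def filt(img2d, kernel, axis):
    # separable 1-D filtering along one axis
    return np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode='same'), axis, img2d)

def quasi_quadrature_x(img, sigma=2.0, C=1.0):
    """Q = sqrt(L_x^2 + C * L_xx^2) along the horizontal direction."""
    g, g1, g2 = gauss_kernels(sigma)
    Lx = filt(filt(img, g1, axis=1), g, axis=0)
    Lxx = filt(filt(img, g2, axis=1), g, axis=0)
    return np.sqrt(Lx**2 + C * Lxx**2)

img = np.zeros((40, 40))
img[:, 20:] = 1.0  # vertical step edge
Q = quasi_quadrature_x(img)  # strong response near column 20, none far away
```

Like complex cells, the measure responds to the edge regardless of its polarity, since both derivative terms enter squared.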
A Novel Hybrid CNN-AIS Visual Pattern Recognition Engine
Machine learning methods are used today for most recognition problems.
Convolutional Neural Networks (CNNs) have time and again proved successful for
many image processing tasks, primarily due to their architecture. In this paper
we propose to apply CNNs to small datasets, such as personal albums or other
similar settings where the size of the training dataset is a limitation, within
the framework of a proposed hybrid CNN-AIS model. We use Artificial Immune
System principles to augment the small training dataset. A layer of
Clonal Selection is added to the local filtering and max pooling of CNN
Architecture. The proposed Architecture is evaluated using the standard MNIST
dataset by limiting the data size and also with a small personal data sample
belonging to two different classes. Experimental results show that the proposed
hybrid CNN-AIS based recognition engine works well when the size of the
training data is limited.
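The clonal-selection idea for enlarging a small training set can be sketched as: clone each training sample, mutate the clones with small random perturbations, and keep the highest-affinity clones (those closest to their parent). This is a generic AIS sketch, not the paper's exact clonal selection layer; the mutation rate, affinity measure, and selection rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def clonal_expand(samples, n_clones=5, base_rate=0.1):
    """Clone each sample, mutate clones with Gaussian perturbations,
    and keep the top half by affinity (negative distance to parent)."""
    augmented = []
    for x in samples:
        clones = x + base_rate * rng.standard_normal((n_clones, x.size))
        affinity = -np.linalg.norm(clones - x, axis=1)  # closer = higher
        keep = np.argsort(affinity)[-(n_clones // 2 + 1):]  # top 3 of 5
        augmented.extend(clones[keep])
    return np.array(augmented)

train = rng.standard_normal((4, 16))  # tiny stand-in for a small dataset
aug = clonal_expand(train)            # 3 selected clones per sample
```

The selected clones are near-duplicates with controlled variation, which is the sense in which clonal selection "enhances" a training set that is too small for a CNN on its own.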