Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation, and manifold learning.
Joint Visual Denoising and Classification using Deep Learning
Visual restoration and recognition are traditionally addressed in pipeline
fashion, i.e., denoising followed by classification. Instead, observing
correlations between the two tasks, e.g., that a clearer image leads to
better categorization and vice versa, we propose a joint framework for visual
restoration and recognition of handwritten images, inspired by advances in
deep autoencoders and multi-modality learning. Our model is a 3-pathway deep
architecture with a hidden-layer representation shared across multiple inputs
and outputs, and each branch can be composed of a multi-layer deep model. Thus,
visual restoration and classification can be unified using shared
representation via non-linear mapping, and model parameters can be learnt via
backpropagation. Using MNIST and USPS data corrupted with structured noise, the
proposed framework achieves at least 20\% better classification accuracy than
separate pipelines and produces clearer recovered images. The noise model and
the reproducible source code are available at
{\url{https://github.com/ganggit/jointmodel}}. Comment: 5 pages, 7 figures, ICIP 201
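As a rough illustration of the shared-representation idea in the abstract above, the following is a minimal NumPy sketch of a forward pass through a 3-pathway model: one encoder branch maps a noisy image to a hidden code, and a denoising branch and a classification branch both read that same code. The layer sizes, single-layer branches, and weight initialization are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Illustrative sizes: 784-pixel noisy input (e.g. a 28x28 MNIST image),
# a 256-unit shared code, and 10 digit classes.
d_in, d_hid, n_classes = 784, 256, 10

W_enc = rng.normal(0, 0.01, (d_in, d_hid))      # noisy image -> shared code
W_dec = rng.normal(0, 0.01, (d_hid, d_in))      # shared code -> restored image
W_cls = rng.normal(0, 0.01, (d_hid, n_classes)) # shared code -> class scores

def forward(x_noisy):
    """One forward pass: both output branches read the same hidden code."""
    h = relu(x_noisy @ W_enc)                         # shared representation
    x_restored = 1.0 / (1.0 + np.exp(-(h @ W_dec)))   # denoising branch
    y_prob = softmax(h @ W_cls)                       # classification branch
    return x_restored, y_prob

x = rng.uniform(0, 1, (4, d_in))   # a batch of 4 "noisy" images
x_restored, y_prob = forward(x)
```

Training such a model would minimize a joint loss (reconstruction error plus classification cross-entropy) by backpropagation through the shared encoder, so that gradients from both tasks shape the same representation.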
Deep Self-Taught Learning for Handwritten Character Recognition
Recent theoretical and empirical work in statistical machine learning has
demonstrated the importance of learning algorithms for deep architectures,
i.e., function classes obtained by composing multiple non-linear
transformations. Self-taught learning (exploiting unlabeled examples or
examples from other distributions) has already been applied to deep learners,
but mostly to show the advantage of unlabeled examples. Here we explore the
advantage brought by {\em out-of-distribution examples}. For this purpose we
developed a powerful generator of stochastic variations and noise processes for
character images, including not only affine transformations but also slant,
local elastic deformations, changes in thickness, background images, grey level
changes, contrast, occlusion, and various types of noise. The
out-of-distribution examples are obtained from these highly distorted images or
by including examples of object classes different from those in the target test
set. We show that {\em deep learners benefit more from out-of-distribution
examples than a corresponding shallow learner}, at least in the area of
handwritten character recognition. In fact, we show that they beat previously
published results and reach human-level performance on both handwritten digit
classification and 62-class handwritten character recognition.
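To make the distortion pipeline concrete, here is a minimal NumPy sketch covering just two of the perturbation types listed above: a random affine transform (rotation plus shear) and additive pixel noise. The function name, parameter ranges, and nearest-neighbour resampling are assumptions for illustration; the paper's generator additionally includes slant, local elastic deformations, thickness changes, background images, grey-level and contrast changes, occlusion, and other noise processes.

```python
import numpy as np

rng = np.random.default_rng(42)

def distort(img, max_rot=0.3, max_shear=0.2, noise_std=0.1):
    """Apply a random rotation-plus-shear warp and additive Gaussian
    pixel noise to a greyscale character image with values in [0, 1]."""
    h, w = img.shape
    theta = rng.uniform(-max_rot, max_rot)   # rotation angle (radians)
    shear = rng.uniform(-max_shear, max_shear)
    A = np.array([[np.cos(theta), -np.sin(theta) + shear],
                  [np.sin(theta),  np.cos(theta)]])
    # Map every output pixel back to a source pixel (warp about the centre).
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys - cy, xs - cx]).reshape(2, -1)
    src = A @ coords
    sy = np.clip(np.rint(src[0] + cy), 0, h - 1).astype(int)
    sx = np.clip(np.rint(src[1] + cx), 0, w - 1).astype(int)
    out = img[sy, sx].reshape(h, w)          # nearest-neighbour resampling
    out = out + rng.normal(0, noise_std, out.shape)  # additive pixel noise
    return np.clip(out, 0.0, 1.0)

sample = np.zeros((16, 16))
sample[4:12, 7:9] = 1.0                      # a crude vertical stroke
distorted = distort(sample)
```

Feeding many such randomly distorted variants of labelled character images to a deep learner is one way to realize the out-of-distribution training examples the abstract describes.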