AutoEncoder by Forest
Auto-encoding is an important task which is typically realized by deep neural
networks (DNNs) such as convolutional neural networks (CNNs). In this paper, we
propose EncoderForest (abbrv. eForest), the first tree-ensemble-based
auto-encoder. We present a procedure for enabling forests to perform backward
reconstruction by utilizing the equivalence classes defined by the decision paths of
the trees, and demonstrate its usage in both supervised and unsupervised
settings. Experiments show that, compared with DNN autoencoders, eForest is able
to obtain lower reconstruction error with faster training speed, while the model
itself is reusable and damage-tolerant.
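To make the backward-reconstruction idea concrete, here is a minimal sketch (not the authors' implementation) using a scikit-learn forest: a sample is encoded as the leaf it reaches in each tree, and decoded by intersecting the axis-aligned constraints along those decision paths (the equivalence classes above) and taking a point inside the resulting box. The supervised forest, the midpoint rule, and the fallback for unbounded features are simplifying assumptions.

```python
# Minimal eForest-style sketch: encode = per-tree leaf ids, decode = intersect
# the root-to-leaf path constraints of those leaves. Assumptions are noted inline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def path_bounds(tree, leaf, n_features):
    """Per-feature [lo, hi] box implied by the root-to-leaf decision path."""
    lo = np.full(n_features, -np.inf)
    hi = np.full(n_features, np.inf)
    # Build parent pointers so we can walk from the leaf back to the root.
    parent = {}
    for node in range(tree.node_count):
        for child, is_left in ((tree.children_left[node], True),
                               (tree.children_right[node], False)):
            if child != -1:
                parent[child] = (node, is_left)
    node = leaf
    while node in parent:
        p, went_left = parent[node]
        f, t = tree.feature[p], tree.threshold[p]
        if went_left:            # left branch tests x[f] <= t
            hi[f] = min(hi[f], t)
        else:                    # right branch tests x[f] > t
            lo[f] = max(lo[f], t)
        node = p
    return lo, hi

def decode(forest, leaves, n_features, fallback=0.0):
    """Reconstruct one sample from its per-tree leaf indices."""
    lo = np.full(n_features, -np.inf)
    hi = np.full(n_features, np.inf)
    for est, leaf in zip(forest.estimators_, leaves):
        l, h = path_bounds(est.tree_, leaf, n_features)
        lo, hi = np.maximum(lo, l), np.minimum(hi, h)
    mid = (lo + hi) / 2.0        # midpoint of the intersected box
    # Half-bounded or unconstrained features fall back to a default value;
    # this is a simplification relative to the paper's rule representation.
    mid[~np.isfinite(mid)] = fallback
    return mid

X = np.random.rand(200, 5)
y = (X[:, 0] > 0.5).astype(int)
forest = RandomForestClassifier(n_estimators=50).fit(X, y)
codes = forest.apply(X)              # encode: (n_samples, n_trees) leaf ids
x_hat = decode(forest, codes[0], X.shape[1])
print(np.abs(x_hat - X[0]).mean())   # reconstruction error for one sample
```

The same decoding works unchanged for an unsupervised (completely random) forest, since only the split structure is used.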
Learning Discriminative Features with Class Encoder
Deep neural networks usually benefit from unsupervised pre-training, e.g. with
auto-encoders. However, the classifier still requires supervised fine-tuning
for good discrimination. Besides, due to the limitations of fully connected layers,
the application of auto-encoders is usually restricted to small, well-aligned
images. In this paper, we incorporate supervised information to propose a
novel formulation, namely the class-encoder, whose training objective is to
reconstruct a sample from another sample with an identical label.
The class-encoder aims to minimize intra-class variations in the feature space,
and to learn discriminative manifolds at the class scale. We impose the
class-encoder as a constraint on the softmax for better supervised training,
and extend the reconstruction to the feature level to tackle the parameter-size
and translation issues. The experiments show that the class-encoder helps
to improve performance on classification and face recognition benchmarks.
This could also be a promising direction for fast training of face
recognition models.
Comment: Accepted by the CVPR 2016 Workshop on Robust Features for Computer Vision
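The training objective above (reconstructing a sample from a different sample with the same label) can be sketched in a few lines of PyTorch. The network sizes and the within-batch pairing scheme below are illustrative assumptions, not the paper's setup.

```python
# Sketch of the class-encoder objective: decode(encode(x_i)) should match a
# *different* sample x_j with the same label, so the code must capture what
# class members share rather than instance-specific detail.
import torch
import torch.nn as nn

class ClassEncoder(nn.Module):
    def __init__(self, dim=784, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def same_class_targets(x, y):
    """For each sample, pick a same-label sample from the batch as the
    reconstruction target (may occasionally pair a sample with itself)."""
    targets = x.clone()
    for c in y.unique():
        idx = (y == c).nonzero(as_tuple=True)[0]
        perm = idx[torch.randperm(len(idx))]   # shuffle within the class
        targets[idx] = x[perm]
    return targets

model = ClassEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                  # dummy batch
y = torch.randint(0, 10, (64,))          # dummy labels
target = same_class_targets(x, y)        # x_j with the same label as x_i
loss = nn.functional.mse_loss(model(x), target)
opt.zero_grad(); loss.backward(); opt.step()
```

In the paper this objective is additionally imposed as a constraint alongside the softmax classifier; the sketch shows only the pairwise reconstruction term.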
Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation, and manifold learning.
Discriminative Recurrent Sparse Auto-Encoders
We present the discriminative recurrent sparse auto-encoder model, comprising
a recurrent encoder of rectified linear units, unrolled for a fixed number of
iterations, and connected to two linear decoders that reconstruct the input and
predict its supervised classification. Training via
backpropagation-through-time initially minimizes an unsupervised sparse
reconstruction error; the loss function is then augmented with a discriminative
term on the supervised classification. The depth implicit in the
temporally-unrolled form allows the system to exhibit all the power of deep
networks, while substantially reducing the number of trainable parameters.
From an initially unstructured network, the hidden units differentiate into
categorical-units, each of which represents an input prototype with a
well-defined class, and part-units, which represent deformations of these
prototypes. The learned organization of the recurrent encoder is hierarchical:
part-units are driven directly by the input, whereas the activity of
categorical-units builds up over time through interactions with the part-units.
Even using a small number of hidden units per layer, discriminative recurrent
sparse auto-encoders achieve excellent performance on MNIST.
Comment: Added clarifications suggested by reviewers. 15 pages, 10 figures
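A minimal PyTorch sketch of this architecture follows: a ReLU encoder unrolled for T steps with a shared recurrent weight matrix, feeding two linear decoders for reconstruction and classification. The layer sizes, unrolling depth, and loss weights are assumptions, and the paper's two-phase training schedule is collapsed here into one weighted loss.

```python
# Discriminative recurrent sparse auto-encoder, in the spirit of the abstract:
# z_{t} = ReLU(E x + S z_{t-1}); the same recurrent weights S are reused each
# step, so depth comes from unrolling rather than extra parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DrSAE(nn.Module):
    def __init__(self, dim=784, hidden=400, n_classes=10, T=10):
        super().__init__()
        self.T = T
        self.enc = nn.Linear(dim, hidden)           # input drive (E)
        self.rec = nn.Linear(hidden, hidden)        # recurrent weights (S), shared
        self.dec_x = nn.Linear(hidden, dim)         # linear decoder: reconstruction
        self.dec_y = nn.Linear(hidden, n_classes)   # linear decoder: classification

    def forward(self, x):
        z = torch.zeros(x.size(0), self.rec.out_features, device=x.device)
        drive = self.enc(x)                  # computed once, injected every step
        for _ in range(self.T):              # temporal unrolling = implicit depth
            z = F.relu(drive + self.rec(z))
        return self.dec_x(z), self.dec_y(z), z

model = DrSAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)
y = torch.randint(0, 10, (32,))
x_hat, logits, z = model(x)
# Sparse reconstruction loss plus a discriminative term; the paper trains the
# unsupervised part first and augments the loss afterwards.
loss = F.mse_loss(x_hat, x) + 1e-3 * z.abs().mean() + 0.1 * F.cross_entropy(logits, y)
opt.zero_grad(); loss.backward(); opt.step()
```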