Conditional Restricted Boltzmann Machines for Structured Output Prediction
Conditional Restricted Boltzmann Machines (CRBMs) are rich probabilistic
models that have recently been applied to a wide range of problems, including
collaborative filtering, classification, and modeling motion capture data.
While much progress has been made in training non-conditional RBMs, these
algorithms are not applicable to conditional models and there has been almost
no work on training and generating predictions from conditional RBMs for
structured output problems. We first argue that standard Contrastive
Divergence-based learning may not be suitable for training CRBMs. We then
identify two distinct types of structured output prediction problems and
propose an improved learning algorithm for each. The first problem type is one
where the output space has arbitrary structure but the set of likely output
configurations is relatively small, such as in multi-label classification. The
second problem is one where the output space is arbitrarily structured but
where the output space variability is much greater, such as in image denoising
or pixel labeling. We show that the new learning algorithms can work much
better than Contrastive Divergence on both types of problems.
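The conditional RBM family described above can be illustrated with a minimal sketch: the input x shifts the biases of the output units y and the hidden units h, and a prediction is drawn by Gibbs sampling over (y, h) with x clamped. All names, shapes, and the random initialization below are illustrative assumptions, not the paper's actual model or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConditionalRBM:
    """Hypothetical minimal CRBM: x conditions the biases of y and h."""

    def __init__(self, n_x, n_y, n_h):
        s = 0.01
        self.W = rng.normal(0, s, (n_y, n_h))   # y-h pairwise weights
        self.A = rng.normal(0, s, (n_x, n_y))   # x -> y bias shift
        self.B = rng.normal(0, s, (n_x, n_h))   # x -> h bias shift
        self.a = np.zeros(n_y)                  # y bias
        self.b = np.zeros(n_h)                  # h bias

    def predict(self, x, n_gibbs=10):
        """Gibbs sampling on (y, h) with x clamped; returns the final
        per-unit output probabilities."""
        y = rng.random(self.a.shape) < 0.5      # random binary start
        for _ in range(n_gibbs):
            p_h = sigmoid(y @ self.W + x @ self.B + self.b)
            h = rng.random(p_h.shape) < p_h
            p_y = sigmoid(h @ self.W.T + x @ self.A + self.a)
            y = rng.random(p_y.shape) < p_y
        return p_y

crbm = ConditionalRBM(n_x=5, n_y=3, n_h=4)
probs = crbm.predict(np.ones(5))  # probabilities for each output unit
```

Sampling y given h is what makes structured prediction hard here: when the set of likely output configurations is small (as in multi-label classification), the abstract's first problem type, this chain mixes quickly; when output variability is large, it does not.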
DeepLine: AutoML Tool for Pipelines Generation using Deep Reinforcement Learning and Hierarchical Actions Filtering
Automatic machine learning (AutoML) is an area of research aimed at
automating machine learning (ML) activities that currently require human
experts. One of the most challenging tasks in this field is the automatic
generation of end-to-end ML pipelines: combining multiple types of ML
algorithms into a single architecture used for end-to-end analysis of
previously-unseen data. This task poses two challenges: the first is the need
to explore a large search space of algorithms and pipeline architectures; the
second is the computational cost of training and evaluating multiple
pipelines. In this study we present DeepLine, a reinforcement learning
based approach for automatic pipeline generation. Our proposed approach
utilizes an efficient representation of the search space and leverages past
knowledge gained from previously-analyzed datasets to make the problem more
tractable. Additionally, we propose a novel hierarchical-actions algorithm that
serves as a plugin, mediating the environment-agent interaction in deep
reinforcement learning problems. The plugin significantly speeds up the
training process of our model. Evaluation on 56 datasets shows that DeepLine
outperforms state-of-the-art approaches both in accuracy and in computational
cost.
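The hierarchical-actions idea can be sketched as a wrapper that mediates between agent and environment: instead of scoring one large flat action space, the agent makes a sequence of smaller choices (here, a group and then a member), which the wrapper translates into the flat action the environment expects. The grouping scheme and all names below are illustrative assumptions, not DeepLine's actual plugin.

```python
class HierarchicalActionWrapper:
    """Hypothetical plugin splitting a flat action space into two smaller
    sequential decisions: pick a group, then a member of that group."""

    def __init__(self, env, group_size):
        self.env = env
        self.group_size = group_size
        self.n_groups = -(-env.n_actions // group_size)  # ceil division

    def flat_action(self, group, member):
        # Translate the (group, member) pair back into a flat action id.
        a = group * self.group_size + member
        if a >= self.env.n_actions:
            raise ValueError("action outside the flat space")
        return a

    def step(self, group, member):
        return self.env.step(self.flat_action(group, member))

class ToyEnv:
    """Stand-in environment that just echoes the flat action it received."""
    n_actions = 10
    def step(self, action):
        return action

env = HierarchicalActionWrapper(ToyEnv(), group_size=4)
# 10 flat actions become 3 groups of at most 4 members each
```

The payoff is that the agent's per-step decision is over at most `group_size` or `n_groups` options rather than all `n_actions`, which is one plausible reading of how such a plugin speeds up training.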
Recurrent Models of Visual Attention
Applying convolutional neural networks to large images is computationally
expensive because the amount of computation scales linearly with the number of
image pixels. We present a novel recurrent neural network model that is capable
of extracting information from an image or video by adaptively selecting a
sequence of regions or locations and only processing the selected regions at
high resolution. Like convolutional neural networks, the proposed model has a
degree of translation invariance built-in, but the amount of computation it
performs can be controlled independently of the input image size. While the
model is non-differentiable, it can be trained using reinforcement learning
methods to learn task-specific policies. We evaluate our model on several image
classification tasks, where it significantly outperforms a convolutional neural
network baseline on cluttered images, and on a dynamic visual control problem,
where it learns to track a simple object without an explicit training signal
for doing so.
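The core mechanism, extracting only a small window at an adaptively chosen location at full resolution, can be sketched as follows; compute per glimpse is then fixed regardless of image size. The zero-padding scheme and the function name are assumptions for illustration, not the paper's exact glimpse sensor (which uses multiple resolutions).

```python
import numpy as np

def extract_glimpse(image, center, size):
    """Crop a size x size patch centered at `center` (row, col),
    zero-padding when the window crosses the image border."""
    pad = size // 2
    padded = np.pad(image, pad, mode="constant")
    r, c = center[0] + pad, center[1] + pad   # center in padded coords
    return padded[r - pad:r - pad + size, c - pad:c - pad + size]

img = np.arange(36, dtype=float).reshape(6, 6)
corner = extract_glimpse(img, center=(0, 0), size=3)  # border: zero-padded
middle = extract_glimpse(img, center=(3, 3), size=3)  # fully inside image
```

A policy network would then emit the next `center` from the glimpse features, and, since that selection step is non-differentiable, be trained with reinforcement learning as the abstract describes.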
Empirical Bernstein stopping
Sampling is a popular way of scaling up machine learning algorithms to large datasets. The question is often how many samples are needed. Adaptive stopping algorithms monitor performance in an online fashion and can stop early, saving valuable resources. We consider problems where probabilistic guarantees are desired and demonstrate how recently-introduced empirical Bernstein bounds can be used to design efficient stopping rules. We provide upper bounds on the sample complexity of the new rules, as well as empirical results on model selection and boosting in the filtering setting.
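A stopping rule of this flavor can be sketched concretely. One common statement of an empirical Bernstein bound says that, with probability at least 1 - delta, the sample mean of n i.i.d. samples with range R and empirical standard deviation sigma deviates from the true mean by at most sigma * sqrt(2 * log(3/delta) / n) + 3 * R * log(3/delta) / n. The constants and the simple absolute-error rule below are illustrative, not necessarily the paper's exact rules.

```python
import math

def eb_radius(sigma, n, R, delta):
    """Half-width of an empirical Bernstein confidence interval."""
    L = math.log(3.0 / delta)
    return sigma * math.sqrt(2.0 * L / n) + 3.0 * R * L / n

def stop_when(samples, R, delta, eps):
    """Return the first sample count at which the confidence radius drops
    below eps, or None if it never does on this stream."""
    total, total_sq = 0.0, 0.0
    for n, x in enumerate(samples, start=1):
        total += x
        total_sq += x * x
        if n < 2:
            continue
        mean = total / n
        var = max(total_sq / n - mean * mean, 0.0)
        if eb_radius(math.sqrt(var), n, R, delta) <= eps:
            return n
    return None
```

Because the radius involves the empirical variance, low-variance streams stop much earlier than a worst-case (Hoeffding-style) analysis would allow, which is the source of the efficiency the abstract claims.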
Playing Atari with Deep Reinforcement Learning
We present the first deep learning model to successfully learn control
policies directly from high-dimensional sensory input using reinforcement
learning. The model is a convolutional neural network, trained with a variant
of Q-learning, whose input is raw pixels and whose output is a value function
estimating future rewards. We apply our method to seven Atari 2600 games from
the Arcade Learning Environment, with no adjustment of the architecture or
learning algorithm. We find that it outperforms all previous approaches on six
of the games and surpasses a human expert on three of them. Comment: NIPS Deep Learning Workshop 2013.
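The Q-learning target that such a network is regressed toward can be written out for a batch: r + gamma * max over a' of Q(s', a'), with the bootstrap term dropped at terminal states. The array names below are illustrative; in the paper the Q-values come from a convolutional network over raw pixels rather than being given directly.

```python
import numpy as np

def q_targets(rewards, next_q, terminal, gamma=0.99):
    """rewards: (batch,); next_q: (batch, n_actions) Q-values at the next
    state; terminal: (batch,) bool. Terminal transitions get no bootstrap."""
    bootstrap = gamma * next_q.max(axis=1)
    return rewards + np.where(terminal, 0.0, bootstrap)

r = np.array([1.0, 0.0])
nq = np.array([[0.5, 2.0],
               [1.0, 3.0]])
done = np.array([False, True])
targets = q_targets(r, nq, done)
# targets[0] = 1 + 0.99 * 2.0 = 2.98; targets[1] = 0.0 (terminal)
```

The network's parameters are then updated to move Q(s, a) toward these targets, which is the "variant of Q-learning" training signal the abstract refers to.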