Manifold Relevance Determination
In this paper we present a fully Bayesian latent variable model which
exploits conditional nonlinear (in)dependence structures to learn an efficient
latent representation. The latent space is factorized to represent shared and
private information from multiple views of the data. In contrast to previous
approaches, we introduce a relaxation to the discrete segmentation and allow
for a "softly" shared latent space. Further, Bayesian techniques allow us to
automatically estimate the dimensionality of the latent spaces. The model is
capable of capturing structure underlying extremely high dimensional spaces.
This is illustrated by modelling unprocessed images with tens of thousands of
pixels. This also allows us to directly generate novel images from the trained
model by sampling from the discovered latent spaces. We also demonstrate the
model by prediction of human pose in an ambiguous setting. Our Bayesian
framework allows us to perform disambiguation in a principled manner by
including latent space priors which incorporate the dynamic nature of the data.
Comment: ICML201
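The "softly" shared factorization described above can be illustrated with a minimal sketch. In the model, each view learns ARD (automatic relevance determination) weights over a common latent space, and the pattern of weights determines which dimensions are shared between views, private to one view, or switched off entirely. The weight values and threshold below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical ARD relevance weights learned for two views over a
# 6-dimensional latent space (larger weight = dimension is used).
ard_view1 = np.array([0.9, 0.8, 0.7, 0.05, 0.02, 0.01])
ard_view2 = np.array([0.85, 0.03, 0.02, 0.9, 0.8, 0.01])

threshold = 0.1  # dimensions with weight below this are switched off
active1 = ard_view1 > threshold
active2 = ard_view2 > threshold

shared = np.where(active1 & active2)[0]       # used by both views
private1 = np.where(active1 & ~active2)[0]    # view-1 only
private2 = np.where(active2 & ~active1)[0]    # view-2 only
switched_off = np.where(~active1 & ~active2)[0]

print("shared dims:   ", shared)        # [0]
print("private view 1:", private1)      # [1 2]
print("private view 2:", private2)      # [3 4]
print("pruned dims:   ", switched_off)  # [5]
```

Because the weights are continuous rather than a hard segmentation, a dimension can also be weakly relevant to both views, which is what the "soft" sharing buys over a discrete partition.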
Multi-view Learning as a Nonparametric Nonlinear Inter-Battery Factor Analysis
Factor analysis aims to determine latent factors, or traits, which summarize
a given data set. Inter-battery factor analysis extends this notion to multiple
views of the data. In this paper we show how a nonlinear, nonparametric version
of these models can be recovered through the Gaussian process latent variable
model. This gives us a flexible formalism for multi-view learning where the
latent variables can be used both for exploratory purposes and for learning
representations that enable efficient inference for ambiguous estimation tasks.
Learning is performed in a Bayesian manner through the formulation of a
variational compression scheme which gives a rigorous lower bound on the log
likelihood. Our Bayesian framework provides strong regularization during
training, allowing the structure of the latent space to be determined
efficiently and automatically. We demonstrate this by producing the first (to
our knowledge) published results of learning from dozens of views, even when
data is scarce. We further show experimental results on several different types
of multi-view data sets and for different kinds of tasks, including exploratory
data analysis, generation, ambiguity modelling through latent priors and
classification.
Comment: 49 pages including appendix
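As rough intuition for the inter-battery structure, here is a linear, parametric sketch; the paper's model replaces the linear maps below with nonlinear, nonparametric GP mappings, and all dimensions, loadings, and noise levels here are illustrative. Shared factors induce the cross-covariance between views, while private factors affect only their own view:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_shared, d_priv, p1, p2 = 2000, 2, 1, 10, 8

# Latent factors: one set shared across views, one private per view.
z  = rng.normal(size=(n, d_shared))
z1 = rng.normal(size=(n, d_priv))
z2 = rng.normal(size=(n, d_priv))

# Illustrative linear loadings (the GP-LVM version replaces these
# with nonlinear, nonparametric mappings).
W1 = rng.normal(size=(d_shared, p1)); B1 = rng.normal(size=(d_priv, p1))
W2 = rng.normal(size=(d_shared, p2)); B2 = rng.normal(size=(d_priv, p2))

Y1 = z @ W1 + z1 @ B1 + 0.1 * rng.normal(size=(n, p1))
Y2 = z @ W2 + z2 @ B2 + 0.1 * rng.normal(size=(n, p2))

# Only the shared factors couple the two views, so the empirical
# cross-covariance between views approaches W1' W2 as n grows.
cross_cov = (Y1 - Y1.mean(0)).T @ (Y2 - Y2.mean(0)) / n
print(np.abs(cross_cov - W1.T @ W2).max())  # small vs. entries of W1'W2
```

This separation is what makes the shared latent variables useful for ambiguous estimation tasks: inferring them from one view constrains the other view only through the shared factors.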
Measuring category intuitiveness in unconstrained categorization tasks
What makes a category seem natural or intuitive? In this paper, an unsupervised categorization task was employed to examine observer agreement concerning the categorization of nine different stimulus sets. The stimulus sets were designed to capture different intuitions about classification structure. The main empirical index of category intuitiveness was the frequency of the preferred classification for different stimulus sets. With 169 participants and a within-participants design, the most frequent classification was produced over 50 times for some stimulus sets and no more than two or three times for others. The main empirical finding was that cluster tightness was more important than cluster separation in determining category intuitiveness. The results were considered in relation to the following models of unsupervised categorization: DIVA, the rational model, the simplicity model, SUSTAIN, an unsupervised version of the Generalized Context Model (UGCM), and a simple geometric model based on similarity. DIVA, the geometric approach, SUSTAIN, and the UGCM provided good, though not perfect, fits. Overall, the present work highlights several theoretical and practical issues regarding unsupervised categorization and reveals weaknesses in some of the corresponding formal models.
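The two constructs contrasted in the main finding, cluster tightness versus cluster separation, can be made concrete with a small sketch. The 2-D stimulus coordinates and the particular tightness/separation definitions below are illustrative choices, not the paper's:

```python
import numpy as np

def tightness(X, labels):
    """Mean distance from each point to its own cluster centroid
    (smaller = tighter clusters)."""
    return np.mean([np.linalg.norm(x - X[labels == l].mean(axis=0))
                    for x, l in zip(X, labels)])

def separation(X, labels):
    """Distance between the two cluster centroids."""
    c0 = X[labels == 0].mean(axis=0)
    c1 = X[labels == 1].mean(axis=0)
    return np.linalg.norm(c0 - c1)

labels = np.array([0, 0, 0, 1, 1, 1])

# Tight clusters with modest separation ...
tight_set = np.array([[0.0, 0], [0.1, 0], [0.2, 0],
                      [2.0, 0], [2.1, 0], [2.2, 0]])
# ... versus loose clusters with large separation.
loose_set = np.array([[0.0, 0], [1.5, 0], [3.0, 0],
                      [8.0, 0], [9.5, 0], [11.0, 0]])

print(tightness(tight_set, labels), separation(tight_set, labels))
print(tightness(loose_set, labels), separation(loose_set, labels))
```

The empirical claim is then that observers agree more readily on groupings like the first set, where tightness is high even though separation is modest.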
Predicting Category Intuitiveness With the Rational Model, the Simplicity Model, and the Generalized Context Model
Naïve observers typically perceive some groupings for a set of stimuli as more intuitive than others. The problem of predicting category intuitiveness has been historically considered the remit of models of unsupervised categorization. In contrast, this article develops a measure of category intuitiveness from one of the most widely supported models of supervised categorization, the generalized context model (GCM). Considering different category assignments for a set of instances, the authors asked how well the GCM can predict the classification of each instance on the basis of all the other instances. The category assignment that results in the smallest prediction error is interpreted as the most intuitive for the GCM—the authors refer to this way of applying the GCM as “unsupervised GCM.” The authors systematically compared predictions of category intuitiveness from the unsupervised GCM and two models of unsupervised categorization: the simplicity model and the rational model. The unsupervised GCM compared favorably with the simplicity model and the rational model. This success of the unsupervised GCM illustrates that the distinction between supervised and unsupervised categorization may need to be reconsidered. However, no model emerged as clearly superior, indicating that there is more work to be done in understanding and modeling category intuitiveness.
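The "unsupervised GCM" idea, scoring each candidate assignment by how well the GCM predicts each instance from all the others, can be sketched as follows. The exponential-decay similarity s = exp(-c·d) is the standard GCM form; the tiny 1-D stimulus set, the city-block distance, and c = 1 are illustrative choices, not the authors' fitted parameters:

```python
import numpy as np
from itertools import product

def gcm_error(X, labels, c=1.0):
    """Leave-one-out GCM prediction error for a category assignment.

    Each item is predicted from the summed exponential similarity to
    the other items in each category; the error is one minus the
    probability the GCM assigns to the item's own category."""
    err = 0.0
    for i in range(len(X)):
        sims = np.exp(-c * np.abs(X - X[i]).sum(axis=1))
        sims[i] = 0.0  # leave the item itself out
        err += 1.0 - sims[labels == labels[i]].sum() / sims.sum()
    return err

# Two well-separated 1-D clusters: the intuitive split should win.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])

best = min(
    (np.array(lab) for lab in product([0, 1], repeat=len(X))
     if 0 < sum(lab) < len(X)),          # skip one-category assignments
    key=lambda lab: gcm_error(X, lab),
)
print(best)  # the intuitive split [0 0 0 1 1 1], up to relabelling
```

Exhaustive enumeration of assignments is only feasible for toy sets like this one; the point is that a supervised model yields an intuitiveness ranking with no supervision at all.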
CASSL: Curriculum Accelerated Self-Supervised Learning
Recent self-supervised learning approaches focus on using a few thousand data
points to learn policies for high-level, low-dimensional action spaces.
However, scaling this framework to high-dimensional control requires either
scaling up the data collection efforts or using a clever sampling strategy for
training. We present a novel approach - Curriculum Accelerated Self-Supervised
Learning (CASSL) - to train policies that map visual information to high-level,
higher-dimensional action spaces. CASSL orders the sampling of training data
based on control dimensions: learning and sampling are focused on a few
control parameters before other parameters. The right curriculum for learning
is suggested by variance-based global sensitivity analysis of the control
space. We apply our CASSL framework to learning how to grasp using an adaptive,
underactuated multi-fingered gripper, a challenging system to control. Our
experimental results indicate that CASSL provides significant improvement and
generalization compared to baseline methods such as staged curriculum learning
(8% increase) and complete end-to-end learning with random exploration (14%
improvement) tested on a set of novel objects.
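The curriculum-ordering step, ranking control dimensions by variance-based global sensitivity, can be sketched with a crude first-order estimate: bin each dimension's values and measure the variance of the conditional mean score, a simple stand-in for a full Sobol analysis. The grasp-score function below is a made-up surrogate, not the real robot objective:

```python
import numpy as np

def grasp_score(params):
    """Hypothetical black-box score of a grasp controller: dimension 1
    matters most, dimension 2 a little, dimension 0 barely at all."""
    x0, x1, x2 = params
    return 0.1 * x0 + 2.0 * np.sin(x1) + 0.5 * x2**2

rng = np.random.default_rng(0)
n, d = 4096, 3
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.array([grasp_score(x) for x in X])

def first_order_sensitivity(X, y, dim, bins=16):
    """Variance of the conditional mean of y given binned X[:, dim],
    normalized by Var(y) -- a rough first-order sensitivity index."""
    edges = np.quantile(X[:, dim], np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, X[:, dim]) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

sens = [first_order_sensitivity(X, y, j) for j in range(d)]
curriculum = np.argsort(sens)[::-1]  # most sensitive dimension first
print(curriculum)  # expected order: dimension 1, then 2, then 0
```

Learning would then concentrate sampling on the dimensions at the front of `curriculum` before opening up the rest, which is the ordering principle the abstract describes.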
Design agents and the need for high-dimensional perception
Designed artefacts may be quantified by any number of measures. This paper aims to show that in doing so, the particular measures used may matter very little, but as many as possible should be taken. A set of building plans is used to demonstrate that arbitrary measures of their shape serve to classify them into neighbourhood types, and the accuracy of classification increases as more are used, even if the dimensionality of the space in which classification occurs is held constant. It is further shown that two autonomous agents may independently choose sets of attributes by which to represent the buildings, but arrive at similar judgements as more are used. This has several implications for studying or simulating design. It suggests that quantitative studies of collections of artefacts may be made without requiring extensive knowledge of the best possible measures—often impossible in real, ill-defined design situations. It suggests a means by which the generation of novelty can be explained in a group of agents with different ways of seeing a given event. It also suggests that communication can occur without the need for predetermined codes or protocols, introducing the possibility of alternative human-computer interfaces that may be useful in design.
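The claim that arbitrary measures suffice, and that more of them help, can be sketched with random linear "measures" of synthetic shape descriptors. The two classes, the descriptor dimensionality, and the 1-NN classifier below are illustrative assumptions, not the paper's building data or method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical "neighbourhood types": plans summarized by
# 50-dimensional shape descriptors drawn around two class means.
n_per_class, latent_dim = 100, 50
class_means = rng.normal(size=(2, latent_dim))
X = np.vstack([m + 1.5 * rng.normal(size=(n_per_class, latent_dim))
               for m in class_means])
y = np.repeat([0, 1], n_per_class)

def accuracy_with_measures(n_measures):
    """Leave-one-out 1-NN accuracy after projecting the descriptors
    onto n_measures arbitrary (random) linear measures."""
    M = rng.normal(size=(latent_dim, n_measures))
    Z = X @ M
    correct = 0
    for i in range(len(Z)):
        d = np.linalg.norm(Z - Z[i], axis=1)
        d[i] = np.inf  # exclude the item itself
        correct += int(y[np.argmin(d)] == y[i])
    return correct / len(Z)

for k in (1, 5, 25):
    print(k, accuracy_with_measures(k))  # more measures, typically higher accuracy
```

Because the measures are random, two agents drawing independent sets of them would converge on similar classifications as the number of measures grows, which mirrors the agreement result in the paper.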