Dimensionality Reduction Mappings
A wealth of powerful dimensionality reduction methods has been established which can be used for data visualization and preprocessing. These are accompanied by formal evaluation schemes, which allow a quantitative evaluation along general principles and which even lead to further visualization schemes based on these objectives. Most methods, however, provide a mapping of only a previously given finite set of points, requiring additional steps for out-of-sample extensions. We propose a general view on dimensionality reduction based on the concept of cost functions, and, based on this general principle, extend dimensionality reduction to explicit mappings of the data manifold. This offers simple out-of-sample extensions. Further, it opens a way towards a theory of data visualization taking the perspective of its generalization ability to new data points. We demonstrate the approach based on a simple global linear mapping as well as prototype-based local linear mappings.
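The out-of-sample idea described above can be illustrated in a minimal sketch: once an embedding of the training points exists, a global linear mapping is fitted to it by least squares, and new points are then projected through that mapping without re-running the reduction. This is only an assumed toy setup using NumPy, with a PCA-style projection standing in for an arbitrary dimensionality reduction method; the abstract's prototype-based local linear mappings are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional data: 100 points in 10 dimensions.
X = rng.normal(size=(100, 10))

# Stand-in for an embedding produced by any DR method
# (here: the first two principal components, for illustration only).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Y = Xc @ Vt[:2].T          # 2-D embedding of the training points

# Explicit global linear mapping: fit W minimising ||Xc W - Y||^2.
W, *_ = np.linalg.lstsq(Xc, Y, rcond=None)

# Out-of-sample extension: new points are mapped directly through W,
# without recomputing the embedding on the enlarged data set.
X_new = rng.normal(size=(5, 10))
Y_new = (X_new - X.mean(axis=0)) @ W
print(Y_new.shape)         # (5, 2)
```

Any non-linear embedding (e.g. one produced by a neighbor-embedding method) could replace `Y` here; the linear map then becomes an approximation whose quality reflects how well a global linear mapping can capture the embedding, which is exactly the generalization question the abstract raises.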
Mapping the conformations of biological assemblies
Mapping conformational heterogeneity of macromolecules presents a formidable
challenge to X-ray crystallography and cryo-electron microscopy, which often
presume its absence. This has severely limited our knowledge of the
conformations assumed by biological systems and their role in biological
function, even though they are known to be important. We propose a new approach
to determining to high resolution the three-dimensional conformations of
biological entities such as molecules, macromolecular assemblies, and
ultimately cells, with existing and emerging experimental techniques. This
approach may also enable one to circumvent current limits due to radiation
damage and solution purification. Comment: 14 pages, 6 figures
Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation, and manifold learning.
Advances in dissimilarity-based data visualisation
Gisbrecht A. Advances in dissimilarity-based data visualisation. Bielefeld: Universitätsbibliothek Bielefeld; 2015
Some steps towards a general principle for dimensionality reduction mappings
In the past years, many dimensionality reduction methods have been
established which allow one to visualize high-dimensional data sets. Recently,
also formal evaluation schemes have been proposed for data visualization,
which allow a quantitative evaluation along general principles. Most techniques
provide a mapping of only a previously given finite set of points, requiring
additional steps for out-of-sample extensions. We propose a general
view on dimensionality reduction based on the concept of cost functions,
and, based on this general principle, extend dimensionality reduction to
explicit mappings of the data manifold. This offers the possibility of simple
out-of-sample extensions. Further, it opens a way towards a theory
of data visualization taking the perspective of its generalization ability
to new data points. We demonstrate the approach based on a simple
example.