Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation, and manifold learning.
Coarse-Graining Auto-Encoders for Molecular Dynamics
Molecular dynamics simulations provide theoretical insight into the
microscopic behavior of materials in condensed phase and, as a predictive tool,
enable computational design of new compounds. However, because of the large
temporal and spatial scales involved in thermodynamic and kinetic phenomena in
materials, atomistic simulations are often computationally unfeasible.
Coarse-graining methods allow simulating larger systems, by reducing the
dimensionality of the simulation, and propagating longer timesteps, by
averaging out fast motions. Coarse-graining involves two coupled learning
problems: defining the mapping from an all-atom to a reduced representation,
and parametrizing a Hamiltonian over the coarse-grained coordinates.
Multiple statistical mechanics approaches have addressed the latter, but the
former is generally a hand-tuned process based on chemical intuition. Here we
present Autograin, an optimization framework based on auto-encoders to learn
both tasks simultaneously. Autograin is trained to learn the optimal mapping
between all-atom and reduced representation, using the reconstruction loss to
facilitate the learning of coarse-grained variables. In addition, a
force-matching method is applied to variationally determine the coarse-grained
potential energy function. This procedure is tested on a number of model
systems including single-molecule and bulk-phase periodic simulations.
Comment: 8 pages, 6 figures
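The two coupled objectives described in the abstract can be sketched with a toy linear encoder/decoder: a reconstruction loss on the back-mapped coordinates plus a force-matching loss comparing mapped atomistic forces against the forces of a coarse-grained model. This is a minimal illustration, not the Autograin implementation; the linear maps, the harmonic coarse-grained potential, and all shapes and parameter names here are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms, n_cg, dim = 6, 2, 3

# Hypothetical linear encoder E (n_cg x n_atoms): each row is a normalized
# set of weights assigning atoms to one coarse-grained bead.
E = rng.random((n_cg, n_atoms))
E /= E.sum(axis=1, keepdims=True)

# Hypothetical linear decoder D (n_atoms x n_cg) back-mapping beads to atoms.
D = rng.random((n_atoms, n_cg))

x = rng.standard_normal((n_atoms, dim))  # all-atom coordinates
f = rng.standard_normal((n_atoms, dim))  # all-atom forces

z = E @ x        # coarse-grained coordinates
x_hat = D @ z    # reconstructed all-atom coordinates

# Reconstruction loss: drives the learned all-atom <-> reduced mapping.
recon_loss = np.mean((x - x_hat) ** 2)

# Force matching: compare mapped atomistic forces with the forces predicted
# by a toy harmonic CG potential U(z) = (k/2) |z|^2, so F_model = -k z.
k = 1.0
f_mapped = E @ f
f_model = -k * z
fm_loss = np.mean((f_mapped - f_model) ** 2)

# In training, both terms would be minimized jointly over E, D, and k.
total_loss = recon_loss + fm_loss
print(total_loss)
```

In an actual framework the encoder, decoder, and coarse-grained potential would be differentiable modules optimized by gradient descent on this combined objective; the sketch only shows how the two losses are formed from the same mapping.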