TopologyNet: Topology based deep convolutional neural networks for biomolecular property predictions
Although deep learning approaches have had tremendous success in image, video,
and audio processing, computer vision, and speech recognition, their
application to three-dimensional (3D) biomolecular structural data sets has
been hindered by the entanglement of geometric and biological complexity.
We introduce topology, i.e., element-specific persistent homology (ESPH), to
untangle geometric and biological complexity. ESPH represents 3D
complex geometry by one-dimensional (1D) topological invariants and retains
crucial biological information via a multichannel image representation. It is
able to reveal hidden structure-function relationships in biomolecules. We
further integrate ESPH and convolutional neural networks to construct a
multichannel topological neural network (TopologyNet) for the prediction of
protein-ligand binding affinities and protein stability changes upon mutation.
To overcome the limitations of deep learning arising from small and noisy
training sets, we present a multitask topological convolutional neural network
(MT-TCNN). We demonstrate that the present TopologyNet architectures outperform
other state-of-the-art methods in the prediction of protein-ligand binding
affinities, globular protein mutation impacts, and membrane protein mutation
impacts.
Comment: 20 pages, 8 figures, 5 tables
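A minimal sketch of the multichannel idea in Python/PyTorch: each input channel holds a discretized 1D topological summary (e.g., Betti counts along a filtration) for one element pair, a shared 1D CNN backbone extracts features, and two task heads realize the multitask (MT-TCNN) setup. Channel count, layer sizes, and head names are illustrative assumptions, not the published architecture, and the ESPH features themselves are assumed to be computed elsewhere.

import torch
import torch.nn as nn

N_CHANNELS = 36   # hypothetical: one channel per element pair / homology dim
N_BINS = 128      # discretization of the filtration axis

class MultitaskTopoCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Two heads share one backbone: tasks with small, noisy training
        # sets borrow statistical strength from the shared features.
        self.affinity_head = nn.Linear(64, 1)    # protein-ligand affinity
        self.stability_head = nn.Linear(64, 1)   # stability change on mutation

    def forward(self, x):                 # x: (batch, N_CHANNELS, N_BINS)
        h = self.backbone(x)
        return self.affinity_head(h), self.stability_head(h)

model = MultitaskTopoCNN()
x = torch.randn(8, N_CHANNELS, N_BINS)    # stand-in for ESPH feature images
affinity, ddg = model(x)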
Coarse-Graining Auto-Encoders for Molecular Dynamics
Molecular dynamics simulations provide theoretical insight into the
microscopic behavior of materials in condensed phase and, as a predictive tool,
enable computational design of new compounds. However, because of the large
temporal and spatial scales involved in thermodynamic and kinetic phenomena in
materials, atomistic simulations are often computationally infeasible.
Coarse-graining methods allow simulating larger systems, by reducing the
dimensionality of the simulation, and propagating longer timesteps, by
averaging out fast motions. Coarse-graining involves two coupled learning
problems: defining the mapping from an all-atom to a reduced representation,
and parametrizing a Hamiltonian over the coarse-grained coordinates.
Multiple statistical mechanics approaches have addressed the latter, but the
former is generally a hand-tuned process based on chemical intuition. Here we
present Autograin, an optimization framework based on auto-encoders to learn
both tasks simultaneously. Autograin is trained to learn the optimal mapping
between the all-atom and reduced representations, using the reconstruction loss to
facilitate the learning of coarse-grained variables. In addition, a
force-matching method is applied to variationally determine the coarse-grained
potential energy function. This procedure is tested on a number of model
systems, including single-molecule and bulk-phase periodic simulations.
Comment: 8 pages, 6 figures
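A minimal sketch of the two coupled objectives in Python/PyTorch, assuming a linear atom-to-bead encoder and a small neural network as the CG potential. The projection of atomistic forces through the encoder is a sketch-level simplification, not the paper's force-matching estimator, and all shapes and data are stand-ins.

import torch
import torch.nn as nn

n_atoms, n_beads = 30, 6
encode = nn.Linear(n_atoms * 3, n_beads * 3)   # all-atom -> CG mapping
decode = nn.Linear(n_beads * 3, n_atoms * 3)   # CG -> all-atom back-mapping
cg_energy = nn.Sequential(                     # CG potential U(z)
    nn.Linear(n_beads * 3, 64), nn.Tanh(), nn.Linear(64, 1))

opt = torch.optim.Adam(list(encode.parameters()) + list(decode.parameters())
                       + list(cg_energy.parameters()), lr=1e-3)

for step in range(100):
    xyz = torch.randn(32, n_atoms * 3)       # stand-in all-atom frames
    forces = torch.randn(32, n_atoms * 3)    # stand-in atomistic forces

    z = encode(xyz)                          # CG coordinates
    recon_loss = ((decode(z) - xyz) ** 2).mean()

    # Force matching: CG forces are -dU/dz, compared against atomistic
    # forces pushed through the same mapping (a crude projection).
    u = cg_energy(z).sum()
    cg_forces = -torch.autograd.grad(u, z, create_graph=True)[0]
    fm_loss = ((cg_forces - encode(forces)) ** 2).mean()

    loss = recon_loss + 0.1 * fm_loss        # weight is an arbitrary choice
    opt.zero_grad(); loss.backward(); opt.step()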
Learning Deep Structured Models
Many real-world problems involve predicting several random variables that
are statistically related. Markov random fields (MRFs) are a
great mathematical tool to encode such relationships. The goal of this paper is
to combine MRFs with deep learning algorithms to estimate complex
representations while taking into account the dependencies between the output
random variables. Towards this goal, we propose a training algorithm that is
able to learn structured models jointly with deep features that form the MRF
potentials. Our approach is efficient as it blends learning and inference and
makes use of GPU acceleration. We demonstrate the effectiveness of our
algorithm in the tasks of predicting words from noisy images, as well as
multi-class classification of Flickr photographs. We show that joint learning
of the deep features and the MRF parameters results in significant performance
gains.
Comment: 11 pages including references
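To illustrate how learning and inference can be blended, the sketch below (Python/PyTorch) trains deep unary potentials jointly with MRF pairwise potentials on a chain, as in predicting a word from noisy character images; on a chain the partition function is exact via the forward algorithm. All shapes and names are assumptions, not the paper's implementation.

import torch
import torch.nn as nn

n_labels, word_len = 26, 5
unary_net = nn.Sequential(nn.Linear(16 * 16, 64), nn.ReLU(),
                          nn.Linear(64, n_labels))       # deep unary potentials
pairwise = nn.Parameter(torch.zeros(n_labels, n_labels)) # MRF edge potentials

def neg_log_likelihood(images, labels):
    # images: (word_len, 256) noisy character crops; labels: (word_len,)
    unary = unary_net(images)                            # (word_len, n_labels)
    # Score of the ground-truth labeling: unaries plus chain edges.
    score = unary[torch.arange(word_len), labels].sum() \
            + pairwise[labels[:-1], labels[1:]].sum()
    # Log partition function via the forward algorithm (log-sum-exp).
    alpha = unary[0]
    for t in range(1, word_len):
        alpha = unary[t] + torch.logsumexp(alpha[:, None] + pairwise, dim=0)
    return torch.logsumexp(alpha, dim=0) - score

opt = torch.optim.Adam(list(unary_net.parameters()) + [pairwise], lr=1e-3)
images = torch.randn(word_len, 256)                   # stand-in data
labels = torch.randint(0, n_labels, (word_len,))
loss = neg_log_likelihood(images, labels)             # inference inside the loss
opt.zero_grad(); loss.backward(); opt.step()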
Kernel Belief Propagation
We propose a nonparametric generalization of belief propagation, Kernel
Belief Propagation (KBP), for pairwise Markov random fields. Messages are
represented as functions in a reproducing kernel Hilbert space (RKHS), and
message updates are simple linear operations in the RKHS. KBP makes none of the
assumptions commonly required in classical BP algorithms: the variables need
not arise from a finite domain or a Gaussian distribution, nor must their
relations take any particular parametric form. Rather, the relations between
variables are represented implicitly, and are learned nonparametrically from
training data. KBP has the advantage that it may be used on any domain where
kernels are defined (R^d, strings, groups), even where explicit parametric
models are not known, or closed form expressions for the BP updates do not
exist. The computational cost of message updates in KBP is polynomial in the
training data size. We also propose a constant-time approximate message update
procedure by representing messages using a small number of basis functions. In
experiments, we apply KBP to image denoising, depth prediction from still
images, and protein configuration prediction: KBP is faster than competing
classical and nonparametric approaches (by orders of magnitude, in some cases),
while providing significantly more accurate results.
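The numpy sketch below conveys the flavor of a kernelized message update: a message is a coefficient vector over the n training samples at a node, evaluating it anywhere is a kernel expansion, and an update reduces to a pointwise product of incoming messages followed by a regularized linear solve, hence cost polynomial in n. The operator shown is a simplified stand-in, not the paper's exact update.

import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
n, d = 50, 2
X_t = rng.normal(size=(n, d))       # training samples at node t
K_t = gaussian_kernel(X_t, X_t)
reg = 1e-3 * n * np.eye(n)          # RKHS regularization

def update_message(incoming_betas):
    # Evaluate each incoming message at the training points (K_t @ beta),
    # take their pointwise product, then map back to coefficients with
    # one regularized solve: at most O(n^3), i.e., polynomial in n.
    evals = np.prod([K_t @ b for b in incoming_betas], axis=0)
    return np.linalg.solve(K_t + reg, evals)

beta_out = update_message([rng.normal(size=n), rng.normal(size=n)])

def message_at(x_new, beta):
    # The message remains a function: m(x) = sum_i beta_i k(x, x_i).
    return gaussian_kernel(np.atleast_2d(x_new), X_t) @ beta

print(message_at(np.zeros(d), beta_out))

Restricting beta to a small set of basis functions, as the abstract proposes, would replace the size-n expansions above with constant-size ones.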
Alchemical and structural distribution based representation for improved QML
We introduce a representation of any atom in any chemical environment for the
generation of efficient quantum machine learning (QML) models of common
electronic ground-state properties. The representation is based on scaled
distribution functions explicitly accounting for elemental and structural
degrees of freedom. Resulting QML models afford very favorable learning curves
for properties of out-of-sample systems, including organic molecules,
non-covalently bonded protein side-chains, (H2O)40-clusters, as well as
diverse crystals. The elemental components help to lower the learning curves,
and, through interpolation across the periodic table, even enable "alchemical
extrapolation" to covalent bonding between elements not part of training, as
evinced for single, double, and triple bonds among main-group elements.
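For context, a toy numpy version of the workflow such a representation feeds into: an element-weighted, Gaussian-smeared distance distribution per atom, summed into a molecular feature vector and fit with kernel ridge regression (the standard QML regressor). The functional form and all data below are illustrative stand-ins, not the paper's representation.

import numpy as np

BINS = np.linspace(0.5, 4.0, 16)    # radial grid, arbitrary choice

def atomic_representation(coords, charges, i):
    # Toy scaled distribution: Gaussian-smeared histogram of distances
    # from atom i, weighted by the neighbors' nuclear charges.
    d = np.linalg.norm(coords - coords[i], axis=1)
    mask = d > 1e-8                  # drop the atom itself
    return (charges[mask, None]
            * np.exp(-(d[mask, None] - BINS[None, :]) ** 2 / 0.1)).sum(0)

def molecule_representation(coords, charges):
    return np.sum([atomic_representation(coords, charges, i)
                   for i in range(len(charges))], axis=0)

def fit_krr(X, y, sigma=10.0, lam=1e-8):
    # Kernel ridge regression: alpha = (K + lam*I)^-1 y.
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

rng = np.random.default_rng(1)      # random stand-in "molecules"
mols = [(rng.normal(size=(5, 3)), rng.integers(1, 9, size=5))
        for _ in range(40)]
X = np.array([molecule_representation(c, z) for c, z in mols])
y = X.sum(axis=1) * 0.01 + rng.normal(scale=0.01, size=len(mols))
alpha = fit_krr(X, y)

Learning curves of the kind described above would follow from fitting on growing training subsets and plotting out-of-sample error against training-set size.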