ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing
With the aim of developing a fast yet accurate algorithm for compressive
sensing (CS) reconstruction of natural images, we combine in this paper the
merits of two existing categories of CS methods: the structural insights of
traditional optimization-based methods and the speed of recent network-based
ones. Specifically, we propose a novel structured deep network, dubbed
ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm
(ISTA) for optimizing a general ℓ1-norm CS reconstruction model. To cast
ISTA into deep network form, we develop an effective strategy to solve the
proximal mapping associated with the sparsity-inducing regularizer using
nonlinear transforms. All the parameters in ISTA-Net (e.g., nonlinear transforms,
shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than
being hand-crafted. Moreover, considering that the residuals of natural images
are more compressible, an enhanced version of ISTA-Net in the residual domain,
dubbed ISTA-Net⁺, is derived to further improve CS reconstruction.
Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform
existing state-of-the-art optimization-based and network-based CS methods by
large margins, while maintaining fast computational speed. Our source codes are
available: http://jianzhang.tech/projects/ISTA-Net.
Comment: 10 pages, 6 figures, 4 tables. To appear in CVPR 2018.
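The classical ISTA iteration that the network unrolls alternates a gradient step on the data-fidelity term with a soft-thresholding (proximal) step induced by the ℓ1 regularizer. A minimal NumPy sketch under illustrative assumptions: a generic random sensing matrix `Phi` and hand-picked step size and regularization weight, i.e., exactly the quantities ISTA-Net instead learns end-to-end:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: elementwise shrinkage."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(Phi, y, lam, step, n_iter=1000):
    """Minimize 0.5*||Phi @ x - y||^2 + lam*||x||_1 by iterative shrinkage-thresholding."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)                     # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)  # proximal (shrinkage) step
    return x

# Tiny demo: recover a 3-sparse signal from 50 random measurements of a length-100 vector.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.5, -2.0, 1.0]
y = Phi @ x_true
x_hat = ista(Phi, y, lam=0.05, step=0.1)
```

The step size must stay below the reciprocal of the Lipschitz constant of the gradient (the largest eigenvalue of `Phi.T @ Phi`) for the iteration to converge; replacing the fixed `soft_threshold` with a learned nonlinear transform is the core idea of the paper.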
Unfolding Hidden Barriers by Active Enhanced Sampling
Collective variable (CV) or order parameter based enhanced sampling
algorithms have achieved great success due to their ability to efficiently
explore the rough potential energy landscapes of complex systems. However, the
degeneracy of microscopic configurations, originating from the orthogonal space
perpendicular to the CVs, is likely to shadow "hidden barriers" and greatly
reduce the efficiency of CV-based sampling. Here we demonstrate that
systematically machine-learning CVs through enhanced sampling can iteratively
lift such degeneracies on the fly. We introduce an active learning scheme that
consists of a parametric CV learner based on a deep neural network and a
CV-based enhanced
sampler. Our active enhanced sampling (AES) algorithm is capable of identifying
the least informative regions based on a historical sample, forming a positive
feedback loop between the CV learner and sampler. This approach is able to
globally preserve kinetic characteristics by incrementally enhancing both
sample completeness and CV quality.
Comment: 5 pages, 3 figures.
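The feedback loop described above can be sketched on a toy system. In this illustrative stand-in (all names and the 2D double-well potential are hypothetical, not from the paper), a metadynamics-style hill-depositing sampler plays the role of the CV-based enhanced sampler, and the leading principal component of the accumulated samples plays the role of the deep-neural-network CV learner:

```python
import numpy as np

rng = np.random.default_rng(1)

def potential(x):
    """Toy 2D double well along x[0], standing in for a rough energy landscape."""
    return (x[..., 0] ** 2 - 1.0) ** 2 + 0.5 * x[..., 1] ** 2

def enhanced_sample(cv, n_steps=4000, beta=3.0, height=0.2, width=0.3):
    """Metadynamics-style Metropolis sampler: repulsive Gaussian hills deposited
    along the current CV push the walker away from already-visited CV values."""
    x = np.zeros(2)
    hills, samples = [], []
    def bias(s):
        if not hills:
            return 0.0
        return height * np.exp(-0.5 * ((s - np.array(hills)) / width) ** 2).sum()
    for step in range(n_steps):
        prop = x + 0.3 * rng.standard_normal(2)
        du = potential(prop) + bias(prop @ cv) - potential(x) - bias(x @ cv)
        if du <= 0 or rng.random() < np.exp(-beta * du):
            x = prop
        if step % 20 == 0:
            hills.append(x @ cv)   # deposit a hill at the current CV value
        samples.append(x.copy())
    return np.array(samples)

def learn_cv(samples):
    """Stand-in CV learner: leading principal component of all samples so far
    (the paper trains a deep neural network here instead)."""
    c = samples - samples.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(c.T))
    return vecs[:, -1]             # eigenvector of the largest eigenvalue

# Active loop: sampler output retrains the CV, which in turn re-biases the sampler.
cv, history = np.array([1.0, 0.0]), []
for _ in range(3):
    history.append(enhanced_sample(cv))
    cv = learn_cv(np.concatenate(history))
```

Each pass enriches the sample pool, and the refit CV stays aligned with the slow (barrier-crossing) direction; this is the positive feedback loop between learner and sampler, reduced to its simplest linear form.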
Transferable neural networks for enhanced sampling of protein dynamics
Variational auto-encoder frameworks have demonstrated success in reducing
complex nonlinear dynamics in molecular simulations to a single nonlinear
embedding. In this work, we illustrate how this nonlinear latent embedding can
be used as a collective variable for enhanced sampling, and present a simple
modification that allows us to rapidly perform sampling in multiple related
systems. We first demonstrate that our method can describe the effects of
force field changes in capped alanine dipeptide after learning a model using
AMBER99. We further provide a simple extension to variational dynamics encoders
that allows the model to be trained in a more efficient manner on larger
systems by encoding the outputs of a linear transformation using time-structure
based independent component analysis (tICA). Using this technique, we show how
such a model trained for one protein, the WW domain, can efficiently be
transferred to perform enhanced sampling on a related mutant protein, the GTT
mutation. This method shows promise for its ability to rapidly sample related
systems using a single transferable collective variable and is generally
applicable to sets of related simulations, enabling us to probe the effects of
variation in increasingly large systems of biophysical interest.
Comment: 20 pages, 10 figures.
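tICA, the linear preprocessing step mentioned above, finds the slowest-decorrelating linear combinations of the input features by solving the generalized eigenproblem C_τ v = λ C₀ v, where C₀ is the instantaneous covariance and C_τ the time-lagged covariance. A self-contained sketch on synthetic data (a slow AR(1) mode mixed with fast noise stands in for protein dynamics; the function name and toy trajectory are illustrative, not the paper's implementation):

```python
import numpy as np

def tica_components(X, lag=10, n_components=1):
    """Time-structure based ICA: solve C_lag v = lambda * C_0 v for the
    slowest linear modes of a trajectory X of shape (n_frames, n_features)."""
    X = X - X.mean(axis=0)
    A, B = X[:-lag], X[lag:]
    n = len(X) - lag
    C0 = (A.T @ A + B.T @ B) / (2 * n)   # instantaneous covariance
    Ct = (A.T @ B + B.T @ A) / (2 * n)   # symmetrized time-lagged covariance
    # Whiten with C0, then diagonalize the lagged covariance.
    w, U = np.linalg.eigh(C0)
    keep = w > 1e-10
    W = U[:, keep] / np.sqrt(w[keep])
    lam, V = np.linalg.eigh(W.T @ Ct @ W)
    order = np.argsort(lam)[::-1]        # slowest (largest eigenvalue) first
    return lam[order][:n_components], (W @ V[:, order])[:, :n_components]

# Toy trajectory: one slow AR(1) coordinate hidden in three noisy observables.
rng = np.random.default_rng(2)
slow = np.zeros(5000)
for t in range(1, 5000):
    slow[t] = 0.99 * slow[t - 1] + rng.standard_normal()
fast = rng.standard_normal((5000, 2))
X = np.column_stack([slow + fast[:, 0], slow - fast[:, 0], fast[:, 1]])
lams, vecs = tica_components(X, lag=10)
```

Encoding the trajectory through such a linear projection before the variational dynamics encoder, as the abstract describes, shrinks the input dimensionality while retaining the kinetically slow content.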