The LBFGS Quasi-Newtonian Method for Molecular Modeling Prion AGAAAAGA Amyloid Fibrils
Experimental techniques such as X-ray crystallography, NMR (nuclear magnetic
resonance) spectroscopy, and dual polarization interferometry are very powerful
tools for determining the three-dimensional (3D) structure of a protein
(including membrane proteins); theoretical mathematical and physical
computational approaches can also give a description of the protein 3D
structure at a submicroscopic level for some unstable, noncrystalline, and
insoluble proteins. X-ray crystallography yields the final X-ray structure of a
protein, which usually needs refinement using theoretical protocols in order to
produce a better structure. Theoretical methods are therefore also important
in determining protein structures. Optimization is needed throughout
computer-aided drug design, structure-based drug design, molecular dynamics,
and quantum and molecular mechanics. This paper introduces some optimization
algorithms used in these research fields and presents a new theoretical
computational method, an improved LBFGS quasi-Newton mathematical
optimization method, to produce 3D structures of prion AGAAAAGA amyloid
fibrils (which are unstable, noncrystalline, and insoluble) from the
potential-energy-minimization point of view. Because the NMR or X-ray
structure of the hydrophobic region AGAAAAGA of prion proteins has not yet
been determined, the model constructed in this paper can serve as a reference
for experimental studies of this region, and may be useful in furthering the
goals of medicinal chemistry in this field.
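To illustrate the kind of potential-energy minimization the abstract refers to, the sketch below applies SciPy's L-BFGS-B optimizer to a toy Lennard-Jones cluster. The potential, cluster size, and starting geometry are illustrative stand-ins, not the amyloid-fibril energy model used in the paper.

```python
# Hedged sketch: L-BFGS-B minimization of a toy Lennard-Jones potential
# for a 4-atom cluster (illustrative; not the paper's amyloid energy model).
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat_coords):
    """Total Lennard-Jones energy (reduced units) of a small cluster."""
    x = flat_coords.reshape(-1, 3)
    energy = 0.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            r2 = np.sum((x[i] - x[j]) ** 2)
            inv6 = 1.0 / r2 ** 3
            energy += 4.0 * (inv6 ** 2 - inv6)
    return energy

# Start from a slightly stretched tetrahedron-like arrangement.
x0 = np.array([[0.0, 0.0, 0.0],
               [1.1, 0.0, 0.0],
               [0.0, 1.1, 0.0],
               [0.0, 0.0, 1.1]]).ravel()

res = minimize(lj_energy, x0, method="L-BFGS-B")
# The optimizer should reach a near-tetrahedral minimum with energy well
# below that of the starting geometry.
```

The limited-memory BFGS family is attractive here because it approximates curvature from a short history of gradients, so the memory cost stays linear in the number of coordinates.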
On Reduced Input-Output Dynamic Mode Decomposition
The identification of reduced-order models from high-dimensional data is a
challenging task, and even more so if the identified system should not only be
suitable for a certain data set, but generally approximate the input-output
behavior of the data source. In this work, we consider the input-output dynamic
mode decomposition method for system identification. We compare excitation
approaches for the data-driven identification process and describe an
optimization-based stabilization strategy for the identified systems.
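The core step behind input-output dynamic mode decomposition can be sketched as a least-squares fit of a discrete-time model x_{k+1} ≈ A x_k + B u_k from snapshot data. The system, dimensions, and excitation below are synthetic; the paper's method additionally involves order reduction and stabilization.

```python
# Hedged sketch: least-squares identification of (A, B) from state and
# input snapshots, the core step of input-output DMD.
import numpy as np

rng = np.random.default_rng(1)
n, m, T = 5, 2, 200                      # state dim, input dim, snapshots
A_true = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
B_true = rng.standard_normal((n, m))

U = rng.standard_normal((m, T))          # persistently exciting input
X = np.zeros((n, T + 1))
for k in range(T):
    X[:, k + 1] = A_true @ X[:, k] + B_true @ U[:, k]

# Solve [A B] = X_next @ pinv([X; U]) in the least-squares sense.
Omega = np.vstack([X[:, :T], U])
AB = X[:, 1:] @ np.linalg.pinv(Omega)
A_id, B_id = AB[:, :n], AB[:, n:]
```

With noise-free linear data and a sufficiently exciting input, this least-squares fit recovers the true system matrices; reduced-order variants project the snapshots onto leading singular vectors first.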
Optimal low-rank approximations of Bayesian linear inverse problems
In the Bayesian approach to inverse problems, data are often informative,
relative to the prior, only on a low-dimensional subspace of the parameter
space. Significant computational savings can be achieved by using this subspace
to characterize and approximate the posterior distribution of the parameters.
We first investigate approximation of the posterior covariance matrix as a
low-rank update of the prior covariance matrix. We prove optimality of a
particular update, based on the leading eigendirections of the matrix pencil
defined by the Hessian of the negative log-likelihood and the prior precision,
for a broad class of loss functions. This class includes the Förstner
metric for symmetric positive definite matrices, as well as the
Kullback-Leibler divergence and the Hellinger distance between the associated
distributions. We also propose two fast approximations of the posterior mean
and prove their optimality with respect to a weighted Bayes risk under
squared-error loss. These approximations are deployed in an offline-online
manner, where a more costly but data-independent offline calculation is
followed by fast online evaluations. As a result, these approximations are
particularly useful when repeated posterior mean evaluations are required for
multiple data sets. We demonstrate our theoretical results with several
numerical examples, including high-dimensional X-ray tomography and an inverse
heat conduction problem. In both of these examples, the intrinsic
low-dimensional structure of the inference problem can be exploited while
producing results that are essentially indistinguishable from solutions
computed in the full space.
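For the linear-Gaussian case, the low-rank covariance update described above can be sketched in a few lines: the update directions come from the generalized eigenproblem for the pencil defined by the log-likelihood Hessian and the prior precision. The dimensions, operators, and dense eigensolve below are small stand-ins for the large-scale setting the paper targets.

```python
# Hedged sketch: low-rank update of the posterior covariance for a small
# linear-Gaussian inverse problem (dense stand-in for the real setting).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n, d = 50, 8                              # parameter dim, data dim
G = rng.standard_normal((d, n))           # forward operator
Gamma_pr = np.eye(n)                      # prior covariance
Gamma_obs = 0.1 * np.eye(d)               # observation-noise covariance

# Hessian of the negative log-likelihood and the prior precision.
H = G.T @ np.linalg.inv(Gamma_obs) @ G
P_pr = np.linalg.inv(Gamma_pr)

# Leading eigendirections of the pencil (H, P_pr); eigh returns them in
# ascending order, so sort descending.
delta, W = eigh(H, P_pr)
order = np.argsort(delta)[::-1]
delta, W = delta[order], W[:, order]

r = d                                     # rank of the data-informed update
update = (W[:, :r] * (delta[:r] / (1.0 + delta[:r]))) @ W[:, :r].T
Gamma_pos_lowrank = Gamma_pr - update

# Here r equals the rank of H, so the low-rank update reproduces the exact
# posterior covariance (H + P_pr)^{-1}.
Gamma_pos_exact = np.linalg.inv(H + P_pr)
```

When the data dimension (or the numerical rank of the Hessian) is much smaller than the parameter dimension, truncating to the leading directions captures essentially all of the data's effect on the prior.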
On Quasi-Newton Forward--Backward Splitting: Proximal Calculus and Convergence
We introduce a framework for quasi-Newton forward--backward splitting
algorithms (proximal quasi-Newton methods) with a metric induced by diagonal
± rank-r symmetric positive definite matrices. This special type of
metric allows for a highly efficient evaluation of the proximal mapping. The
key to this efficiency is a general proximal calculus in the new metric. By
using duality, formulas are derived that relate the proximal mapping in a
rank-r modified metric to the original metric. We also describe efficient
implementations of the proximity calculation for a large class of functions;
the implementations exploit the piece-wise linear nature of the dual problem.
Then, we apply these results to acceleration of composite convex minimization
problems, which leads to elegant quasi-Newton methods for which we prove
convergence. The algorithm is tested on several numerical examples and compared
to a comprehensive list of alternatives in the literature. Our quasi-Newton
splitting algorithm with the prescribed metric compares favorably against
state-of-the-art. The algorithm has extensive applications including signal
processing, sparse recovery, machine learning, and classification, to name a few.
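To convey the flavor of forward-backward splitting with a non-identity metric, here is a minimal lasso example using a fixed diagonal metric chosen via a Gershgorin bound; the problem data and metric choice are illustrative, and the paper's diagonal ± rank-r metrics with their dual-based proximal calculus generalize this considerably.

```python
# Hedged sketch: forward-backward splitting for the lasso
#   min_x 0.5*||A x - b||^2 + lam*||x||_1
# with a fixed diagonal metric (the prox of the l1 norm in a diagonal
# metric is still coordinate-wise soft-thresholding).
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.5

Q = A.T @ A
# Gershgorin-type step sizes: diag(row abs sums of Q) majorizes Q, which
# keeps the iteration a monotone descent method.
d = 1.0 / np.sum(np.abs(Q), axis=1)

x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - b)
    z = x - d * grad                                       # forward step
    x = np.sign(z) * np.maximum(np.abs(z) - lam * d, 0.0)  # backward (prox) step

objective = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
```

The appeal of the diagonal-plus-low-rank metrics studied in the paper is that they capture more curvature than this purely diagonal choice while keeping the proximal step cheap through the dual formulas.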