The LBFGS Quasi-Newtonian Method for Molecular Modeling Prion AGAAAAGA Amyloid Fibrils
Experimental X-ray crystallography, NMR (nuclear magnetic resonance) spectroscopy, dual polarization interferometry, etc. are indeed very powerful tools for determining the three-dimensional (3D) structure of a protein (including membrane proteins); theoretical mathematical and physical computational approaches can also provide a description of the 3D protein structure at a submicroscopic level for some unstable, noncrystalline and insoluble proteins. X-ray crystallography yields the final X-ray structure of a protein, which usually needs refinement with theoretical protocols in order to produce a better structure. This means that theoretical methods are also important in the determination of protein structures. Optimization is always needed in computer-aided drug design, structure-based drug design, molecular dynamics, and quantum and molecular mechanics. This paper introduces some optimization algorithms used in these research fields and presents a new theoretical computational method - an improved LBFGS quasi-Newtonian mathematical optimization method - to produce 3D structures of prion AGAAAAGA amyloid fibrils (which are unstable, noncrystalline and insoluble) from the potential-energy minimization point of view. Because the NMR or X-ray structure of the hydrophobic region AGAAAAGA of prion proteins has not yet been determined, the model constructed in this paper can be used as a reference for experimental studies of this region, and may be useful in furthering the goals of medicinal chemistry in this field.
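As a minimal illustration of this kind of potential-energy minimization, the sketch below runs SciPy's limited-memory BFGS solver on a toy atomic cluster; the 12-6 Lennard-Jones potential, the 8-atom random starting geometry and all constants are illustrative assumptions, not the paper's force field or the AGAAAAGA fibril model.

import numpy as np
from scipy.optimize import minimize

def lj_energy(x, n_atoms=8, eps=1.0, sigma=1.0):
    # Total 12-6 Lennard-Jones energy of a flat coordinate vector x (length 3*n_atoms).
    pos = x.reshape(n_atoms, 3)
    e = 0.0
    for i in range(n_atoms):
        for j in range(i + 1, n_atoms):
            r = np.linalg.norm(pos[i] - pos[j])
            e += 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return e

rng = np.random.default_rng(0)
x0 = rng.uniform(0.0, 2.0, size=8 * 3)            # random starting geometry
res = minimize(lj_energy, x0, method="L-BFGS-B")  # limited-memory quasi-Newton solver
print("minimized energy:", res.fun)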
A Simple Sufficient Descent Method for Unconstrained Optimization
We develop a sufficient descent method for solving large-scale unconstrained optimization problems. At each iteration, the search direction is a linear combination of the gradients at the current and previous steps. An attractive property of this method is that the generated directions are always descent directions. Under some appropriate conditions, we show that the proposed method converges globally. Numerical experiments on some unconstrained minimization problems from the CUTEr library are reported, which illustrate that the proposed method is promising.
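A hedged sketch of such an iteration follows. The particular coefficient used to combine the current and previous gradients (a Gram-Schmidt-style choice that guarantees d_k^T g_k <= -||g_k||^2) and the Armijo backtracking parameters are illustrative assumptions, not necessarily the formulas of the paper.

import numpy as np

def sufficient_descent(f, grad, x0, tol=1e-6, max_iter=1000):
    x, g_prev = np.asarray(x0, dtype=float), None
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        if g_prev is None:
            d = -g                                  # first step: steepest descent
        else:
            t = np.dot(g, g_prev) / np.dot(g_prev, g_prev)
            d = -g - t * g_prev                     # mix g_k and g_{k-1}; gives d^T g <= -||g||^2
        alpha, slope = 1.0, np.dot(g, d)
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5                            # Armijo backtracking
        x, g_prev = x + alpha * d, g
    return x

# Example: minimize the two-dimensional Rosenbrock function
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
print(sufficient_descent(f, grad, [-1.2, 1.0]))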
Effective Modified Hybrid Conjugate Gradient Method for Large-Scale Symmetric Nonlinear Equations
In this paper, we propose a hybrid conjugate gradient method based on a convex combination of the FR and PRP conjugate gradient methods for solving large-scale symmetric nonlinear equations via Andrei's approach with a nonmonotone line search. A formula for obtaining the convex parameter, derived from the Newton direction and our proposed direction, is also given. Under appropriate conditions, global convergence is established. Reported numerical results show that the proposed method is very promising.
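A minimal sketch of the hybrid parameter is shown below; the FR and PRP formulas are the standard ones, while clipping the convex parameter theta to [0, 1] is an illustrative simplification of the paper's Newton-direction-based formula.

import numpy as np

def hybrid_beta(g_new, g_old, theta):
    # Standard FR and PRP conjugate gradient parameters
    beta_fr = np.dot(g_new, g_new) / np.dot(g_old, g_old)
    beta_prp = np.dot(g_new, g_new - g_old) / np.dot(g_old, g_old)
    theta = min(max(theta, 0.0), 1.0)        # keep the combination convex
    return theta * beta_prp + (1.0 - theta) * beta_fr

# The search direction is then updated as d_new = -g_new + beta * d_old.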
A New Hybrid Approach for Solving Large-scale Monotone Nonlinear Equations
In this paper, a new hybrid conjugate gradient method for solving monotone nonlinear equations is introduced. The scheme is a combination of the Fletcher-Reeves (FR) and Polak-Ribière-Polyak (PRP) conjugate gradient methods with the Solodov and Svaiter projection strategy. Under suitable assumptions, the global convergence of the scheme with a monotone line search is established. Lastly, a numerical experiment is used to demonstrate the suitability of the proposed scheme for large-scale problems.
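The sketch below shows only the Solodov-Svaiter hyperplane projection step applied after the line search; the monotone mapping F and the trial point z_k are illustrative placeholders, not the paper's test problems.

import numpy as np

def projection_step(F, x, z):
    # Project x onto the hyperplane {y : F(z)^T (y - z) = 0} that separates x from the solution set
    Fz = F(z)
    return x - (np.dot(Fz, x - z) / np.dot(Fz, Fz)) * Fz

# Placeholder monotone mapping for illustration: F(x) = x + sin(x), applied componentwise
F = lambda x: x + np.sin(x)
x_k = np.array([1.0, -2.0])
z_k = x_k - 0.5 * F(x_k)     # a trial point produced by some line search along -F(x_k)
print(projection_step(F, x_k, z_k))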
Extension of Modified Polak-Ribière-Polyak Conjugate Gradient Method to Linear Equality Constraints Minimization Problems
Combining the Rosen gradient projection method with the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method, we propose a two-term PRP conjugate gradient projection method for solving linear equality constrained optimization problems. The proposed method possesses some attractive properties: (1) the search direction generated by the method is a feasible descent direction, so the generated iterates are feasible points; (2) the sequence of function values is decreasing. Under some mild conditions, we show that the method is globally convergent with an Armijo-type line search. Preliminary numerical results show that the proposed method is promising.
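A small sketch of the feasible-direction idea follows: the Rosen projector maps any direction into the null space of the constraint matrix A, so A d = 0 and feasibility of the iterates is preserved. The two-term PRP form used here (d = -g + beta_PRP * d_old, then projected) is an illustrative stand-in for the paper's exact update.

import numpy as np

def null_space_projector(A):
    # P = I - A^T (A A^T)^{-1} A projects any vector onto {d : A d = 0}
    return np.eye(A.shape[1]) - A.T @ np.linalg.inv(A @ A.T) @ A

def projected_prp_direction(P, g_new, g_old, d_old):
    beta_prp = np.dot(g_new, g_new - g_old) / np.dot(g_old, g_old)
    return P @ (-g_new + beta_prp * d_old)   # satisfies A d = 0, so feasible iterates stay feasible

A = np.array([[1.0, 1.0, 1.0]])              # single equality constraint x1 + x2 + x3 = b
P = null_space_projector(A)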
Don't be so Monotone: Relaxing Stochastic Line Search in Over-Parameterized Models
Recent works have shown that line search methods can speed up Stochastic
Gradient Descent (SGD) and Adam in modern over-parameterized settings. However,
existing line searches may take steps that are smaller than necessary since
they require a monotone decrease of the (mini-)batch objective function. We
explore nonmonotone line search methods to relax this condition and possibly
accept larger step sizes. Despite the lack of a monotonic decrease, we prove
the same fast rates of convergence as in the monotone case. Our experiments
show that nonmonotone methods improve the speed of convergence and
generalization properties of SGD/Adam even beyond the previous monotone line
searches. We propose a POlyak NOnmonotone Stochastic (PoNoS) method, obtained
by combining a nonmonotone line search with a Polyak initial step size.
Furthermore, we develop a new resetting technique that in the majority of the
iterations reduces the amount of backtracks to zero while still maintaining a
large initial step size. To the best of our knowledge, we present the first
runtime comparison of line-search-based methods, showing that their epoch-wise
advantage is reflected in the overall computational time.
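A hedged sketch of one such step is given below: the Polyak initial step size assumes the interpolation setting (f* close to 0), and the nonmonotone Armijo test compares against the maximum batch loss over a short window. The window length, the constant c and the plain SGD direction are illustrative choices, not the paper's exact PoNoS settings or its resetting technique.

import numpy as np

def nonmonotone_polyak_step(loss, grad, x, history, window=10, c=0.1, f_star=0.0):
    g = grad(x)
    fx = loss(x)
    eta = (fx - f_star) / (np.dot(g, g) + 1e-12)     # Polyak initial step size
    ref = max(history[-window:] + [fx])              # nonmonotone reference value
    # Nonmonotone Armijo: accept eta once the new loss drops below ref - c*eta*||g||^2
    while loss(x - eta * g) > ref - c * eta * np.dot(g, g) and eta > 1e-10:
        eta *= 0.5                                   # backtrack
    history.append(fx)
    return x - eta * g, eta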
Some Unconstrained Optimization Methods
Although it is a very old theme, unconstrained optimization is an area that remains topical for many scientists. Today, the results of unconstrained optimization are applied in different branches of science, as well as in practice generally. Here, we present line search techniques. Further, in this chapter we consider some unconstrained optimization methods; we present these methods and also some contemporary results in this area.
A Globally Convergent Matrix-Free Method for Constrained Equations and Its Linear Convergence Rate
A matrix-free method for constrained equations is proposed, which is a combination of the well-known PRP (Polak-Ribière-Polyak) conjugate gradient method and the famous hyperplane projection method. The new method is not only derivative-free but also completely matrix-free; consequently, it can be applied to solve large-scale constrained equations. We obtain global convergence of the new method without any differentiability requirement on the constrained equations. Compared with existing gradient methods for solving such problems, the new method possesses a linear convergence rate under standard conditions, and a relaxation factor γ is attached to the update step to accelerate convergence. Preliminary numerical results show that it is promising in practice.
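The relaxed update step can be sketched as below; scaling the hyperplane step by gamma in (0, 2) is the usual relaxation range, and the box projection stands in for the projection onto the feasible set C. Both are illustrative assumptions rather than the paper's exact setup.

import numpy as np

def relaxed_projection_update(F, x, z, lower, upper, gamma=1.5):
    # Hyperplane projection step scaled by the relaxation factor gamma,
    # followed by projection onto the feasible box C = [lower, upper].
    Fz = F(z)
    step = gamma * np.dot(Fz, x - z) / np.dot(Fz, Fz)
    return np.clip(x - step * Fz, lower, upper)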
Unconstrained Optimization Methods: Conjugate Gradient Methods and Trust-Region Methods
Here, we consider two important classes of unconstrained optimization methods: conjugate gradient methods and trust-region methods. These two classes of methods are very interesting; it seems that they never go out of date. First, we consider conjugate gradient methods and illustrate the practical behavior of some of them. Then, we study trust-region methods. For both classes of methods, we analyze some recent results.