A second-derivative trust-region SQP method with a "trust-region-free" predictor step
In (NAR 08/18 and 08/21, Oxford University Computing Laboratory, 2008) we introduced a second-derivative SQP method (S2QP) for solving nonlinear nonconvex optimization problems. We proved that the method is globally convergent and locally superlinearly convergent under standard assumptions. A critical component of the algorithm is the so-called predictor step, which is computed from a strictly convex quadratic program with a trust-region constraint. This step is essential for proving global convergence, but its propensity to identify the optimal active set is paramount for recovering fast local convergence. Thus the global and local efficiency of the method is intimately coupled with the quality of the predictor step.

In this paper we study the effects of removing the trust-region constraint from the computation of the predictor step; this is reasonable since the resulting problem is still strictly convex and thus well-defined. Although this is an interesting theoretical question, our motivation is based on practicality. Our preliminary numerical experience with S2QP indicates that the trust-region constraint occasionally degrades the quality of the predictor step and diminishes its ability to correctly identify the optimal active set. Moreover, removal of the trust-region constraint allows the predictor step to be reused over a sequence of failed iterations, thus reducing computation. We show that the modified algorithm remains globally convergent and preserves local superlinear convergence provided a nonmonotone strategy is incorporated.
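To make the predictor-step subproblem concrete, here is a minimal sketch in Python using cvxpy (not the authors' S2QP code). It assumes a positive definite Hessian model B, objective gradient g, and linearized inequality constraints A d + b >= 0 at the current iterate; passing delta adds an infinity-norm trust region of the kind discussed above, while omitting it gives the trust-region-free variant.

    import cvxpy as cp
    import numpy as np

    def predictor_step(g, B, A, b, delta=None):
        # Strictly convex QP model: min_d g.d + 0.5 d'Bd  s.t.  Ad + b >= 0,
        # optionally subject to the trust region ||d||_inf <= delta.
        d = cp.Variable(g.size)
        constraints = [A @ d + b >= 0]
        if delta is not None:
            constraints.append(cp.norm(d, "inf") <= delta)
        cp.Problem(cp.Minimize(g @ d + 0.5 * cp.quad_form(d, B)),
                   constraints).solve()
        return d.value

    # Toy data: identity Hessian model, one linear inequality.
    g, B = np.array([1.0, -2.0]), np.eye(2)
    A, b = np.array([[1.0, 1.0]]), np.array([1.0])
    print(predictor_step(g, B, A, b))             # trust-region-free step
    print(predictor_step(g, B, A, b, delta=0.5))  # trust-region step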
A comparison of general-purpose optimization algorithms for finding optimal approximate experimental designs
Several common general-purpose optimization algorithms are compared for finding A- and D-optimal designs for different types of statistical models of varying complexity, including high-dimensional models with five and more factors. The algorithms of interest include exact methods, such as the interior point method, the Nelder–Mead method, the active set method, and sequential quadratic programming, and metaheuristic algorithms, such as particle swarm optimization, simulated annealing, and genetic algorithms. Several simulations are performed, which provide general recommendations on the utility and performance of each method, including hybridized versions of metaheuristic algorithms for finding optimal experimental designs. A key result is that general-purpose optimization algorithms, both exact methods and metaheuristic algorithms, perform well for finding optimal approximate experimental designs.
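As a hedged illustration of this kind of comparison (not the paper's code; the candidate points X and all settings below are stand-in assumptions), the sketch pits two of the methods named above, sequential quadratic programming (SciPy's SLSQP) and Nelder–Mead, against a D-optimality objective: maximize log det M(w), where M(w) = sum_i w_i x_i x_i^T, over design weights w on the probability simplex.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))   # 20 candidate design points, 3 factors
    n = X.shape[0]

    def neg_logdet(w):
        # Negative D-optimality criterion: -log det of the information matrix.
        sign, logdet = np.linalg.slogdet(X.T @ (w[:, None] * X))
        return np.inf if sign <= 0 else -logdet

    # SQP-type method with explicit simplex constraints.
    res_sqp = minimize(neg_logdet, np.full(n, 1.0 / n), method="SLSQP",
                       bounds=[(0, 1)] * n,
                       constraints={"type": "eq", "fun": lambda w: w.sum() - 1})

    # Derivative-free Nelder-Mead on a softmax reparameterization, which
    # keeps the weights feasible without explicit constraints.
    softmax = lambda z: np.exp(z - z.max()) / np.exp(z - z.max()).sum()
    res_nm = minimize(lambda z: neg_logdet(softmax(z)), np.zeros(n),
                      method="Nelder-Mead", options={"maxiter": 20000})

    print("SLSQP log det:      ", -res_sqp.fun)
    print("Nelder-Mead log det:", -neg_logdet(softmax(res_nm.x)))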
A sequential semidefinite programming method and an application in passive reduced-order modeling
We consider the solution of nonlinear programs with nonlinear semidefiniteness constraints. The need for an efficient exploitation of the cone of positive semidefinite matrices makes the solution of such nonlinear semidefinite programs more complicated than the solution of standard nonlinear programs. In particular, a suitable symmetrization procedure needs to be chosen for the linearization of the complementarity condition. The choice of the symmetrization procedure can be shifted in a very natural way to certain linear semidefinite subproblems, and can thus be reduced to a well-studied problem. The resulting sequential semidefinite programming (SSP) method is a generalization of the well-known SQP method for standard nonlinear programs. We present a sensitivity result for nonlinear semidefinite programs, and then, based on this result, we give a self-contained proof of local quadratic convergence of the SSP method. We also describe a class of nonlinear semidefinite programs that arise in passive reduced-order modeling, and we report results of some numerical experiments with the SSP method applied to problems in that class.
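To sketch what one SSP iteration solves, the following Python snippet (an assumption-laden illustration using cvxpy, not the paper's implementation) forms the linear semidefinite subproblem obtained by linearizing a matrix constraint G(x) >= 0 (positive semidefinite) at the current iterate: G_k and the partial derivatives DG[i] are given symmetric matrices, and B is a positive definite Hessian model.

    import cvxpy as cp
    import numpy as np

    def ssp_subproblem(g, B, G_k, DG):
        # min_d g.d + 0.5 d'Bd  s.t.  G_k + sum_i d_i * DG[i] is PSD.
        d = cp.Variable(g.size)
        G_lin = G_k + sum(d[i] * DG[i] for i in range(g.size))
        G_sym = (G_lin + G_lin.T) / 2          # enforce symmetry explicitly
        cp.Problem(cp.Minimize(g @ d + 0.5 * cp.quad_form(d, B)),
                   [G_sym >> 0]).solve()       # linear SDP constraint
        return d.value

    # Toy instance: two variables, one 2x2 semidefiniteness constraint.
    g, B, G_k = np.array([1.0, 1.0]), np.eye(2), np.eye(2)
    DG = [np.diag([1.0, -1.0]), np.array([[0.0, 1.0], [1.0, 0.0]])]
    print(ssp_subproblem(g, B, G_k, DG))

The choice of symmetrization mentioned in the abstract is hidden inside the linear SDP solver here, which is precisely the reduction to a well-studied problem that the method exploits.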
Clustering Multiple Sclerosis Medication Sequence Data with Mixture Markov Chain Analysis with covariates using Multiple Simplex Constrained Optimization Routine (MSiCOR)
Multiple sclerosis (MS) is an autoimmune disease of the central nervous system that causes neurodegeneration. While disease-modifying therapies (DMTs) reduce inflammatory disease activity and delay worsening disability in MS, treatment responses vary significantly across people with MS (pwMS). pwMS often receive serial monotherapies of DMTs. Here, we propose a novel method to cluster pwMS according to the sequence of DMT prescriptions and associated clinical features (covariates). This is achieved via a mixture Markov chain analysis with covariates, where the sequence of prescribed DMTs for each patient is modeled as a Markov chain. Given the computational challenges of maximizing the mixture likelihood on the constrained parameter space, we develop a pattern-search-based global optimization technique which can optimize any objective function on a collection of simplexes and is shown to outperform other related global optimization techniques. In simulation experiments, the proposed method is shown to outperform the Expectation-Maximization (EM) algorithm based method for clustering sequence data without covariates. Based on the analysis, we divided MS patients into three clusters: interferon-beta dominated, multi-DMTs, and natalizumab dominated. Further cluster-specific summaries of relevant covariates indicate patient differences among the clusters. This method may guide the DMT prescription sequence based on clinical features.
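As a minimal sketch of the objective being maximized (illustrative only: the variable names and toy data are assumptions, and the MSiCOR pattern search itself is not reproduced here), the Python below evaluates the log-likelihood of a K-component mixture of Markov chains, where the mixture weights and each row of every transition matrix lie on a probability simplex.

    import numpy as np

    def mixture_loglik(pi, init, P, seqs):
        # pi: (K,) mixture weights; init: (K, S) initial-state probabilities;
        # P: (K, S, S) transition matrices; seqs: list of state-index arrays.
        ll = 0.0
        for seq in seqs:
            comp = pi * init[:, seq[0]]          # per-component likelihood
            for a, b in zip(seq[:-1], seq[1:]):
                comp = comp * P[:, a, b]
            ll += np.log(comp.sum())
        return ll

    K, S = 2, 3
    rng = np.random.default_rng(1)
    pi = np.array([0.6, 0.4])
    init = rng.dirichlet(np.ones(S), size=K)      # rows on the simplex
    P = rng.dirichlet(np.ones(S), size=(K, S))    # rows on the simplex
    seqs = [np.array([0, 1, 1, 2]), np.array([2, 2, 0])]
    print(mixture_loglik(pi, init, P, seqs))

Every parameter block here is a point on a simplex, which is exactly the constraint structure the proposed pattern search is designed to exploit.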
A second derivative SQP method: theoretical issues
Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be computationally nonviable. This paper presents a second-derivative SQP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or need not be solved globally. Additionally, an explicit descent constraint is imposed on certain QP subproblems, which “guides” the iterates through areas in which nonconvexity is a concern. Global convergence of the resulting algorithm is established.
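For context on why convexity of the subproblem matters (a generic illustration, not this paper's scheme, which instead pairs a convex subproblem with one that need not be solved globally): a standard way to turn an indefinite exact Hessian H into a convex QP model is to floor its eigenvalues, as sketched below in Python.

    import numpy as np

    def convexify(H, eps=1e-6):
        # Floor the eigenvalues of a symmetric H at eps, so the resulting
        # QP model matrix is positive definite and the subproblem is convex.
        w, V = np.linalg.eigh(H)
        return (V * np.maximum(w, eps)) @ V.T

    H = np.diag([2.0, -1.0])        # indefinite exact Hessian
    B = convexify(H)
    print(np.linalg.eigvalsh(B))    # all eigenvalues >= eps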