Implementing a smooth exact penalty function for equality-constrained nonlinear optimization
We develop a general equality-constrained nonlinear optimization algorithm
based on a smooth penalty function proposed by Fletcher (1970). Although it was
historically considered to be computationally prohibitive in practice, we
demonstrate that the computational kernels required are no more expensive than
those of other widely accepted methods for nonlinear optimization. The main kernel
required to evaluate the penalty function and its derivatives is solving a
structured linear system. We show how to solve this system efficiently by
storing a single factorization each iteration when the matrices are available
explicitly. We further show how to adapt the penalty function to the class of
factorization-free algorithms by solving the linear system iteratively. The
penalty function therefore has promise when the linear system can be solved
efficiently, e.g., for PDE-constrained optimization problems where efficient
preconditioners exist. We discuss extensions including handling simple
constraints explicitly, regularizing the penalty function, and inexact
evaluation of the penalty function and its gradients. We demonstrate the merits
of the approach and its various features on nonlinear programs from a
standard test set and on some PDE-constrained optimization problems.
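The structured linear system mentioned in the abstract can be illustrated with a small sketch. The following is an illustrative reading of Fletcher-type penalties, not the paper's implementation: the multiplier estimate `lambda(x)` is obtained from an augmented (saddle-point) system built from the objective gradient and the constraint Jacobian, and the penalty combines the objective, the multiplier term, and a quadratic term in the constraint violation. The penalty parameter `sigma` and the toy problem are assumptions for illustration.

```python
import numpy as np

def fletcher_penalty(x, f, grad_f, c, jac_c, sigma=10.0):
    """Sketch of a Fletcher-type smooth exact penalty (illustrative only).

    phi(x) = f(x) - lambda(x)^T c(x) + (sigma/2) ||c(x)||^2,
    where lambda(x) is the least-squares multiplier estimate from the
    structured augmented system
        [ I   A^T ] [ r ]   [ g ]
        [ A    0  ] [ y ] = [ 0 ],  A = jac_c(x), g = grad_f(x),
    whose second block y equals lambda(x).
    """
    g = grad_f(x)
    A = jac_c(x)                      # m x n constraint Jacobian
    n, m = len(x), A.shape[0]
    # One factorization of the augmented matrix per evaluation point.
    K = np.block([[np.eye(n), A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([g, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    lam = sol[n:]                     # multiplier estimate lambda(x)
    cx = c(x)
    return f(x) - lam @ cx + 0.5 * sigma * (cx @ cx)

# Toy problem (assumed for illustration):
# minimize x0 + x1  subject to  x0^2 + x1^2 - 2 = 0.
f = lambda x: x[0] + x[1]
grad_f = lambda x: np.array([1.0, 1.0])
c = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])
jac_c = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]]])

x_star = np.array([-1.0, -1.0])       # constrained minimizer
phi_star = fletcher_penalty(x_star, f, grad_f, c, jac_c)
print(phi_star)  # equals f(x_star) = -2 since c(x_star) = 0
```

At a feasible point the penalty reduces to the objective value, which is the exactness property the abstract exploits; away from feasibility the multiplier and quadratic terms penalize the violation smoothly.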
A Generic Path Algorithm for Regularized Statistical Estimation
Regularization is widely used in statistics and machine learning to prevent
overfitting and to gear solutions toward prior information. In general, a
regularized estimation problem minimizes the sum of a loss function and a
penalty term. The penalty term is usually weighted by a tuning parameter and
encourages certain constraints on the parameters to be estimated. Particular
choices of constraints lead to the popular lasso, fused-lasso, and other
generalized penalized regression methods. Although there has been a lot
of research in this area, developing efficient optimization methods for many
nonseparable penalties remains a challenge. In this article we propose an exact
path solver based on ordinary differential equations (EPSODE) that works for
any convex loss function and can deal with generalized penalties as well
as more complicated regularization such as inequality constraints encountered
in shape-restricted regressions and nonparametric density estimation. In the
path following process, the solution path hits, exits, and slides along the
various constraints and vividly illustrates the tradeoffs between goodness of
fit and model parsimony. In practice, the EPSODE can be coupled with AIC, BIC,
or cross-validation to select an optimal tuning parameter. Our
applications to regularized generalized linear models,
shape-restricted regressions, Gaussian graphical models, and nonparametric
density estimation showcase the potential of the EPSODE algorithm.
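The core idea of following a solution path as an ODE in the tuning parameter can be sketched on a deliberately simple case. This is not EPSODE itself, which handles nonsmooth generalized penalties with hitting and exiting events; here a smooth ridge penalty is assumed, where the path beta(rho) = (X'X + rho I)^{-1} X'y obeys the ODE d(beta)/d(rho) = -(X'X + rho I)^{-1} beta(rho), and a fixed-step RK4 integrator tracks it:

```python
import numpy as np

# Simulated data (assumed for illustration).
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))
y = rng.standard_normal(30)
G = X.T @ X
b = X.T @ y

def path_rhs(rho, beta):
    # Derivative of the ridge solution path with respect to rho:
    # d(beta)/d(rho) = -(G + rho*I)^{-1} beta(rho).
    return -np.linalg.solve(G + rho * np.eye(4), beta)

def rk4_path(rho0, rho1, beta0, steps=2000):
    """Follow the solution path from rho0 to rho1 by integrating the ODE."""
    h = (rho1 - rho0) / steps
    rho, beta = rho0, beta0.copy()
    for _ in range(steps):
        k1 = path_rhs(rho, beta)
        k2 = path_rhs(rho + h / 2, beta + h / 2 * k1)
        k3 = path_rhs(rho + h / 2, beta + h / 2 * k2)
        k4 = path_rhs(rho + h, beta + h * k3)
        beta = beta + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        rho += h
    return beta

rho0, rho1 = 1e-3, 10.0
beta0 = np.linalg.solve(G + rho0 * np.eye(4), b)   # path start
beta_path = rk4_path(rho0, rho1, beta0)            # follow the ODE
beta_direct = np.linalg.solve(G + rho1 * np.eye(4), b)
print(np.max(np.abs(beta_path - beta_direct)))     # agreement with direct solve
```

For nonsmooth penalties such as the lasso, the same path-following viewpoint additionally requires tracking the active set as the path hits, exits, and slides along constraints, which is precisely the machinery EPSODE supplies.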
Implementation of novel methods of global and nonsmooth optimization: GANSO programming library
We discuss the implementation of a number of modern methods of global and nonsmooth continuous optimization, based on the ideas of Rubinov, in the GANSO programming library. GANSO implements the derivative-free bundle method, the extended cutting angle method, dynamical-system-based optimization, and their various combinations and heuristics. We outline the main ideas behind each method, and report on interfacing with the Matlab and Maple packages.