An efficiency study on obtaining the minimum weight of a thermal protection system
Three minimization techniques are evaluated to determine the most efficient method for minimizing the weight of a thermal protection system and for reducing computer usage time. The methods used (numerical optimization and nonlinear least squares) for solving the minimum-weight problem involving more than one material and more than one constraint are discussed. In addition, the one-material, one-constraint problem is discussed.
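The minimum-weight problem with more than one material and a thermal constraint can be sketched as a small constrained optimization. The densities, conductivities, and required thermal resistance below are hypothetical placeholders, not values from the study, and SciPy's SLSQP solver stands in for the numerical-optimization technique discussed:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-material insulation stack (illustrative numbers only):
rho = np.array([300.0, 100.0])  # areal density per unit thickness (kg/m^3)
k = np.array([0.5, 0.05])       # thermal conductivities (W/m K)
R_req = 2.0                     # required thermal resistance (m^2 K/W)

def weight(t):
    # Objective: total areal weight of the stack for thicknesses t (m)
    return rho @ t

# Constraint: combined thermal resistance sum(t_i / k_i) must meet R_req
cons = [{"type": "ineq", "fun": lambda t: np.sum(t / k) - R_req}]

res = minimize(weight, x0=[0.1, 0.1], bounds=[(0.0, None)] * 2,
               constraints=cons, method="SLSQP")
# All resistance goes to the material with the lowest rho*k product,
# so the optimum is t = (0, 0.1) with weight 10.0 for these numbers.
```

For this toy data the solver drives the heavier material to zero thickness, which is the expected behavior when one material dominates on weight per unit of thermal resistance.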
The geometry of nonlinear least squares with applications to sloppy models and optimization
Parameter estimation by nonlinear least squares minimization is a common
problem with an elegant geometric interpretation: the possible parameter values
of a model induce a manifold in the space of data predictions. The minimization
problem is then to find the point on the manifold closest to the data. We show
that the model manifolds of a large class of models, known as sloppy models,
have many universal features; they are characterized by a geometric series of
widths, extrinsic curvatures, and parameter-effects curvatures. A number of
common difficulties in optimizing least squares problems are due to this shared
structure. First, algorithms tend to run into the boundaries of the model
manifold, causing parameters to diverge or become unphysical. We introduce the
model graph as an extension of the model manifold to remedy this problem. We
argue that appropriate priors can remove the boundaries and improve convergence
rates. We show that typical fits will have many evaporated parameters. Second,
bare model parameters are usually ill-suited to describing model behavior; cost
contours in parameter space tend to form hierarchies of plateaus and canyons.
Geometrically, we understand this inconvenient parametrization as an extremely
skewed coordinate basis and show that it induces a large parameter-effects
curvature on the manifold. Using coordinates based on geodesic motion, these
narrow canyons are transformed in many cases into a single quadratic, isotropic
basin. We interpret the modified Gauss-Newton and Levenberg-Marquardt fitting
algorithms as an Euler approximation to geodesic motion in these natural
coordinates on the model manifold and the model graph respectively. By adding a
geodesic acceleration adjustment to these algorithms, we alleviate the
difficulties from parameter-effects curvature, improving both efficiency and
success rates at finding good fits. (40 pages, 29 figures)
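The "geometric series of widths" characterizing sloppy models shows up directly in the singular values of the model's Jacobian. A minimal numerical sketch, using a hypothetical sum-of-exponentials model (a standard sloppy example, not code from the paper):

```python
import numpy as np

t = np.linspace(0.0, 5.0, 50)
theta = np.array([0.5, 1.0, 2.0])  # hypothetical rate constants

# Model: m(theta)_j = sum_i exp(-theta_i * t_j).
# Jacobian column i: d m_j / d theta_i = -t_j * exp(-theta_i * t_j).
J = np.stack([-t * np.exp(-th * t) for th in theta], axis=1)

# Singular values of J give the widths of the model manifold along its
# principal directions; for sloppy models they decay roughly geometrically.
sv = np.linalg.svd(J, compute_uv=False)
spread = sv[0] / sv[-1]  # orders-of-magnitude spread even for 3 parameters
```

Even with only three well-separated rates, the widths already span more than an order of magnitude; realistic sloppy models exhibit spreads of many orders of magnitude.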
Functional principal components analysis via penalized rank one approximation
Two existing approaches to functional principal components analysis (FPCA)
are due to Rice and Silverman (1991) and Silverman (1996), both based on
maximizing variance but introducing penalization in different ways. In this
article we propose an alternative approach to FPCA using penalized rank one
approximation to the data matrix. Our contributions are four-fold: (1) by
considering invariance under scale transformation of the measurements, the new
formulation sheds light on how regularization should be performed for FPCA and
suggests an efficient power algorithm for computation; (2) it naturally
incorporates spline smoothing of discretized functional data; (3) the
connection with smoothing splines also facilitates construction of
cross-validation or generalized cross-validation criteria for smoothing
parameter selection that allows efficient computation; (4) different smoothing
parameters are permitted for different FPCs. The methodology is illustrated
with a real data example and a simulation. (Published in the Electronic
Journal of Statistics, http://dx.doi.org/10.1214/08-EJS218, by the Institute
of Mathematical Statistics, http://www.imstat.org)
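The power algorithm suggested by the penalized rank-one formulation can be sketched as an alternating update in which the right singular vector is smoothed at each step. The second-difference roughness penalty, the simulated curves, and the smoothing parameter below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 40                                   # grid points per curve
tgrid = np.linspace(0.0, 1.0, m)

# Hypothetical discretized functional data: a sine-shaped component with
# random amplitudes plus measurement noise (25 curves x 40 grid points).
shape = np.sin(2 * np.pi * tgrid)
X = shape[None, :] * rng.normal(1.0, 0.3, (25, 1)) \
    + 0.1 * rng.normal(size=(25, m))

# Roughness penalty via second differences; S is the smoothing step
# (I + alpha * D^T D)^{-1} applied to the right vector each iteration.
D = np.diff(np.eye(m), n=2, axis=0)
alpha = 1e-3
S = np.linalg.inv(np.eye(m) + alpha * D.T @ D)

v = rng.normal(size=m)
v /= np.linalg.norm(v)
for _ in range(100):
    u = X @ v
    u /= np.linalg.norm(u)
    v = S @ (X.T @ u)                    # penalized power update
    v /= np.linalg.norm(v)

# Alignment of the estimated first FPC with the true underlying shape
align = abs(v @ shape) / np.linalg.norm(shape)
```

With a small penalty the iteration behaves like the ordinary power method and recovers the dominant smooth component; larger values of `alpha` trade fidelity for smoothness.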
Global optimization of polynomials using gradient tentacles and sums of squares
In this work, we combine the theory of generalized critical values with the
theory of iterated rings of bounded elements (real holomorphy rings).
We consider the problem of computing the global infimum of a real polynomial
in several variables. Every global minimizer lies on the gradient variety. If
the polynomial attains a minimum, it is therefore equivalent to look for the
greatest lower bound on its gradient variety. Nie, Demmel and Sturmfels proved
recently a theorem about the existence of sums of squares certificates for such
lower bounds. Based on these certificates, they find arbitrarily tight
relaxations of the original problem that can be formulated as semidefinite
programs and thus be solved efficiently.
We deal here with the more general case when the polynomial is bounded from
below but does not necessarily attain a minimum. In this case, the method of
Nie, Demmel and Sturmfels might yield completely wrong results. In order to
overcome this problem, we replace the gradient variety by larger semialgebraic
sets which we call gradient tentacles. It now gets substantially harder to
prove the existence of the necessary sums of squares certificates. (22 pages)
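The role of the gradient variety is easy to see on a toy polynomial that does attain its minimum. A hedged numerical sketch (the polynomial x^4 + y^4 - 4xy and the grid check are illustrative choices, and no sums-of-squares machinery is used here):

```python
import numpy as np

def p(x, y):
    # A coercive polynomial that attains its global minimum
    return x**4 + y**4 - 4 * x * y

# Gradient: (4x^3 - 4y, 4y^3 - 4x). Setting both to zero gives y = x^3 and
# x^9 = x, so the real points of the gradient variety are
# (0,0), (1,1), and (-1,-1).
crit = [(0.0, 0.0), (1.0, 1.0), (-1.0, -1.0)]
gmin = min(p(x, y) for x, y in crit)     # greatest lower bound on the variety

# Sanity check: p never dips below gmin on a coarse grid, consistent with
# the global infimum lying on the gradient variety.
xs = np.linspace(-2.0, 2.0, 81)
Xg, Yg = np.meshgrid(xs, xs)
grid_min = float(np.min(p(Xg, Yg)))
```

Here the infimum -2 is attained at (±1, ±1); the paper's gradient tentacles handle the harder case where the infimum is approached only at infinity and no such critical point exists.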
A Generic Path Algorithm for Regularized Statistical Estimation
Regularization is widely used in statistics and machine learning to prevent
overfitting and to gear the solution towards prior information. In general, a
regularized estimation problem minimizes the sum of a loss function and a
penalty term. The penalty term is usually weighted by a tuning parameter and
encourages certain constraints on the parameters to be estimated. Particular
choices of constraints lead to the popular lasso, fused-lasso, and other
generalized penalized regression methods. Although there has been a lot
of research in this area, developing efficient optimization methods for many
nonseparable penalties remains a challenge. In this article we propose an exact
path solver based on ordinary differential equations (EPSODE) that works for
any convex loss function and can deal with generalized penalties as well
as more complicated regularization such as inequality constraints encountered
in shape-restricted regressions and nonparametric density estimation. In the
path following process, the solution path hits, exits, and slides along the
various constraints and vividly illustrates the tradeoffs between goodness of
fit and model parsimony. In practice, the EPSODE can be coupled with AIC, BIC,
or cross-validation to select an optimal tuning parameter. Our
applications to regularized generalized linear models,
shape-restricted regressions, Gaussian graphical models, and nonparametric
density estimation showcase the potential of the EPSODE algorithm. (28 pages, 5 figures)
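The way a solution path "hits, exits, and slides along" constraints as the tuning parameter varies can be illustrated in the simplest regularized setting: the lasso with an orthonormal design, where the path has a closed soft-thresholding form. This is a sketch of the phenomenon a path solver traces, not of the EPSODE ODE itself, and the OLS estimates below are made up:

```python
import numpy as np

def soft(z, lam):
    # Soft-thresholding: the closed-form lasso solution when the design
    # matrix is orthonormal, so each coordinate shrinks independently.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

beta_ols = np.array([3.0, -1.5, 0.5])    # hypothetical unpenalized estimates
lams = np.linspace(0.0, 3.5, 8)          # grid of tuning parameter values
path = np.array([soft(beta_ols, lam) for lam in lams])
# Each coefficient shrinks linearly in lam and "hits" the zero constraint
# at lam = |beta_ols_j|, after which it stays pinned at zero.
```

In practice one would scan such a path and pick the tuning parameter minimizing AIC, BIC, or a cross-validation criterion, as the abstract describes; EPSODE generalizes this picture to nonseparable penalties and shape constraints where no closed form exists.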