Active-set strategy in Powell's method for optimization without derivatives
In this article we present an algorithm for solving bound-constrained optimization problems without derivatives, based on Powell's method for derivative-free optimization. First we consider the unconstrained optimization problem. At each iteration, a quadratic interpolation model of the objective function is constructed around the current iterate, and this model is minimized to obtain a new trial point. The whole process is embedded within a trust-region framework. Our algorithm uses the infinity norm instead of the Euclidean norm, and the resulting box-constrained quadratic subproblem is solved with an active-set strategy that explores the faces of the box. Therefore, the method extends easily to a bound-constrained optimization algorithm. We compare our implementation with NEWUOA and BOBYQA, Powell's algorithms for unconstrained and bound-constrained derivative-free optimization, respectively. Numerical experiments show that, in general, our algorithm requires fewer function evaluations than Powell's algorithms. Mathematical subject classification: Primary: 06B10; Secondary: 06D05.
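The inner step the abstract describes (minimizing a quadratic model over the intersection of an infinity-norm trust region with the bound constraints, which is a box-constrained quadratic subproblem) can be sketched as follows. This is an illustrative stand-in only: a projected-gradient loop replaces the paper's active-set strategy, and all names and values are assumptions, not the authors' code.

```python
import numpy as np

def solve_box_qp(g, H, lo, hi, iters=200):
    """Approximately minimize m(s) = g@s + 0.5*s@H@s subject to lo <= s <= hi.

    Projected-gradient stand-in for an active-set method; the box [lo, hi]
    is the intersection of the infinity-norm trust region with the bounds.
    """
    s = np.zeros(len(g))
    step = 1.0 / (np.linalg.norm(H, 2) + 1e-12)  # safe step size for convex H
    for _ in range(iters):
        grad = g + H @ s                          # gradient of the model at s
        s = np.clip(s - step * grad, lo, hi)      # gradient step, projected onto the box
    return s

# Example: model around iterate x with trust radius Delta and bounds [l, u]
g = np.array([1.0, -2.0])                 # model gradient at the current iterate
H = np.array([[2.0, 0.0], [0.0, 4.0]])    # model Hessian
x = np.array([0.5, 0.5])
l, u = np.array([0.0, 0.0]), np.array([1.0, 1.0])
Delta = 0.4
lo = np.maximum(-Delta, l - x)   # step bounds: infinity-norm ball intersected with [l, u] - x
hi = np.minimum(Delta, u - x)
s = solve_box_qp(g, H, lo, hi)   # trial step; new trial point is x + s
```

In this separable example the unconstrained model minimizer is (-0.5, 0.5), so both components are clipped to the trust radius and the computed step is (-0.4, 0.4).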
GENO -- GENeric Optimization for Classical Machine Learning
Although optimization is the longstanding algorithmic backbone of machine learning, new models still require the time-consuming implementation of new solvers. As a result, there are thousands of implementations of optimization algorithms for machine learning problems. A natural question is whether it is always necessary to implement a new solver, or whether a single algorithm suffices for most models. Common belief suggests that such a one-algorithm-fits-all approach cannot work, because a generic algorithm cannot exploit model-specific structure and thus cannot be efficient and robust on a wide variety of problems. Here, we challenge this common belief. We have designed and implemented the optimization framework GENO (GENeric Optimization), which combines a modeling language with a generic solver. GENO generates a solver from the declarative specification of an optimization problem class. The framework is flexible enough to encompass most classical machine learning problems. We show on a wide variety of classical, as well as some recently proposed, problems that the automatically generated solvers are (1) as efficient as well-engineered specialized solvers, (2) considerably more efficient than recent state-of-the-art solvers, and (3) orders of magnitude more efficient than classical modeling-language-plus-solver approaches.
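The declarative-specification idea can be illustrated with a toy example. The dictionary "spec" below is a hypothetical format invented for this sketch (GENO's actual modeling language is not reproduced here), and the generic solver is a plain proximal-gradient (ISTA) loop for lasso-type problems, not GENO's generated code.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def generic_solver(spec, iters=500):
    """Tiny stand-in for a generated solver: proximal gradient (ISTA) for
    min_x 0.5*||A x - b||^2 + lam*||x||_1.
    The dict-based 'spec' is a hypothetical declarative format, not GENO's."""
    A, b, lam = spec["A"], spec["b"], spec["lam"]
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)             # gradient of 0.5*||Ax - b||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Declarative problem instance: a small noiseless lasso regression
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = generic_solver({"A": A, "b": b, "lam": 0.1})
```

The point of the sketch is the separation of concerns: the problem is stated declaratively, and one generic first-order solver handles any instance of that problem class.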
Global Optimization by Particle Swarm Method: A Fortran Program
Programs that work very well in optimizing convex functions very often perform poorly when the problem has multiple local minima or maxima; they are often trapped in these local optima. Several methods have been developed to escape from such traps, and the Particle Swarm method of global optimization is one of them. A swarm of birds or insects, or a school of fish, searches for food, protection, etc. in a very typical manner. If one member of the swarm sees a desirable path, the rest of the swarm follows quickly. Every member of the swarm searches for the best in its locality and learns from its own experience. Additionally, each member learns from the others, typically from the best performer among them. Even human beings show a tendency to learn from their own experience, their immediate neighbours, and the ideal performers. The Particle Swarm method of optimization mimics this behaviour. Every individual of the swarm is considered a particle in a multidimensional space that has a position and a velocity. These particles fly through hyperspace and remember the best position they have seen. Members of a swarm communicate good positions to each other and adjust their own position and velocity based on these good positions. The Particle Swarm method of optimization testifies to the success of bounded rationality and decentralized decision-making in reaching the global optimum. It has been used successfully to optimize extremely difficult multimodal functions. Here we give a FORTRAN program to find the global optimum by the Repulsive Particle Swarm method.
The program has been tested on over 90 benchmark functions of varied dimensions, complexities and difficulty levels.
Keywords: Bounded rationality; Decentralized decision making; Jacobian; Elliptic functions; Gielis super-formula; supershapes; Repulsive Particle Swarm method of Global optimization; nonlinear programming; multiple sub-optima; global and local optima; curve fitting; empirical estimation of parameters
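The position/velocity/personal-best/global-best mechanics the abstract describes can be sketched compactly. This is plain global-best PSO as an illustration, not the paper's FORTRAN program: the Repulsive variant adds a repulsion term to the velocity update, which is omitted here, and all parameter values are conventional defaults assumed for the sketch.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Plain global-best Particle Swarm Optimization sketch."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros((n_particles, dim))                 # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()           # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # pull toward each particle's own best and toward the swarm's best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val                  # update personal bests
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

# Multimodal test function: 2-D Rastrigin, global minimum 0 at the origin
def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

best_x, best_f = pso(rastrigin, dim=2)
```

On a multimodal function such as Rastrigin, the swarm typically reaches a value far below what any single gradient-free local search started at random would achieve, which is the behaviour the abstract attributes to decentralized learning within the swarm.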
Nonmonotone globalization of the finite-difference Newton-GMRES method for nonlinear equations.
In this paper, we study nonmonotone globalization strategies, in connection with the finite-difference inexact Newton-GMRES method for nonlinear equations. We first define a globalization algorithm that combines nonmonotone watchdog rules and nonmonotone derivative-free linesearches related to a merit function, and prove its global convergence under the assumption that the Jacobian is nonsingular and that the iterations of the GMRES subspace method can be completed at each step. Then we introduce a hybrid stabilization scheme employing occasional line searches along positive bases, and establish global convergence towards a solution of the system, under the less demanding condition that the Jacobian is nonsingular at stationary points of the merit function. Through a set of numerical examples, we show that the proposed techniques may constitute useful options to be added in solvers for nonlinear systems of equations. © 2010 Taylor & Francis
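The core idea of a nonmonotone globalization on the merit function 0.5*||F(x)||^2 can be sketched as follows. This is an illustration under simplifying assumptions, not the paper's algorithm: a dense finite-difference Jacobian and direct solve stand in for the finite-difference GMRES inner iteration, and the acceptance rule is a simple GLL-style nonmonotone linesearch without the watchdog mechanism.

```python
import numpy as np

def fd_jacobian(F, x, eps=1e-7):
    """Forward-difference approximation of the Jacobian of F at x."""
    n = len(x)
    Fx = F(x)
    J = np.empty((len(Fx), n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (F(x + e) - Fx) / eps
    return J

def nonmonotone_newton(F, x0, M=5, tol=1e-8, max_iter=50):
    """Newton's method with a nonmonotone linesearch on 0.5*||F(x)||^2:
    a step is accepted if it sufficiently decreases the *maximum* of the
    last M merit values, not necessarily the most recent one."""
    x = np.asarray(x0, float)
    merits = [0.5 * np.dot(F(x), F(x))]
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(fd_jacobian(F, x), -Fx)   # (inexact-)Newton direction
        ref = max(merits[-M:])                        # nonmonotone reference value
        t = 1.0
        # backtrack until the merit falls sufficiently below the reference
        while 0.5 * np.dot(F(x + t * d), F(x + t * d)) > ref - 1e-4 * t * np.dot(Fx, Fx):
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * d
        merits.append(0.5 * np.dot(F(x), F(x)))
    return x

# Example: solve the 2x2 nonlinear system
#   x0^2 + x1^2 = 2,   exp(x0 - 1) + x1^3 = 2   (one root is (1, 1))
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 2.0,
                     np.exp(x[0] - 1.0) + x[1]**3 - 2.0])

sol = nonmonotone_newton(F, [2.0, 0.5])
```

Comparing against the maximum of a window of past merit values, rather than the last value alone, lets the iteration accept occasional merit increases and avoids the stagnation that strict monotone linesearches can cause on this kind of merit function.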