A Unifying Framework for Sparsity-Constrained Optimization
In this paper, we consider the optimization problem of minimizing a continuously differentiable function subject to both convex constraints and sparsity constraints. By exploiting a mixed-integer reformulation from the literature, we define a necessary optimality condition based on a tailored neighborhood that makes it possible to account for potential changes of the support set. We then propose an algorithmic framework to tackle the considered class of problems and prove its convergence to points satisfying the newly introduced concept of stationarity. We further show that, by suitably choosing the neighborhood, other well-known optimality conditions from the literature can be recovered at the limit points of the sequence produced by the algorithm. Finally, we analyze the computational impact of the neighborhood size within our framework and in comparison with some state-of-the-art algorithms, namely the Penalty Decomposition method and the Greedy Sparse-Simplex method. The algorithms have been tested on a benchmark of sparse logistic regression problems.
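For orientation, the problem class described in this abstract is usually written in the form below; the notation is ours and serves only as a sketch of the setting, not as the paper's own formulation:

```latex
\min_{x \in \mathbb{R}^n} \; f(x)
\quad \text{s.t.} \quad x \in X, \qquad \|x\|_0 \le s,
```

where $f$ is continuously differentiable, $X \subseteq \mathbb{R}^n$ is convex, and $\|x\|_0$ denotes the number of nonzero components of $x$, so that $s$ bounds the size of the support set.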
Nonlinear optimization and support vector machines
Support Vector Machines (SVMs) constitute one of the most important classes of machine learning models and algorithms, and have been successfully applied in various fields. Nonlinear optimization plays a crucial role in SVM methodology, both in defining the machine learning models and in designing convergent and efficient algorithms for large-scale training problems. In this paper we present the convex programming problems underlying SVMs, focusing on supervised binary classification. We analyze the most important and widely used optimization methods for SVM training problems, and we discuss how the properties of these problems can be exploited in the design of useful algorithms.
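As a concrete instance of the convex programs referred to above, the standard soft-margin primal formulation for binary classification reads as follows (a textbook formulation, written in our notation; the paper surveys this and related problems):

```latex
\min_{w,\,b,\,\xi} \;\; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{m} \xi_i
\quad \text{s.t.} \quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1,\dots,m,
```

where $(x_i, y_i)$ with $y_i \in \{-1, +1\}$ are the training pairs and $C > 0$ trades off margin maximization against classification error on the training set.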
Nonmonotone globalization of inexact difference/Newton-iterative methods for nonlinear equations
Accepted in revised and shortened form by Optimization Methods and Software.
A convergent and fast path equilibration algorithm for the traffic assignment problem
In this work, we present an algorithm for the traffic assignment problem, formulated as a convex minimization problem whose variables are the path flows. The algorithm is a path equilibration algorithm, i.e., an algorithm where at each iteration only two variables are updated by means of an inexact line search along a feasible descent direction having only two nonzero elements. The main features of the algorithm are the adoption of an initial tentative stepsize based on second-order information and of a suitable strategy (using an adaptive column generation technique) to select the variables to be updated. The algorithm is an inexact Gauss–Seidel decomposition method whose convergence follows from results recently stated by the authors, which require that the column generation procedure be applied within a prefixed number of iterations. The results of extensive computational experiments performed on small, medium and large networks show the effectiveness of the method compared with state-of-the-art algorithms.
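A minimal sketch of one such two-variable update for a single origin-destination pair is given below. The toy affine cost model, the function names, and the unguarded Newton-type stepsize are all illustrative assumptions of ours; the paper's inexact line search and column generation strategy are not reproduced.

```python
import numpy as np

def path_cost(f):
    # Toy separable path costs c_k(f) = a_k + b_k * f_k (illustrative only).
    a = np.array([1.0, 2.0, 4.0])
    b = np.array([2.0, 1.0, 0.5])
    return a + b * f, b  # costs and their derivatives w.r.t. own flow

def equilibration_step(f):
    c, dc = path_cost(f)
    i = np.argmax(np.where(f > 0, c, -np.inf))  # costliest path carrying flow
    j = np.argmin(c)                            # cheapest available path
    if c[i] - c[j] < 1e-10:
        return f  # pair already equilibrated
    # Tentative stepsize from second-order information (Newton step for the
    # two-variable subproblem), truncated to keep the flow nonnegative.
    delta = min((c[i] - c[j]) / (dc[i] + dc[j]), f[i])
    f = f.copy()
    f[i] -= delta
    f[j] += delta
    return f

f = np.array([3.0, 0.0, 0.0])  # total demand 3, initially on path 0
for _ in range(50):
    f = equilibration_step(f)
print(f, path_cost(f)[0])  # used paths end up with equal cost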
Nonmonotone globalization techniques for the Barzilai-Borwein gradient method
In this paper we propose new globalization strategies for the Barzilai and Borwein gradient method, based on suitable relaxations of the monotonicity requirements. In particular, we define a class of algorithms that combine nonmonotone watchdog techniques with nonmonotone linesearch rules, and we prove the global convergence of these schemes. We then report an extensive computational study, which shows the effectiveness of the proposed approach in the solution of large-scale unconstrained optimization problems.
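To make the ingredients concrete, the sketch below combines the classical BB1 stepsize with a max-based nonmonotone Armijo acceptance rule (the rule of Grippo, Lampariello and Lucidi). It is a minimal illustration under our own assumptions; the paper's watchdog techniques and specific linesearch rules are not reproduced.

```python
import numpy as np

def bb_nonmonotone(f, grad, x, max_iter=200, M=10, gamma=1e-4, tol=1e-8):
    # Barzilai-Borwein gradient method with a nonmonotone Armijo-type
    # acceptance test against the max of the last M function values.
    g = grad(x)
    history = [f(x)]
    x_old = g_old = None
    alpha = 1.0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if x_old is not None:
            s, y = x - x_old, g - g_old
            sy = s @ y
            alpha = (s @ s) / sy if sy > 0 else 1.0  # BB1 stepsize
        fref = max(history[-M:])  # nonmonotone reference value
        t = alpha
        while f(x - t * g) > fref - gamma * t * (g @ g):
            t *= 0.5  # backtracking safeguard
        x_old, g_old = x, g
        x = x - t * g
        g = grad(x)
        history.append(f(x))
    return x

# usage on a toy ill-conditioned quadratic
A = np.diag([1.0, 10.0, 100.0])
x = bb_nonmonotone(lambda v: 0.5 * v @ A @ v, lambda v: A @ v, np.ones(3))
print(x)
```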
Use of the minimum-norm search direction in a nonmonotone version of the Gauss-Newton method
In this work, a new stabilization scheme for the Gauss-Newton method is defined, in which the minimum-norm solution of the linear least-squares subproblem is normally taken as the search direction, and the standard Gauss-Newton equation is suitably modified only at a subsequence of the iterates. Moreover, the stepsize is computed by means of a nonmonotone line search technique. The global convergence of the proposed algorithmic model is proved under standard assumptions, and a superlinear rate of convergence is ensured in the zero-residual case. A specific implementation is described, in which the use of the pure Gauss-Newton iteration is conditioned on the progress made in the minimization process, monitored by controlling the stepsize. The results of computational experiments performed on a set of standard test problems are reported.
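As a pointer to what the minimum-norm direction looks like in practice, note that numpy's lstsq returns exactly the minimum-norm solution of the linear least-squares subproblem, i.e. d = -J^+ r. The sketch below is ours: the residual function is an illustrative zero-residual example, and the nonmonotone line search and stabilization tests described in the abstract are omitted.

```python
import numpy as np

def residual(x):
    # Illustrative zero-residual system: x0^2 + x1 = 1, x0 = x1.
    r = np.array([x[0] ** 2 + x[1] - 1.0, x[0] - x[1]])
    J = np.array([[2 * x[0], 1.0], [1.0, -1.0]])
    return r, J

x = np.array([2.0, 2.0])
for _ in range(20):
    r, J = residual(x)
    # Minimum-norm solution of min_d ||J d + r||_2, i.e. d = -J^+ r.
    d, *_ = np.linalg.lstsq(J, -r, rcond=None)
    x = x + d  # unit step; a line search would control this in practice
print(x, residual(x)[0])
```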
- …