126 research outputs found

    Approximate norm descent methods for constrained nonlinear systems


    Acceleration and new analysis of convex optimization algorithms

    Recent years have witnessed a resurgence of the Frank-Wolfe (FW) algorithm, also known as conditional gradient methods, in sparse optimization and large-scale machine learning problems with smooth convex objectives. Compared to projected or proximal gradient methods, such a projection-free method saves the computational cost of orthogonal projections onto the constraint set. Meanwhile, FW also gives solutions with sparse structure. Despite these promising properties, FW does not enjoy the optimal convergence rates achieved by projection-based accelerated methods. We conduct a detailed survey of recent attempts to accelerate FW in different settings and highlight where the difficulty lies when aiming for global linear rates in theory.
    On the other hand, the FW algorithm is affine-covariant and enjoys accelerated convergence rates when the constraint set is strongly convex. However, these results rely on norm-dependent assumptions, usually incurring non-affine-invariant bounds, in contradiction with FW's affine-covariant property. In this work, we introduce new structural assumptions on the problem (such as directional smoothness) and derive an affine-invariant, norm-independent analysis of Frank-Wolfe. Based on our analysis, we propose an affine-invariant backtracking line-search. Interestingly, we show that typical backtracking line-search techniques using smoothness of the objective function surprisingly converge to an affine-invariant stepsize, despite using affine-dependent norms in the computation of stepsizes. This indicates that we do not necessarily need to know the structure of the sets in advance to enjoy the affine-invariant accelerated rate. Additionally, we provide a promising direction to accelerate FW over strongly convex sets using duality gap techniques and a new notion of smoothness. In another line of research, we study algorithms beyond first-order methods. Quasi-Newton techniques approximate the Newton step by estimating the Hessian using the so-called secant equations. Some of these methods compute the Hessian using several secant equations but produce non-symmetric updates. Other quasi-Newton schemes, such as BFGS, enforce symmetry but cannot satisfy more than one secant equation. We propose a new type of symmetric quasi-Newton update using several secant equations in a least-squares sense. Our approach generalizes and unifies the design of quasi-Newton updates and satisfies provable robustness guarantees.
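    To make the projection-free idea concrete, the sketch below shows a vanilla Frank-Wolfe iteration with a backtracking line-search over the probability simplex. The objective, the simplex constraint set and its linear minimization oracle, and the sufficient-decrease constant are assumptions made for this illustration, not details taken from the thesis.

```python
import numpy as np

def lmo_simplex(grad):
    """Linear minimization oracle over the probability simplex:
    argmin_{s in simplex} <grad, s> is attained at a vertex (a basis vector)."""
    s = np.zeros_like(grad)
    s[np.argmin(grad)] = 1.0
    return s

def frank_wolfe(f, grad_f, x0, max_iter=200, tol=1e-8):
    """Vanilla Frank-Wolfe with a backtracking (Armijo-style) line-search.
    No projection is ever computed: iterates stay feasible by convex combination."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad_f(x)
        s = lmo_simplex(g)
        d = s - x                      # Frank-Wolfe direction
        gap = -g @ d                   # duality gap; certifies near-optimality
        if gap < tol:
            break
        gamma, fx = 1.0, f(x)
        while f(x + gamma * d) > fx - 0.5 * gamma * gap:
            gamma *= 0.5               # backtrack until sufficient decrease
        x = x + gamma * d
    return x

# Hypothetical example: least squares restricted to the simplex.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
x_star = frank_wolfe(f, grad_f, np.ones(10) / 10)
```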
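    The multi-secant idea can likewise be made concrete: given several step and gradient-difference pairs, fit one symmetric matrix to all secant equations in a least-squares sense. The generic sketch below (a symmetric least-squares fit obtained by vectorizing the upper triangle) is only an illustration of the concept; it is not claimed to be the specific update or the robustness-guaranteed scheme proposed in the thesis.

```python
import numpy as np

def symmetric_multisecant_fit(S, Y):
    """Least-squares symmetric Hessian estimate from several secant pairs.

    Columns of S are steps s_i, columns of Y are gradient differences y_i,
    and the secant equations read H s_i = y_i. This finds the symmetric H
    minimizing ||H S - Y||_F by solving a linear least-squares problem in
    the free (upper-triangular) entries of H."""
    n, k = S.shape
    idx = [(i, j) for i in range(n) for j in range(i, n)]   # free entries of H
    A = np.zeros((n * k, len(idx)))
    for col, (i, j) in enumerate(idx):
        # H[i, j] (and H[j, i] by symmetry) multiplies row j (resp. i) of S.
        A[i * k:(i + 1) * k, col] += S[j, :]
        if i != j:
            A[j * k:(j + 1) * k, col] += S[i, :]
    h, *_ = np.linalg.lstsq(A, Y.reshape(-1), rcond=None)
    H = np.zeros((n, n))
    for col, (i, j) in enumerate(idx):
        H[i, j] = H[j, i] = h[col]
    return H

# Hypothetical example with three secant pairs: H is exactly symmetric and
# fits H S ≈ Y as well as symmetry allows.
rng = np.random.default_rng(0)
S, Y = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
H = symmetric_multisecant_fit(S, Y)
print(np.linalg.norm(H - H.T), np.linalg.norm(H @ S - Y))
```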

    Two Affine Scaling Methods for Solving Optimization Problems Regularized with an L1-norm

    In finance, the implied volatility surface is plotted against strike price and time to maturity. The shape of this volatility surface can be identified by fitting the model to what is actually observed in the market. The metric used to measure the discrepancy between the model and the market is usually defined as the mean squared error of the model prices relative to the market prices. A regularization term can be added to this error metric to make the solution possess some desired properties. The discrepancy that we want to minimize is usually a highly nonlinear function of a set of model parameters, together with the regularization term. Typically, a monotonically decreasing algorithm is adopted to solve this minimization problem. Steepest descent and Newton-type algorithms are two such iterative methods, but they are local, i.e., they use derivative information around the current iterate to find the next iterate. In order to ensure convergence, line search and trust region methods are two widely used globalization techniques. Motivated by the simplicity of the Barzilai-Borwein method and the convergence properties brought by globalization techniques, we propose a new Scaled Gradient (SG) method for minimizing a differentiable function plus an L1-norm. This non-monotone iterative method only requires gradient information, and a safeguarded Barzilai-Borwein steplength is used in each iteration. An adaptive line search with an Armijo-type condition check is performed in each iteration to ensure convergence. Coleman, Li and Wang proposed another trust region approach for solving the same problem. We give a theoretical proof of the convergence of their algorithm. The objective of this thesis is to numerically investigate the performance of the SG method and establish global and local convergence properties of Coleman, Li and Wang's trust region method proposed in [26]. Some future research directions are also given at the end of this thesis.
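    As a rough illustration of the ingredients named above (gradient-only steps, a safeguarded Barzilai-Borwein steplength, and a non-monotone Armijo-type acceptance check) applied to a differentiable function plus an L1-norm, a generic proximal-gradient sketch follows. The soft-thresholding step, the safeguard bounds, and the non-monotone memory length are assumptions made for this example; it is not the SG method of the thesis.

```python
import numpy as np

def soft_threshold(z, t):
    """Componentwise prox of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bb_prox_gradient(f, grad_f, x0, lam, max_iter=500, tol=1e-8,
                     alpha_min=1e-10, alpha_max=1e10, memory=5):
    """Gradient-only method for f(x) + lam*||x||_1 with a safeguarded
    Barzilai-Borwein steplength and a non-monotone Armijo-type check."""
    x, g, alpha = x0.copy(), grad_f(x0), 1.0
    hist = [f(x) + lam * np.sum(np.abs(x))]          # recent objective values
    for _ in range(max_iter):
        while True:
            x_new = soft_threshold(x - alpha * g, alpha * lam)
            obj_new = f(x_new) + lam * np.sum(np.abs(x_new))
            # Non-monotone acceptance: compare against the max of recent objectives.
            if obj_new <= max(hist) - 1e-4 / (2 * alpha) * np.sum((x_new - x) ** 2):
                break
            alpha *= 0.5                             # adaptive backtracking
        s, g_new = x_new - x, grad_f(x_new)
        if np.linalg.norm(s) < tol:
            x = x_new
            break
        # Safeguarded BB steplength for the next iteration.
        sty = s @ (g_new - g)
        alpha = np.clip((s @ s) / sty if sty > 0 else alpha_max, alpha_min, alpha_max)
        x, g = x_new, g_new
        hist = (hist + [obj_new])[-memory:]
    return x

# Hypothetical example: a LASSO-type problem.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(40, 100)), rng.normal(size=40)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
x_hat = bb_prox_gradient(f, grad_f, np.zeros(100), lam=0.1)
```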

    Stable Local Volatility Calibration Using Kernel Splines

    This thesis proposes an optimization formulation to ensure accuracy and stability in local volatility function calibration. The unknown local volatility function is represented by kernel splines. The proposed optimization formulation minimizes the calibration error plus an L1 norm of the vector of coefficients for the kernel splines. The L1 norm regularization forces some coefficients to be zero at the termination of the optimization. The complexity of the local volatility function model is determined by the number of nonzero coefficients. Thus, by using a regularization parameter, the proposed formulation balances calibration accuracy against model complexity. In the context of support vector regression for function estimation based on finite observations, this corresponds to balancing the generalization error with the number of support vectors. In this thesis we also propose a trust region method to determine the coefficient vector in the proposed optimization formulation. In this algorithm, the main computation of each iteration is reduced to solving a standard trust region subproblem. To deal with the non-differentiable L1 norm in the formulation, a line search technique which allows crossing non-differentiable hyperplanes is introduced to find the minimum objective value along a direction within a trust region. With the trust region algorithm, we numerically illustrate the ability of the proposed approach to reconstruct the local volatility in a synthetic local volatility market. Based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is smooth and resembles in shape the observed implied volatility surface. Stability is illustrated by considering calibration using market option data from nearby dates.
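    To make the formulation concrete, the sketch below spells out a kernel-spline representation of the local volatility function and the calibration-error-plus-L1 objective. The Gaussian kernel, its bandwidth, and the `price_model` placeholder (standing in for a pricing engine such as a PDE solver) are assumptions for illustration rather than the thesis's exact setup.

```python
import numpy as np

def gaussian_kernel(z1, z2, bandwidth=10.0):
    """Gaussian kernel on (strike, maturity) points; the kernel and bandwidth
    are placeholder choices (in practice the inputs would also be rescaled)."""
    return np.exp(-np.sum((z1 - z2) ** 2) / (2.0 * bandwidth ** 2))

def local_vol(c, knots, z):
    """Kernel-spline representation sigma(z) = sum_j c_j * k(z, z_j).
    Coefficients driven to zero by the L1 penalty drop their knots entirely,
    so the number of nonzero c_j measures model complexity."""
    return sum(cj * gaussian_kernel(z, zj) for cj, zj in zip(c, knots))

def calibration_objective(c, knots, quotes, market_prices, price_model, lam):
    """Mean squared calibration error plus an L1 penalty on the coefficients.
    `price_model(sigma, quote)` maps a local-vol function and an option quote
    to a model price; it stands in for the actual pricing engine."""
    sigma = lambda z: local_vol(c, knots, z)
    model_prices = np.array([price_model(sigma, q) for q in quotes])
    return np.mean((model_prices - market_prices) ** 2) + lam * np.sum(np.abs(c))

# Hypothetical example: evaluate the kernel-spline surface at one point.
knots = [np.array([90.0, 0.5]), np.array([100.0, 1.0]), np.array([110.0, 1.5])]
c = np.array([0.20, 0.15, 0.10])
print(local_vol(c, knots, np.array([105.0, 1.0])))
```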

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in survey style and in full detail, as well as information on the social program, the venue, special meetings, and more.