319 research outputs found

    On complexity and convergence of high-order coordinate descent algorithms

    Coordinate descent methods with high-order regularized models for box-constrained minimization are introduced. Asymptotic convergence to high-order stationarity and worst-case evaluation complexity bounds for first-order stationarity are established. The computer work that is necessary for obtaining first-order $\varepsilon$-stationarity with respect to the variables of each coordinate-descent block is $O(\varepsilon^{-(p+1)/p})$, whereas the computer work for getting first-order $\varepsilon$-stationarity with respect to all the variables simultaneously is $O(\varepsilon^{-(p+1)})$. Numerical examples involving multidimensional scaling problems are presented. The numerical performance of the methods is enhanced by means of coordinate-descent strategies for choosing initial points.
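    To make the block structure concrete, the following is a minimal sketch of coordinate descent over blocks of a box-constrained problem. It uses only a first-order model with quadratic regularization per block (the p = 1 case) and projection onto the box; the objective, block partition, and regularization parameter sigma are illustrative assumptions, and the sketch is not the authors' high-order algorithm.

        import numpy as np

        def block_cd_box(grad, x0, lo, hi, blocks, sigma, max_iter=200, tol=1e-8):
            """Block coordinate descent sketch for min f(x) s.t. lo <= x <= hi.

            Each block update minimizes the regularized first-order model
            g^T d + (sigma/2)||d||^2 and projects back onto the box, i.e. a
            projected-gradient step per block (p = 1); the paper's p-th order
            regularized models are not reproduced here.
            """
            x = x0.copy()
            for _ in range(max_iter):
                x_prev = x.copy()
                for idx in blocks:
                    g = grad(x)[idx]
                    x[idx] = np.clip(x[idx] - g / sigma, lo[idx], hi[idx])
                if np.linalg.norm(x - x_prev) < tol:
                    break
            return x

        # Toy usage: a separable quadratic on [0, 1]^4 split into two blocks.
        Q = np.diag([1.0, 2.0, 3.0, 4.0])
        grad = lambda x: Q @ x - np.ones(4)
        x = block_cd_box(grad, np.ones(4), np.zeros(4), np.ones(4),
                         blocks=[np.array([0, 1]), np.array([2, 3])],
                         sigma=5.0)  # sigma chosen above the largest curvature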

    Line search multilevel optimization as computational methods for dense optical flow

    We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, based on truncated Newton methods (TN), which have been an effective approach for large-scale unconstrained optimization, we develop efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), with a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse-grid correction as an optimization search direction and eventually scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of the computational work and the quality of the optical flow estimation.
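    The key FMG/OPT idea, using the prolongated coarse-grid correction as a search direction whose scale is fixed by a line search, can be sketched as follows. The grid-transfer operators, the coarse surrogate objective, and the coarse solver passed in are illustrative assumptions (with a periodic boundary for the interpolation), not the TN-based implementation evaluated in the paper.

        import numpy as np

        def prolongate(e_coarse):
            """Linear interpolation of a coarse-grid vector to a fine grid (2x points, periodic)."""
            e_fine = np.zeros(2 * e_coarse.size)
            e_fine[0::2] = e_coarse
            e_fine[1::2] = 0.5 * (e_coarse + np.roll(e_coarse, -1))
            return e_fine

        def restrict(x_fine):
            """Injection of a fine-grid vector onto the coarse grid."""
            return x_fine[0::2]

        def coarse_correction_step(f_fine, grad_fine, f_coarse, coarse_solver, x):
            """One bidirectional step: solve (approximately) on the coarse grid,
            prolongate the correction, and scale it by a backtracking line search."""
            x_c = restrict(x)
            y_c = coarse_solver(f_coarse, x_c)      # approximate coarse minimizer
            d = prolongate(y_c - x_c)               # candidate search direction
            g = grad_fine(x)
            if g @ d >= 0:                          # not a descent direction: skip
                return x
            t, fx = 1.0, f_fine(x)
            while f_fine(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-8:
                t *= 0.5                            # Armijo backtracking
            return x + t * d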

    A global optimization algorithm using trust-region methods and clever multistart

    Global optimization is an important scientific domain, not only because of the algorithmic challenges associated with the area but also because of its practical applications in fields ranging from Biology to Aerospace Engineering. In this work we develop an algorithm based on trust-region methods for solving global optimization problems with derivatives, using a clever multistart strategy, and test its efficiency and effectiveness by comparison with other global optimization algorithms. Based on an idea originally applied to derivative-free optimization, the algorithm seeks to reduce the computational effort that the search for a global optimum requires by comparing points that lie relatively close to each other, using the trust-region radius as the comparison radius, and retaining only the most promising ones, which continue to be explored. The proposed method has the added benefit of reporting not only the global optimum but also a list of local optima that may be of interest, depending on the context of the problem in question.
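    A rough sketch of the multistart filtering idea described above: sample candidate starting points, keep only the best point within each comparison radius, run a trust-region solver from the survivors, and return the global candidate together with the list of local optima. The fixed comparison radius and the use of SciPy's trust-constr solver are simplifying assumptions, not the algorithm developed in the work.

        import numpy as np
        from scipy.optimize import minimize

        def clever_multistart(f, bounds, n_starts=50, radius=1.0, n_keep=5, seed=0):
            """Sample starting points, discard any point lying within `radius`
            of a better-valued point (a stand-in for the trust-region radius),
            and run a trust-region method from the surviving points."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            pts = rng.uniform(lo, hi, size=(n_starts, lo.size))
            vals = np.array([f(p) for p in pts])
            kept = []
            for i in np.argsort(vals):                      # best candidates first
                if all(np.linalg.norm(pts[i] - pts[j]) > radius for j in kept):
                    kept.append(i)
                if len(kept) == n_keep:
                    break
            results = [minimize(f, pts[i], method="trust-constr",
                                bounds=list(zip(lo, hi))) for i in kept]
            best = min(results, key=lambda r: r.fun)
            return best, results                            # global candidate + local optima

        # Toy usage on the 2-D Himmelblau function (four known local minima).
        himmelblau = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
        best, local_optima = clever_multistart(himmelblau, [(-5, 5), (-5, 5)])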

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program both as an overview and in full detail, together with information on the social program, the venue, special meetings, and more.

    (Global) Optimization: Historical notes and recent developments

    Recent developments in (Global) Optimization are surveyed in this paper. We collected and commented on quite a large number of recent references which, in our opinion, well represent the vivacity, depth, and breadth of scope of current computational approaches and theoretical results about nonconvex optimization problems. Before presenting the recent developments, which are subdivided into two parts devoted to heuristic and exact approaches, respectively, we briefly sketch the origin of the discipline and observe what survived from the initial attempts, what was not considered at all, and which approaches have recently been rediscovered, mostly in connection with machine learning.

    Distributed Algorithms in Large-scaled Empirical Risk Minimization: Non-convexity, Adaptive-sampling, and Matrix-free Second-order Methods

    The rising amount of data has significantly changed classical approaches to statistical modeling. Special methods are designed for inferring meaningful relationships and hidden patterns from these large datasets, and they build the foundation of the field called Machine Learning (ML). Such ML techniques have already been applied widely in various areas and have achieved compelling success. At the same time, the huge amount of data also requires a deep revision of current techniques, including advanced data storage, new efficient large-scale algorithms, and their distributed/parallelized implementation. A broad class of ML methods can be interpreted as Empirical Risk Minimization (ERM) problems: by choosing suitable loss functions and, where necessary, regularization terms, one can approach specific ML goals by solving ERMs as separable finite-sum optimization problems. In some circumstances a nonconvex component is introduced into the ERM, which usually makes the problem hard to optimize. In particular, in recent years neural networks, a popular branch of ML, have drawn considerable attention from the community. Neural networks are powerful and highly flexible models inspired by the structured functionality of the brain, and they can typically be treated as large-scale, highly nonconvex ERMs. As nonconvex ERMs become more complex and larger in scale, optimization using stochastic gradient descent (SGD) type methods proceeds slowly in terms of convergence rate and cannot be distributed efficiently, which motivates researchers to explore more advanced local optimization methods such as approximate-Newton/second-order methods. In this dissertation, Chapter 1 studies first-order stochastic optimization for regularized ERMs: based on the development of the stochastic dual coordinate ascent (SDCA) method, a dual-free SDCA with a non-uniform mini-batch sampling strategy is investigated [30, 29]. We also introduce several efficient algorithms for training ERMs, including neural networks, using second-order optimization methods in a distributed environment. In Chapter 2, we propose a practical distributed implementation of Newton-CG methods, which makes training neural networks by second-order methods feasible in a distributed environment [28]. In Chapter 3, we take further steps towards using second-order methods to train feed-forward neural networks, exploiting negative curvature directions and momentum acceleration; this chapter also reports numerical experiments comparing second-order and first-order methods for training neural networks. Chapter 4 proposes a distributed accumulative sample-size second-order method for solving large-scale convex ERMs and nonconvex neural networks [35]. In Chapter 5, a Python library named UCLibrary for solving unconstrained optimization problems is briefly introduced. Chapter 6 concludes the dissertation.
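    As a flavor of the matrix-free second-order machinery discussed above, here is a minimal single-machine sketch of a Newton-CG step that solves the Newton system with conjugate gradients using only Hessian-vector products (obtained here by finite differences of the gradient) and truncates on negative curvature. It is illustrative only; the distributed variants in the dissertation, which aggregate gradients and Hessian-vector products across workers, are not shown.

        import numpy as np

        def hvp(grad, x, v, eps=1e-6):
            """Matrix-free Hessian-vector product via central differences of the gradient."""
            return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

        def newton_cg_step(f, grad, x, cg_iters=20, tol=1e-10):
            """One Hessian-free Newton-CG step: approximately solve H d = -g by CG,
            truncate if negative curvature is detected, then backtrack along d."""
            g = grad(x)
            d = np.zeros_like(x)
            r = -g.copy()                       # residual of H d = -g at d = 0
            p = r.copy()
            rs = r @ r
            for _ in range(cg_iters):
                Hp = hvp(grad, x, p)
                if p @ Hp <= 0:                 # negative curvature: stop CG early
                    break
                alpha = rs / (p @ Hp)
                d += alpha * p
                r -= alpha * Hp
                rs_new = r @ r
                if rs_new < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            if d @ g >= 0:                      # not a descent direction: fall back
                d = -g
            t, fx = 1.0, f(x)
            while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-10:
                t *= 0.5                        # Armijo backtracking
            return x + t * d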