
    A Class of Mathematical Programs with Equilibrium Constraints: A Smooth Algorithm and Applications to Contact Problems

    We discuss a special mathematical programming problem with equilibrium constraints (MPEC) that arises in material and shape optimization problems involving the contact of a rod or a plate with a rigid obstacle. This MPEC can be reduced to a nonlinear programming problem with independent variables and some dependent variables implicitly defined by the solution of a mixed linear complementarity problem (MLCP). A projected-gradient algorithm including a complementarity method is proposed to solve this optimization problem. Several numerical examples are reported to illustrate the efficiency of this methodology in practice.
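
    The outer loop described above is a projected-gradient iteration over the design variables, with the state recovered from a complementarity solve at each trial point. Below is a minimal sketch of such a loop under simple bound constraints; the function names and the toy objective are illustrative stand-ins (in the paper the objective would be evaluated through the MLCP solution), not the authors' notation.

```python
import numpy as np

# Minimal projected-gradient sketch.  A toy smooth objective stands in for
# the contact-problem objective, whose evaluation would require an inner
# mixed linear complementarity solve.

def project(x, lo, hi):
    """Projection onto the box lo <= x <= hi."""
    return np.clip(x, lo, hi)

def projected_gradient(grad, x0, lo, hi, step=0.1, tol=1e-8, max_iter=1000):
    """Iterate x <- P(x - step * grad(x)) until the update stalls."""
    x = x0.copy()
    for _ in range(max_iter):
        x_new = project(x - step * grad(x), lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy usage: minimize ||x - c||^2 over the box [-1, 1]^3.
c = np.array([2.0, -1.0, 0.5])
x_star = projected_gradient(lambda x: 2.0 * (x - c), np.zeros(3), -1.0, 1.0)
print(x_star)  # approximately [1.0, -1.0, 0.5]
```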

    Computational methods for Cahn-Hilliard variational inequalities

    We consider the non-standard fourth order parabolic Cahn-Hilliard variational inequality with constant as well as non-constant diffusional mobility. We propose a primal-dual active set method as the solution technique for the discrete variational inequality given by a (semi-)implicit Euler discretization in time and linear finite elements in space. We show local convergence of the method by reinterpreting it as a semi-smooth Newton method. The discrete saddle point system arising in each iteration step is handled by either a Gauss-Seidel type method, a multifrontal direct solver, or a preconditioned conjugate gradient method applied to the Schur complement. Finally, we show the efficiency of the method and of the preconditioning with several numerical simulations.
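
    As a self-contained illustration of the primal-dual active set mechanism, the sketch below applies it to a generic discrete obstacle problem, min ½uᵀAu − fᵀu subject to u ≤ ψ, which is the building block that gets reinterpreted as a semismooth Newton method. The Cahn-Hilliard discretization leads to a larger saddle point system, so this only shows the active-set update itself, with assumed names A, f, psi, c.

```python
import numpy as np

# Primal-dual active set sketch for  min 0.5*u'Au - f'u  s.t.  u <= psi,
# with KKT system  Au + lam = f,  lam >= 0,  u <= psi,  lam'(u - psi) = 0.

def primal_dual_active_set(A, f, psi, c=1.0, max_iter=50):
    n = len(f)
    u, lam = np.zeros(n), np.zeros(n)
    for _ in range(max_iter):
        active = lam + c * (u - psi) > 0                  # predicted active set
        inactive = ~active
        u_new, lam_new = np.empty(n), np.zeros(n)
        u_new[active] = psi[active]                       # enforce the bound
        if inactive.any():                                # reduced linear solve
            rhs = f[inactive] - A[np.ix_(inactive, active)] @ psi[active]
            u_new[inactive] = np.linalg.solve(A[np.ix_(inactive, inactive)], rhs)
        lam_new[active] = f[active] - A[active] @ u_new   # recover multiplier
        if np.array_equal(active, lam_new + c * (u_new - psi) > 0):
            return u_new, lam_new                         # active set settled
        u, lam = u_new, lam_new
    return u, lam

# Toy usage: 1D Laplacian with a constant obstacle.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f, psi = np.ones(n), np.full(n, 0.5)
u, lam = primal_dual_active_set(A, f, psi)
print(u.max() <= 0.5 + 1e-12, lam.min() >= 0)  # feasible, nonnegative multiplier
```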

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in survey form and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Using Interior Point Methods for Large-scale Support Vector Machine training

    Support Vector Machines (SVMs) are powerful machine learning techniques for classification and regression, but the training stage involves a convex quadratic optimization program that is most often computationally expensive. Traditionally, active-set methods have been used rather than interior point methods, because the Hessian in the standard dual formulation is completely dense. But as active-set methods are essentially sequential, they may not be adequate for machine learning challenges of the future. Additionally, training time may be limited, or data may grow so large that cluster-computing approaches need to be considered. Interior point methods have the potential to answer these concerns directly. They scale efficiently, they can provide good early approximations, and they are suitable for parallel and multi-core environments. To apply them to SVM training, it is necessary to address directly the most computationally expensive aspect of the algorithm. We therefore present an exact reformulation of the standard linear SVM training optimization problem that exploits separability of terms in the objective. By so doing, per-iteration computational complexity is reduced from O(n³) to O(n). We show how this reformulation can be applied to many machine learning problems in the SVM family. Implementation issues relating to specializing the algorithm are explored through extensive numerical experiments. They show that the performance of our algorithm for large dense or noisy data sets is consistent and highly competitive, and in some cases can outperform all other approaches by a large margin. Unlike active-set methods, performance is largely unaffected by noisy data. We also show how, by exploiting the block structure of the augmented system matrix, a hybrid MPI/OpenMP implementation of the algorithm enables data and linear algebra computations to be efficiently partitioned amongst parallel processing nodes in a clustered computing environment. The applicability of our technique is extended to nonlinear SVMs by low-rank approximation of the kernel matrix. We develop a heuristic designed to represent clusters using a small number of features. Additionally, an early approximation scheme reduces the number of samples that need to be considered. Both elements improve the computational efficiency of the training phase. Taken as a whole, this thesis shows that with suitable problem formulation and efficient implementation techniques, interior point methods are a viable optimization technology to apply to large-scale SVM training, and are able to provide state-of-the-art performance.
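
    The per-iteration saving mentioned above comes from the structure of the dual Hessian: a diagonal matrix plus a low-rank term built from the n-by-m data matrix, with m ≪ n. The sketch below illustrates the kind of linear algebra such structure enables, solving with D + ZZᵀ through the Sherman-Morrison-Woodbury identity in O(nm²) rather than O(n³). It is an illustration of the structural idea under assumed names (X, y, Z, D), not the thesis' interior point implementation.

```python
import numpy as np

def solve_low_rank(D, Z, rhs):
    """Solve (diag(D) + Z Z^T) x = rhs without forming the n-by-n matrix."""
    Dinv_rhs = rhs / D
    Dinv_Z = Z / D[:, None]
    small = np.eye(Z.shape[1]) + Z.T @ Dinv_Z            # m-by-m system only
    return Dinv_rhs - Dinv_Z @ np.linalg.solve(small, Z.T @ Dinv_rhs)

rng = np.random.default_rng(0)
n, m = 1500, 20
X = rng.standard_normal((n, m))                          # data matrix
y = np.sign(rng.standard_normal(n))                      # labels +-1
Z = y[:, None] * X                                       # rows are y_i * x_i
D = 1.0 + rng.random(n)                                  # diagonal IPM scaling
rhs = rng.standard_normal(n)

x = solve_low_rank(D, Z, rhs)
dense = np.diag(D) + Z @ Z.T                             # check (small n only)
print(np.allclose(dense @ x, rhs))
```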

    An exact approach for aggregated formulations


    Variational and Time-Distributed Methods for Real-time Model Predictive Control

    This dissertation concerns the theoretical, algorithmic, and practical aspects of solving optimal control problems (OCPs) in real time. The topic is motivated by Model Predictive Control (MPC), a powerful control technique for constrained, nonlinear systems that computes control actions by solving a parameterized OCP at each sampling instant. To successfully implement MPC, these parameterized OCPs need to be solved in real time. This is a significant challenge for systems with fast dynamics and/or limited onboard computing power and is often the largest barrier to the deployment of MPC controllers. The contributions of this dissertation are as follows. First, I present a system theoretic analysis of Time-Distributed Optimization (TDO) in Model Predictive Control. When implemented using TDO, an MPC controller distributes the optimization iterations over time by maintaining a running solution estimate for the optimal control problem and updating it at each sampling instant. The resulting controller can be viewed as a dynamic compensator placed in closed loop with the plant. The coupled plant-optimizer system is analyzed using input-to-state stability concepts, and sufficient conditions for stability and constraint satisfaction are derived. When applied to time-distributed sequential quadratic programming, the framework significantly extends the existing theoretical analysis for the real-time iteration scheme. Numerical simulations are presented that demonstrate the effectiveness of the scheme. Second, I present the Proximally Stabilized Fischer-Burmeister (FBstab) algorithm for convex quadratic programming. FBstab is a novel algorithm that synergistically combines the proximal point algorithm with a primal-dual semismooth Newton-type method. FBstab is numerically robust, easy to warmstart, handles degenerate primal-dual solutions, detects infeasibility/unboundedness, and requires only that the Hessian matrix be positive semidefinite. The chapter outlines the algorithm, provides convergence and convergence rate proofs, and reports numerical results from model predictive control benchmarks and from the Maros-Meszaros test set. Overall, FBstab is shown to be competitive with state-of-the-art methods and especially promising for model predictive control and other parameterized problems. Finally, I present an experimental application of some of the approaches from the first two chapters: emissions-oriented supervisory model predictive control (SMPC) of a diesel engine. The control objective is to reduce engine-out cumulative NOx and total hydrocarbon (THC) emissions. This is accomplished using an MPC controller which minimizes deviation from optimal setpoints, subject to combustion quality constraints, by coordinating the fuel input and the EGR rate target provided to an inner-loop airpath controller. The SMPC controller is implemented using TDO and a variant of FBstab, which allows us to achieve sub-millisecond controller execution times. We experimentally demonstrate 10-15% cumulative emissions reductions over the Worldwide Harmonized Light Vehicles Test Cycle (WLTC) drive cycle. PhD thesis, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155167/1/dliaomcp_1.pd
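
    To make the time-distributed optimization idea concrete, the sketch below runs a toy closed loop in which the controller keeps a running solution estimate of a box-constrained quadratic program and performs only a few optimizer iterations per sampling instant (plain projected-gradient steps here) before applying the move. The plant, the QP data, and the iteration count are assumptions chosen for illustration; the dissertation works with time-distributed SQP and FBstab rather than this simple iteration.

```python
import numpy as np

# Time-distributed optimization sketch: a fixed, small number of optimizer
# iterations per sampling instant, warm-started at the previous estimate.

def tdo_step(z, H, q, lb, ub, iters=3, step=0.1):
    """A few projected-gradient iterations on 0.5*z'Hz + q'z over [lb, ub]."""
    for _ in range(iters):
        z = np.clip(z - step * (H @ z + q), lb, ub)
    return z

# Toy setup: scalar plant x+ = a*x + b*u with stage cost
# 0.5*(a*x + b*u)^2 + 0.05*u^2 and input constraint |u| <= 1.
a, b = 1.1, 1.0
H = np.array([[b * b + 0.1]])        # curvature of the cost in u
lb, ub = -1.0, 1.0

x = 5.0
z = np.zeros(1)                      # running solution estimate
for _ in range(40):
    q = np.array([a * b * x])        # the QP is parameterized by the state
    z = tdo_step(z, H, q, lb, ub)    # suboptimal but improving iterate
    u = float(z[0])
    x = a * x + b * u                # apply the move to the plant
print(abs(x) < 0.1)                  # the coupled plant-optimizer loop settles
```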

    A globally convergent filter algorithm without objective function derivatives for constrained optimization, and block principal pivoting algorithms for linear complementarity problems

    Advisor: Prof. Dr. Elizabeth W. Karas. Co-advisor: Prof. Dr. Mael Sachine. Advisor abroad: Prof. Dr. Joaquim J. Júdice. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Matemática. Defense: Curitiba, 25/02/2016. Includes references: f. 133-144.
    Abstract: This work covers two different subjects. First, we present an algorithm for solving constrained optimization problems that does not make explicit use of the objective function derivatives. The algorithm mixes an inexact restoration framework with filter techniques. Each iteration is decomposed into two phases: a feasibility phase that reduces an infeasibility measure, and an optimality phase that reduces the objective function value. The optimality step is computed by derivative-free trust-region internal iterations, where the models can be constructed by any technique, provided that they are reasonable approximations of the objective function around the current point. Assuming that this and classical hypotheses hold, we prove that the algorithm satisfies an efficiency condition, which provides its global convergence. Preliminary numerical results are presented. In the second subject, we discuss the linear complementarity problem. Some block principal pivoting algorithms, efficient for solving this kind of problem, are discussed. An analysis of some techniques that guarantee convergence of these algorithms is made. We present numerical results to compare the efficiency and the robustness of the algorithms. Moreover, we discuss two applications of block principal pivoting: nonnegative matrix factorization and preconditioned projected gradient methods. For this second application, we also suggest a preconditioning matrix.
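
    The second part revolves around block principal pivoting for the linear complementarity problem w = Mz + q, w ≥ 0, z ≥ 0, zᵀw = 0. As a rough sketch of one such iteration: keep a set of free indices, solve the corresponding principal subsystem, and swap every infeasible index at once. The version below has no anti-cycling safeguard of the kind the thesis analyzes, so it is only reliable for well-behaved matrices (the symmetric positive definite toy shown); the variable names are my own.

```python
import numpy as np

# Plain block principal pivoting sketch for the LCP  w = Mz + q,
# w >= 0, z >= 0, z'w = 0 (no safeguard against cycling).

def block_principal_pivoting(M, q, max_iter=100):
    n = len(q)
    free = np.zeros(n, dtype=bool)                 # start with z = 0 basic
    for _ in range(max_iter):
        z, w = np.zeros(n), np.zeros(n)
        if free.any():                             # principal subsystem solve
            z[free] = np.linalg.solve(M[np.ix_(free, free)], -q[free])
        bound = ~free
        w[bound] = M[np.ix_(bound, free)] @ z[free] + q[bound]
        bad_z = free & (z < 0)                     # negative basic z
        bad_w = bound & (w < 0)                    # negative basic w
        if not bad_z.any() and not bad_w.any():
            return z, w                            # complementary and feasible
        free = (free & ~bad_z) | bad_w             # swap the infeasible blocks
    raise RuntimeError("no convergence within max_iter")

# Toy usage with a symmetric positive definite M.
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
M = B @ B.T + 6 * np.eye(6)
q = rng.standard_normal(6)
z, w = block_principal_pivoting(M, q)
print(z.min() >= 0, w.min() >= -1e-10, abs(z @ w) < 1e-10)
```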