
    Conic convex programming and self-dual embedding

    How to initialize an algorithm to solve an optimization problem is of great theoretical and practical importance. In the simplex method for linear programming this issue is resolved either by the two-phase approach or by using the so-called big-M technique. In the interior point method there is a more elegant way to deal with the initialization problem, viz. the self-dual embedding technique proposed by Ye, Todd and Mizuno (30). For linear programming this technique makes it possible to identify an optimal solution, or to conclude that the problem is infeasible or unbounded, by solving its embedded self-dual problem. The embedded self-dual problem has a trivial initial solution and the same structure as the original problem, and hence eliminates the need to consider the initialization problem at all. In this paper we extend this approach to general conic convex programming, including semidefinite programming. Since a nonlinear conic convex programming problem may lack the so-called strict complementarity property, difficulties arise in identifying solutions for the original problem from solutions of the embedded self-dual system. We provide numerous examples from semidefinite programming to illustrate various possibilities which have no analogue in the linear programming case.
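
    As a point of orientation, the self-dual (homogeneous) embedding of the conic pair min{ c^T x : Ax = b, x in K } and its dual can be sketched as the following system; this is the standard textbook form, written here only for illustration, and the paper's exact embedding and notation may differ.

        \begin{aligned}
        Ax - b\tau &= 0,\\
        -A^{\mathsf{T}}y - s + c\tau &= 0,\\
        b^{\mathsf{T}}y - c^{\mathsf{T}}x - \kappa &= 0,\\
        x \in K,\; s \in K^{*},\; \tau &\ge 0,\; \kappa \ge 0,\; y \text{ free}.
        \end{aligned}

    Any solution with \tau > 0 recovers a primal-dual optimal pair (x/\tau, y/\tau, s/\tau), while a solution with \tau = 0 and \kappa > 0 certifies primal or dual infeasibility; the point made in the abstract is that, without strict complementarity, separating these cases for nonlinear cones is considerably more delicate than in linear programming.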

    Study on the Cone Programming Reformulation of Active Noise Control Filter Design in the Frequency Domain

    In the practice of active noise control, the control filter can be designed in either the time domain or the frequency domain. Compared with time-domain methods, frequency-domain methods make it more convenient to impose constraints such as stability, robustness, limits on disturbance enhancement, and loudspeaker input limits, and they usually achieve better noise reduction as well. However, the computational complexity of designing a filter in the frequency domain is usually significant, especially for multichannel systems with multiple constraints, which is one of the challenges of using frequency-domain design in practical applications. In this paper, the traditional optimization problem used in frequency-domain filter design was modified and reformulated as a cone programming problem, in which the inequality constraints are expressed as second-order cone and positive semidefinite cone constraints. Because the problem is convex, its global minimum can always be found. Another advantage of this cone programming reformulation is that highly efficient algorithms can be used. It was demonstrated that, compared with the traditional sequential quadratic programming method, the computation is more efficient when the filter design problem is reformulated as a cone program and solved by the primal-dual interior-point method.
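
    A minimal, hypothetical sketch of the kind of frequency-domain design described above, posed directly as a cone program with cvxpy, is shown below. The names (F, d, w), the dimensions, and the specific constraint levels are illustrative assumptions, not the authors' formulation.

        # Hypothetical frequency-domain ANC filter design as a cone program.
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        K, L = 64, 8                      # frequency bins, filter length (assumed)
        F = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))  # secondary-path responses (assumed)
        d = rng.standard_normal(K) + 1j * rng.standard_normal(K)            # disturbance spectrum (assumed)

        w = cp.Variable(L, complex=True)  # control filter coefficients
        residual = F @ w + d              # residual noise per frequency bin

        # Per-bin disturbance-enhancement caps (second-order cone constraints).
        constraints = [cp.abs(residual[k]) <= 2.0 * np.abs(d[k]) for k in range(K)]
        # Crude stand-in for a loudspeaker input/effort limit (second-order cone).
        constraints += [cp.norm(w, 2) <= 5.0]

        prob = cp.Problem(cp.Minimize(cp.sum_squares(residual)), constraints)
        prob.solve()                      # handed to a primal-dual interior-point conic solver
        print(prob.status, prob.value)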

    Projection methods in conic optimization

    There exist efficient algorithms to project a point onto the intersection of a convex cone and an affine subspace. Those conic projections are in turn the workhorse of a range of algorithms in conic optimization, with a variety of applications in science, finance and engineering. This chapter reviews some of these algorithms, emphasizing the so-called regularization algorithms for linear conic optimization and applications in polynomial optimization. It presents material from several recent research articles; we aim here at clarifying the ideas, presenting them in a general framework, and pointing out important techniques.
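
    The simplest instance of such a conic projection, together with one way of handling the intersection with an affine subspace, is sketched below; the particular affine subspace ({X : diag(X) = 1}) and the fixed iteration count are assumptions made for illustration, not details from the chapter.

        # Projection onto (PSD cone) intersect {X : diag(X) = 1} via Dykstra's scheme.
        import numpy as np

        def proj_psd(S):
            """Frobenius projection onto the PSD cone: clip negative eigenvalues."""
            w, V = np.linalg.eigh((S + S.T) / 2)
            return (V * np.maximum(w, 0.0)) @ V.T

        def proj_unit_diag(S):
            """Projection onto the affine subspace {X : diag(X) = 1}."""
            X = S.copy()
            np.fill_diagonal(X, 1.0)
            return X

        def proj_intersection(C, iters=500):
            """Dykstra's alternating projections for the intersection of the two sets."""
            X, P, Q = C.copy(), np.zeros_like(C), np.zeros_like(C)
            for _ in range(iters):
                Y = proj_psd(X + P)
                P = X + P - Y
                X = proj_unit_diag(Y + Q)
                Q = Y + Q - X
            return X

        rng = np.random.default_rng(0)
        A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
        X = proj_intersection(A)
        print(np.linalg.eigvalsh(X).min(), np.diag(X))  # nonnegative spectrum, unit diagonal (approximately)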

    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control and dynamic programming. Recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems.
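
    As a hedged illustration of the lowest rung of such a relaxation hierarchy, the sketch below forms the standard Shor/SDP relaxation of the nonconvex problem minimize x^T Q x subject to x_i^2 = 1, replacing x x^T by a positive semidefinite matrix with unit diagonal; the example data and the use of cvxpy are assumptions made for illustration.

        # Shor/SDP relaxation of a nonconvex quadratic problem with +/-1 variables.
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n = 6
        Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2   # arbitrary symmetric cost (assumed)

        X = cp.Variable((n, n), symmetric=True)
        relax = cp.Problem(cp.Minimize(cp.trace(Q @ X)),
                           [X >> 0, cp.diag(X) == 1])
        relax.solve()            # requires an SDP-capable solver, e.g. SCS
        print("SDP lower bound:", relax.value)   # lower-bounds the nonconvex optimum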

    Convergence Analysis of an Inexact Feasible Interior Point Method for Convex Quadratic Programming

    In this paper we discuss two variants of an inexact feasible interior point algorithm for convex quadratic programming. We consider two different neighbourhoods: a (small) one induced by the Euclidean norm, which yields a short-step algorithm, and a symmetric one induced by the infinity norm, which yields a (practical) long-step algorithm. Both algorithms allow the Newton equation system to be solved inexactly. For both algorithms we provide conditions on the level of error acceptable in the Newton equation and establish worst-case complexity results.
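
    For reference, the two neighbourhoods referred to above are usually written as follows, with X = diag(x), S = diag(s), e the all-ones vector and \mu = x^T s / n; the exact parameters and the quadratic-programming-specific feasibility conditions are spelled out in the paper.

        \mathcal{N}_2(\theta) = \{(x,y,s)\ \text{feasible} : \|XSe - \mu e\|_2 \le \theta\mu\},
        \qquad
        \mathcal{N}_s(\gamma) = \{(x,y,s)\ \text{feasible} : \gamma\mu \le x_i s_i \le \mu/\gamma,\ i = 1,\dots,n\}.

    The small Euclidean-norm neighbourhood gives the short-step variant, while the wide symmetric neighbourhood gives the long-step variant; the inexactness enters through a bounded residual allowed in the Newton system at each iteration.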

    Bundle-based pruning in the max-plus curse of dimensionality free method

    Recently a new class of techniques, termed the max-plus curse-of-dimensionality-free methods, has been developed to solve nonlinear optimal control problems. In these methods the discretization in state space is avoided by using a max-plus basis expansion of the value function, which requires storing only the coefficients of the basis functions used in the representation. However, the number of basis functions grows exponentially with the number of time steps of propagation to the time horizon of the control problem. This so-called "curse of complexity" can be managed by applying a pruning procedure which selects the subset of basis functions that contribute most to the approximation of the value function. The pruning procedures described thus far in the literature rely on the solution of a sequence of high-dimensional optimization problems, which can become computationally expensive. In this paper we show that if the max-plus basis functions are linear and the region of interest in state space is convex, the pruning problem can be solved efficiently by the bundle method. This approach, combining the bundle method and semidefinite formulations, is applied to the quantum gate synthesis problem, in which the state space is the special unitary group (which is non-convex). This is based on the observation that the convexification of the unitary group leads to an exact relaxation. The results are studied and validated via examples.
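
    A hedged sketch of the underlying pruning test for linear max-plus basis functions phi_i(x) = c_i^T x + d_i over a convex region appears below. The box region, the random data, and the use of a generic conic solver (instead of the bundle method used in the paper) are illustrative assumptions.

        # Pruning test: does basis function phi_i ever attain the pointwise maximum?
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(1)
        n, m = 3, 20                       # state dimension, number of basis functions (assumed)
        C = rng.standard_normal((m, n))    # slopes c_i
        d = rng.standard_normal(m)         # offsets d_i

        def contribution(i, box=1.0):
            """Largest margin by which phi_i exceeds every other basis function on the box;
            a non-positive value means phi_i never attains the max and can be pruned."""
            x = cp.Variable(n)
            mask = np.arange(m) != i
            margin = C[i] @ x + d[i] - cp.max(C[mask] @ x + d[mask])   # concave in x
            prob = cp.Problem(cp.Maximize(margin), [cp.norm(x, "inf") <= box])
            prob.solve()
            return prob.value

        keep = [i for i in range(m) if contribution(i) > 1e-9]
        print(f"kept {len(keep)} of {m} basis functions")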