
    New predictor-corrector interior-point algorithm for symmetric cone horizontal linear complementarity problems

    In this paper we propose a new predictor-corrector interior-point algorithm for solving P_*(κ) horizontal linear complementarity problems defined on a Cartesian product of symmetric cones, which is not based on a usual barrier function. We generalize the predictor-corrector algorithm introduced in [13] to P_*(κ) horizontal linear complementarity problems on a Cartesian product of symmetric cones. We apply the algebraically equivalent transformation technique proposed by Darvay [9] and use the function φ(t) = t - √t to determine the new search directions. In each iteration, the proposed algorithm performs one predictor and one corrector step. We prove that the predictor-corrector interior-point algorithm has the same complexity bound as the best known interior-point algorithms for solving these types of problems. Furthermore, we provide a condition on the proximity and update parameters under which the introduced predictor-corrector algorithm is well defined.
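    To make the algebraically equivalent transformation (AET) idea concrete, here is a minimal numerical sketch on a toy monotone linear complementarity problem, not the paper's Cartesian symmetric cone setting; the instance, step rules, and all constants are illustrative assumptions.

```python
# A toy sketch, not the paper's algorithm: Darvay-style AET with
# phi(t) = t - sqrt(t) on a monotone LCP  s = M*x + q,  x, s > 0.
import numpy as np

def aet_rhs(x, s, mu):
    """Newton right-hand side for the transformed centering equation
    phi(x*s/mu) = phi(1) = 0, with phi(t) = t - sqrt(t)."""
    t = x * s / mu
    # phi'(t) vanishes at t = 1/4, so analyses of this direction keep the
    # iterates in a region with x*s/mu > 1/4 via a proximity condition.
    return mu * -(t - np.sqrt(t)) / (1.0 - 0.5 / np.sqrt(t))

def newton_step(M, x, s, mu):
    """Solve  M*dx - ds = 0  and  s*dx + x*ds = aet_rhs  for (dx, ds)."""
    A = np.diag(s) + np.diag(x) @ M       # eliminate ds = M*dx
    dx = np.linalg.solve(A, aet_rhs(x, s, mu))
    return dx, M @ dx

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
M = B @ B.T                               # positive semidefinite => monotone
x = np.ones(5)
s = np.ones(5)                            # feasible for q := s - M*x

for _ in range(40):
    mu = 0.5 * (x @ s) / len(x)           # shrink the centering target
    dx, ds = newton_step(M, x, s, mu)
    alpha = 1.0
    while np.any(x + alpha * dx <= 0) or np.any(s + alpha * ds <= 0):
        alpha *= 0.5                      # damp to keep the iterates interior
    x, s = x + alpha * dx, s + alpha * ds

print("complementarity gap x^T s:", x @ s)   # driven toward 0
```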

    New Predictor-Corrector Algorithm for Symmetric Cone Horizontal Linear Complementarity Problems

    We propose a new predictor-corrector interior-point algorithm for solving Cartesian symmetric cone horizontal linear complementarity problems, which is not based on a usual barrier function. We generalize the predictor-corrector algorithm introduced in Darvay et al. (SIAM J Optim 30:2628-2658, 2020) to horizontal linear complementarity problems on a Cartesian product of symmetric cones. We apply the algebraically equivalent transformation technique proposed by Darvay (Adv Model Optim 5:51-92, 2003), and we use the difference of the identity and the square root function (that is, φ(t) = t - √t) to determine the new search directions. In each iteration, the proposed algorithm performs one predictor and one corrector step. We prove that the predictor-corrector interior-point algorithm has the same complexity bound as the best known interior-point methods for solving these types of problems. Furthermore, we provide a condition on the proximity and update parameters under which the introduced predictor-corrector algorithm is well defined.
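    The one-predictor/one-corrector structure can be sketched as follows, again for a toy monotone LCP; the paper's symmetric cone machinery, neighbourhood, and step-size rules are not reproduced, and all names below are illustrative.

```python
# Illustrative one-predictor/one-corrector loop for a toy monotone LCP.
import numpy as np

def solve_newton(M, x, s, rhs):
    """Solve  M*dx - ds = 0,  s*dx + x*ds = rhs."""
    dx = np.linalg.solve(np.diag(s) + np.diag(x) @ M, rhs)
    return dx, M @ dx

def max_step(z, dz, back_off=0.9):
    """Largest step in (0, 1] keeping z + alpha*dz > 0, slightly backed off."""
    neg = dz < 0
    return 1.0 if not neg.any() else min(1.0, back_off * np.min(-z[neg] / dz[neg]))

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5)); M = B @ B.T
x, s = np.ones(5), np.ones(5)

for _ in range(30):
    # Predictor: affine-scaling direction, aiming at mu = 0.
    dx, ds = solve_newton(M, x, s, -x * s)
    a = min(max_step(x, dx), max_step(s, ds))
    x, s = x + a * dx, s + a * ds
    # Corrector: one centering step toward the new target mu.
    mu = (x @ s) / len(x)
    dx, ds = solve_newton(M, x, s, mu - x * s)
    a = min(max_step(x, dx), max_step(s, ds))
    x, s = x + a * dx, s + a * ds

print("gap after predictor-corrector sweeps:", x @ s)
```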

    An infeasible interior-point method for the P_*-matrix linear complementarity problem based on a trigonometric kernel function with full-Newton step

    An infeasible interior-point algorithm for solving the P_*-matrix linear complementarity problem based on a kernel function with a trigonometric barrier term is analyzed. Each (main) iteration of the algorithm consists of a feasibility step and several centrality steps, where the feasibility step is induced by a trigonometric kernel function. The complexity result coincides with the best known result for infeasible interior-point methods for the P_*-matrix linear complementarity problem.
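    For illustration, a kernel function with a trigonometric barrier term can be written down and checked numerically. The exact kernel of this paper is not reproduced here; the form below is one trigonometric kernel from the literature (treat its exact form as an assumption), shown only to exhibit the defining properties.

```python
# One trigonometric kernel (form assumed, see lead-in):
#   psi(t) = (t^2 - 1)/2 + (6/pi) * tan(h(t)),  h(t) = pi*(1 - t)/(2 + 4t).
# Defining properties: psi(1) = 0, psi'(1) = 0, psi(t) -> inf as t -> 0+
# (the trigonometric barrier term) and as t -> inf (the growth term).
import numpy as np

def psi(t):
    h = np.pi * (1.0 - t) / (2.0 + 4.0 * t)
    return (t**2 - 1.0) / 2.0 + (6.0 / np.pi) * np.tan(h)

def proximity(x, s, mu):
    """Barrier proximity Psi(v) = sum_i psi(v_i), v = sqrt(x*s/mu);
    zero exactly on the central path, large far from it."""
    return np.sum(psi(np.sqrt(x * s / mu)))

eps = 1e-6
print("psi(1)  ~", psi(1.0))                                     # ~ 0
print("psi'(1) ~", (psi(1.0 + eps) - psi(1.0 - eps)) / (2*eps))  # ~ 0
print("psi(0.01):", psi(0.01))     # blows up near 0 (barrier)
print("psi(10):  ", psi(10.0))     # grows for large t

x = np.array([1.0, 2.0, 0.5]); s = np.array([1.0, 0.5, 2.0])
print("Psi on the central path:", proximity(x, s, (x @ s) / 3))  # 0
```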

    Dual versus Primal-Dual Interior-Point Methods for Linear and Conic Programming


    Predictor-corrector interior-point algorithm based on a new search direction working in a wide neighbourhood of the central path

    We introduce a new predictor-corrector interior-point algorithm for solving P_*(κ) linear complementarity problems which works in a wide neighbourhood of the central path. We use the technique of algebraically equivalent transformation of the centering equations of the central path system. In this technique, we apply the function φ(t) = √t in order to obtain the new search directions. We define the new wide neighbourhood D_φ. In this way, we obtain the first interior-point algorithm where not only the central path system is transformed, but the definition of the neighbourhood is also modified to take the algebraically equivalent transformation technique into consideration. This opens a new direction in the research of interior-point methods. We prove that the IPA has O((1+κ) n log(((x^0)^T s^0)/Δ)) iteration complexity. Furthermore, we show the efficiency of the proposed predictor-corrector interior-point method by providing numerical results. To the best of our knowledge, this is the first predictor-corrector interior-point algorithm which works in the D_φ neighbourhood using φ(t) = √t.
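    The search direction induced by φ(t) = √t has a simple closed form: the centering condition xs = Όe is rewritten as √(xs/Ό) = e, and one Newton step gives, componentwise, sΔx + xΔs = 2(√(Ό·xs) - xs). A minimal sketch follows; the paper's D_φ neighbourhood is not reproduced, and the centrality measure below is only an illustrative stand-in.

```python
# Right-hand side of the Newton system under phi(t) = sqrt(t), plus an
# illustrative centrality measure (not the paper's D_phi neighbourhood).
import numpy as np

def darvay_sqrt_rhs(x, s, mu):
    """Componentwise  s*dx + x*ds = 2*(sqrt(mu*x*s) - x*s)."""
    return 2.0 * (np.sqrt(mu * x * s) - x * s)

def centrality(x, s, mu):
    """||e - v|| with v = sqrt(x*s/mu); zero exactly on the central path."""
    return np.linalg.norm(1.0 - np.sqrt(x * s / mu))

x = np.array([1.2, 0.8, 1.0]); s = np.array([0.9, 1.1, 1.0])
mu = (x @ s) / len(x)
print("rhs:", darvay_sqrt_rhs(x, s, mu))
print("distance from the central path:", centrality(x, s, mu))
```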

    An inexact interior-point algorithm for conic convex optimization problems

    In this dissertation we study an algorithm for convex optimization problems in conic form. (Without loss of generality, any convex problem can be written in conic form.) Our algorithm belongs to the class of interior-point methods (IPMs), which have been associated with many recent theoretical and algorithmic advances in mathematical optimization. In an IPM one solves a family of slowly-varying optimization problems that converge in some sense to the original optimization problem. Each problem in the family depends on a so-called barrier function that is associated with the problem data. Typically IPMs require evaluation of the gradient and Hessian of a suitable ("self-concordant") barrier function. In some cases such evaluation is expensive; in other cases formulas in closed form for a suitable barrier function and its derivatives are unknown. We show that even if the gradient and Hessian of a suitable barrier function are computed inexactly, the resulting IPM can possess the desirable properties of polynomial iteration complexity and global convergence to the optimal solution set. In practice the best IPMs are primal-dual methods, in which a convex problem is solved together with its dual, which is another convex problem. One downside of existing primal-dual methods is their need for evaluation of a suitable barrier function, or its derivatives, for the dual problem. Such evaluation can be even more difficult than that required for the barrier function associated with the original problem. Our primal-dual IPM does not suffer from this drawback: it does not require exact evaluation, or even estimates, of a suitable barrier function for the dual problem. Given any convex optimization problem, Nesterov and Nemirovski showed that there exists a suitable barrier function, which they called the universal barrier function. Since this function and its derivatives may not be available in closed form, we explain how a Monte Carlo method can be used to estimate the derivatives. We make probabilistic statements regarding the errors in these estimates, and give an upper bound on the minimum Monte Carlo sample size required to ensure that with high probability, our primal-dual IPM possesses polynomial iteration complexity and global convergence.
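    As a toy illustration of the robustness theme, the sketch below runs damped Newton steps on a simple barrier subproblem using gradients and Hessians corrupted by random relative noise; the universal barrier and the dissertation's Monte Carlo estimators are not reproduced, and the noise level `eta` is an illustrative assumption.

```python
# Newton-based barrier steps tolerating inexact derivatives: minimize
# f(x) = c'x - mu * sum(log x)  (exact minimizer x_i = mu / c_i) using a
# noisy first- and second-order oracle.
import numpy as np

rng = np.random.default_rng(2)
n, mu, eta = 5, 1.0, 0.05                 # eta = relative derivative error
c = rng.uniform(0.5, 2.0, n)
x = np.ones(n)

for _ in range(50):
    g = c - mu / x                         # exact gradient
    H = np.diag(mu / x**2)                 # exact Hessian (diagonal here)
    g_hat = g * (1 + eta * rng.standard_normal(n))    # inexact oracle
    H_hat = H * (1 + eta * abs(rng.standard_normal()))
    dx = -np.linalg.solve(H_hat, g_hat)
    alpha = 1.0
    while np.any(x + alpha * dx <= 0):     # damp to keep x > 0
        alpha *= 0.5
    x = x + alpha * dx

print("x      :", x)
print("exact  :", mu / c)   # close despite the inexact derivatives
```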

    Smoothing proximal gradient method for general structured sparse regression

    We study the problem of estimating high-dimensional regression models regularized by a structured sparsity-inducing penalty that encodes prior structural information on either the input or output variables. We consider two widely adopted types of penalties of this kind as motivating examples: (1) the general overlapping-group-lasso penalty, generalized from the group-lasso penalty; and (2) the graph-guided-fused-lasso penalty, generalized from the fused-lasso penalty. For both types of penalties, due to their nonseparability and nonsmoothness, developing an efficient optimization method remains a challenging problem. In this paper we propose a general optimization approach, the smoothing proximal gradient (SPG) method, which can solve structured sparse regression problems with any smooth convex loss under a wide spectrum of structured sparsity-inducing penalties. Our approach combines a smoothing technique with an effective proximal gradient method. It achieves a convergence rate significantly faster than standard first-order methods such as the subgradient method, and is much more scalable than the most widely used interior-point methods. The efficiency and scalability of our method are demonstrated on both simulation experiments and real genetic data sets. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics: http://dx.doi.org/10.1214/11-AOAS514.
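    A compact sketch of the SPG idea on a chain-structured graph-guided fused-lasso penalty follows; the problem sizes, the graph, and all constants are illustrative assumptions rather than the paper's experimental setup.

```python
# Smoothing proximal gradient (SPG) sketch on a chain-graph fused penalty
#   gamma * sum_{(i,j) in E} |b_i - b_j| = gamma * ||C b||_1.
# Nesterov smoothing:  f_mu(b) = max_{||a||_inf <= gamma} a'Cb - (mu/2)||a||^2,
# with gradient  C' * clip(C b / mu, -gamma, gamma).
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 10
X = rng.standard_normal((n, p))
b_true = np.array([1.0] * 5 + [0.0] * 5)      # two fused blocks
y = X @ b_true + 0.1 * rng.standard_normal(n)

C = np.zeros((p - 1, p))                      # incidence of the chain graph
for i in range(p - 1):
    C[i, i], C[i, i + 1] = 1.0, -1.0

gamma, lam, mu = 0.5, 0.05, 1e-2
L = np.linalg.norm(X, 2)**2 + np.linalg.norm(C, 2)**2 / mu  # gradient Lipschitz

b = np.zeros(p)
for _ in range(3000):
    a = np.clip(C @ b / mu, -gamma, gamma)    # optimal dual of the smoothed term
    grad = X.T @ (X @ b - y) + C.T @ a        # gradient of the smooth part
    z = b - grad / L
    b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # l1 prox (soft-threshold)

print("estimate:", np.round(b, 2))            # first five entries fuse near 1
```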

    Convex Optimization Methods for Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression

    In this paper, we study convex optimization methods for computing the trace norm regularized least squares estimate in multivariate linear regression. The so-called factor estimation and selection (FES) method, recently proposed by Yuan et al. [22], conducts parameter estimation and factor selection simultaneously and has been shown to enjoy nice properties in both large and finite samples. Computing the estimates, however, can be very challenging in practice because of the high dimensionality and the trace norm constraint. In this paper, we explore a variant of Nesterov's smooth method [20] and interior-point methods for computing the penalized least squares estimate. The performance of these methods is then compared using a set of randomly generated instances. We show that the variant of Nesterov's smooth method [20] generally outperforms the interior-point method implemented in SDPT3 version 4.0 (beta) [19] substantially. Moreover, the former method is much more memory efficient.
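    A standard way to handle the trace-norm term, closely related to though not identical to the smooth-method variant studied in the paper, is proximal gradient with singular value soft-thresholding; the sketch below uses illustrative sizes and constants.

```python
# Proximal gradient (ISTA) for  min 0.5*||XB - Y||_F^2 + lam*||B||_*,
# where the prox of the trace norm soft-thresholds the singular values.
import numpy as np

def svt(B, tau):
    """Prox of tau*||.||_*: soft-threshold the singular values of B."""
    U, sig, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(sig - tau, 0.0)) @ Vt

rng = np.random.default_rng(4)
n, p, q, r = 100, 15, 10, 2
X = rng.standard_normal((n, p))
B_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))  # low rank
Y = X @ B_true + 0.1 * rng.standard_normal((n, q))

lam = 1.0
L = np.linalg.norm(X, 2)**2                 # Lipschitz constant of the loss
B = np.zeros((p, q))
for _ in range(500):
    G = X.T @ (X @ B - Y)                   # gradient of 0.5*||XB - Y||_F^2
    B = svt(B - G / L, lam / L)             # proximal (SVT) step

# Soft-thresholding zeroes most singular values, recovering a low-rank fit.
print("singular values:", np.round(np.linalg.svd(B, compute_uv=False), 2))
```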
    • 

    corecore