501 research outputs found

    Interior point method in tensor optimal transport

    Full text link
    We study a tensor optimal transport (TOT) problem for $d \ge 2$ discrete measures. This is a linear programming problem on $d$-tensors. We introduce an interior point method (IPM) for $d$-TOT with a corresponding barrier function. Using a "short-step" IPM that follows the central path to within $\varepsilon$ precision, we estimate the number of iterations. Comment: corrected typos and added a short additional subsection, 11 pages
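
    As a rough illustration of the "short-step" idea, the sketch below follows the central path of a small generic LP with a log-barrier, increasing the barrier parameter by a conservative factor and taking a single Newton step per update. It is not the tensor-OT formulation or barrier of the paper; the growth factor, stopping rule, starting point and problem data are all assumptions.

    # Minimal short-step log-barrier path-following sketch for a generic LP
    #   min c^T x  s.t.  A x <= b   (illustrative data, not the paper's d-TOT problem).
    import numpy as np

    def short_step_barrier_lp(A, b, c, x0, eps=1e-6):
        """Track x*(t), the minimizer of t*c^T x - sum_i log(b_i - a_i^T x),
        with one full Newton step per conservative increase of t."""
        m, n = A.shape
        x, t = x0.astype(float), 1.0
        sigma = 1.0 + 1.0 / (8.0 * np.sqrt(m))   # short-step growth factor (assumption)
        while m / t > eps:                        # m/t bounds the duality gap on the central path
            t *= sigma
            s = b - A @ x                         # slacks, must stay positive
            grad = t * c + A.T @ (1.0 / s)
            hess = A.T @ np.diag(1.0 / s**2) @ A
            x = x - np.linalg.solve(hess, grad)   # one (full) Newton step on the barrier
        return x

    # Tiny LP: min x1 + x2  s.t.  x >= 0,  x1 + x2 >= 1 (assumed data).
    A = np.array([[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0]])
    b = np.array([0.0, 0.0, -1.0])
    c = np.array([1.0, 1.0])
    # Start near the t = 1 central point, as short-step analysis requires (assumption).
    print(short_step_barrier_lp(A, b, c, x0=np.array([1.7, 1.7])))   # approx. (0.5, 0.5)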

    Solving symmetric indefinite systems in an interior-point method for second order cone programming

    Get PDF
    Many optimization problems can be formulated as second order cone programming (SOCP) problems. Theoretical results show that applying the interior-point method (IPM) to SOCP gives global polynomial convergence. However, various stability issues arise in the implementation of the IPM. The standard normal-equation-based implementation of the IPM encounters stability problems in the computation of the search direction. In this paper, an augmented system approach is proposed to overcome these stability problems. Numerical experiments show that the new approach can improve stability. Singapore-MIT Alliance (SMA)
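
    The contrast between the two linear-algebra choices can be sketched as follows. For simplicity the sketch uses an LP-style KKT step rather than the SOCP case treated in the paper, and dense NumPy in place of sparse symmetric indefinite factorizations; the data and scalings are assumptions.

    # Normal equations vs. augmented (symmetric indefinite) system for an IPM step.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 5, 12
    A = rng.standard_normal((m, n))
    # Scaling D = X S^{-1}: near convergence some entries are huge, others tiny,
    # which is what makes the normal equations ill-conditioned.
    d = 10.0 ** rng.uniform(-8, 8, size=n)
    rd = rng.standard_normal(n)          # dual residual (assumed)
    rp = rng.standard_normal(m)          # primal residual (assumed)

    # (1) Normal equations: eliminate dx and solve the m x m SPD system
    #     (A D A^T) dy = rp + A D rd,   dx = D (A^T dy - rd).
    M = (A * d) @ A.T
    dy_ne = np.linalg.solve(M, rp + (A * d) @ rd)
    dx_ne = d * (A.T @ dy_ne - rd)

    # (2) Augmented system: keep the (n+m) x (n+m) symmetric indefinite form
    #     [ -D^{-1}  A^T ] [dx]   [ rd ]
    #     [   A      0   ] [dy] = [ rp ]
    # which a production code would factor with a sparse LDL^T (Bunch-Kaufman) method.
    K = np.block([[np.diag(-1.0 / d), A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([rd, rp]))
    dx_aug, dy_aug = sol[:n], sol[n:]

    # Residuals of the original KKT equations indicate which direction is more reliable.
    for name, dx, dy in [("normal eq.", dx_ne, dy_ne), ("augmented", dx_aug, dy_aug)]:
        r1 = -dx / d + A.T @ dy - rd
        r2 = A @ dx - rp
        print(name, np.linalg.norm(r1), np.linalg.norm(r2))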

    Full Newton Step Interior Point Method for Linear Complementarity Problem Over Symmetric Cones

    Get PDF
    In this thesis, we present a new Feasible Interior-Point Method (IPM) for the Linear Complementarity Problem (LCP) over Symmetric Cones. The advantage of this method lies in its use of full Newton steps, which avoids the calculation of a step size at each iteration. By a suitable choice of parameters we prove global convergence of the iterates, which always stay in a neighborhood of the central path, and we present an upper bound on the number of iterations needed to find an ε-approximate solution of the problem.
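
    A minimal sketch of a feasible full-Newton-step IPM is given below, written for a standard monotone LCP over the nonnegative orthant rather than the symmetric-cone setting of the thesis; the barrier-update parameter, stopping rule and problem data are assumptions.

    # Feasible full-Newton-step IPM for a monotone LCP:
    #   find x >= 0, s >= 0 with  s = M x + q  and  x^T s = 0.
    import numpy as np

    def full_newton_step_ipm_lcp(M, q, x, s, theta=None, eps=1e-8):
        """Full Newton steps toward the perturbed targets x_i * s_i = mu; no line search."""
        n = len(x)
        theta = theta or 1.0 / (2.0 * np.sqrt(n))   # barrier-update parameter (assumption)
        mu = x @ s / n
        while n * mu > eps:
            mu *= (1.0 - theta)
            # Newton system for the centering conditions:
            #   M dx - ds = 0              (keeps s = M x + q exactly)
            #   S dx + X ds = mu e - X S e (drives the products x_i s_i toward mu)
            J = np.block([[M, -np.eye(n)],
                          [np.diag(s), np.diag(x)]])
            rhs = np.concatenate([np.zeros(n), mu - x * s])
            d = np.linalg.solve(J, rhs)
            x, s = x + d[:n], s + d[n:]             # full step, no step-size computation
        return x, s

    # Tiny monotone LCP with a strictly feasible, perfectly centered start (assumed).
    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    x0 = np.array([1.0, 1.0])
    s0 = np.array([1.0, 1.0])
    q = s0 - M @ x0                                  # choose q so that (x0, s0) is feasible
    print(full_newton_step_ipm_lcp(M, q, x0, s0))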

    Adapting the interior point method for the solution of linear programs on high performance computers

    Get PDF
    In this paper we describe a unified algorithmic framework for the interior point method (IPM) for solving Linear Programs (LPs) which allows us to adapt it across a range of high performance computer architectures. We set out the reasons why the IPM makes better use of high performance computer architecture than the sparse simplex method. In the inner iteration of the IPM a search direction is computed using Newton or higher order methods. Computationally this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice of direct and indirect methods for the solution of this system, and the design of data structures to take advantage of coarse grain parallel and massively parallel computer architectures, are considered in detail. Finally, we present experimental results of solving NETLIB test problems on examples of these architectures and put forward arguments as to why integration of the system within the sparse simplex method is beneficial.
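
    The inner-iteration kernel described above can be sketched as follows: form the SSPD normal-equations matrix and solve it either directly (Cholesky) or indirectly (conjugate gradients). Dense NumPy stands in for the sparse, parallel data structures discussed in the paper; the problem data below are assumptions.

    # Direct vs. indirect solution of the SSPD system (A D A^T) dy = r from an IPM step.
    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 8, 20
    A = rng.standard_normal((m, n))
    d = rng.uniform(0.1, 10.0, size=n)        # positive IPM scaling, e.g. X S^{-1} (assumed)
    r = rng.standard_normal(m)
    M = (A * d) @ A.T                          # SSPD normal-equations matrix A D A^T

    # Direct method: Cholesky factorization M = L L^T, then two triangular solves.
    L = np.linalg.cholesky(M)
    dy_direct = np.linalg.solve(L.T, np.linalg.solve(L, r))

    # Indirect method: conjugate gradients, which needs only products M @ v and is
    # the kind of kernel that maps well onto massively parallel architectures.
    def conjugate_gradient(M, b, tol=1e-10, max_iter=500):
        x = np.zeros_like(b)
        res = b - M @ x
        p, rs = res.copy(), res @ res
        for _ in range(max_iter):
            Mp = M @ p
            alpha = rs / (p @ Mp)
            x += alpha * p
            res -= alpha * Mp
            rs_new = res @ res
            if np.sqrt(rs_new) < tol:
                break
            p = res + (rs_new / rs) * p
            rs = rs_new
        return x

    dy_cg = conjugate_gradient(M, r)
    print(np.linalg.norm(dy_direct - dy_cg))   # the two directions should agree closely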

    An Unsupervised Learning-Based Approach for Symbol-Level-Precoding

    Get PDF
    This paper proposes an unsupervised learning-based precoding framework that trains deep neural networks (DNNs) with no target labels by unfolding an interior point method (IPM) proximal 'log' barrier function. The proximal 'log' barrier function is derived from the strict power minimization formulation subject to signal-to-interference-plus-noise ratio (SINR) constraints. The proposed scheme exploits the known interference via symbol-level precoding (SLP) to minimize the transmit power and is named the strict Symbol-Level-Precoding deep network (SLP-SDNet). The results show that SLP-SDNet outperforms the conventional block-level-precoding (Conventional BLP) scheme while achieving near-optimal performance faster than the SLP optimization-based approach.
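
    The unfolding idea can be sketched as follows: a fixed number of log-barrier gradient steps are unrolled as "layers" whose per-layer step sizes and barrier weights are what training would adjust, with no labels needed. The sketch uses a real-valued power-minimization stand-in rather than the SLP-SDNet architecture or the complex SINR constraints of the paper; all names, data and parameters are assumptions.

    # Unrolled log-barrier gradient steps as network layers (forward pass only).
    import numpy as np

    def unfolded_barrier_layers(A, x0, step_sizes, barrier_weights):
        """Layer k performs one gradient step on ||x||^2 - w_k * sum log(A x - 1)."""
        x = x0.copy()
        for alpha, w in zip(step_sizes, barrier_weights):
            slack = A @ x - 1.0                       # constraint margins, must stay > 0
            grad = 2.0 * x - w * (A.T @ (1.0 / slack))
            x = x - alpha * grad                      # one "layer" of the unrolled network
        return x

    # Toy instance: minimize transmit power ||x||^2 subject to a_i^T x >= 1 (assumed data,
    # a real-valued stand-in for per-user SINR constraints).
    rng = np.random.default_rng(2)
    A = rng.uniform(0.5, 1.5, size=(4, 6))
    x0 = np.full(6, 1.0)                              # strictly feasible start (A @ x0 > 1)
    num_layers = 10
    # In an unfolded network these per-layer parameters would be learned; here they are
    # fixed to a decreasing barrier schedule purely for illustration.
    steps = np.full(num_layers, 0.05)
    weights = np.geomspace(1.0, 0.01, num_layers)
    x = unfolded_barrier_layers(A, x0, steps, weights)
    print(np.round(x, 3), "power:", round(float(x @ x), 3))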