Interior point method in tensor optimal transport
We study a tensor optimal transport (TOT) problem for discrete
measures. This is a linear programming problem on -tensors. We introduce an
interior point method (IPM) for -TOT with a corresponding barrier function.
Using a "short-step" IPM that follows the central path within a given precision,
we estimate the number of iterations.
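For context, a short-step IPM of the kind this abstract describes typically follows the central path of a log-barrier subproblem. The sketch below uses standard IPM notation for a generic LP (the paper's tensor-specific symbols are omitted in the abstract, so none of the notation here is taken from it):

```latex
\[
  x(\mu) \;=\; \arg\min_{Ax = b,\; x > 0}\; c^{\top}x \;-\; \mu \sum_{i=1}^{n} \log x_i,
  \qquad \mu > 0,
\]
\[
  \mu_{k+1} = (1-\theta)\,\mu_k, \qquad
  \text{iterations to precision } \varepsilon \;=\;
  O\!\left(\sqrt{n}\,\log\frac{n}{\varepsilon}\right),
\]
where the minimizers $x(\mu)$ trace the central path and a "short-step" method takes one Newton step per geometric reduction of $\mu$.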
Solving symmetric indefinite systems in an interior-point method for second order cone programming
Many optimization problems can be formulated as second order cone programming (SOCP) problems. Theoretical results show that applying an interior-point method (IPM) to SOCP has global polynomial convergence. However, various stability issues arise in the implementation of the IPM. The standard normal-equation-based implementation of the IPM encounters stability problems in the computation of the search direction. In this paper, an augmented system approach is proposed to overcome these stability problems. Numerical experiments show that the new approach improves stability. Singapore-MIT Alliance (SMA)
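The two formulations the abstract contrasts can be illustrated on a toy problem. This is a hedged sketch, not the paper's SOCP code: in exact arithmetic the normal equations and the symmetric indefinite "augmented" system yield the same dual direction `dy`, but the augmented form avoids squaring the conditioning of the scaled constraint matrix.

```python
import numpy as np

# Toy IPM-style Newton system (illustrative residuals, not from the paper).
rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))
theta = rng.uniform(1.0, 2.0, n)   # positive diagonal scaling (e.g. X S^{-1})
r1 = rng.standard_normal(n)        # illustrative dual-side residual
r2 = rng.standard_normal(m)        # illustrative primal-side residual

# Normal equations: (A Theta A^T) dy = r2 + A Theta r1
dy_normal = np.linalg.solve(A @ np.diag(theta) @ A.T, r2 + A @ (theta * r1))

# Augmented (symmetric indefinite) system:
#   [ -Theta^{-1}  A^T ] [dx]   [ r1 ]
#   [      A        0  ] [dy] = [ r2 ]
K = np.block([[np.diag(-1.0 / theta), A.T],
              [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([r1, r2]))
dx, dy_augmented = sol[:n], sol[n:]

# Eliminating dx from the augmented system recovers the normal equations,
# so both routes agree in exact arithmetic.
assert np.allclose(dy_normal, dy_augmented)
```

Eliminating `dx = Theta (A^T dy - r1)` from the first block row and substituting into `A dx = r2` gives exactly the normal equations, which is why the two directions coincide here.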
Full Newton Step Interior Point Method for Linear Complementarity Problem Over Symmetric Cones
In this thesis, we present a new feasible Interior-Point Method (IPM) for the Linear Complementarity Problem (LCP) over symmetric cones. The advantage of this method lies in that it uses full Newton steps, thus avoiding the calculation of the step size at each iteration. By a suitable choice of parameters, we prove global convergence of the iterates, which always stay in a neighborhood of the central path, and we present an upper bound for the number of iterations necessary to find an ε-approximate solution of the problem.
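A sketch of the full-Newton-step scheme the abstract refers to, written in generic LCP-over-cone notation (standard in this literature; the thesis's own parameter choices are not reproduced here):

```latex
\[
  s = Mx + q, \qquad x \circ s = \mu e, \qquad x,\, s \in \operatorname{int}\mathcal{K},
\]
\[
  \mu^{+} = (1-\theta)\,\mu, \qquad \theta \in (0,1),
\]
with one full Newton step toward the $\mu^{+}$-center per update and no line search; provided $\theta$ is chosen small enough, the iterates remain in the central-path neighborhood and the iteration count is of the order $O\!\left(\sqrt{r}\,\log\frac{r}{\varepsilon}\right)$, with $r$ the rank of the cone.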
Adapting the interior point method for the solution of linear programs on high performance computers
In this paper we describe a unified algorithmic framework for the interior point method (IPM) for solving Linear Programs (LPs) which allows us to adapt it over a range of high performance computer architectures. We set out the reasons why the IPM makes better use of high performance computer architecture than the sparse simplex method. In the inner iteration of the IPM a search direction is computed using Newton or higher-order methods. Computationally this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice of direct and indirect methods for the solution of this system, and the design of data structures to take advantage of coarse-grain parallel and massively parallel computer architectures, are considered in detail. Finally, we present experimental results of solving NETLIB test problems on examples of these architectures and put forward arguments for why integration of the system within sparse simplex is beneficial.
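The SSPD solve mentioned above is the computational core of each IPM iteration. A minimal dense sketch of the algebra, assuming a normal-equations matrix `M = A Theta A^T` (real IPM codes use sparse Cholesky with fill-reducing orderings; nothing here is specific to the paper's framework):

```python
import numpy as np
from scipy.linalg import solve_triangular

# Hypothetical data: scaled constraint matrix and right-hand side.
rng = np.random.default_rng(1)
m, n = 4, 9
A = rng.standard_normal((m, n))
theta = rng.uniform(0.5, 2.0, n)   # positive IPM scaling factors
rhs = rng.standard_normal(m)

M = (A * theta) @ A.T              # SSPD normal-equations matrix A Theta A^T
L = np.linalg.cholesky(M)          # factor M = L L^T
y = solve_triangular(L, rhs, lower=True)      # forward substitution
dy = solve_triangular(L.T, y, lower=False)    # back substitution

assert np.allclose(M @ dy, rhs)
```

Because `theta > 0` and `A` has full row rank here, `M` is positive definite, so the Cholesky factorization exists and the two triangular solves recover the search direction.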
Solving large scale linear programming
The interior point method (IPM) is now well established as a competitive technique for solving very large scale linear programming problems. The leading variant of the interior point method is the primal-dual predictor-corrector algorithm due to Mehrotra. The main computational steps of this algorithm are the repeated calculation and solution of a large sparse positive definite system of equations.
We describe an implementation of the predictor-corrector IPM algorithm on MasPar, a massively parallel SIMD computer. At the heart of the implementation is a parallel Cholesky factorization algorithm for sparse matrices. Our implementation uses a new scheme for mapping the matrix onto the processor grid of the MasPar, which results in a more efficient Cholesky factorization than previously suggested schemes.
The IPM implementation uses the parallel unit of the MasPar to speed up the factorization and other computationally intensive parts of the IPM. An important part of this implementation is the judicious division of data and computation between the front-end computer, which runs the main IPM algorithm, and the parallel unit. Performance...
An Unsupervised Learning-Based Approach for Symbol-Level-Precoding
This paper proposes an unsupervised learning-based precoding framework that trains deep neural networks (DNNs) with no target labels by unfolding an interior point method (IPM) proximal 'log' barrier function. The proximal 'log' barrier function is derived from the strict power minimization formulation subject to signal-to-interference-plus-noise ratio (SINR) constraints. The proposed scheme exploits the known interference via symbol-level precoding (SLP) to minimize the transmit power and is named strict Symbol-Level-Precoding deep network (SLP-SDNet). The results show that SLP-SDNet outperforms the conventional block-level-precoding (BLP) scheme while achieving near-optimal performance faster than the SLP optimization-based approach.
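The general shape of the barrier-augmented objective being unfolded can be sketched as follows. The symbols are illustrative (generic constraint functions $g_i$, not the paper's exact SINR expressions):

```latex
\[
  \min_{\mathbf{w}}\; \|\mathbf{w}\|^2
  \quad \text{s.t.} \quad g_i(\mathbf{w}) \ge 0, \quad i = 1, \dots, K,
\]
\[
  f_{\mu}(\mathbf{w}) \;=\; \|\mathbf{w}\|^2
  \;-\; \mu \sum_{i=1}^{K} \log g_i(\mathbf{w}),
\]
where each $g_i \ge 0$ encodes one SINR constraint and $\mu \to 0$ recovers the strict power-minimization problem; one natural way to unfold such a method is to treat successive barrier-parameter updates as the layers of the network.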
Solving large scale linear programming problems
The interior point method (IPM) is now well established as a computationally competitive scheme for solving very large scale linear programming problems. The leading variant of the IPM is the primal-dual predictor-corrector algorithm due to Mehrotra. The main computational effort in this algorithm is the repeated calculation and solution of a large sparse positive definite system of equations.
We describe an implementation of this algorithm for vector processors. At the heart of the implementation is a vectorized matrix multiplication and Cholesky factorization for sparse matrices.
We identify the parts where vectorization can be beneficial and discuss in detail the merits of alternative vectorization techniques. We show that the best way to utilize a vector processor is by exploiting dense computation within the sparse framework and by unrolling loop operations. We further present an extended definition of supernodes and describe an implementation based on this new approach. We show that although this approach requires more memory, it can increase the scope of dense computation substantially without adding extra operations.
Performance results on standard industrial test problems, together with a comparison between an algorithm that utilizes the extended supernodes and one that utilizes standard supernodes, are presented and discussed.
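The supernode idea behind the dense-within-sparse strategy can be shown on a toy pattern. This uses a common textbook definition (maximal runs of consecutive columns whose below-diagonal nonzero patterns nest); the paper's "extended" supernodes generalize this, and the rule below is illustrative rather than taken from the paper:

```python
# Toy supernode detection on the symbolic structure of a Cholesky factor L.
def find_supernodes(pattern):
    """pattern[j] = set of row indices of nonzeros strictly below the
    diagonal in column j of L. Returns (first, last) column index pairs."""
    supernodes, start = [], 0
    for j in range(1, len(pattern)):
        # Column j extends the current supernode iff its pattern equals the
        # previous column's pattern with the just-eliminated row j removed.
        if pattern[j] != pattern[j - 1] - {j}:
            supernodes.append((start, j - 1))
            start = j
    supernodes.append((start, len(pattern) - 1))
    return supernodes

# A dense trailing block of L: columns 0..3 share nesting patterns and
# form a single supernode, so they can be processed with dense kernels.
print(find_supernodes([{1, 2, 3}, {2, 3}, {3}, set()]))  # -> [(0, 3)]
```

Grouping such columns lets the factorization replace many scattered sparse updates with a few dense block operations, which is exactly what vector (and SIMD) hardware rewards.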