A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix
Following the breakthrough work of Tardos (Oper. Res. '86) in the bit-complexity model, Vavasis and Ye (Math. Prog. '96) gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) max c^T x, Ax = b, x ≥ 0, A ∈ R^{m×n}, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that O(n^{3.5} log(χ̄_A + n)) iterations suffice to solve (LP) exactly, where χ̄_A is a condition measure controlling the size of solutions to linear systems related to A. Monteiro and Tsuchiya (SIAM J. Optim. '03), noting that the central path is invariant under rescalings of the columns of A and c, asked whether there exists an LP algorithm depending instead on the measure χ̄*_A, defined as the minimum χ̄_{AD} value achievable by a column rescaling AD of A, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an O(m^2 n^2 + n^3) time algorithm which works on the linear matroid of A to compute a nearly optimal diagonal rescaling D satisfying χ̄_{AD} ≤ n(χ̄*_A)^3. This algorithm also allows us to approximate the value of χ̄_A up to a factor n(χ̄*_A)^2. This result is in (surprising) contrast to that of Tunçel (Math. Prog. '99), who showed NP-hardness for approximating χ̄_A to within 2^{poly(rank(A))}. The key insight for our algorithm is to work with ratios g_i/g_j of circuits of A, i.e., minimal linear dependencies Ag = 0, which allow us to approximate the value of χ̄*_A by a maximum geometric mean cycle computation in what we call the 'circuit ratio digraph' of A. While this resolves Monteiro and Tsuchiya's question by appropriate preprocessing, it falls short of providing either a truly scaling invariant algorithm or an improvement upon the base LLS analysis. In this vein, as our second main contribution we develop a scaling invariant LLS algorithm, which uses and dynamically maintains improving estimates of the circuit ratio digraph, together with a refined potential function based analysis for LLS algorithms in general. With this analysis, we derive an improved O(n^{2.5} log n log(χ̄*_A + n)) iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor n/log n improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
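The reduction mentioned above, from estimating χ̄*_A to a maximum geometric mean cycle computation, can be made concrete with a standard log-transform: maximizing the geometric mean of (positive) arc ratios around a cycle is equivalent to minimizing the arithmetic mean of the weights -log(ratio), a problem solved by Karp's minimum mean cycle algorithm in O(|V||E|) time. Below is a minimal self-contained sketch of that transform; the graph encoding and function names are illustrative, not the paper's implementation.

import math

def max_geometric_mean_cycle(n, edges):
    """Maximum geometric-mean cycle via Karp's minimum-mean-cycle algorithm.

    n     -- number of vertices, labelled 0..n-1
    edges -- list of (u, v, ratio) arcs with ratio > 0

    Maximizing the geometric mean of ratios around a cycle equals
    minimizing the arithmetic mean of the weights -log(ratio), so we
    run Karp's algorithm on the log-transformed graph and exponentiate.
    """
    INF = float("inf")
    # A super-source n with zero-weight arcs to every vertex guarantees
    # reachability without creating any new cycles.
    w_edges = [(u, v, -math.log(r)) for (u, v, r) in edges]
    w_edges += [(n, v, 0.0) for v in range(n)]
    N = n + 1

    # D[k][v] = minimum weight of a walk with exactly k arcs from the source.
    D = [[INF] * N for _ in range(N + 1)]
    D[0][n] = 0.0
    for k in range(1, N + 1):
        for (u, v, w) in w_edges:
            if D[k - 1][u] + w < D[k][v]:
                D[k][v] = D[k - 1][u] + w

    # Karp's formula: lambda* = min_v max_k (D[N][v] - D[k][v]) / (N - k).
    lam = INF
    for v in range(N):
        if D[N][v] == INF:
            continue
        best = max((D[N][v] - D[k][v]) / (N - k)
                   for k in range(N) if D[k][v] < INF)
        lam = min(lam, best)
    if lam == INF:
        return None  # the graph is acyclic
    return math.exp(-lam)

# Toy example: the cycle 0 -> 1 -> 0 has ratios 4 and 1, geometric mean 2.
print(max_geometric_mean_cycle(2, [(0, 1, 4.0), (1, 0, 1.0)]))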
Revisiting Tardos's framework for linear programming: faster exact solutions using approximate solvers
In breakthrough work, Tardos (Oper. Res. '86) gave a proximity-based framework for solving linear programming (LP) in time depending only on the constraint matrix in the bit complexity model. In Tardos's framework, one reduces solving the LP min⟨c, x⟩, Ax = b, x ≥ 0, A ∈ Z^{m×n}, to solving O(nm) LPs in A having small integer coefficient objectives and right-hand sides using any exact LP algorithm. This gives rise to an LP algorithm in time poly(n, m, log Δ_A), where Δ_A is the largest subdeterminant of A. A significant extension to the real model of computation was given by Vavasis and Ye (Math. Prog. '96), giving a specialized interior point method that runs in time poly(n, m, log χ̄_A), depending on Stewart's χ̄_A, a well-studied condition number. In this work, we extend Tardos's original framework to obtain such a running time dependence. In particular, we replace the exact LP solves with approximate ones, enabling us to directly leverage the tremendous recent algorithmic progress for approximate linear programming. More precisely, we show that the fundamental 'accuracy' needed to exactly solve any LP in A is inverse polynomial in n and log χ̄_A. Plugging in the recent algorithm of van den Brand (SODA '20), our method computes an optimal primal and dual solution using O(mn^{ω+1+o(1)} log(χ̄_A + n)) arithmetic operations, outperforming the specialized interior point method of Vavasis and Ye and its recent improvement by Dadush et al. (STOC '20). By applying the preprocessing algorithm of the latter paper, the dependence can also be reduced from χ̄_A to χ̄*_A, the minimum value of χ̄_{AD} attainable via column rescalings. Our framework is applicable to achieve the poly(n, m, log χ̄*_A) bound using essentially any weakly polynomial LP algorithm, such as the ellipsoid method. At a technical level, our framework combines together approximate LP solutions to compute exact ones, making use of constructive proximity theorems, which bound the distance between solutions of 'nearby' LPs, to keep the required accuracy low.
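The role of proximity can be illustrated numerically. The sketch below is our own toy demonstration, not the paper's algorithm: the LP data is made up, the rounding scale K is a heuristic stand-in for the carefully chosen scale of the framework (which depends on χ̄_A), and SciPy's HiGHS solver plays the role of the LP oracle. The point is that a coarse small-integer objective frequently already identifies the optimal support, which is what proximity theorems let one exploit and certify.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.integers(-3, 4, size=(m, n)).astype(float)
x_feas = rng.random(n)            # b = A @ x_feas guarantees feasibility
b = A @ x_feas
c = rng.normal(size=n)

# Exact objective vs. a small-integer proxy round(K * c / ||c||_inf).
K = 4 * n                         # coarse heuristic scale, not the paper's
c_proxy = np.round(K * c / np.linalg.norm(c, np.inf))

# Box upper bounds keep this toy LP bounded so both solves succeed.
box = [(0, 10)] * n
sol = linprog(c, A_eq=A, b_eq=b, bounds=box, method="highs")
sol_proxy = linprog(c_proxy, A_eq=A, b_eq=b, bounds=box, method="highs")

support = lambda x: set(np.nonzero(x > 1e-9)[0])
print("support with exact objective  :", support(sol.x))
print("support with rounded objective:", support(sol_proxy.x))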
Pre-Conditioners and Relations between Different Measures of Conditioning for Conic Linear Systems
In recent years, new and powerful research into "condition numbers" for convex optimization has been developed, aimed at capturing the intuitive notion of problem behavior. This research has been shown to be important in studying the efficiency of algorithms, including interior-point algorithms, for convex optimization as well as other behavioral characteristics of these problems such as problem geometry, deformation under data perturbation, etc. This paper studies measures of conditioning for a conic linear system of the form (FP_d): Ax = b, x ∈ C_X, whose data is d = (A, b). We present a new measure of conditioning, denoted μ_d, and we show implications of μ_d for problem geometry and algorithm complexity, and demonstrate that the value of μ_d is independent of the specific data representation of (FP_d). We then prove certain relations among a variety of condition measures for (FP_d), including μ_d, σ_d, χ̄_d, and C(d). We discuss some drawbacks of using the condition number C(d) as the sole measure of conditioning of a conic linear system, and we then introduce the notion of a "pre-conditioner" for (FP_d) which results in an equivalent formulation (FP_d̃) of (FP_d) with a better condition number C(d̃). We characterize the best such pre-conditioner and provide an algorithm for constructing an equivalent data instance d̃ whose condition number C(d̃) is within a known factor of the best possible.
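For context, Renegar's condition number C(d), referenced throughout the abstract, has a standard definition in terms of the distance to infeasibility; the display below states that standard definition in generic notation and is not quoted from the paper.

% rho(d): the smallest perturbation of the data d = (A, b) that makes
% (FP_d) infeasible; C(d) normalizes the data size by this distance,
% so large C(d) means the instance is nearly ill-posed.
\[
  \rho(d) \;=\; \inf\bigl\{\, \|\Delta d\| \;:\; (FP_{d+\Delta d}) \text{ is infeasible} \,\bigr\},
  \qquad
  C(d) \;=\; \frac{\|d\|}{\rho(d)}.
\]
% A pre-conditioner replaces d by an equivalent instance \tilde{d}
% with a smaller condition number C(\tilde{d}).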
Exact linear programming: circuits, curvature, and diameter
We study Linear Programming (LP) and present novel algorithms. In particular, we study LP in the context of circuits, which are support-minimal vectors of linear spaces. Our results are stated in terms of the circuit imbalance (CI), which is the worst-case ratio of nonzero entries of circuits and whose properties we study in detail. We present the following results, each with only logarithmic dependency on CI. (i) A scaling-invariant interior point method, which solves LP in time that is polynomial in the dimensions, answering an open question by Monteiro-Tsuchiya in the affirmative. This closes a long line of work by Vavasis-Ye and Monteiro-Tsuchiya. (ii) We introduce a new polynomial-time path-following interior point method where the number of iterations admits a singly exponential upper bound. This complements recent results showing that path-following methods must take at least exponentially many iterations in the worst case. (iii) We further provide similar upper bounds on a natural notion of curvature of the central path. (iv) A black-box algorithm that requires only quadratically many calls to an approximate LP solver to solve LP exactly. This significantly strengthens the framework by Tardos, which requires exact solvers and whose runtime is logarithmic in the maximum subdeterminant of the constraint matrix. The maximum subdeterminant is exponentially larger than CI already for fundamental combinatorial problems such as matchings. (v) Furthermore, we obtain a circuit diameter bound that is quadratic in the number of variables, giving the first polynomial bound for general LP even when CI is exponential. Unlike in the simplex method, one does not have to augment along the edges of the polyhedron: augmentations can be in any circuit direction. (vi) Lastly, we present an accelerated version of the Newton-Dinkelbach method, which extends the black-box framework to certain classes of fractional and parametric optimization problems. Using the Bregman divergence as a potential in conjunction with combinatorial arguments, we obtain improved runtimes over the non-accelerated version.
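Since the circuit imbalance measure drives all of the bounds above, a tiny brute-force computation may help fix the definition. The sketch below is our own illustration (exponential time, not an algorithm from the thesis): it enumerates column subsets, detects circuits as one-dimensional kernels with full support, and takes the worst entry ratio.

from itertools import combinations
import numpy as np

def circuit_imbalance(A, tol=1e-9):
    """Brute-force circuit imbalance of A: the maximum of |g_i / g_j|
    over nonzero entries of circuits g (support-minimal kernel vectors)."""
    m, n = A.shape
    best = 1.0
    for size in range(2, n + 1):
        for cols in combinations(range(n), size):
            S = A[:, cols]
            # cols supports a circuit iff the kernel of S is one-dimensional
            # and its spanning vector has no zero entry (support-minimality).
            _, s, Vt = np.linalg.svd(S)
            null_dim = sum(1 for sv in s if sv < tol) + max(0, size - len(s))
            if null_dim != 1:
                continue
            g = Vt[-1]                    # kernel vector of S
            if np.min(np.abs(g)) < tol:   # zero entry: not support-minimal
                continue
            best = max(best, np.max(np.abs(g)) / np.min(np.abs(g)))
    return best

# Small example: the circuit on columns 0 and 3 has kernel vector (2, -1),
# so the imbalance is 2.
A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 0.0]])
print(circuit_imbalance(A))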
Geometric aspects of linear programming: shadow paths, central paths, and a cutting plane method
Most everyday algorithms are well understood; predictions made theoretically
about them closely match what we observe in practice. This is not the case for
all algorithms, however, and some are still poorly understood on a theoretical level.
This holds for many algorithms used to solve optimization problems from operations research.
Solving such optimization problems is essential in many industries and is done every day.
One important class of such optimization problems is Linear Programming.
A handful of different algorithms are popular in practice,
among them one that has been in use for almost 80 years.
Nonetheless, our theoretical understanding of these algorithms is limited.
This thesis makes progress towards a better understanding of these key algorithms
for linear programming, among which are the simplex method, interior point methods,
and cutting plane methods.
Optimization and Applications
Proceedings of a workshop devoted to optimization problems, their theory and resolution, and above all their applications. The topics covered existence and stability of solutions; the design, analysis, development, and implementation of algorithms; and applications in mechanics, telecommunications, medicine, and operations research.
Polynomial-Time Amoeba Neighborhood Membership and Faster Localized Solving
We derive efficient algorithms for coarse approximation of algebraic
hypersurfaces, useful for estimating the distance between an input polynomial
zero set and a given query point. Our methods work best on sparse polynomials
of high degree (in any number of variables) but are nevertheless completely
general. The underlying ideas, which we take the time to describe in an
elementary way, come from tropical geometry. We thus reduce a hard algebraic
problem to high-precision linear optimization, proving new upper and lower
complexity estimates along the way.
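To make the tropical reduction concrete, here is a minimal sketch under our own conventions; the polynomial, the names, and the gap heuristic are illustrative, and the paper's actual membership test is more refined. For f(z) = sum_a c_a z^a, the amoeba of f is coarsely approximated by the set of points w (in log-absolute-value coordinates) where the maximum of log|c_a| + <a, w> is attained by at least two terms; the gap between the two largest terms serves as a cheap certificate of distance from that piecewise-linear approximation.

import numpy as np

def tropical_gap(terms, w):
    """terms: list of (coeff, exponent_vector); w: log-absolute query point.

    Returns the gap between the largest and second-largest values of
    log|c_a| + <a, w>. A gap near 0 means w lies close to the tropical
    approximation of the zero set; a large gap certifies distance from it.
    """
    vals = sorted(
        (np.log(abs(c)) + np.dot(a, w) for (c, a) in terms),
        reverse=True,
    )
    return vals[0] - vals[1]

# f(x, y) = 1 + x^3 + y^3 + 100*x*y : a sparse bivariate example.
terms = [(1.0, (0, 0)), (1.0, (3, 0)), (1.0, (0, 3)), (100.0, (1, 1))]
# At the origin the 100*x*y term dominates: large gap, far from the zero set.
print(tropical_gap(terms, np.array([0.0, 0.0])))
# At w = (log 100, log 100) three terms tie: gap 0, on the tropical set.
print(tropical_gap(terms, np.array([np.log(100.0)] * 2)))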