27 research outputs found

    Stable computation of search directions for near-degenerate linear programming problems

    A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix

    Following the breakthrough work of Tardos (Oper. Res. '86) in the bit-complexity model, Vavasis and Ye (Math. Prog. '96) gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) max c^⊀x, Ax = b, x ≄ 0, A ∈ ℝ^{m×n}, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that O(n^{3.5} log(χ̄_A + n)) iterations suffice to solve (LP) exactly, where χ̄_A is a condition measure controlling the size of solutions to linear systems related to A. Monteiro and Tsuchiya (SIAM J. Optim. '03), noting that the central path is invariant under rescalings of the columns of A and c, asked whether there exists an LP algorithm depending instead on the measure χ̄*_A, defined as the minimum χ̄_{AD} value achievable by a column rescaling AD of A, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an O(m^2 n^2 + n^3) time algorithm which works on the linear matroid of A to compute a nearly optimal diagonal rescaling D satisfying χ̄_{AD} ≀ n(χ̄*)^3. This algorithm also allows us to approximate the value of χ̄_A up to a factor n(χ̄*)^2. This result is in (surprising) contrast to that of Tunçel (Math. Prog. '99), who showed NP-hardness for approximating χ̄_A to within 2^{poly(rank(A))}. The key insight for our algorithm is to work with ratios g_i/g_j of circuits of A, i.e., minimal linear dependencies Ag = 0, which allow us to approximate the value of χ̄*_A by a maximum geometric mean cycle computation in what we call the 'circuit ratio digraph' of A. While this resolves Monteiro and Tsuchiya's question by appropriate preprocessing, it falls short of providing either a truly scaling invariant algorithm or an improvement upon the base LLS analysis. In this vein, as our second main contribution we develop a scaling invariant LLS algorithm, which uses and dynamically maintains improving estimates of the circuit ratio digraph, together with a refined potential function based analysis for LLS algorithms in general. With this analysis, we derive an improved O(n^{2.5} log n log(χ̄*_A + n)) iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor n/log n improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
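
    The maximum geometric mean cycle step mentioned above is a standard computation once the circuit ratio digraph is in hand. Below is a minimal Python sketch of that step alone, assuming the vertices and circuit ratios have already been computed (the toy graph, ratios, and function name are illustrative, not the paper's implementation): taking logarithms turns the geometric mean of a cycle into an arithmetic mean, which Karp's dynamic program then maximizes.

        import math

        def max_geometric_mean_cycle(n, edges):
            """Karp-style DP for the maximum geometric mean cycle.

            n     : number of vertices, labelled 0..n-1
            edges : list of (u, v, ratio) with ratio > 0; we use log-weights
                    so the geometric mean cycle becomes an arithmetic one.
            Returns the maximum geometric mean over all directed cycles,
            or None if the digraph is acyclic.
            """
            NEG = float("-inf")
            # D[k][v] = maximum log-weight of a walk with exactly k edges ending at v
            D = [[0.0] * n] + [[NEG] * n for _ in range(n)]
            for k in range(1, n + 1):
                for u, v, r in edges:
                    if D[k - 1][u] > NEG:
                        w = D[k - 1][u] + math.log(r)
                        if w > D[k][v]:
                            D[k][v] = w
            best = NEG
            for v in range(n):
                if D[n][v] == NEG:
                    continue  # no walk of length n ends at v
                # Karp: the best cycle mean through v is the min over prefixes k
                best = max(best, min((D[n][v] - D[k][v]) / (n - k) for k in range(n)))
            return math.exp(best) if best > NEG else None

        # Toy circuit-ratio digraph on 3 variables (ratios are illustrative only).
        edges = [(0, 1, 4.0), (1, 2, 2.0), (2, 0, 8.0), (1, 0, 0.5)]
        print(max_geometric_mean_cycle(3, edges))  # 4.0 = (4 * 2 * 8)^(1/3)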

    Revisiting Tardos's framework for linear programming: faster exact solutions using approximate solvers

    In breakthrough work, Tardos (Oper. Res. '86) gave a proximity based framework for solving linear programming (LP) in time depending only on the constraint matrix in the bit complexity model. In Tardos's framework, one reduces solving the LP min ⟹c, x⟩, Ax = b, x ≄ 0, A ∈ â„€^{m×n}, to solving O(nm) LPs in A having small integer coefficient objectives and right-hand sides using any exact LP algorithm. This gives rise to an LP algorithm in time poly(n, m, log Δ_A), where Δ_A is the largest subdeterminant of A. A significant extension to the real model of computation was given by Vavasis and Ye (Math. Prog. '96), giving a specialized interior point method that runs in time poly(n, m, log χ̄_A), depending on Stewart's χ̄_A, a well-studied condition number. In this work, we extend Tardos's original framework to obtain such a running time dependence. In particular, we replace the exact LP solves with approximate ones, enabling us to directly leverage the tremendous recent algorithmic progress for approximate linear programming. More precisely, we show that the fundamental "accuracy" needed to exactly solve any LP in A is inverse polynomial in n and log χ̄_A. Plugging in the recent algorithm of van den Brand (SODA '20), our method computes an optimal primal and dual solution using O(mn^{ω+1+o(1)} log(χ̄_A + n)) arithmetic operations, outperforming the specialized interior point method of Vavasis and Ye and its recent improvement by Dadush et al. (STOC '20). By applying the preprocessing algorithm of the latter paper, the dependence can also be reduced from χ̄_A to χ̄*_A, the minimum value of χ̄_{AD} attainable via column rescalings. Our framework is applicable to achieve the poly(n, m, log χ̄*_A) bound using essentially any weakly polynomial LP algorithm, such as the ellipsoid method. At a technical level, our framework combines together approximate LP solutions to compute exact ones, making use of constructive proximity theorems, which bound the distance between solutions of "nearby" LPs, to keep the required accuracy low.
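
    To make the "combine approximate solutions into an exact one" idea concrete, here is a toy iterative refinement loop in the spirit of such frameworks. This is a generic LP refinement sketch under our own assumptions, not the paper's algorithm: scipy.optimize.linprog stands in for the black-box approximate solver, and the function name and scaling schedule are illustrative. Each round solves a corrector LP on the scaled-up residual, so the accuracy of the combined solution improves geometrically with the number of rounds.

        import numpy as np
        from scipy.optimize import linprog

        def refine_lp(c, A, b, rounds=3):
            """Toy iterative refinement for min c.x s.t. Ax = b, x >= 0.

            Each round calls the solver on a corrector LP whose right-hand
            side is the current residual, scaled up so a fixed relative
            accuracy per solve yields ever more correct digits overall.
            """
            x = np.zeros(len(c))
            scale = 1.0
            for _ in range(rounds):
                r = b - A @ x  # primal residual of the current iterate
                # corrector in scaled variables y = scale * d, with x + d >= 0
                res = linprog(scale * np.asarray(c), A_eq=A, b_eq=scale * r,
                              bounds=[(-scale * xi, None) for xi in x])
                if not res.success:
                    break
                x = x + res.x / scale  # fold the correction back in
                scale *= 1e3           # demand ~3 more digits next round
            return x

        # Tiny example: min x0 + x1 s.t. x0 + 2*x1 = 4, x >= 0; optimum (0, 2).
        A = np.array([[1.0, 2.0]])
        b = np.array([4.0])
        print(refine_lp([1.0, 1.0], A, b))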

    Pre-Conditioners and Relations between Different Measures of Conditioning for Conic Linear Systems

    In recent years, new and powerful research into "condition numbers" for convex optimization has been developed, aimed at capturing the intuitive notion of problem behavior. This research has been shown to be important in studying the efficiency of algorithms, including interior-point algorithms, for convex optimization as well as other behavioral characteristics of these problems such as problem geometry, deformation under data perturbation, etc. This paper studies measures of conditioning for a conic linear system of the form (FP_d): Ax = b, x ∈ C_X, whose data is d = (A, b). We present a new measure of conditioning, denoted ”_d, and we show implications of ”_d for problem geometry and algorithm complexity, and demonstrate that the value of ”_d is independent of the specific data representation of (FP_d). We then prove certain relations among a variety of condition measures for (FP_d), including ”_d, ρ_d, χ̄_d, and C(d). We discuss some drawbacks of using the condition number C(d) as the sole measure of conditioning of a conic linear system, and we then introduce the notion of a "pre-conditioner" for (FP_d) which results in an equivalent formulation (FP_d̃) of (FP_d) with a better condition number C(d̃). We characterize the best such pre-conditioner and provide an algorithm for constructing an equivalent data instance d̃ whose condition number C(d̃) is within a known factor of the best possible.
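
    As a down-to-earth illustration of why pre-conditioning helps, the snippet below rescales the rows of an ill-scaled system, an equivalent reformulation in the sense above. Renegar's C(d) is expensive to evaluate, so as a crude, hypothetical proxy we watch the ordinary matrix condition number instead; the matrix and the scaling rule are illustrative only.

        import numpy as np

        # A deliberately ill-scaled system matrix: the first row lives on a
        # scale a million times larger than the second.
        A = np.array([[1e6, 2e6],
                      [1.0, 3.0]])

        # Diagonal row pre-conditioner D: rescale each row to unit Euclidean
        # norm, giving the equivalent reformulation D A x = D b of A x = b.
        D = np.diag(1.0 / np.linalg.norm(A, axis=1))

        print(np.linalg.cond(A))      # ~5e6: badly conditioned
        print(np.linalg.cond(D @ A))  # ~14: equilibration repairs the scaling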

Exact linear programming: circuits, curvature, and diameter

    We study Linear Programming (LP) and present novel algorithms. In particular, we study LP in the context of circuits, which are support-minimal vectors of linear spaces. Our results are stated in terms of the circuit imbalance (CI), the worst-case ratio between nonzero entries of circuits, whose properties we study in detail. We present the following results, all with logarithmic dependency on CI. (i) A scaling-invariant interior point method that solves LP in time polynomial in the dimensions, answering an open question by Monteiro-Tsuchiya in the affirmative; this closes a long line of work by Vavasis-Ye and Monteiro-Tsuchiya. (ii) A new polynomial-time path-following interior point method in which the number of iterations admits a singly exponential upper bound; this complements recent results showing that path-following methods must take at least exponentially many iterations in the worst case. (iii) Similar upper bounds on a natural notion of curvature of the central path. (iv) A black-box algorithm that requires only quadratically many calls to an approximate LP solver to solve LP exactly; this significantly strengthens the framework of Tardos, which requires exact solvers and whose runtime is logarithmic in the maximum subdeterminant of the constraint matrix, a quantity exponentially bigger than CI already for fundamental combinatorial problems such as matchings. (v) A circuit diameter bound that is quadratic in the number of variables, giving the first polynomial bound for general LP, where CI can be exponential; unlike in the simplex method, one does not have to augment along the edges of the polyhedron, as augmentations can be in any circuit direction. (vi) Lastly, an accelerated version of the Newton–Dinkelbach method, which extends the black-box framework to certain classes of fractional and parametric optimization problems; using the Bregman divergence as a potential in conjunction with combinatorial arguments, we obtain improved runtimes over the non-accelerated version.
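
    Since the circuit imbalance measure drives all of these bounds, a brute-force computation on a tiny example may help fix ideas. The sketch below enumerates support-minimal dependent column sets of a small matrix and reports the largest ratio between nonzero entries of the corresponding kernel vectors. It is exponential-time and purely illustrative; the function name and tolerance are our own, not the thesis's.

        import itertools
        import numpy as np

        def circuit_imbalance(A, tol=1e-9):
            """Brute-force circuit imbalance of a small matrix A.

            A circuit is a support-minimal nonzero kernel vector g (Ag = 0);
            the imbalance is the largest ratio |g_i| / |g_j| over nonzero
            entries of any circuit.
            """
            m, n = A.shape
            best = 1.0
            for size in range(2, n + 1):
                for S in itertools.combinations(range(n), size):
                    B = A[:, S]
                    # a minimal dependent set of columns has rank exactly size-1
                    if np.linalg.matrix_rank(B, tol=tol) != size - 1:
                        continue
                    # support-minimality: every proper subset must be independent
                    cols = list(range(size))
                    if any(np.linalg.matrix_rank(B[:, cols[:k] + cols[k+1:]], tol=tol)
                           < size - 1 for k in range(size)):
                        continue
                    # the 1-dimensional kernel of B gives the circuit vector
                    _, _, Vt = np.linalg.svd(B)
                    g = Vt[-1]
                    nz = np.abs(g[np.abs(g) > tol])
                    if len(nz) == size:  # numerical guard: full support
                        best = max(best, nz.max() / nz.min())
            return best

        # Node-edge incidence matrices are totally unimodular: imbalance 1.
        A = np.array([[1.0, 1.0, 0.0],
                      [-1.0, 0.0, 1.0]])
        print(circuit_imbalance(A))  # expected 1.0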

Geometric aspects of linear programming: shadow paths, central paths, and a cutting plane method

    Most everyday algorithms are well understood; predictions made about them in theory closely match what we observe in practice. This is not the case for all algorithms, however, and some are still poorly understood at a theoretical level. This holds for many algorithms used to solve optimization problems from operations research. Solving such optimization problems is essential in many industries and is done every day; one important example is Linear Programming. A handful of different algorithms are popular in practice, among them one that has been in use for almost 80 years. Nonetheless, our theoretical understanding of these algorithms is limited. This thesis makes progress towards a better understanding of these key algorithms for linear programming: the simplex method, interior point methods, and cutting plane methods.

    Polynomial-Time Amoeba Neighborhood Membership and Faster Localized Solving

    We derive efficient algorithms for coarse approximation of algebraic hypersurfaces, useful for estimating the distance between an input polynomial zero set and a given query point. Our methods work best on sparse polynomials of high degree (in any number of variables) but are nevertheless completely general. The underlying ideas, which we take the time to describe in an elementary way, come from tropical geometry. We thus reduce a hard algebraic problem to high-precision linear optimization, proving new upper and lower complexity estimates along the way.
    Comment: 15 pages, 9 figures. Submitted to a conference proceedings.
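
    The tropical viewpoint is easy to demo in code. Below is a rough sketch, with our own illustrative names rather than the paper's, of the kind of quantity such methods exploit: at a query point, compare the largest and second-largest terms of the Archimedean tropicalization max_a(log|c_a| + ⟹a, log|z|⟩); a large gap certifies that a single monomial dominates f, placing the point far from the amoeba of the zero set.

        import numpy as np

        def archtrop_gap(coeffs, exponents, z_abs):
            """Dominance gap of the Archimedean tropicalization of f at |z|.

            f(z) = sum_a c_a z^a. Returns (index of the dominant monomial,
            log-gap to the runner-up). A large gap means one term of f
            outweighs all others, so |z| lies well outside the amoeba.
            """
            w = np.log(np.asarray(z_abs, dtype=float))
            vals = (np.log(np.abs(np.asarray(coeffs, dtype=float)))
                    + np.asarray(exponents) @ w)
            top, second = np.sort(vals)[-2:][::-1]
            return int(np.argmax(vals)), float(top - second)

        # f(x, y) = 1 + x + y: near (1, 1) all three terms balance (gap 0,
        # close to the amoeba); at (100, 1) the x term dominates.
        exps = [(0, 0), (1, 0), (0, 1)]
        print(archtrop_gap([1, 1, 1], exps, (1.0, 1.0)))    # gap 0.0
        print(archtrop_gap([1, 1, 1], exps, (100.0, 1.0)))  # gap ~ 4.6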