    Strongly Polynomial Frame Scaling to High Precision

    The frame scaling problem is: given vectors $U := \{u_1, \dots, u_n\} \subseteq \mathbb{R}^d$, marginals $c \in \mathbb{R}^n_{++}$, and precision $\varepsilon > 0$, find left and right scalings $L \in \mathbb{R}^{d \times d}$, $r \in \mathbb{R}^n$ such that $(v_1, \dots, v_n) := (L u_1 r_1, \dots, L u_n r_n)$ simultaneously satisfies $\sum_{i=1}^{n} v_i v_i^{\mathsf{T}} = I_d$ and $\|v_j\|_2^2 = c_j$ for all $j \in [n]$, up to error $\varepsilon$. This problem has appeared in a variety of fields throughout linear algebra and computer science. In this work, we give a strongly polynomial algorithm for frame scaling with $\log(1/\varepsilon)$ convergence. This answers a question of Diakonikolas, Tzamos and Kane (STOC 2023), who gave the first strongly polynomial randomized algorithm with $\mathrm{poly}(1/\varepsilon)$ convergence for the special case $c = \frac{d}{n} 1_n$. Our algorithm is deterministic, applies for general $c \in \mathbb{R}^n_{++}$, and requires $O(n^3 \log(n/\varepsilon))$ iterations, as compared to the $O(n^5 d^{11}/\varepsilon^5)$ iterations of DTK. By lifting the framework of Linial, Samorodnitsky and Wigderson (Combinatorica 2000) for matrix scaling to frames, we are able to simplify both the algorithm and the analysis. Our main technical contribution is to generalize the potential analysis of LSW to the frame setting and to compute, in strongly polynomial time, an update step that achieves geometric progress in each iteration. In fact, we can adapt our results to give an improved analysis of strongly polynomial matrix scaling, reducing the $O(n^5 \log(n/\varepsilon))$ iteration bound of LSW to $O(n^3 \log(n/\varepsilon))$. Additionally, we prove a novel bound on the size of approximate frame scaling solutions, involving the condition measure $\bar{\chi}$ studied in the linear programming literature, which may be of independent interest.
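
    The two scaling conditions are simple to state in code, and the classical alternating heuristic (whiten the frame operator, then fix the column norms, and repeat) makes them concrete. The sketch below is only that folklore heuristic, under the assumptions that the input vectors span $\mathbb{R}^d$ and $\sum_j c_j = d$ (the trace condition needed for feasibility); it is not the strongly polynomial algorithm of the paper, and all names in it are illustrative.

        import numpy as np

        def alternating_frame_scaling(U, c, eps=1e-9, max_iter=10000):
            # Folklore alternating heuristic for frame scaling (illustration
            # only, NOT the strongly polynomial algorithm of the paper).
            # U: d x n matrix with columns u_1, ..., u_n spanning R^d.
            # c: positive marginals; feasibility requires sum(c) == d, since
            #    trace(sum_j v_j v_j^T) = sum_j ||v_j||^2.
            d, n = U.shape
            V = U.astype(float).copy()
            for _ in range(max_iter):
                # Left step: whiten so that sum_j v_j v_j^T = I_d.
                C = np.linalg.cholesky(V @ V.T)  # V @ V.T is the frame operator
                V = np.linalg.solve(C, V)        # V <- C^{-1} V
                # Right step: rescale each column to squared norm c_j.
                V = V * np.sqrt(c / np.sum(V * V, axis=0))
                # Column norms are now exact; measure the remaining error
                # in the frame operator condition.
                if np.linalg.norm(V @ V.T - np.eye(d)) <= eps:
                    break
            return V

        # Example: random frame with uniform marginals c = (d/n) 1_n.
        rng = np.random.default_rng(0)
        d, n = 3, 8
        U = rng.standard_normal((d, n))
        V = alternating_frame_scaling(U, np.full(n, d / n))
        print(np.linalg.norm(V @ V.T - np.eye(d)))  # ~0
        print(np.sum(V * V, axis=0))                # ~c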

    A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix

    Following the breakthrough work of Tardos (Oper. Res. '86) in the bit-complexity model, Vavasis and Ye (Math. Prog. '96) gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) $\max\, c^\top x,\ Ax = b,\ x \geq 0,\ A \in \mathbb{R}^{m \times n}$, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that $O(n^{3.5} \log(\bar{\chi}_A + n))$ iterations suffice to solve (LP) exactly, where $\bar{\chi}_A$ is a condition measure controlling the size of solutions to linear systems related to $A$. Monteiro and Tsuchiya (SIAM J. Optim. '03), noting that the central path is invariant under rescalings of the columns of $A$ and $c$, asked whether there exists an LP algorithm depending instead on the measure $\bar{\chi}^*_A$, defined as the minimum $\bar{\chi}_{AD}$ value achievable by a column rescaling $AD$ of $A$, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an $O(m^2 n^2 + n^3)$ time algorithm which works on the linear matroid of $A$ to compute a nearly optimal diagonal rescaling $D$ satisfying $\bar{\chi}_{AD} \leq n (\bar{\chi}^*)^3$. This algorithm also allows us to approximate the value of $\bar{\chi}_A$ up to a factor $n (\bar{\chi}^*)^2$. This result is in (surprising) contrast to that of Tunçel (Math. Prog. '99), who showed NP-hardness for approximating $\bar{\chi}_A$ to within $2^{\mathrm{poly}(\mathrm{rank}(A))}$. The key insight for our algorithm is to work with ratios $g_i/g_j$ of circuits of $A$, i.e., minimal linear dependencies $Ag = 0$, which allow us to approximate the value of $\bar{\chi}^*_A$ by a maximum geometric mean cycle computation in what we call the 'circuit ratio digraph' of $A$. While this resolves Monteiro and Tsuchiya's question by appropriate preprocessing, it falls short of providing either a truly scaling invariant algorithm or an improvement upon the base LLS analysis. In this vein, as our second main contribution we develop a scaling invariant LLS algorithm, which uses and dynamically maintains improving estimates of the circuit ratio digraph, together with a refined potential function based analysis for LLS algorithms in general. With this analysis, we derive an improved $O(n^{2.5} \log n \log(\bar{\chi}^*_A + n))$ iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor $n/\log n$ improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
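
    For intuition on the "maximum geometric mean cycle computation": once a digraph with positive arc weights is in hand (here the circuit ratio digraph is taken as a given input rather than computed from $A$), the cycle maximizing the geometric mean of its weights can be found by a log transform followed by Karp's minimum mean cycle algorithm, since maximizing the mean of $\log t$ is the same as minimizing the mean of $-\log t$. A minimal sketch of that reduction, with illustrative names:

        import math

        def max_geometric_mean_cycle(n, edges):
            # Maximum geometric-mean cycle via Karp's minimum mean cycle
            # algorithm on the weights -log t (illustration only).
            # edges: list of arcs (i, j, t) on vertices 0..n-1 with t > 0.
            w = [(i, j, -math.log(t)) for (i, j, t) in edges]
            INF = float('inf')
            # d[k][v] = min weight of a walk with exactly k edges ending at v,
            # starting anywhere (equivalent to a virtual 0-weight source).
            d = [[INF] * n for _ in range(n + 1)]
            d[0] = [0.0] * n
            for k in range(1, n + 1):
                for i, j, cw in w:
                    if d[k - 1][i] + cw < d[k][j]:
                        d[k][j] = d[k - 1][i] + cw
            # Karp: min mean cycle = min_v max_k (d_n(v) - d_k(v)) / (n - k).
            best = INF
            for v in range(n):
                if d[n][v] == INF:
                    continue
                best = min(best, max((d[n][v] - d[k][v]) / (n - k)
                                     for k in range(n) if d[k][v] < INF))
            return None if best == INF else math.exp(-best)

        # A 2-cycle with ratios 4 and 1 has geometric mean sqrt(4 * 1) = 2.
        print(max_geometric_mean_cycle(2, [(0, 1, 4.0), (1, 0, 1.0)]))  # 2.0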

    Exact linear programming: circuits, curvature, and diameter

    We study Linear Programming (LP) and present novel algorithms. In particular, we study LP in the context of circuits, which are support-minimal vectors of linear spaces. Our results are stated in terms of the circuit imbalance (CI), which is the worst-case ratio of nonzero entries of circuits and whose properties we study in detail. We present the following results with logarithmic dependency on CI. (i) A scaling-invariant interior point method, which solves LP in time that is polynomial in the dimensions, answering an open question by Monteiro-Tsuchiya in the affirmative. This closes a long line of work by Vavasis-Ye and Monteiro-Tsuchiya. (ii) We introduce a new polynomial-time path-following interior point method where the number of iterations admits a singly exponential upper bound. This complements recent results showing that path-following methods must take at least exponentially many iterations in the worst case. (iii) We further provide similar upper bounds on a natural notion of curvature of the central path. (iv) A black-box algorithm that requires only quadratically many calls to an approximate LP solver to solve LP exactly. This significantly strengthens the framework by Tardos, which requires exact solvers and whose runtime is logarithmic in the maximum subdeterminant of the constraint matrix. The maximum subdeterminant is exponentially bigger than CI already for fundamental combinatorial problems such as matchings. (v) Furthermore, we obtain a circuit diameter bound that is quadratic in the number of variables, giving the first polynomial bound for general LP even when CI is exponential. Unlike in the simplex method, one does not have to augment along the edges of the polyhedron: augmentations can be in any circuit direction. (vi) Lastly, we present an accelerated version of the Newton–Dinkelbach method, which extends the black-box framework to certain classes of fractional and parametric optimization problems. Using the Bregman divergence as a potential in conjunction with combinatorial arguments, we obtain improved runtimes over the non-accelerated version.
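
    To make the central quantity concrete: a circuit of $A$ is a support-minimal nonzero solution $g$ of $Ag = 0$, and the circuit imbalance is the largest ratio $|g_i/g_j|$ over nonzero entries, taken over all circuits. The following is an illustrative brute-force sketch (exponential time, suitable only for tiny matrices, and not from the thesis) that enumerates candidate circuit supports; it uses the facts that circuit supports have size at most $\mathrm{rank}(A) + 1$ and that a support $S$ carries a circuit exactly when $\ker(A_S)$ is one-dimensional and spanned by a vector with full support.

        import itertools
        import numpy as np

        def circuit_imbalance(A, tol=1e-9):
            # Brute-force circuit imbalance of A (exponential time;
            # illustration for tiny matrices only).  A circuit is a
            # support-minimal nonzero kernel vector g (Ag = 0); the
            # imbalance is the largest |g_i / g_j| over nonzero entries.
            m, n = A.shape
            r = np.linalg.matrix_rank(A)
            kappa = 1.0
            # Circuit supports have size at most rank(A) + 1.
            for size in range(1, r + 2):
                for S in itertools.combinations(range(n), size):
                    B = A[:, list(S)]
                    # S supports a circuit iff ker(A_S) is 1-dimensional
                    # and its spanning vector has full support on S.
                    if np.linalg.matrix_rank(B) != size - 1:
                        continue
                    g = np.linalg.svd(B)[2][-1]   # kernel vector of B
                    if np.min(np.abs(g)) <= tol:  # not full support
                        continue
                    a = np.abs(g)
                    kappa = max(kappa, a.max() / a.min())
            return kappa

        # Node-arc incidence matrices of directed graphs have imbalance 1
        # (their circuits are {0, +1, -1} cycle vectors); sanity check on
        # a directed triangle:
        A = np.array([[ 1.0,  0.0, -1.0],
                      [-1.0,  1.0,  0.0],
                      [ 0.0, -1.0,  1.0]])
        print(circuit_imbalance(A))  # 1.0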