4 research outputs found
Strongly Polynomial Frame Scaling to High Precision
The frame scaling problem is: given vectors u_1, ..., u_n ∈ ℝ^d, marginals c ∈ ℝ^n_{>0}, and precision
ε > 0, find left and right scalings L ∈ ℝ^{d×d} and r ∈ ℝ^n_{>0} such that the rescaled system
v_i := r_i L u_i simultaneously satisfies Σ_i v_i v_i^T = I_d and
||v_i||^2 = c_i for all i ∈ [n], up to error ε. This
problem has appeared in a variety of fields throughout linear algebra and
computer science. In this work, we give a strongly polynomial algorithm for
frame scaling with log(1/ε) convergence. This answers a question
of Diakonikolas, Tzamos and Kane (STOC 2023), who gave the first strongly
polynomial randomized algorithm with poly(1/ε) convergence for the
special case of uniform marginals c = (d/n)1_n. Our algorithm is deterministic, applies
for general marginals c, and requires substantially fewer iterations than the
algorithm of DTK. By lifting the framework of Linial,
Samorodnitsky and Wigderson (Combinatorica 2000) for matrix scaling to frames,
we are able to simplify both the algorithm and analysis. Our main technical
contribution is to generalize the potential analysis of LSW to the frame
setting and compute an update step in strongly polynomial time that achieves
geometric progress in each iteration. In fact, we can adapt our results to give
an improved analysis of strongly polynomial matrix scaling, improving on the
iteration bound of LSW. Additionally, we prove a novel bound on the size of
approximate frame scaling solutions, involving the condition measure χ̄
studied in the linear programming literature, which may be of
independent interest.
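For intuition, the two scaling conditions can be targeted by the classical alternating (Sinkhorn-style) heuristic: repeatedly rescale the columns to fix the marginals, then whiten to fix the frame condition. Below is a minimal sketch of that folklore heuristic, assuming generic inputs and marginals summing to d; it has at best poly(1/ε)-type guarantees and is not the strongly polynomial algorithm of the paper (the function name is mine).

```python
import numpy as np

def alternating_frame_scaling(U, c, eps=1e-9, max_iters=10000):
    """Folklore alternating (Sinkhorn-style) heuristic for frame scaling.

    U: d x n array whose columns are the vectors u_1, ..., u_n.
    c: length-n positive marginals with sum(c) == d.
    Returns V whose columns v_i satisfy sum_i v_i v_i^T ~ I_d and
    ||v_i||^2 ~ c_i (up to eps), when the heuristic converges.
    """
    V = np.array(U, dtype=float)
    d, n = V.shape
    for _ in range(max_iters):
        # Right scaling: fix the marginals ||v_i||^2 = c_i exactly.
        V = V * (np.sqrt(c) / np.linalg.norm(V, axis=0))
        # Left scaling: whiten so that sum_i v_i v_i^T = I_d exactly,
        # using the symmetric inverse square root of the frame matrix.
        w, Q = np.linalg.eigh(V @ V.T)
        V = (Q * w ** -0.5) @ Q.T @ V
        # Only the marginals can now be violated; stop once they are met.
        if np.max(np.abs(np.linalg.norm(V, axis=0) ** 2 - c)) <= eps:
            break
    return V
```

On a random instance with uniform marginals c_i = d/n this typically converges quickly; the point of the paper is an iteration count scaling with log(1/ε) rather than poly(1/ε).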
A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix
Following the breakthrough work of Tardos (Oper. Res. '86) in the bit-complexity model, Vavasis and Ye (Math. Prog. '96) gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) max c^T x, Ax = b, x ≥ 0, A ∈ ℝ^{m×n}, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that O(n^3.5 log(χ̄_A + n)) iterations suffice to solve (LP) exactly, where χ̄_A is a condition measure controlling the size of solutions to linear systems related to A. Monteiro and Tsuchiya (SIAM J. Optim. '03), noting that the central path is invariant under rescalings of the columns of A and c, asked whether there exists an LP algorithm depending instead on the measure χ̄*_A, defined as the minimum χ̄_AD value achievable by a column rescaling AD of A, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an O(m^2 n^2 + n^3) time algorithm which works on the linear matroid of A to compute a nearly optimal diagonal rescaling D satisfying χ̄_AD ≤ n(χ̄*_A)^3. This algorithm also allows us to approximate the value of χ̄_A up to a factor n(χ̄*_A)^2. This result is in (surprising) contrast to that of Tunçel (Math. Prog. '99), who showed NP-hardness for approximating χ̄_A to within 2^poly(rank(A)). The key insight for our algorithm is to work with ratios g_i/g_j of circuits of A, i.e., minimal linear dependencies Ag = 0, which allow us to approximate the value of χ̄*_A by a maximum geometric mean cycle computation in what we call the 'circuit ratio digraph' of A. While this resolves Monteiro and Tsuchiya's question by appropriate preprocessing, it falls short of providing either a truly scaling invariant algorithm or an improvement upon the base LLS analysis.
In this vein, as our second main contribution we develop a scaling invariant LLS algorithm, which uses and dynamically maintains improving estimates of the circuit ratio digraph, together with a refined potential function based analysis for LLS algorithms in general. With this analysis, we derive an improved O(n^2.5 log n log(χ̄*_A + n)) iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor n/log n improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
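The maximum geometric mean cycle computation mentioned above is a standard subroutine once the digraph is built: taking logarithms of the edge weights turns geometric means into arithmetic means, so Karp's minimum mean cycle algorithm applies. A hedged sketch follows, on a generic positively weighted digraph (constructing the circuit ratio digraph itself is the technical content of the paper and is not attempted here; function names are illustrative).

```python
import math

def min_mean_cycle(n, edges):
    """Karp's algorithm: minimum mean weight over all directed cycles.

    n: vertices labelled 0..n-1; edges: iterable of (u, v, weight).
    Returns None if the graph has no cycle.
    """
    INF = float('inf')
    # A super-source (vertex n) with 0-weight edges to every vertex makes
    # all vertices reachable without creating any new cycle.
    N = n + 1
    ext = list(edges) + [(n, v, 0.0) for v in range(n)]
    # D[k][v] = minimum weight of a walk with exactly k edges from the
    # super-source to v.
    D = [[INF] * N for _ in range(N + 1)]
    D[0][n] = 0.0
    for k in range(1, N + 1):
        for u, v, w in ext:
            if D[k - 1][u] < INF and D[k - 1][u] + w < D[k][v]:
                D[k][v] = D[k - 1][u] + w
    best = None
    for v in range(N):
        if D[N][v] == INF:
            continue
        val = max((D[N][v] - D[k][v]) / (N - k)
                  for k in range(N) if D[k][v] < INF)
        best = val if best is None else min(best, val)
    return best

def max_geometric_mean_cycle(n, ratio_edges):
    """Maximum geometric-mean cycle for positive edge weights: the log
    transform turns geometric means into arithmetic means."""
    mu = min_mean_cycle(n, [(u, v, -math.log(w)) for u, v, w in ratio_edges])
    return None if mu is None else math.exp(-mu)
```

For example, on a digraph containing a 3-cycle with weights 2, 4, 8 (geometric mean 4) and a 2-cycle with weights 2, 2 (geometric mean 2), the routine returns 4.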
A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix
Following the breakthrough work of Tardos in the bit-complexity model,
Vavasis and Ye gave the first exact algorithm for linear programming in the
real model of computation with running time depending only on the constraint
matrix. For solving a linear program (LP) max c^T x, Ax = b, x ≥ 0,
A ∈ ℝ^{m×n}, Vavasis and Ye developed a primal-dual
interior point method using a 'layered least squares' (LLS) step, and showed
that O(n^3.5 log(χ̄_A + n)) iterations suffice to solve (LP)
exactly, where χ̄_A is a condition measure controlling the size of
solutions to linear systems related to A.
Monteiro and Tsuchiya, noting that the central path is invariant under
rescalings of the columns of A and c, asked whether there exists an LP
algorithm depending instead on the measure χ̄*_A, defined as the
minimum χ̄_AD value achievable by a column rescaling AD of A,
and gave strong evidence that this should be the case. We resolve this open
question affirmatively.
Our first main contribution is an O(m^2 n^2 + n^3) time algorithm which
works on the linear matroid of A to compute a nearly optimal diagonal
rescaling D satisfying χ̄_AD ≤ n(χ̄*_A)^3. This
algorithm also allows us to approximate the value of χ̄_A up to a
factor n(χ̄*_A)^2. As our second main contribution, we develop a
scaling invariant LLS algorithm, together with a refined potential function
based analysis for LLS algorithms in general. With this analysis, we derive an
improved O(n^2.5 log n log(χ̄*_A + n)) iteration bound for
optimally solving (LP) using our algorithm. The same argument also yields a
factor n/log n improvement on the iteration complexity bound of the original
Vavasis-Ye algorithm.
Exact linear programming: circuits, curvature, and diameter
We study Linear Programming (LP) and present novel algorithms. In particular, we study LP in the context of circuits, which are support-minimal vectors of linear spaces. Our results are stated in terms of the circuit imbalance (CI), the worst-case ratio of nonzero entries of circuits, whose properties we study in detail. We present the following results, each with logarithmic dependency on CI.
(i) A scaling-invariant interior point method, which solves LP in time that is polynomial in the dimensions, answering an open question by Monteiro-Tsuchiya in the affirmative. This closes a long line of work by Vavasis-Ye and Monteiro-Tsuchiya.
(ii) A new polynomial-time path-following interior point method where the number of iterations admits a singly exponential upper bound. This complements recent results showing that path-following methods must take at least exponentially many iterations in the worst case.
(iii) Similar upper bounds on a natural notion of curvature of the central path.
(iv) A black-box algorithm that requires only quadratically many calls to an approximate LP solver to solve LP exactly. This significantly strengthens the framework of Tardos, which requires exact solvers and whose runtime is logarithmic in the maximum subdeterminant of the constraint matrix. The maximum subdeterminant can be exponentially bigger than CI, already for fundamental combinatorial problems such as matchings.
(v) A circuit diameter bound that is quadratic in the number of variables, giving the first such polynomial bound for general LP, even when CI is exponential. Unlike in the simplex method, one does not have to augment along the edges of the polyhedron: augmentations can be in any circuit direction.
(vi) An accelerated version of the Newton-Dinkelbach method, which extends the black-box framework to certain classes of fractional and parametric optimization problems. Using the Bregman divergence as a potential in conjunction with combinatorial arguments, we obtain improved runtimes over the non-accelerated version.
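To make the central definition concrete, the following brute-force sketch enumerates the circuits of a small matrix (support-minimal kernel vectors) and evaluates the circuit imbalance. It runs in exponential time and is purely illustrative; the names circuits and circuit_imbalance are mine, not from the work, and zero columns (size-1 circuits) are ignored for simplicity.

```python
from itertools import combinations
import numpy as np

def circuits(A, tol=1e-9):
    """Brute-force enumeration of the circuits of A: one support-minimal
    kernel vector per minimal linearly dependent set of columns.
    Exponential time; an illustration for small dense matrices only."""
    A = np.asarray(A, dtype=float)
    n = A.shape[1]
    rank = lambda cols: np.linalg.matrix_rank(A[:, list(cols)], tol=tol)
    out = []
    for size in range(2, n + 1):
        for S in combinations(range(n), size):
            if rank(S) == size:
                continue  # columns independent: no dependency here
            # minimal iff every proper subset of the columns is independent
            if all(rank(T) == size - 1 for T in combinations(S, size - 1)):
                g = np.zeros(n)
                # the chosen columns then have a 1-dimensional kernel,
                # spanned by the last right singular vector
                g[list(S)] = np.linalg.svd(A[:, list(S)])[2][-1]
                out.append(g)
    return out

def circuit_imbalance(A, tol=1e-9):
    """Circuit imbalance: the worst-case ratio |g_i / g_j| over circuits
    g of A and indices i, j in the support of g."""
    kappa = 1.0
    for g in circuits(A, tol):
        nz = np.abs(g)[np.abs(g) > tol]
        kappa = max(kappa, nz.max() / nz.min())
    return kappa
```

For a totally unimodular matrix, such as the incidence matrix of a directed graph, every circuit has entries in {0, ±1}, so the imbalance is 1; the maximum subdeterminant, by contrast, can grow with the matrix size.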