We analyze the bit complexity of efficient algorithms for fundamental
optimization problems, such as linear regression, $p$-norm regression, and
linear programming (LP). State-of-the-art algorithms are iterative, and in
terms of the number of arithmetic operations, they match the current time
complexity of multiplying two $n \times n$ matrices (up to polylogarithmic
factors). However, previous work has typically assumed infinite-precision
arithmetic, and owing to the complicated inverse maintenance techniques
involved, the actual running times of these algorithms have remained unknown.
To settle the running time and
bit complexity of these algorithms, we demonstrate that a core common
subroutine, known as \emph{inverse maintenance}, is backward-stable.
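Here backward stability carries its standard numerical-analysis meaning (this gloss is ours, stated for reference): an algorithm $\widehat{f}$ computing $f$ is backward stable if the computed result is the exact result for a slightly perturbed input,
\[
  \widehat{f}(x) = f(\tilde{x})
  \qquad \text{for some } \tilde{x} \text{ with }
  \frac{\|\tilde{x} - x\|}{\|x\|} = O(u),
\]
where $u$ denotes the unit roundoff.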
Additionally, we show that iterative approaches for solving constrained
weighted regression problems can be carried out with bounded-error
preconditioners. Specifically, we prove that linear programs can be solved
approximately in matrix multiplication time, up to polylog factors that
depend on the condition number $\kappa$ of the matrix and the inner and outer
radii of the LP. Similarly, $p$-norm regression can be solved approximately in
matrix multiplication time up to polylog factors in $\kappa$, and linear
regression can be solved approximately in input-sparsity time up to polylog
factors in $\kappa$. Furthermore, we show how to go below matrix
multiplication time for $p$-norm regression by exploiting faster solvers for
sparse linear systems.
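As a rough schematic (our notation, not the paper's formal statements): writing $\omega$ for the matrix multiplication exponent, $R$ and $r$ for the outer and inner radii of the LP, $\mathrm{nnz}(A)$ for the number of nonzeros of the input matrix, and $\epsilon$ for the target accuracy, the bounds above take the form
\begin{align*}
  \text{LP:} &\quad \widetilde{O}(n^{\omega}) \cdot \mathrm{polylog}(\kappa,\, R/r,\, 1/\epsilon),\\
  \text{$p$-norm regression:} &\quad \widetilde{O}(n^{\omega}) \cdot \mathrm{polylog}(\kappa),\\
  \text{linear regression:} &\quad \widetilde{O}(\mathrm{nnz}(A)) \cdot \mathrm{polylog}(\kappa),
\end{align*}
with the exact exponents of the polylog factors as proved in the body of the paper.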