
    The Bit Complexity of Efficient Continuous Optimization

    We analyze the bit complexity of efficient algorithms for fundamental optimization problems, such as linear regression, $p$-norm regression, and linear programming (LP). State-of-the-art algorithms are iterative, and in terms of the number of arithmetic operations, they match the current time complexity of multiplying two $n$-by-$n$ matrices (up to polylogarithmic factors). However, previous work has typically assumed infinite-precision arithmetic, and due to complicated inverse maintenance techniques, the actual running times of these algorithms are unknown. To settle the running time and bit complexity of these algorithms, we demonstrate that a core common subroutine, known as *inverse maintenance*, is backward-stable. Additionally, we show that iterative approaches for solving constrained weighted regression problems can be accomplished with bounded-error preconditioners. Specifically, we prove that linear programs can be solved approximately in matrix multiplication time multiplied by polylog factors that depend on the condition number $\kappa$ of the matrix and the inner and outer radius of the LP problem; $p$-norm regression can be solved approximately in matrix multiplication time multiplied by polylog factors in $\kappa$; and linear regression can be solved approximately in input-sparsity time multiplied by polylog factors in $\kappa$. Furthermore, we present results for achieving lower than matrix multiplication time for $p$-norm regression by utilizing faster solvers for sparse linear systems.
    Comment: 71 pages
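    Inverse maintenance keeps an explicit inverse (or factorization) current across the low-rank updates an iterative method makes, rather than refactoring from scratch each iteration. As a toy illustration of why its finite-precision behavior needs proof, the numpy sketch below (not the paper's algorithm; the sizes and update distribution are invented) maintains an inverse through Sherman-Morrison rank-one updates and tracks how the floating-point residual $\|\mathbf{A}\mathbf{A}^{-1} - \mathbf{I}\|$ drifts.

```python
import numpy as np

# Toy inverse maintenance: keep A_inv in sync with A across rank-one
# updates via Sherman-Morrison, and watch the floating-point residual.
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant start
A_inv = np.linalg.inv(A)

for step in range(1, 501):
    u = rng.standard_normal(n) / np.sqrt(n)
    v = rng.standard_normal(n) / np.sqrt(n)
    # Sherman-Morrison:
    # (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
    Au = A_inv @ u            # uses the *old* inverse
    vA = v @ A_inv
    A_inv -= np.outer(Au, vA) / (1.0 + v @ Au)
    A += np.outer(u, v)       # keep the explicit matrix in sync for the check
    if step % 100 == 0:
        residual = np.linalg.norm(A @ A_inv - np.eye(n))
        print(f"after {step} updates: ||A A_inv - I|| = {residual:.2e}")
```

    In exact arithmetic the residual would stay zero; in float64 it creeps upward with the number of updates, which is exactly the kind of error accumulation a backward-stability analysis has to bound.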

    Structured Semidefinite Programming for Recovering Structured Preconditioners

    We develop a general framework for finding approximately-optimal preconditioners for solving linear systems. Leveraging this framework, we obtain improved runtimes for fundamental preconditioning and linear system solving problems, including the following. We give an algorithm which, given positive definite $\mathbf{K} \in \mathbb{R}^{d \times d}$ with $\mathrm{nnz}(\mathbf{K})$ nonzero entries, computes an $\epsilon$-optimal diagonal preconditioner in time $\widetilde{O}(\mathrm{nnz}(\mathbf{K}) \cdot \mathrm{poly}(\kappa^\star, \epsilon^{-1}))$, where $\kappa^\star$ is the optimal condition number of the rescaled matrix. We give an algorithm which, given $\mathbf{M} \in \mathbb{R}^{d \times d}$ that is either the pseudoinverse of a graph Laplacian matrix or a constant spectral approximation of one, solves linear systems in $\mathbf{M}$ in $\widetilde{O}(d^2)$ time. Our diagonal preconditioning results improve state-of-the-art runtimes of $\Omega(d^{3.5})$ attained by general-purpose semidefinite programming, and our solvers improve state-of-the-art runtimes of $\Omega(d^{\omega})$, where $\omega > 2.3$ is the current matrix multiplication constant. We attain our results via new algorithms for a class of semidefinite programs (SDPs) we call matrix-dictionary approximation SDPs, which we leverage to solve an associated problem we call matrix-dictionary recovery.
    Comment: Merge of arXiv:1812.06295 and arXiv:2008.0172
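    The quantity $\kappa^\star$ above is the best condition number achievable by a diagonal rescaling $\mathbf{D}^{-1/2}\mathbf{K}\mathbf{D}^{-1/2}$. The sketch below is only an illustration, not the paper's SDP-based algorithm: it uses the classical Jacobi choice $\mathbf{D} = \mathrm{diag}(\mathbf{K})$ on an invented, badly scaled positive definite matrix, just to make concrete what a diagonal preconditioner does to the condition number.

```python
import numpy as np

# Diagonal preconditioning demo: compare cond(K) with cond(D^{-1/2} K D^{-1/2})
# for the classical Jacobi choice D = diag(K). This is a cheap heuristic,
# not the eps-optimal diagonal preconditioner computed in the paper.
rng = np.random.default_rng(1)
d = 300
B = rng.standard_normal((d, d + 20))
K = B @ B.T + np.eye(d)                 # positive definite, modest condition number
s = np.logspace(0, 3, d)                # badly inconsistent row/column scales
K = np.diag(s) @ K @ np.diag(s)         # K <- diag(s) K diag(s)

d_inv_sqrt = 1.0 / np.sqrt(np.diag(K))
K_pre = K * np.outer(d_inv_sqrt, d_inv_sqrt)  # D^{-1/2} K D^{-1/2}, entrywise form

print(f"cond(K)               = {np.linalg.cond(K):.2e}")
print(f"cond(D^-1/2 K D^-1/2) = {np.linalg.cond(K_pre):.2e}")
```

    Jacobi rescaling happens to undo the artificial scaling here, but it carries no optimality guarantee in general; the paper's contribution is computing a provably $\epsilon$-optimal diagonal $\mathbf{D}$ in time near-linear in $\mathrm{nnz}(\mathbf{K})$.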