A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix
Following the breakthrough work of Tardos (Oper. Res. '86) in the bit-complexity
model, Vavasis and Ye (Math. Prog. '96) gave the first exact algorithm for linear
programming in the real model of computation with running time depending only on
the constraint matrix. For solving a linear program (LP) max c·x, Ax = b, x ≥ 0,
A ∈ R^{m×n}, Vavasis and Ye developed a primal-dual interior point method using a
'layered least squares' (LLS) step, and showed that O(n^3.5 log(χ_A + n))
iterations suffice to solve (LP) exactly, where χ_A is a condition measure
controlling the size of solutions to linear systems related to A.
Monteiro and Tsuchiya (SIAM J. Optim. '03), noting that the central path is
invariant under rescalings of the columns of A and c, asked whether there exists
an LP algorithm depending instead on the measure χ*_A, defined as the minimum
χ_AD value achievable by a column rescaling AD of A, and gave strong evidence
that this should be the case. We resolve this open question affirmatively.
Our first main contribution is an O(m^2 n^2 + n^3) time algorithm which works on
the linear matroid of A to compute a nearly optimal diagonal rescaling D
satisfying χ_AD ≤ n(χ*)^3. This algorithm also allows us to approximate the value
of χ_A up to a factor n(χ*)^2. This result is in (surprising) contrast to that of
Tunçel (Math. Prog. '99), who showed NP-hardness for approximating χ_A to within
2^poly(rank(A)). The key insight for our algorithm is to work with ratios g_i/g_j
of circuits of A, i.e., minimal linear dependencies Ag = 0, which allow us to
approximate the value of χ*_A by a maximum geometric mean cycle computation in
what we call the 'circuit ratio digraph' of A.
While this resolves Monteiro and Tsuchiya's question by appropriate
preprocessing, it falls short of providing either a truly scaling invariant
algorithm or an improvement upon the base LLS analysis. In this vein, as our
second main contribution we develop a scaling invariant LLS algorithm, which uses
and dynamically maintains improving estimates of the circuit ratio digraph,
together with a refined potential function based analysis for LLS algorithms in
general. With this analysis, we derive an improved O(n^2.5 log n log(χ*_A + n))
iteration bound for optimally solving (LP) using our algorithm. The same argument
also yields a factor n/log n improvement on the iteration complexity bound of the
original Vavasis-Ye algorithm.
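The reduction from circuit ratios to a maximum geometric-mean cycle is classical enough to sketch. The Python sketch below assumes the circuit-ratio edges have already been extracted (the part the paper handles via the linear matroid of A) and computes the maximum geometric mean of a directed cycle by running Karp's minimum mean cycle recurrence on negated logarithms; the function name and input format are illustrative, not from the paper.

```python
import math

def max_geometric_mean_cycle(n, edges):
    """Maximum geometric-mean cycle via Karp's minimum mean cycle
    theorem applied to weights w = -log(ratio).

    n     : number of vertices (0..n-1); vertex 0 must reach every cycle
    edges : list of (u, v, ratio) with ratio > 0
    """
    INF = float("inf")
    w = [(u, v, -math.log(r)) for u, v, r in edges]
    # D[k][v] = minimum weight of a walk with exactly k edges from 0 to v
    D = [[INF] * n for _ in range(n + 1)]
    D[0][0] = 0.0
    for k in range(1, n + 1):
        for u, v, wt in w:
            if D[k - 1][u] + wt < D[k][v]:
                D[k][v] = D[k - 1][u] + wt
    # Karp: min mean cycle = min_v max_k (D[n][v] - D[k][v]) / (n - k)
    best = INF
    for v in range(n):
        if D[n][v] == INF:
            continue
        worst = max((D[n][v] - D[k][v]) / (n - k)
                    for k in range(n) if D[k][v] < INF)
        best = min(best, worst)
    if best == INF:
        return None  # the digraph has no reachable cycle
    return math.exp(-best)  # min mean of -log ratios = max geometric mean
```

As a sanity check, a two-cycle with ratios 4 and 1, i.e. `max_geometric_mean_cycle(2, [(0, 1, 4.0), (1, 0, 1.0)])`, returns 2.0, the geometric mean of the cycle.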
Eigenvector Synchronization, Graph Rigidity and the Molecule Problem
The graph realization problem has received a great deal of attention in
recent years, due to its importance in applications such as wireless sensor
networks and structural biology. In this paper, we extend previous work and
propose the 3D-ASAP algorithm for the graph realization problem in
R^3, given a sparse and noisy set of distance measurements. 3D-ASAP
is a divide and conquer, non-incremental and non-iterative algorithm, which
integrates local distance information into a global structure determination.
Our approach starts with identifying, for every node, a subgraph of its 1-hop
neighborhood graph, which can be accurately embedded in its own coordinate
system. In the noise-free case, the computed coordinates of the sensors in each
patch must agree with their global positioning up to some unknown rigid motion,
that is, up to translation, rotation and possibly reflection. In other words,
to every patch there corresponds an element of the Euclidean group Euc(3) of
rigid transformations in R^3, and the goal is to estimate the group
elements that will properly align all the patches in a globally consistent way.
Furthermore, 3D-ASAP successfully incorporates information specific to the
molecule problem in structural biology, in particular information on known
substructures and their orientation. In addition, we propose 3D-SP-ASAP, a
faster version of 3D-ASAP, which uses a spectral partitioning algorithm as a
preprocessing step for dividing the initial graph into smaller subgraphs. Our
extensive numerical simulations show that 3D-ASAP and 3D-SP-ASAP are very
robust to high levels of noise in the measured distances and to sparse
connectivity in the measurement graph, and compare favorably to similar
state-of-the-art localization algorithms.
Comment: 49 pages, 8 figures
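Since the abstract leans on the eigenvector synchronization machinery, a compact illustration may help. The sketch below (NumPy, a toy construction rather than 3D-ASAP's pipeline, which also handles reflections and translations over Euc(3)) synchronizes rotations only: it stacks noisy relative rotations R_i R_j^T into a symmetric block matrix, takes the top three eigenvectors, and rounds each 3×3 block to the nearest rotation.

```python
import numpy as np

def random_rotation(rng):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

def synchronize_rotations(n, edges):
    """Eigenvector synchronization over SO(3).
    edges: dict (i, j) -> noisy measurement of R_i @ R_j.T"""
    H = np.zeros((3 * n, 3 * n))
    for (i, j), Rij in edges.items():
        H[3*i:3*i+3, 3*j:3*j+3] = Rij
        H[3*j:3*j+3, 3*i:3*i+3] = Rij.T
    _, vecs = np.linalg.eigh(H)
    V = vecs[:, -3:]  # top 3 eigenvectors hold the R_i up to a global rotation
    Rs = []
    for i in range(n):
        # round each 3x3 block to the nearest rotation via SVD
        U, _, Wt = np.linalg.svd(V[3*i:3*i+3, :])
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Wt))])
        Rs.append(U @ D @ Wt)
    return Rs

# toy test: sparse random measurement graph with 10% outlier measurements
rng = np.random.default_rng(0)
n = 20
R_true = [random_rotation(rng) for _ in range(n)]
edges = {}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.4:
            good = rng.random() > 0.1
            edges[(i, j)] = R_true[i] @ R_true[j].T if good else random_rotation(rng)
Rs = synchronize_rotations(n, edges)
G = Rs[0].T @ R_true[0]  # fix the global rotation ambiguity using node 0
print(max(np.linalg.norm(R @ G - Rt) for R, Rt in zip(Rs, R_true)))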
A Quantum Interior Point Method for LPs and SDPs
We present a quantum interior point method with worst case running time
for
SDPs and for LPs, where the output of our algorithm is a pair of matrices
that are -optimal -approximate SDP solutions. The factor
is at most for SDPs and for LPs, and is
an upper bound on the condition number of the intermediate solution matrices.
For the case where the intermediate matrices for the interior point method are
well conditioned, our method provides a polynomial speedup over the best known
classical SDP solvers and interior point based LP solvers, which have a worst
case running time of and respectively. Our results
build upon recently developed techniques for quantum linear algebra and pave
the way for the development of quantum algorithms for a variety of applications
in optimization and machine learning.
Comment: 32 pages
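For context on what the quantum subroutine accelerates: each interior point iteration solves a Newton linear system, and the speedup comes from applying quantum linear algebra (plus tomography to read out the solution) to that system. Below is a minimal classical sketch of the Newton step for an LP, not the quantum algorithm itself; the normal-equations formulation and all names are standard textbook choices assumed here, not taken from the paper.

```python
import numpy as np

def newton_step(A, b, c, x, y, s, sigma=0.5):
    """One primal-dual Newton step for min c.x s.t. Ax = b, x >= 0,
    solved via the normal equations A D^2 A^T dy = rhs, with D^2 = X S^{-1}."""
    n = x.size
    rp = b - A @ x               # primal residual
    rd = c - A.T @ y - s         # dual residual
    mu = (x @ s) / n             # duality measure
    rc = sigma * mu - x * s      # centering condition: S dx + X ds = rc
    d2 = x / s
    rhs = rp + A @ ((x * rd - rc) / s)
    dy = np.linalg.solve((A * d2) @ A.T, rhs)
    ds = rd - A.T @ dy
    dx = (rc - x * ds) / s
    return dx, dy, ds

def solve(A, b, c, x, y, s, tol=1e-8):
    """Assumes a strictly feasible start: Ax = b, x > 0, s > 0."""
    while (x @ s) / x.size > tol:
        dx, dy, ds = newton_step(A, b, c, x, y, s)
        a = 1.0
        for v, dv in ((x, dx), (s, ds)):  # step length keeping x, s > 0
            if (dv < 0).any():
                a = min(a, 0.9 * np.min(v[dv < 0] / -dv[dv < 0]))
        x, y, s = x + a * dx, y + a * dy, s + a * ds
    return x, y, s
```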
Block Coordinate Descent for Sparse NMF
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data
analysis. An important variant is the sparse NMF problem which arises when we
explicitly require the learnt features to be sparse. A natural measure of
sparsity is the L0 norm; however, its optimization is NP-hard. Mixed norms,
such as the L1/L2 measure, have been shown to model sparsity robustly, based
on intuitive attributes that such measures need to satisfy. This is in contrast
to computationally cheaper alternatives such as the plain L1 norm. However,
present algorithms designed for optimizing the mixed norm L1/L2 are slow,
and other formulations for sparse NMF have been proposed, such as those based on
the L1 and L0 norms. Our proposed algorithm allows us to solve the mixed norm
sparsity constraints while not sacrificing computation time. We present
experimental evidence on real-world datasets that shows our new algorithm
performs an order of magnitude faster compared to the current state-of-the-art
solvers optimizing the mixed norm, and is suitable for large-scale datasets.
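To make block coordinate descent for NMF concrete, here is a hedged sketch: HALS-style block coordinate descent with a plain L1 penalty on the activations. This uses the simpler L1 surrogate, not the paper's exact L1/L2-constrained updates, and the function name and defaults are illustrative.

```python
import numpy as np

def sparse_nmf_hals(V, r, lam=0.1, iters=200, seed=0):
    """NMF V ~ W H by block coordinate descent (HALS), with an L1
    penalty lam * sum(H) promoting sparse activations."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        # update H one component (row) at a time, with L1 shrinkage
        WtV, WtW = W.T @ V, W.T @ W
        for k in range(r):
            num = WtV[k] - WtW[k] @ H + WtW[k, k] * H[k] - lam
            H[k] = np.maximum(0.0, num / max(WtW[k, k], 1e-12))
        # update W one column at a time (no sparsity penalty on W)
        VHt, HHt = V @ H.T, H @ H.T
        for k in range(r):
            num = VHt[:, k] - W @ HHt[:, k] + HHt[k, k] * W[:, k]
            W[:, k] = np.maximum(0.0, num / max(HHt[k, k], 1e-12))
    return W, H

# example: factor a random nonnegative matrix into 5 sparse components
V = np.abs(np.random.default_rng(1).standard_normal((50, 80)))
W, H = sparse_nmf_hals(V, r=5)
```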
Probabilistic analysis of a differential equation for linear programming
In this paper we address the complexity of solving linear programming
problems with a set of differential equations that converge to a fixed point
that represents the optimal solution. Assuming a probabilistic model, where the
inputs are i.i.d. Gaussian variables, we compute the distribution of the
convergence rate to the attracting fixed point. Using the framework of Random
Matrix Theory, we derive a simple expression for this distribution in the
asymptotic limit of large problem size. In this limit, we find that the
distribution of the convergence rate is a scaling function, namely it is a
function of one variable that is a combination of three parameters: the number
of variables, the number of constraints and the convergence rate, rather than a
function of these parameters separately. We also estimate numerically the
distribution of computation times, namely the time required to reach a vicinity
of the attracting fixed point, and find that it is also a scaling function.
Using the problem size dependence of the distribution functions, we derive high
probability bounds on the convergence rates and on the computation times.
Comment: 1+37 pages, LaTeX, 5 eps figures. Version accepted for publication in the Journal of Complexity. Changes made: presentation reorganized for clarity; expanded discussion of the measure of complexity in the non-asymptotic regime (added a new section).
A geometric method for model reduction of biochemical networks with polynomial rate functions
Model reduction of biochemical networks relies on the knowledge of slow and
fast variables. We provide a geometric method, based on the Newton polytope, to
identify slow variables of a biochemical network with polynomial rate
functions. The gist of the method is the notion of tropical equilibration that
provides approximate descriptions of slow invariant manifolds. Compared to
extant numerical algorithms such as the intrinsic low dimensional manifold
method, our approach is symbolic and utilizes orders of magnitude instead of
precise values of the model parameters. Application of this method to a large
collection of biochemical network models supports the idea that the number of
dynamical variables in minimal models of cell physiology can be small, in spite
of the large number of molecular regulatory actors.
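Tropical equilibration reduces to a min-plus balance condition and is simple to state in code: each monomial rate has an order gamma + alpha·a in a small parameter eps, and a species is equilibrated when its fastest production term balances its fastest consumption term. The sketch below brute-forces integer concentration orders for a Michaelis-Menten toy network; the parameter orders gamma are illustrative choices, not values from the paper's model collection.

```python
import itertools

# Irreversible Michaelis-Menten mechanism S + E <-> C -> E + P, in species
# (s, c):  ds/dt = -k1*e0*s + k1*s*c + km1*c
#          dc/dt =  k1*e0*s - k1*s*c - (km1 + k2)*c
# Each monomial is (gamma, alpha): its rate has order eps**(gamma + alpha . a),
# where a = (a_s, a_c) are the unknown concentration orders of magnitude.

def val(term, a):
    gamma, alpha = term
    return gamma + sum(e * ai for e, ai in zip(alpha, a))

species = {
    "s": {"prod": [(1, (1, 1)), (2, (0, 1))],   # k1*s*c, km1*c
          "cons": [(0, (1, 0))]},               # k1*e0*s
    "c": {"prod": [(0, (1, 0))],                # k1*e0*s
          "cons": [(1, (1, 1)), (2, (0, 1))]},  # k1*s*c, (km1+k2)*c
}

# brute force: with eps < 1, the minimal valuation is the fastest term, so
# equilibration demands min over production = min over consumption, per species
for a in itertools.product(range(-3, 4), repeat=2):
    if all(min(val(t, sp["prod"][0] and a) for t in sp["prod"]) ==
           min(val(t, a) for t in sp["cons"]) for sp in species.values()
           for t in [None]) if False else all(
           min(val(t, a) for t in sp["prod"]) ==
           min(val(t, a) for t in sp["cons"]) for sp in species.values()):
        print("equilibration at (a_s, a_c) =", a)
```

The brute-force grid is only viable for toy networks; the paper's symbolic approach is what makes this scale to large model collections.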
Parameter Selection and Pre-Conditioning for a Graph Form Solver
In a recent paper, Parikh and Boyd describe a method for solving a convex
optimization problem, where each iteration involves evaluating a proximal
operator and projection onto a subspace. In this paper we address the critical
practical issues of how to select the proximal parameter in each iteration, and
how to scale the original problem variables, so as to achieve reliable
practical performance. The resulting method has been implemented as an
open-source software package called POGS (Proximal Graph Solver), which targets
multi-core and GPU-based systems, and has been tested on a wide variety of
practical problems. Numerical results show that POGS can solve very large
problems (with, say, more than a billion coefficients in the data), to modest
accuracy in a few tens of seconds. As just one example, a radiation treatment
planning problem with around 100 million coefficients in the data can be solved
in a few seconds, as compared to around one hour with an interior-point method.
Comment: 28 pages, 1 figure, 1 open source implementation
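For readers unfamiliar with the graph-form setup, here is a minimal dense sketch of the splitting POGS builds on (graph projection splitting, Parikh and Boyd), shown on a lasso instance with rho = 1. It is an illustration under those assumptions, not the POGS implementation, and it omits the paper's adaptive proximal-parameter selection and variable scaling.

```python
import numpy as np

def graph_form_admm(A, prox_f, prox_g, iters=500):
    """Minimize f(y) + g(x) subject to y = Ax via ADMM, caching the
    factorization used for projection onto the graph {(x, y): y = Ax}."""
    m, n = A.shape
    Kinv = np.linalg.inv(np.eye(n) + A.T @ A)  # cached once, reused each iter
    x = xt = np.zeros(n)
    y = yt = np.zeros(m)
    for _ in range(iters):
        x_h = prox_g(x - xt)                   # proximal half-step in x
        y_h = prox_f(y - yt)                   # proximal half-step in y
        x = Kinv @ ((x_h + xt) + A.T @ (y_h + yt))
        y = A @ x                              # projection onto the graph
        xt = xt + x_h - x                      # scaled dual updates
        yt = yt + y_h - y
    return x, y

# lasso: f(y) = 0.5*||y - b||^2, g(x) = lam*||x||_1, rho = 1
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
b = rng.standard_normal(30)
lam = 0.5
prox_f = lambda v: (v + b) / 2.0                               # prox of f
prox_g = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0)  # soft threshold
x, y = graph_form_admm(A, prox_f, prox_g)
print("nonzeros in x:", int((np.abs(x) > 1e-6).sum()))
```

The cached factorization is the same design idea the solver exploits at scale: the expensive linear-algebra object is formed once and every iteration then costs only matrix-vector products and cheap proximal evaluations.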