A simple polynomial-time randomized distributed algorithm for connected row convex constraints
In this paper, we describe a simple randomized algorithm that runs in polynomial time and solves connected row convex (CRC) constraints in distributed settings. CRC constraints generalize many known tractable classes of constraints, such as 2-SAT and implicational constraints. They can model problems in many domains, including temporal reasoning and geometric reasoning, and, generally speaking, play the role of "Gaussians" in the logical world. Our simple randomized algorithm for solving them in distributed settings therefore has a number of important applications. We support our claims through empirical results. We also generalize our algorithm to tractable classes of tree convex constraints.
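The abstract cites 2-SAT as one of the tractable classes that CRC constraints generalize. As a self-contained illustration of a simple randomized polynomial-time solver on such a class, here is a sketch of Papadimitriou's random-walk algorithm for 2-SAT — an illustrative classic, not the authors' distributed CRC algorithm:

```python
import random

def random_walk_2sat(clauses, n_vars, max_steps=None, seed=0):
    """Papadimitriou's randomized 2-SAT algorithm: start from a random
    assignment and repeatedly flip one variable of an unsatisfied clause.
    Finds a satisfying assignment in O(n^2) expected flips when one exists.

    clauses: list of 2-tuples of literals; literal +i means x_i is True,
    -i means x_i is False (variables are 1-indexed).
    """
    rng = random.Random(seed)
    if max_steps is None:
        max_steps = 100 * n_vars * n_vars
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]

    def sat(lit):
        return assign[abs(lit)] == (lit > 0)

    for _ in range(max_steps):
        unsat = [c for c in clauses if not (sat(c[0]) or sat(c[1]))]
        if not unsat:
            return assign[1:]  # satisfying assignment found
        # pick a random unsatisfied clause, flip one of its variables
        lit = rng.choice(rng.choice(unsat))
        assign[abs(lit)] = not assign[abs(lit)]
    return None  # likely unsatisfiable

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = random_walk_2sat([(1, 2), (-1, 3), (-2, -3)], n_vars=3)
```

The random walk makes progress in expectation because a flipped variable agrees with any fixed satisfying assignment with probability at least 1/2.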
A Coordinate-Descent Algorithm for Tracking Solutions in Time-Varying Optimal Power Flows
Consider a polynomial optimisation problem, whose instances vary continuously
over time. We propose to use a coordinate-descent algorithm for solving such
time-varying optimisation problems. In particular, we focus on relaxations of
transmission-constrained problems in power systems.
Using the alternating-current optimal power flow (ACOPF) problem as an example, we bound from above the difference between the approximate optimal cost generated by our algorithm and the optimal cost of a relaxation using the most recent data, in terms of the properties of the instance and the rate at which the instance changes over time. We also bound the number of floating-point operations that must be performed between two updates in order to guarantee that the error remains below a given constant.
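The trade-off described above — a fixed computational budget between data updates versus tracking error — can be sketched on a toy time-varying quadratic. The function `cd_track` and the drifting right-hand side are illustrative assumptions, not the paper's transmission-constrained ACOPF relaxation:

```python
import numpy as np

def cd_track(A, b_stream, sweeps_per_update=2):
    """Track the minimizer of f_t(x) = 0.5 x^T A x - b_t^T x as b_t drifts,
    using a fixed budget of exact coordinate-minimization sweeps per update.
    A is assumed symmetric positive definite. Illustrative toy only.
    """
    n = A.shape[0]
    x = np.zeros(n)
    trajectory = []
    for b in b_stream:                      # new instance data arrives
        for _ in range(sweeps_per_update):  # limited flops between updates
            for i in range(n):
                # exact minimization over coordinate i:
                # x_i <- x_i - ((A x - b)_i) / A_ii
                x[i] -= (A[i] @ x - b[i]) / A[i, i]
        trajectory.append(x.copy())
    return trajectory

A = np.array([[2.0, 0.5], [0.5, 1.0]])
# slowly drifting right-hand side b_t
bs = [np.array([1.0, 0.0]) + 0.1 * t * np.array([0.0, 1.0]) for t in range(5)]
traj = cd_track(A, bs)
# distance between the tracked iterate and the true time-varying optimum
err = np.linalg.norm(traj[-1] - np.linalg.solve(A, bs[-1]))
```

Because each update warm-starts from the previous iterate, a small per-update budget suffices to keep the tracking error bounded when the instance drifts slowly — the qualitative behavior the paper quantifies.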
Stable Camera Motion Estimation Using Convex Programming
We study the inverse problem of estimating n locations t_1, ..., t_n (up to
global scale, translation and negation) in R^d from noisy measurements of a
subset of the (unsigned) pairwise lines that connect them, that is, from noisy
measurements of ±(t_i - t_j)/||t_i - t_j|| for some pairs (i,j) (where the
signs are unknown). This problem is at the core of the structure from motion
(SfM) problem in computer vision, where the t_i's represent camera locations
in R^3. The noiseless version of the problem, with exact line measurements,
has been considered previously under the general title of parallel rigidity
theory, mainly in order to characterize the conditions for unique realization
of locations. For noisy pairwise line measurements, current methods tend to
produce spurious solutions that are clustered around a few locations. This
sensitivity of the location estimates is a well-known problem in SfM,
especially for large, irregular collections of images.
In this paper we introduce a semidefinite programming (SDP) formulation,
specially tailored to overcome the clustering phenomenon. We further identify
the implications of parallel rigidity theory for the location estimation
problem to be well-posed, and prove exact (in the noiseless case) and stable
location recovery results. We also formulate an alternating direction method to
solve the resulting semidefinite program, and provide a distributed version of
our formulation for large numbers of locations. Specifically for the camera
location estimation problem, we formulate a pairwise line estimation method
based on robust camera orientation and subspace estimation. Lastly, we
demonstrate the utility of our algorithm through experiments on real images.
Comment: 40 pages, 12 figures, 6 tables; notation and some unclear parts updated, some typos corrected.
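To make the measurement model concrete, a minimal sketch of an unsigned pairwise line: representing the line through t_i and t_j by the rank-one projection v v^T (with v the unit direction) removes the unknown sign, since v and -v yield the same matrix. This illustrates only the inputs; the paper's SDP formulation and alternating-direction solver are not reproduced here:

```python
import numpy as np

def unsigned_line(ti, tj):
    """Represent the unsigned line through locations ti, tj as the rank-one
    projection v v^T with v = (ti - tj)/||ti - tj||. The projection is
    invariant to the unknown sign of v, modeling the '(unsigned) pairwise
    lines' of the abstract. Illustrative only."""
    v = ti - tj
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

t1 = np.array([0.0, 0.0, 1.0])
t2 = np.array([1.0, 2.0, 0.0])
G12 = unsigned_line(t1, t2)
G21 = unsigned_line(t2, t1)  # flipping the sign leaves v v^T unchanged
same = np.allclose(G12, G21)
```

The resulting matrix is a rank-one orthogonal projection (G^2 = G, trace 1), which is the natural sign-free encoding of a line direction.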
Let's Make Block Coordinate Descent Go Fast: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence
Block coordinate descent (BCD) methods are widely-used for large-scale
numerical optimization because of their cheap iteration costs, low memory
requirements, amenability to parallelization, and ability to exploit problem
structure. Three main algorithmic choices influence the performance of BCD
methods: the block partitioning strategy, the block selection rule, and the
block update rule. In this paper we explore all three of these building blocks
and propose variations for each that can lead to significantly faster BCD
methods. We (i) propose new greedy block-selection strategies that guarantee
more progress per iteration than the Gauss-Southwell rule; (ii) explore
practical issues like how to implement the new rules when using "variable"
blocks; (iii) explore the use of message-passing to compute matrix or Newton
updates efficiently on huge blocks for problems with a sparse dependency
between variables; and (iv) consider optimal active manifold identification,
which leads to bounds on the "active set complexity" of BCD methods and leads
to superlinear convergence for certain problems with sparse solutions (and in
some cases finite termination at an optimal solution). We support all of our
findings with numerical results for the classic machine learning problems of
least squares, logistic regression, multi-class logistic regression, label
propagation, and L1-regularization.
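As a point of reference for the greedy block-selection rules discussed above, here is a minimal single-coordinate Gauss-Southwell descent for least squares — an illustrative baseline the paper improves upon, not its proposed variable-block rules:

```python
import numpy as np

def gauss_southwell_cd(A, b, iters=200):
    """Greedy (Gauss-Southwell) coordinate descent for 0.5 * ||A x - b||^2:
    at each iteration, pick the coordinate with the largest-magnitude
    partial derivative and take an exact minimizing step along it."""
    n = A.shape[1]
    col_sq = (A ** 2).sum(axis=0)     # per-coordinate curvature A_i^T A_i
    x = np.zeros(n)
    r = A @ x - b                     # residual, kept up to date
    for _ in range(iters):
        g = A.T @ r                   # full gradient
        i = int(np.argmax(np.abs(g))) # Gauss-Southwell: steepest coordinate
        step = g[i] / col_sq[i]       # exact minimization along coordinate i
        x[i] -= step
        r -= step * A[:, i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)
x = gauss_southwell_cd(A, b)
# should closely match the least-squares solution
gap = np.linalg.norm(x - np.linalg.lstsq(A, b, rcond=None)[0])
```

Gauss-Southwell selection guarantees at least as much per-iteration progress as a random coordinate; the paper's new rules are designed to guarantee strictly more.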