Large-scale Binary Quadratic Optimization Using Semidefinite Relaxation and Applications
In computer vision, many problems such as image segmentation, pixel
labelling, and scene parsing can be formulated as binary quadratic programs
(BQPs). For submodular problems, cut-based methods can be employed to solve
large-scale instances efficiently. However, general nonsubmodular problems are
significantly more challenging. Finding a solution for problems large enough to
be of practical interest, however, typically requires relaxation. Two standard
relaxation methods are widely used for solving general BQPs--spectral methods
and semidefinite programming (SDP), each with its own advantages and
disadvantages. Spectral relaxation is simple and
easy to implement, but its bound is loose. Semidefinite relaxation has a
tighter bound, but its computational complexity is high, especially for large
scale problems. In this work, we present a new SDP formulation for BQPs, with
two desirable properties. First, it has a similar relaxation bound to
conventional SDP formulations. Second, compared with conventional SDP methods,
the new SDP formulation leads to a significantly more efficient and scalable
dual optimization approach, which has the same degree of complexity as spectral
methods. We then propose two solvers, namely, quasi-Newton and smoothing Newton
methods, for the dual problem. Both are significantly more efficient than
standard interior-point methods. In practice, the smoothing Newton solver
is faster than the quasi-Newton solver for dense or medium-sized problems,
while the quasi-Newton solver is preferable for large sparse/structured
problems. Our experiments on a few computer vision applications including
clustering, image segmentation, co-segmentation and registration show the
potential of our SDP formulation for solving large-scale BQPs.
Comment: Fixed some typos. 18 pages. Accepted to IEEE Transactions on Pattern
Analysis and Machine Intelligence.
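For context, the sketch below contrasts the two standard relaxations the
abstract refers to: a simple spectral relaxation and the conventional SDP
relaxation of a BQP over {-1,+1} variables. It is not the paper's scalable dual
formulation; the random cost matrix, problem size, and the cvxpy modeling layer
are illustrative assumptions.

```python
# Minimal sketch: spectral vs. conventional SDP relaxation of
#   min_{x in {-1,+1}^n} x^T A x
# (illustrative baseline only, not the paper's dual formulation).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 30
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetric cost matrix

# Spectral relaxation: relax x in {-1,+1}^n to ||x||^2 = n, then round by sign.
_, V = np.linalg.eigh(A)
x_spec = np.sign(V[:, 0])              # eigenvector of the smallest eigenvalue

# Conventional SDP relaxation: min <A, X> s.t. X is PSD and diag(X) = 1.
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(A @ X)),
                  [X >> 0, cp.diag(X) == 1])
prob.solve()

# Simple rank-one rounding of the SDP solution.
_, V_sdp = np.linalg.eigh(X.value)
x_sdp = np.sign(V_sdp[:, -1])          # eigenvector of the largest eigenvalue

print("SDP lower bound:        ", prob.value)
print("SDP-rounded objective:  ", x_sdp @ A @ x_sdp)
print("spectral-rounded value: ", x_spec @ A @ x_spec)
```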
To be or not to be intrusive? The solution of parametric and stochastic equations - the "plain vanilla" Galerkin case
In parametric equations - stochastic equations are a special case - one may
want to approximate the solution such that it is easy to evaluate its
dependence on the parameters. Interpolation in the parameters is an obvious
possibility, in this context often labeled as a collocation method. In the
frequent situation where one has a "solver" for the equation for a given
parameter value - this may be a software component or a program - it is evident
that it can be run independently for each of the parameter values to be
interpolated.
Such uncoupled methods which allow the use of the original solver are classed
as "non-intrusive". By extension, all other methods which produce some kind of
coupled system are often - in our view prematurely - classed as "intrusive". We
show for simple Galerkin formulations of the parametric problem - which
generally produce coupled systems - how one may compute the approximation in a
non-intrusive way.
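As a rough illustration of the non-intrusive pattern the abstract builds on,
the sketch below approximates the parameter dependence of a solution using only
independent, black-box calls to an existing solver: projection coefficients in
a Legendre basis are computed by quadrature, one solver call per quadrature
node. The toy 2x2 system, the basis, and the quadrature rule are illustrative
assumptions, not the paper's Galerkin formulation.

```python
# Minimal sketch: non-intrusive approximation of u(p) via quadrature-based
# projection onto Legendre polynomials, reusing an unmodified "solver".
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def solve(p):
    """Hypothetical black-box solver: returns u with (K + 0.5*p*I) u = f."""
    K = np.array([[2.0, -1.0], [-1.0, 2.0]])
    f = np.array([1.0, 0.0])
    return np.linalg.solve(K + 0.5 * p * np.eye(2), f)

deg = 4                                   # polynomial degree in the parameter
nodes, weights = leggauss(deg + 1)        # Gauss-Legendre rule on [-1, 1]

# Coefficients u_k of u(p) ~ sum_k u_k P_k(p):
#   u_k = (2k+1)/2 * int_{-1}^{1} u(p) P_k(p) dp,
# each quadrature node requiring one independent, uncoupled solver call.
coeffs = np.zeros((deg + 1, 2))
for p, w in zip(nodes, weights):
    u = solve(p)
    for k in range(deg + 1):
        Pk = legval(p, np.eye(deg + 1)[k])
        coeffs[k] += 0.5 * (2 * k + 1) * w * Pk * u

# Evaluate the surrogate at a new parameter value and compare to the solver.
p_test = 0.3
u_hat = sum(coeffs[k] * legval(p_test, np.eye(deg + 1)[k])
            for k in range(deg + 1))
print("surrogate:", u_hat)
print("solver:   ", solve(p_test))
```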
A quadratically convergent proximal algorithm for nonnegative tensor decomposition
The decomposition of tensors into simple rank-1 terms is key in a variety of
applications in signal processing, data analysis and machine learning. While
this canonical polyadic decomposition (CPD) is unique under mild conditions,
including prior knowledge such as nonnegativity can facilitate interpretation
of the components. Inspired by the effectiveness and efficiency of Gauss-Newton
(GN) for unconstrained CPD, we derive a proximal, semismooth GN-type algorithm
for nonnegative tensor factorization. If the algorithm converges to the global
optimum, we show that quadratic convergence can be obtained in the exact
case. Global convergence is achieved via backtracking on the forward-backward
envelope function. The quadratic convergence is verified experimentally,
and we illustrate that using the GN step significantly reduces the number of
(expensive) gradient computations compared to proximal gradient descent
- …
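For contrast with the Gauss-Newton step discussed above, the sketch below
implements the proximal (projected) gradient baseline for nonnegative CPD that
the abstract compares against. It is not the paper's proximal semismooth GN
algorithm; tensor sizes, rank, step size, and the synthetic data are
illustrative assumptions.

```python
# Minimal sketch: proximal (projected) gradient descent for nonnegative CPD of
# a third-order tensor -- the first-order baseline, not the GN-type method.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 10, 9, 8, 3

# Synthetic nonnegative rank-R tensor plus a little noise.
A0, B0, C0 = (rng.random((n, R)) for n in (I, J, K))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.random((I, J, K))

# Random nonnegative initial factors.
A, B, C = (rng.random((n, R)) for n in (I, J, K))

step = 1e-2
for it in range(500):
    E = np.einsum('ir,jr,kr->ijk', A, B, C) - T      # residual tensor
    # Gradients of 0.5*||E||^2 with respect to each factor matrix.
    gA = np.einsum('ijk,jr,kr->ir', E, B, C)
    gB = np.einsum('ijk,ir,kr->jr', E, A, C)
    gC = np.einsum('ijk,ir,jr->kr', E, A, B)
    # Gradient step followed by the proximal map of the nonnegativity
    # constraint (projection onto the nonnegative orthant).
    A = np.maximum(A - step * gA, 0.0)
    B = np.maximum(B - step * gB, 0.0)
    C = np.maximum(C - step * gC, 0.0)

E = np.einsum('ir,jr,kr->ijk', A, B, C) - T
print("relative error:", np.linalg.norm(E) / np.linalg.norm(T))
```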