A path following algorithm for the graph matching problem
We propose a convex-concave programming approach for the labeled weighted
graph matching problem. The convex-concave programming formulation is obtained
by rewriting the weighted graph matching problem as a least-square problem on
the set of permutation matrices and relaxing it to two different optimization
problems: a quadratic convex and a quadratic concave optimization problem on
the set of doubly stochastic matrices. The concave relaxation has the same
global minimum as the initial graph matching problem, but the search for its
global minimum is also a hard combinatorial problem. We therefore construct an
approximation of the concave problem solution by following a solution path of a
convex-concave problem obtained by linear interpolation of the convex and
concave formulations, starting from the convex relaxation. This method makes it
easy to integrate information on graph label similarities into the
optimization problem, and therefore to perform labeled weighted graph matching.
The algorithm is compared with some of the best performing graph matching
methods on four datasets: simulated graphs, QAPLib, retina vessel images and
handwritten Chinese characters. In all cases, the results are competitive with
the state-of-the-art.
Comment: 23 pages, 13 figures, typo correction, new results in sections 4 and 5
The MM Alternative to EM
The EM algorithm is a special case of a more general algorithm called the MM
algorithm. Specific MM algorithms often have nothing to do with missing data.
The first M step of an MM algorithm creates a surrogate function that is
optimized in the second M step. In minimization, MM stands for
majorize--minimize; in maximization, it stands for minorize--maximize. This
two-step process always drives the objective function in the right direction.
Construction of MM algorithms relies on recognizing and manipulating
inequalities rather than calculating conditional expectations. This survey
walks the reader through the construction of several specific MM algorithms.
The potential of the MM algorithm in solving high-dimensional optimization and
estimation problems is its most attractive feature. Our applications to random
graph models, discriminant analysis and image restoration showcase this
ability.
Comment: Published at http://dx.doi.org/10.1214/08-STS264 in Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
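The majorize--minimize pattern the survey describes can be sketched on a standard textbook example not taken from the paper: minimizing f(x) = sum_i |x - a_i| (the sample median). The quadratic majorizer |x - a| <= (x - a)^2 / (2|x_k - a|) + |x_k - a| / 2, tangent at the current iterate x_k, makes each second M step a closed-form weighted mean:

```python
def mm_median(data, x, iters=200, eps=1e-12):
    # Each M step minimizes the quadratic majorizer
    #   sum_i (x - a_i)^2 / (2|x_k - a_i|) + const,
    # whose minimizer is the weighted mean below (eps guards divide-by-zero).
    for _ in range(iters):
        w = [1.0 / max(abs(x - a), eps) for a in data]
        x = sum(wi * a for wi, a in zip(w, data)) / sum(w)
    return x

data = [1.0, 2.0, 7.0, 9.0, 100.0]
start = sum(data) / len(data)   # initialize at the mean
est = mm_median(data, start)    # MM descends toward the median, 7.0
```

Note the MM descent property in action: no conditional expectations are computed, only the inequality above, and the objective can never increase between iterations.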
Solving Variational Inequalities with Monotone Operators on Domains Given by Linear Minimization Oracles
The standard algorithms for solving large-scale convex-concave saddle point
problems, or, more generally, variational inequalities with monotone operators,
are proximal type algorithms which at every iteration need to compute a
prox-mapping, that is, to minimize over the problem's domain the sum of a
linear form and the specific convex distance-generating function underlying the
algorithms in question. The relative computational simplicity of prox-mappings,
which is the standard requirement when implementing proximal algorithms,
clearly implies the possibility to equip the problem's domain with a relatively
computationally cheap Linear Minimization Oracle (LMO) able to minimize linear
forms over the domain.
There are, however, important situations where a cheap LMO indeed is available,
but where no proximal setup with easy-to-compute prox-mappings is known. This
fact motivates our goal in this paper, which is to develop techniques for
solving variational inequalities with monotone operators on domains given by
Linear Minimization Oracles. The techniques we develop can be viewed as a
substantial extension of the method proposed in [5] for nonsmooth convex
minimization over an LMO-represented domain.
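What "domain given by an LMO" means in practice can be sketched with plain Frank-Wolfe (conditional gradient) on a toy convex problem; this is a stand-in illustration, not the paper's method for variational inequalities. The algorithm touches the domain only through minimizing linear forms over it:

```python
def lmo_simplex(g):
    # Linear Minimization Oracle for the probability simplex:
    # argmin_x <g, x> over the simplex is the vertex e_i with i = argmin_i g_i.
    i = min(range(len(g)), key=lambda k: g[k])
    e = [0.0] * len(g)
    e[i] = 1.0
    return e

def frank_wolfe(grad, x, iters=500):
    # Conditional gradient: each iteration calls the LMO once and takes a
    # convex combination, so no prox-mapping over the domain is ever needed.
    for t in range(iters):
        s = lmo_simplex(grad(x))
        gamma = 2.0 / (t + 2.0)
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x

# Toy objective: minimize ||x - c||^2 over the simplex; c lies inside the
# simplex, so the optimum is c itself.
c = [0.2, 0.5, 0.3]
grad = lambda x: [2.0 * (xi - ci) for xi, ci in zip(x, c)]
x_star = frank_wolfe(grad, [1.0, 0.0, 0.0])
```

The iterate stays feasible by construction (convex combinations of simplex points), which is exactly the appeal of LMO-based schemes when prox-mappings are expensive.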
Inference for Generalized Linear Models via Alternating Directions and Bethe Free Energy Minimization
Generalized Linear Models (GLMs), where a random vector is
observed through a noisy, possibly nonlinear, function of a linear transform of
that vector, arise in a range of applications in nonlinear
filtering and regression. Approximate Message Passing (AMP) methods, based on
loopy belief propagation, are a promising class of approaches for approximate
inference in these models. AMP methods are computationally simple, general, and
admit precise analyses with testable conditions for optimality for large i.i.d.
transforms. However, the algorithms can easily diverge for general
transforms. This paper presents a convergent approach to the generalized AMP
(GAMP) algorithm based on direct minimization of a large-system limit
approximation of the Bethe Free Energy (LSL-BFE). The proposed method uses a
double-loop procedure, where the outer loop successively linearizes the LSL-BFE
and the inner loop minimizes the linearized LSL-BFE using the Alternating
Direction Method of Multipliers (ADMM). The proposed method, called ADMM-GAMP,
is similar in structure to the original GAMP method, but with an additional
least-squares minimization. It is shown that for strictly convex, smooth
penalties, ADMM-GAMP is guaranteed to converge to a local minimum of the
LSL-BFE, thus providing a convergent alternative to GAMP that is stable under
arbitrary transforms. Simulations are also presented that demonstrate the
robustness of the method for non-convex penalties as well.
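The alternating structure of the inner ADMM loop can be sketched on a deliberately tiny stand-in problem; this is generic scaled-form ADMM for a 1-D lasso, not ADMM-GAMP or the LSL-BFE itself:

```python
def soft_threshold(v, t):
    # Proximal operator of t*|.|
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def admm_lasso_1d(b, lam, rho=1.0, iters=100):
    # min_x 0.5*(x - b)^2 + lam*|x|, split as f(x) + g(z) subject to x = z.
    x = z = u = 0.0
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)  # f-step: exact quadratic minimum
        z = soft_threshold(x + u, lam / rho)   # g-step: prox of the l1 term
        u += x - z                             # scaled dual update
    return z

# The closed-form answer is soft_threshold(b, lam); ADMM recovers it.
z_hat = admm_lasso_1d(3.0, 1.0)
```

The same three-step rhythm (two easy minimizations plus a dual update) is what the inner loop applies to the linearized LSL-BFE.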
Polytope of Correct (Linear Programming) Decoding and Low-Weight Pseudo-Codewords
We analyze Linear Programming (LP) decoding of graphical binary codes
operating over soft-output, symmetric and log-concave channels. We show that
the error-surface, separating the domain of correct decoding from the domain of
erroneous decoding, is a polytope. We formulate the problem of finding the
lowest-weight pseudo-codeword as a non-convex optimization (maximization of a
convex function) over a polytope, with the cost function defined by the channel
and the polytope defined by the structure of the code. This formulation
suggests new provably convergent heuristics for finding the lowest-weight
pseudo-codewords, improving in quality upon those previously discussed. The
algorithm's performance is tested on the example of the Tanner [155, 64, 20]
code over the
Additive White Gaussian Noise (AWGN) channel.
Comment: 6 pages, 2 figures, accepted for IEEE ISIT 201
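The "maximize a convex function over a polytope" structure admits a simple monotone heuristic: since a convex function lies above its tangent plane, jumping to the vertex that maximizes the current linearization can never decrease the objective. A sketch over a box standing in for the code's polytope (the data are made up, and this is not the paper's specific heuristic):

```python
def maximize_convex_over_box(grad, x, iters=50):
    # Linearize-and-jump ascent: f convex implies
    # f(y) >= f(x) + <grad f(x), y - x>, so the vertex maximizing the
    # linearization over [0, 1]^n cannot decrease f.
    for _ in range(iters):
        x_new = [1.0 if g > 0 else 0.0 for g in grad(x)]
        if x_new == x:  # fixed point: no linearization step improves further
            break
        x = x_new
    return x

# Toy convex objective f(x) = ||x - c||^2 over the box [0, 1]^3.
c = [0.2, 0.9, 0.4]
f = lambda x: sum((xi - ci) ** 2 for xi, ci in zip(x, c))
grad = lambda x: [2.0 * (xi - ci) for xi, ci in zip(x, c)]
x_hat = maximize_convex_over_box(grad, [0.5, 0.5, 0.5])
```

Monotonicity gives the provable convergence; the quality of the vertex reached still depends on the starting point, which is why heuristics of this kind are compared empirically.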