Lagrangian Relaxation for MAP Estimation in Graphical Models
We develop a general framework for MAP estimation in discrete and Gaussian
graphical models using Lagrangian relaxation techniques. The key idea is to
reformulate an intractable estimation problem as one defined on a more
tractable graph, but subject to additional constraints. Relaxing these
constraints gives a tractable dual problem, one defined by a thin graph, which
is then optimized by an iterative procedure. If this iterative optimization
leads to a consistent estimate, one that also satisfies the constraints, then
it corresponds to an optimal MAP estimate of the original model. Otherwise
there is a ``duality gap'', and we obtain a bound on the optimal solution.
Thus, our approach combines convex optimization with dynamic programming
techniques applicable to thin graphs. The popular tree-reweighted max-product
(TRMP) method may be seen as solving a particular class of such relaxations,
where the intractable graph is relaxed to a set of spanning trees. We also
consider relaxations to a set of small induced subgraphs, thin subgraphs (e.g.
loops), and a connected tree obtained by ``unwinding'' cycles. In addition, we
propose a new class of multiscale relaxations that introduce ``summary''
variables. The potential benefits of such generalizations include: reducing or
eliminating the ``duality gap'' in hard problems, reducing the number of
Lagrange multipliers in the dual problem, and accelerating convergence of the
iterative optimization procedure.
Comment: 10 pages, presented at the 45th Allerton Conference on Communication,
Control and Computing, to appear in proceedings
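The core mechanics of this relaxation strategy (tie the subproblems' copies of each variable with Lagrange multipliers, optimize the multipliers, and read consistency as an optimality certificate) can be sketched on a toy model. The potentials, the decomposition of the cycle into a chain and a single edge, and the step-size rule below are all illustrative choices, not the paper's experiments:

```python
import itertools

# Toy cycle MRF on binary x0, x1, x2 with pairwise log-potentials
# (a frustrated cycle, so the relaxation may retain a duality gap).
theta = {
    (0, 1): {(0, 0): 1.0, (0, 1): -0.5, (1, 0): -0.5, (1, 1): 1.0},
    (1, 2): {(0, 0): 0.8, (0, 1): -0.3, (1, 0): -0.3, (1, 1): 0.8},
    (2, 0): {(0, 0): -0.6, (0, 1): 0.9, (1, 0): 0.9, (1, 1): -0.6},
}

def score(x, edges):
    return sum(theta[e][(x[e[0]], x[e[1]])] for e in edges)

# Exact MAP by enumeration (feasible only because the model is tiny).
all_x = list(itertools.product([0, 1], repeat=3))
map_val = max(score(x, theta) for x in all_x)

# Relax the cycle into two thin subgraphs: chain A and single edge B.
# Copies of each variable are tied together by multipliers lam[i][v].
A_edges, B_edges = [(0, 1), (1, 2)], [(2, 0)]
lam = {i: [0.0, 0.0] for i in range(3)}

def solve_sub(edges, sign):
    # Each subproblem maximizes its edge scores plus signed multipliers;
    # in practice this would be dynamic programming on the thin graph.
    best, best_x = None, None
    for x in all_x:
        v = score(x, edges) + sign * sum(lam[i][x[i]] for i in range(3))
        if best is None or v > best:
            best, best_x = v, x
    return best, best_x

for t in range(200):
    vA, xA = solve_sub(A_edges, +1)
    vB, xB = solve_sub(B_edges, -1)
    dual = vA + vB          # weak duality: dual >= map_val for every lam
    if xA == xB:            # consistent estimate => optimality certificate
        break
    step = 1.0 / (1 + t)
    for i in range(3):      # subgradient step pushes the copies to agree
        lam[i][xA[i]] -= step
        lam[i][xB[i]] += step
```

At every iteration the dual value upper-bounds the true MAP value; if the two subproblems ever return the same assignment, that assignment is an optimal MAP estimate of the original cycle.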
Super-Linear Convergence of Dual Augmented-Lagrangian Algorithm for Sparsity Regularized Estimation
We analyze the convergence behaviour of a recently proposed algorithm for
regularized estimation called Dual Augmented Lagrangian (DAL). Our analysis is
based on a new interpretation of DAL as a proximal minimization algorithm. We
theoretically show under some conditions that DAL converges super-linearly in a
non-asymptotic and global sense. Due to a special modelling of sparse
estimation problems in the context of machine learning, the assumptions we make
are milder and more natural than those made in conventional analysis of
augmented Lagrangian algorithms. In addition, the new interpretation enables us
to generalize DAL to wide varieties of sparse estimation problems. We
experimentally confirm our analysis in a large-scale $\ell_1$-regularized
logistic regression problem and extensively compare the efficiency of DAL
algorithm to previously proposed algorithms on both synthetic and benchmark
datasets.
Comment: 51 pages, 9 figures
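The proximal interpretation rests on the prox operator of the $\ell_1$ penalty, which is coordinatewise soft-thresholding. A minimal sketch using plain proximal-gradient iteration (ISTA) rather than DAL itself, on a made-up 2-by-2 least-squares problem whose separable optimum is known in closed form:

```python
def soft_threshold(v, t):
    # prox of t*||.||_1: shrink each coordinate toward zero by t
    return [(abs(x) - t) * (1.0 if x > 0 else -1.0) if abs(x) > t else 0.0
            for x in v]

# Illustrative problem: min 0.5*||A x - b||^2 + lam*||x||_1
A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, 1.0]
lam = 0.5
step = 0.25  # 1 / largest eigenvalue of A^T A

x = [0.0, 0.0]
for _ in range(200):
    Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    r = [Ax[i] - b[i] for i in range(2)]
    grad = [sum(A[i][j] * r[i] for i in range(2)) for j in range(2)]
    # gradient step on the smooth loss, then the l1 prox
    x = soft_threshold([x[j] - step * grad[j] for j in range(2)], step * lam)
# separable closed-form optimum for this instance: [0.875, 0.5]
```

DAL instead applies the prox machinery to an augmented-Lagrangian dual, which is what yields the super-linear (rather than ISTA's linear) rate analyzed in the paper.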
On the local stability of semidefinite relaxations
We consider a parametric family of quadratically constrained quadratic
programs (QCQP) and their associated semidefinite programming (SDP)
relaxations. Given a nominal value of the parameter at which the SDP relaxation
is exact, we study conditions (and quantitative bounds) under which the
relaxation will continue to be exact as the parameter moves in a neighborhood
around the nominal value. Our framework captures a wide array of statistical
estimation problems including tensor principal component analysis, rotation
synchronization, orthogonal Procrustes, camera triangulation and resectioning,
essential matrix estimation, system identification, and approximate GCD. Our
results can also be used to analyze the stability of SOS relaxations of general
polynomial optimization problems.
Comment: 23 pages, 3 figures
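Exactness of an SDP relaxation can be illustrated on the smallest nontrivial QCQP: maximize $x^T A x$ over the unit sphere in 2-D, whose Shor relaxation (maximize $\mathrm{tr}(AX)$ over $\mathrm{tr}(X)=1$, $X \succeq 0$) is exact with both optima equal to $\lambda_{\max}(A)$. The matrix and the sampling scheme below are illustrative, and a closed-form eigenvalue stands in for an SDP solver:

```python
import math
import random

# QCQP: maximize x^T A x subject to x^T x = 1 (A symmetric, 2-D).
A = [[3.0, 1.0], [1.0, 2.0]]  # illustrative choice

# lambda_max of a 2x2 symmetric matrix in closed form
half_tr = (A[0][0] + A[1][1]) / 2.0
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
lam_max = half_tr + math.sqrt(half_tr ** 2 - det)

# Exactness witness: the lifted rank-1 point X* = v v^T (v the unit top
# eigenvector) is SDP-feasible and attains the QCQP value lambda_max.
v = [A[0][1], lam_max - A[0][0]]
n = math.hypot(v[0], v[1])
v = [v[0] / n, v[1] / n]
xAx = sum(v[i] * A[i][j] * v[j] for i in range(2) for j in range(2))
assert abs(xAx - lam_max) < 1e-9

# Every feasible X decomposes as p*u u^T + (1-p)*w w^T with u, w
# orthonormal; sampling confirms tr(A X) never exceeds lam_max, so the
# relaxation has no gap on this instance.
random.seed(0)
for _ in range(1000):
    a = random.uniform(0.0, 2.0 * math.pi)
    u = [math.cos(a), math.sin(a)]
    w = [-math.sin(a), math.cos(a)]
    p = random.random()
    X = [[p * u[i] * u[j] + (1 - p) * w[i] * w[j] for j in range(2)]
         for i in range(2)]
    trAX = sum(A[i][j] * X[j][i] for i in range(2) for j in range(2))
    assert trAX <= lam_max + 1e-9
```

The paper's question is what happens to such a rank-1 certificate as $A$ is perturbed; here, any small symmetric perturbation keeps the top eigenvalue simple, so exactness persists in a neighborhood.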