A new ADMM algorithm for the Euclidean median and its application to robust patch regression
The Euclidean Median (EM) of a set of points in a Euclidean space
is the point x minimizing the (weighted) sum of the Euclidean distances of x to
the points in the set. While there exists no closed-form expression for the EM,
it can nevertheless be computed using iterative methods such as the Weiszfeld
algorithm. The EM has classically been used as a robust estimator of centrality
for multivariate data. It was recently demonstrated that the EM can be used to
perform robust patch-based denoising of images by generalizing the popular
Non-Local Means algorithm. In this paper, we propose a novel algorithm for
computing the EM (and its box-constrained counterpart) using variable splitting
and the method of augmented Lagrangian. The attractive feature of this approach
is that the subproblems involved in the ADMM-based optimization of the
augmented Lagrangian can be resolved using simple closed-form projections. The
proposed ADMM solver is used for robust patch-based image denoising and is
shown to exhibit faster convergence compared to an existing solver.
Comment: 5 pages, 3 figures, 1 table. To appear in Proc. IEEE International
Conference on Acoustics, Speech, and Signal Processing, April 19-24, 201
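The Weiszfeld algorithm mentioned in the abstract has a compact fixed-point form: reweight each point by the inverse of its distance to the current iterate. A minimal sketch (this is the classical iteration, not the paper's ADMM solver; the division-by-zero guard is an ad-hoc choice):

```python
import numpy as np

def weiszfeld(points, weights=None, tol=1e-8, max_iter=500):
    """Weiszfeld iteration for the (weighted) Euclidean median."""
    pts = np.asarray(points, dtype=float)
    w = np.ones(len(pts)) if weights is None else np.asarray(weights, dtype=float)
    x = np.average(pts, axis=0, weights=w)      # start at the weighted centroid
    for _ in range(max_iter):
        d = np.linalg.norm(pts - x, axis=1)
        d = np.maximum(d, 1e-12)                # guard: avoid division by zero
        coef = w / d                            # inverse-distance reweighting
        x_new = coef @ pts / coef.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```

For points arranged symmetrically, the iteration returns the center of symmetry, which is also where the robustness of the EM relative to the mean shows up once outliers are added.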
Semi-Global Exponential Stability of Augmented Primal-Dual Gradient Dynamics for Constrained Convex Optimization
Primal-dual gradient dynamics that find saddle points of a Lagrangian have
been widely employed for handling constrained optimization problems. Building
on existing methods, we extend the augmented primal-dual gradient dynamics
(Aug-PDGD) to incorporate general convex and nonlinear inequality constraints,
and we establish its semi-global exponential stability when the objective
function is strongly convex. We also provide an example of a strongly convex
quadratic program for which the Aug-PDGD fails to achieve global exponential
stability. Numerical simulation also suggests that the exponential convergence
rate could depend on the initial distance to the KKT point.
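The primal-dual gradient dynamics discussed above can be illustrated on a toy scalar problem. The following Euler discretization of the basic (unaugmented) saddle-point dynamics is a sketch; the problem data, step size, and iteration count are arbitrary choices, not from the paper:

```python
# Toy instance: minimize (x - 2)^2 subject to x - 1 <= 0.
# The KKT point is x* = 1, lambda* = 2.
f_grad = lambda x: 2.0 * (x - 2.0)   # gradient of the objective
g = lambda x: x - 1.0                # inequality constraint g(x) <= 0

x, lam, eta = 0.0, 0.0, 0.01         # primal, dual, step size
for _ in range(20000):
    x -= eta * (f_grad(x) + lam)             # primal descent on the Lagrangian
    lam = max(0.0, lam + eta * g(x))         # dual ascent, projected onto R_+
```

With a strongly convex objective the iterates settle at the KKT point, consistent with the kind of convergence behavior the abstract analyzes for the augmented dynamics.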
Super-Linear Convergence of Dual Augmented-Lagrangian Algorithm for Sparsity Regularized Estimation
We analyze the convergence behaviour of a recently proposed algorithm for
regularized estimation called Dual Augmented Lagrangian (DAL). Our analysis is
based on a new interpretation of DAL as a proximal minimization algorithm. We
theoretically show under some conditions that DAL converges super-linearly in a
non-asymptotic and global sense. Due to a special modelling of sparse
estimation problems in the context of machine learning, the assumptions we make
are milder and more natural than those made in conventional analysis of
augmented Lagrangian algorithms. In addition, the new interpretation enables us
to generalize DAL to wide varieties of sparse estimation problems. We
experimentally confirm our analysis in a large-scale ℓ1-regularized
logistic regression problem and extensively compare the efficiency of DAL
algorithm to previously proposed algorithms on both synthetic and benchmark
datasets.
Comment: 51 pages, 9 figures
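Proximal interpretations of sparsity-regularized estimation, like the one above, are built on the prox operator of the ℓ1 norm, which is elementwise soft-thresholding. A minimal sketch (this is plain proximal gradient / ISTA on a lasso problem, shown only to illustrate the prox machinery, not the DAL algorithm itself):

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, step, n_iter=500):
    """Proximal gradient for min 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

On an identity design the solution is just the soft-thresholded observation, which makes the sparsifying effect of the prox step easy to see.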
Discrete-Continuous ADMM for Transductive Inference in Higher-Order MRFs
This paper introduces a novel algorithm for transductive inference in
higher-order MRFs, where the unary energies are parameterized by a variable
classifier. The considered task is posed as a joint optimization problem in the
continuous classifier parameters and the discrete label variables. In contrast
to prior approaches such as convex relaxations, we propose an advantageous
decoupling of the objective function into discrete and continuous subproblems
and a novel, efficient optimization method related to ADMM. This approach
preserves integrality of the discrete label variables and guarantees global
convergence to a critical point. We demonstrate the advantages of our approach
in several experiments including video object segmentation on the DAVIS data
set and interactive image segmentation.
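The generic ADMM template that both this paper and the first abstract above build on alternates closed-form subproblem solves with a dual update. A toy sketch on a box-constrained scalar problem (illustrative splitting and values of my own choosing, not the discrete-continuous variant proposed in the paper):

```python
# Split: minimize f(x) + g(z) subject to x = z, with
# f(x) = (x - 2)^2 / 2 and g the indicator of the box [0, 1].
c, rho = 2.0, 1.0        # problem data and ADMM penalty parameter
x = z = u = 0.0          # primal variables and scaled dual variable
for _ in range(100):
    # x-update: closed-form minimizer of f(x) + (rho/2)(x - z + u)^2
    x = (c + rho * (z - u)) / (1.0 + rho)
    # z-update: closed-form projection onto the box [0, 1]
    z = min(max(x + u, 0.0), 1.0)
    # scaled dual update
    u += x - z
```

The z-update is exactly the kind of "simple closed-form projection" subproblem that makes these splittings attractive; the discrete-continuous setting replaces it with a subproblem over the label variables.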