Optimal scaling of the ADMM algorithm for distributed quadratic programming
This paper presents optimal scaling of the alternating direction method of
multipliers (ADMM) algorithm for a class of distributed quadratic programming
problems. The scaling corresponds to the ADMM step-size and relaxation
parameter, as well as the edge-weights of the underlying communication graph.
We optimize these parameters to yield the smallest convergence factor of the
algorithm. Explicit expressions are derived for the step-size and relaxation
parameter, as well as for the corresponding convergence factor. Numerical
simulations justify our results and highlight the benefits of optimally scaling
the ADMM algorithm.

Comment: Submitted to the IEEE Transactions on Signal Processing. Prior work was presented at the 52nd IEEE Conference on Decision and Control, 2013.
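The two scaling parameters the abstract optimizes, the step-size and the relaxation parameter, can be illustrated with the standard (centralized, textbook-form) relaxed ADMM iteration on a toy quadratic program; this is a minimal sketch, not the paper's optimally scaled distributed variant, and the parameter names `rho` and `alpha` are assumptions.

```python
import numpy as np

def admm_qp(P, q, rho=1.0, alpha=1.5, iters=200):
    # Relaxed ADMM for: minimize (1/2) x'Px + q'x  subject to  x >= 0.
    # rho is the ADMM step-size, alpha the relaxation parameter.
    n = len(q)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    K = np.linalg.inv(P + rho * np.eye(n))  # cached factor for the x-update
    for _ in range(iters):
        x = K @ (rho * (z - u) - q)          # quadratic x-minimization
        x_hat = alpha * x + (1 - alpha) * z  # over-relaxation step
        z = np.maximum(x_hat + u, 0.0)       # projection onto x >= 0
        u = u + x_hat - z                    # dual update
    return z

P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
x_star = admm_qp(P, q)
# With P positive definite and q > 0, the KKT conditions give x* = 0 here.
```

The convergence factor of this iteration depends on `rho` and `alpha`; the paper derives the choices that minimize it, whereas the values above are arbitrary defaults.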
Convex Modeling of Interactions with Strong Heredity
We consider the task of fitting a regression model involving interactions
among a potentially large set of covariates, in which we wish to enforce strong
heredity. We propose FAMILY, a very general framework for this task. Our
proposal is a generalization of several existing methods, such as VANISH
[Radchenko and James, 2010], hierNet [Bien et al., 2013], the all-pairs lasso,
and the lasso using only main effects. It can be formulated as the solution to
a convex optimization problem, which we solve using an efficient alternating
direction method of multipliers (ADMM) algorithm. This algorithm has
guaranteed convergence to the global optimum, can be easily specialized to any
convex penalty function of interest, and allows for a straightforward extension
to the setting of generalized linear models. We derive an unbiased estimator of
the degrees of freedom of FAMILY, and explore its performance in a simulation
study and on an HIV sequence data set.

Comment: Final version accepted for publication in JCGS.
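The all-pairs lasso named as a special case of FAMILY can be sketched with a standard ADMM lasso solver applied to a design matrix augmented with pairwise interaction columns; this is a generic illustration of that special case under assumed helper names, not the FAMILY algorithm itself (which additionally enforces strong heredity).

```python
import numpy as np
from itertools import combinations

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def all_pairs_design(X):
    # Augment main effects with every pairwise interaction column X_j * X_k.
    inter = [X[:, j] * X[:, k] for j, k in combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + inter)

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    # Standard ADMM for: minimize (1/2)||Az - b||^2 + lam * ||z||_1.
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)
    K = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = K @ (Atb + rho * (z - u))        # ridge-like x-update
        z = soft_threshold(x + u, lam / rho) # sparsity-inducing z-update
        u = u + x - z                        # dual update
    return z

# Sanity check: with A = I the lasso solution is soft-thresholding of b.
A = np.eye(3)
b = np.array([3.0, -0.5, 2.0])
z = admm_lasso(A, b, lam=1.0)
# z is approximately [2, 0, 1]
```

Strong heredity would replace the plain l1 penalty in the z-update with a penalty that zeroes an interaction whenever either of its main effects is zero.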
Graded quantization for multiple description coding of compressive measurements
Compressed sensing (CS) is an emerging paradigm for acquisition of compressed
representations of a sparse signal. Its low complexity is appealing for
resource-constrained scenarios like sensor networks. However, such scenarios
are often coupled with unreliable communication channels and providing robust
transmission of the acquired data to a receiver is an issue. Multiple
description coding (MDC) effectively combats channel losses for systems without
feedback, thus raising the interest in developing MDC methods explicitly
designed for the CS framework, and exploiting its properties. We propose a
method called Graded Quantization (CS-GQ) that leverages the democratic
property of compressive measurements to effectively implement MDC, and we
provide methods to optimize its performance. A novel decoding algorithm based
on the alternating direction method of multipliers is derived to reconstruct
signals from a limited number of received descriptions. Simulations are
performed to assess the performance of CS-GQ against other methods in the
presence of packet losses. The proposed method provides robust coding of CS
measurements and outperforms the other schemes on the considered test
metrics.
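The multiple-description idea behind graded quantization can be sketched with two descriptions that each quantize half of the measurements finely and the other half coarsely, so a single received description still covers every measurement; the alternating even/odd split and the step sizes below are illustrative guesses, not the paper's exact CS-GQ design.

```python
import numpy as np

def uniform_quantize(y, step):
    return step * np.round(y / step)

def gq_encode(y, fine=0.05, coarse=0.4):
    # Hypothetical two-description graded quantizer: description 1 is fine on
    # even-indexed measurements and coarse on odd ones; description 2 swaps.
    even = np.arange(len(y)) % 2 == 0
    d1 = np.where(even, uniform_quantize(y, fine), uniform_quantize(y, coarse))
    d2 = np.where(even, uniform_quantize(y, coarse), uniform_quantize(y, fine))
    return d1, d2

def central_decode(d1, d2):
    # When both descriptions arrive, keep the finely quantized copy of each
    # measurement; a side decoder that receives only d1 must use d1 as-is.
    even = np.arange(len(d1)) % 2 == 0
    return np.where(even, d1, d2)

rng = np.random.default_rng(0)
y = rng.standard_normal(64)          # stand-in for compressive measurements
d1, d2 = gq_encode(y)
err_side = np.linalg.norm(y - d1)              # one description received
err_central = np.linalg.norm(y - central_decode(d1, d2))  # both received
```

The graceful degradation is the point: the central error is bounded by the fine step, while a lone description degrades only on its coarsely quantized half.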
Undersampled Phase Retrieval with Outliers
We propose a general framework for reconstructing transform-sparse images
from undersampled (squared)-magnitude data corrupted with outliers. This
framework is implemented using a multi-layered approach, combining multiple
initializations (to address the nonconvexity of the phase retrieval problem),
repeated minimization of a convex majorizer (surrogate for a nonconvex
objective function), and iterative optimization using the alternating
direction method of multipliers. Exploiting the generality of this framework,
we investigate using a Laplace measurement noise model better adapted to
outliers present in the data than the conventional Gaussian noise model. Using
simulations, we explore the sensitivity of the method to both the
regularization and penalty parameters. We include 1D Monte Carlo and 2D image
reconstruction comparisons with alternative phase retrieval algorithms. The
results suggest the proposed method, with the Laplace noise model, both
increases the likelihood of correct support recovery and reduces the mean
squared error from measurements containing outliers. We also describe exciting
extensions made possible by the generality of the proposed framework, including
regularization using analysis-form sparsity priors that are incompatible with
many existing approaches.

Comment: 11 pages, 9 figures.
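The advantage of the Laplace noise model over the Gaussian one can be seen directly from the data-fit terms they imply: an l1 penalty on the residuals grows linearly in a single corrupted measurement, while the Gaussian l2 penalty grows quadratically. The toy residual vectors below are illustrative numbers, not data from the paper.

```python
import numpy as np

# Residuals r = (squared-magnitude measurements) - (model prediction).
residuals = np.full(10, 0.1)   # ten well-fit measurements
corrupted = residuals.copy()
corrupted[0] = 10.0            # one gross outlier

l2 = lambda r: np.sum(r ** 2)      # Gaussian negative log-likelihood, up to scale
l1 = lambda r: np.sum(np.abs(r))   # Laplace negative log-likelihood, up to scale

# The outlier inflates the Gaussian objective by roughly 1000x,
# but the Laplace objective by only about 11x.
ratio_l2 = l2(corrupted) / l2(residuals)
ratio_l1 = l1(corrupted) / l1(residuals)
```

In the framework's ADMM iterations, this l1 data fit is handled by a simple proximal (soft-thresholding) step, which is what makes the outlier-robust model computationally convenient.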