An Asymmetric Proximal Decomposition Method for Convex Programming with Linearly Coupling Constraints
The problems studied are separable variational inequalities with linearly coupled constraints. Some existing decomposition methods are very problem specific, and their computational cost is quite high. Combining the ideas of the proximal point algorithm (PPA) and the augmented Lagrangian method (ALM), we propose an asymmetric proximal decomposition method (AsPDM) to solve a wide variety of separable problems. By adding an auxiliary quadratic term to the general Lagrangian function, our method can take advantage of the separable structure. We also present an inexact version of AsPDM to reduce the computational load of each iteration. In the computation process, the inexact version only uses function values. Moreover, the inexact criterion and the step size can be computed in parallel. The convergence of the proposed method is proved, and numerical experiments are employed to show the advantages of AsPDM.
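The decomposition idea sketched in the abstract can be illustrated on a toy problem. The code below is not the paper's AsPDM; it is a generic Jacobi-type proximal augmented-Lagrangian step for min f(x) + g(y) subject to x + y = c, with quadratic f(x) = 0.5(x-1)^2 and g(y) = 0.5(y+1)^2 chosen purely for illustration so that each proximal subproblem is closed-form. Both block updates use only the old value of the other block, so they could run in parallel, which is the separable feature the abstract exploits.

```python
# Minimal sketch (not the paper's AsPDM): a Jacobi-type proximal
# augmented-Lagrangian iteration for  min f(x) + g(y)  s.t.  x + y = c,
# with f(x) = 0.5*(x-1)**2 and g(y) = 0.5*(y+1)**2 (illustrative choices).
def jacobi_prox_al(c, rho=1.0, tau=1.0, iters=200):
    x, y, lam = 0.0, 0.0, 0.0
    for _ in range(iters):
        # Each block minimizes its augmented Lagrangian plus an auxiliary
        # quadratic (proximal) term tau/2*(.-old)^2, using the OLD value
        # of the other block -- so the two updates could run in parallel.
        x_new = (1.0 + lam + rho * (c - y) + tau * x) / (1.0 + rho + tau)
        y_new = (-1.0 + lam + rho * (c - x) + tau * y) / (1.0 + rho + tau)
        x, y = x_new, y_new
        lam -= rho * (x + y - c)  # multiplier (dual) update
    return x, y
```

For c = 0 the iterates converge to the constrained minimizer x = 1, y = -1; the auxiliary proximal terms weighted by tau are what keep the parallel Jacobi updates stable.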
Shape Parameter Estimation
Performance of machine learning approaches depends strongly on the choice of misfit penalty and on the correct choice of penalty parameters, such as the threshold of the Huber function. These parameters are typically chosen using expert knowledge, cross-validation, or black-box optimization, all of which are time-consuming for large-scale applications. We present a principled, data-driven approach to simultaneously learn the model parameters and the misfit penalty parameters. We discuss theoretical properties of these joint inference problems and develop algorithms for their solution. We show synthetic examples of automatic parameter tuning for piecewise linear-quadratic (PLQ) penalties, and use the approach to develop a self-tuning robust PCA formulation for background separation. Comment: 20 pages, 10 figures.
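For concreteness, the Huber penalty mentioned in the abstract, with threshold kappa as the shape parameter to be learned, can be written as follows. This is the standard textbook definition, not code from the paper.

```python
import numpy as np

# Standard Huber misfit with threshold kappa: quadratic near the origin,
# linear in the tails, continuous at |r| = kappa. kappa is the kind of
# shape parameter the abstract proposes to learn from data.
def huber(r, kappa):
    r = np.asarray(r, dtype=float)
    quad = 0.5 * r**2                        # quadratic branch near 0
    lin = kappa * (np.abs(r) - 0.5 * kappa)  # linear tails beyond |r| = kappa
    return np.where(np.abs(r) <= kappa, quad, lin)
```

Smaller kappa makes the penalty behave more like an absolute value (robust to outliers), larger kappa more like a least-squares loss; the abstract's point is that this trade-off can be inferred jointly with the model rather than tuned by hand.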
A Primal-Dual Algorithmic Framework for Constrained Convex Minimization
We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers as special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates in all these cases. Comment: This paper consists of 54 pages with 7 tables and 12 figures.
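The smoothing idea that such frameworks build on can be illustrated with a standard example (not taken from the paper): the nonsmooth max function is replaced by the smooth log-sum-exp surrogate f_mu(x) = mu * log(sum_i exp(x_i / mu)), which satisfies max(x) <= f_mu(x) <= max(x) + mu*log(n), so the parameter mu makes the accuracy/smoothness trade-off explicit.

```python
import numpy as np

# Classical smoothing example: a smooth upper approximation of max(x)
# with uniform error at most mu*log(n). Smaller mu means a tighter but
# less smooth surrogate -- the trade-off smoothing-based methods manage.
def smoothed_max(x, mu):
    x = np.asarray(x, dtype=float)
    m = x.max()  # shift by the max for numerical stability
    return m + mu * np.log(np.exp((x - m) / mu).sum())
```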
An efficient symmetric primal-dual algorithmic framework for saddle point problems
In this paper, we propose a new primal-dual algorithmic framework for a class of convex-concave saddle point problems frequently arising in image processing and machine learning. Our algorithmic framework updates the primal variable between two successive updates of the dual variable, yielding a symmetric iterative scheme, which is accordingly called the symmetric primal-dual algorithm (SPIDA). It is noteworthy that the subproblems of SPIDA are equipped with Bregman proximal regularization terms, which make SPIDA versatile in the sense that its framework covers some existing algorithms such as the classical augmented Lagrangian method (ALM), the linearized ALM, and Jacobian splitting algorithms for linearly constrained optimization problems. Moreover, our algorithmic framework allows us to derive customized versions so that SPIDA works as efficiently as possible on structured optimization problems. Theoretically, under mild conditions, we prove the global convergence of SPIDA and establish a linear convergence rate under a generalized error bound condition defined by the Bregman distance. Finally, a series of numerical experiments on the matrix game, basis pursuit, robust principal component analysis, and image restoration demonstrate that SPIDA works well on synthetic and real-world datasets. Comment: 32 pages; 5 figures; 7 tables.
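To make the Bregman proximal subproblems mentioned in the abstract concrete, here is a standard illustration (not SPIDA itself): with the entropy Bregman distance (the KL divergence) on the probability simplex, the proximal step for a linear objective has a closed-form multiplicative update. This is one reason Bregman terms make subproblems cheap for simplex-constrained problems such as the matrix game named in the experiments.

```python
import numpy as np

# Entropic (KL) Bregman proximal step on the probability simplex:
#   argmin over the simplex of  <g, z> + (1/t) * KL(z || x).
# The solution is a simple multiplicative reweighting of x.
def kl_prox_step(x, g, t):
    w = x * np.exp(-t * g)  # tilt mass toward coordinates with small g
    return w / w.sum()      # renormalize back onto the simplex
```

Because the update is multiplicative, iterates stay strictly inside the simplex, and the linear objective never increases in a single step.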
Non-Convex and Geometric Methods for Tomography and Label Learning
Data labeling is a fundamental problem of mathematical data analysis in which each data point is assigned exactly one label (prototype) from a finite predefined set. In this thesis we study two challenging extensions, where either the input data cannot be observed directly or prototypes are not available beforehand.
The main application of the first setting is discrete tomography. We propose several non-convex variational as well as smooth geometric approaches to joint image label assignment and reconstruction from indirect measurements with known prototypes. In particular, we consider spatial regularization of assignments, based on the KL-divergence, which takes into account the smooth geometry of discrete probability distributions endowed with the Fisher-Rao (information) metric, i.e. the assignment manifold. Finally, the geometric point of view leads to a smooth flow evolving on a Riemannian submanifold including the tomographic projection constraints directly into the geometry of assignments. Furthermore we investigate corresponding implicit numerical schemes which amount to solving a sequence of convex problems.
Likewise, for the second setting, when the prototypes are absent, we introduce and study a smooth dynamical system for unsupervised data labeling which evolves by geometric integration on the assignment manifold. Rigorously abstracting from "data-label" to "data-data" decisions leads to interpretable low-rank data representations, which themselves are parameterized by label assignments. The resulting self-assignment flow simultaneously performs learning of latent prototypes within the very same framework while they are used for inference. Moreover, a single parameter, the scale of regularization in terms of spatial context, drives the entire process. By smooth geodesic interpolation between different normalizations of self-assignment matrices on the positive definite matrix manifold, a one-parameter family of self-assignment flows is defined. Accordingly, the proposed approach can be characterized from different viewpoints such as discrete optimal transport, normalized spectral cuts, and combinatorial optimization via completely positive factorizations, each with additional built-in spatial regularization.
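As a toy illustration of evolution on the assignment manifold (the thesis's actual assignment flow is considerably more involved and includes spatial regularization), a replicator-type step drives each row of a row-stochastic assignment matrix W toward the prototype with the highest similarity score. The names W, S, and the step size h are notation for this sketch only.

```python
import numpy as np

# Replicator-type step on the assignment manifold (toy sketch only):
# each row of W is a probability vector over prototypes, S holds
# similarity scores, and h is an explicit integration step size.
def replicator_step(W, S, h=0.5):
    # Fisher-Rao (replicator) drive: score minus its W-average per row.
    drive = S - (W * S).sum(axis=1, keepdims=True)
    W = W * np.exp(h * drive)              # exponential-map style update
    return W / W.sum(axis=1, keepdims=True)  # renormalize rows
```

Iterating this step from a uniform assignment concentrates each row on its best-scoring prototype, mimicking in miniature how the flow converges to integral label assignments.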
First-order Convex Optimization Methods for Signal and Image Processing
In this thesis we investigate the use of first-order convex optimization methods applied to problems in signal and image processing. First, we give a general introduction to convex optimization, first-order methods, and their iteration complexity. Then we look at different techniques which can be used with first-order methods, such as smoothing, Lagrange multipliers, and proximal gradient methods. We continue by presenting different applications of convex optimization and notable convex formulations, with an emphasis on inverse problems and sparse signal processing. We also describe the multiple-description problem. We finally present the contributions of the thesis. The remaining parts of the thesis consist of five research papers. The first paper addresses non-smooth first-order convex optimization and the trade-off between accuracy and smoothness of the approximating smooth function. The second and third papers concern discrete linear inverse problems and reliable numerical reconstruction software. The last two papers present a convex optimization formulation of the multiple-description problem and a method to solve it in the case of large-scale instances.
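As a small, self-contained illustration of the proximal gradient methods the summary mentions (a generic textbook sketch, not code from the thesis), here is ISTA applied to the lasso problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, a staple formulation in sparse signal processing.

```python
import numpy as np

# Soft-thresholding: the proximal operator of t*||.||_1.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA (proximal gradient) for the lasso:
#   min_x 0.5*||A x - b||^2 + lam*||x||_1
def ista(A, b, lam, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # smooth-part gradient
        x = soft_threshold(x - step * grad, step * lam)   # prox step
    return x
```

Each iteration alternates a gradient step on the smooth least-squares term with the closed-form proximal map of the l1 term, which is what makes the method attractive for large sparse recovery problems.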