A penalty method for PDE-constrained optimization in inverse problems
Many inverse and parameter estimation problems can be written as
PDE-constrained optimization problems. The goal, then, is to infer the
parameters, typically coefficients of the PDE, from partial measurements of the
solutions of the PDE for several right-hand sides. Such PDE-constrained
problems can be solved by finding a stationary point of the Lagrangian, which
entails simultaneously updating the parameters and the (adjoint) state
variables. For large-scale problems, such an all-at-once approach is not
feasible as it requires storing all the state variables. In this case one
usually resorts to a reduced approach where the constraints are explicitly
eliminated (at each iteration) by solving the PDEs. These two approaches, and
variations thereof, are the main workhorses for solving PDE-constrained
optimization problems arising from inverse problems. In this paper, we present
an alternative method that aims to combine the advantages of both approaches.
Our method is based on a quadratic penalty formulation of the constrained
optimization problem. By eliminating the state variable, we develop an
efficient algorithm that has roughly the same computational complexity as the
conventional reduced approach while exploiting a larger search space. Numerical
results show that this method indeed reduces some of the non-linearity of the
problem and is less sensitive to the initial iterate.
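The eliminate-the-state penalty idea described above can be illustrated on a toy problem. The NumPy sketch below is only an illustration under invented assumptions: the diagonal-plus-Laplacian "PDE", the observation operator, the penalty weight, and the step size are all made up and not taken from the paper. For fixed parameters m, the state u is eliminated by one linear solve of the penalty subproblem, and m is then updated by a gradient step (the PDE residual is allowed to remain nonzero, which is the enlarged search space the abstract refers to):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20

# Toy linear "PDE" A(m) u = q with A(m) = diag(m) + L, where L is a
# discrete Laplacian; P observes every other entry of the solution.
L = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = np.eye(n)[::2]
q = np.ones(n)

m_true = 1.0 + 0.5*rng.random(n)
d = P @ np.linalg.solve(np.diag(m_true) + L, q)  # partial measurements

lam = 10.0  # quadratic-penalty weight (ad hoc choice)

def eliminate_u(m):
    """Minimize the penalty objective over u for fixed m (normal equations)."""
    A = np.diag(m) + L
    H = P.T @ P + lam * (A.T @ A)
    return np.linalg.solve(H, P.T @ d + lam * (A.T @ q))

def penalty_obj(m, u):
    A = np.diag(m) + L
    return 0.5*np.sum((P @ u - d)**2) + 0.5*lam*np.sum((A @ u - q)**2)

m = np.ones(n)  # initial iterate
objs = []
for _ in range(300):
    u = eliminate_u(m)             # state eliminated by one linear solve
    objs.append(penalty_obj(m, u))
    r = (np.diag(m) + L) @ u - q   # PDE residual, nonzero by design
    m = m - 0.02 * lam * u * r     # gradient step in the parameters only
```

Since u minimizes the penalty objective exactly for each m, the envelope theorem makes lam * u * r the gradient of the reduced penalty objective, so each outer iteration costs roughly one solve, as in the conventional reduced approach.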
Supervised Dictionary Learning
It is now well established that sparse signal models are well suited to
restoration tasks and can effectively be learned from audio, image, and video
data. Recent research has been aimed at learning discriminative sparse models
instead of purely reconstructive ones. This paper proposes a new step in that
direction, with a novel sparse representation for signals belonging to
different classes in terms of a shared dictionary and multiple class-decision
functions. The linear variant of the proposed model admits a simple
probabilistic interpretation, while its most general variant admits an
interpretation in terms of kernels. An optimization framework for learning all
the components of the proposed model is presented, along with experimental
results on standard handwritten digit and texture classification tasks.
A Fast Active Set Block Coordinate Descent Algorithm for ℓ1-regularized least squares
The problem of finding sparse solutions to underdetermined systems of linear
equations arises in several applications (e.g. signal and image processing,
compressive sensing, statistical inference). A standard tool for dealing with
sparse recovery is the ℓ1-regularized least-squares approach, which has
recently been attracting the attention of many researchers. In this paper, we
describe an active set estimate (i.e. an estimate of the indices of the zero
variables in the optimal solution) for the considered problem that tries to
quickly identify as many active variables as possible at a given point, while
guaranteeing that some approximate optimality conditions are satisfied. A
relevant feature of the estimate is that it gives a significant reduction of
the objective function when setting to zero all those variables estimated
active. This makes it easy to embed the estimate into a given globally convergent
algorithmic framework. In particular, we include our estimate into a block
coordinate descent algorithm for ℓ1-regularized least squares, analyze
the convergence properties of this new active set method, and prove that its
basic version converges at a linear rate. Finally, we report numerical
results showing the effectiveness of the approach.
Comment: 28 pages, 5 figures
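A rough sketch of the active-set idea (not the paper's exact estimate): run coordinate descent with soft-thresholding on an ℓ1-regularized least-squares toy problem, and in each sweep skip the zero variables whose partial gradients already satisfy the optimality condition |A_i^T r| <= lam. The problem sizes and the regularization weight below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, s = 40, 100, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)  # roughly unit-norm columns
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = 3*rng.normal(size=s)
b = A @ x_true                            # noiseless measurements

lam = 0.05                                # l1 weight (ad hoc choice)
col_sq = np.sum(A*A, axis=0)

x = np.zeros(n)
r = b - A @ x
for _ in range(100):
    # Active-set estimate: zero variables already satisfying |A_i^T r| <= lam
    # are predicted to stay zero and are skipped for this sweep.
    g = A.T @ r
    work = np.where((x != 0) | (np.abs(g) > lam))[0]
    for i in work:
        rho = A[:, i] @ r + col_sq[i]*x[i]                     # partial corr.
        xi = np.sign(rho)*max(abs(rho) - lam, 0.0)/col_sq[i]   # soft-threshold
        if xi != x[i]:
            r += A[:, i]*(x[i] - xi)      # keep the residual in sync
            x[i] = xi
```

When most variables are zero at the solution, the working set shrinks quickly and each sweep touches only a small fraction of the coordinates, which is the source of the speedup the abstract describes.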
Alternating Direction Methods for Latent Variable Gaussian Graphical Model Selection
Chandrasekaran, Parrilo and Willsky (2010) proposed a convex optimization
problem to characterize graphical model selection in the presence of unobserved
variables. This convex optimization problem aims to estimate an inverse
covariance matrix that can be decomposed into a sparse matrix minus a low-rank
matrix from sample data. Solving this convex optimization problem is very
challenging, especially for large problems. In this paper, we propose two
alternating direction methods for solving this problem. The first method is to
apply the classical alternating direction method of multipliers to solve the
problem as a consensus problem. The second method is a proximal gradient based
alternating direction method of multipliers. Our methods exploit and take
advantage of the special structure of the problem and thus can solve large
problems very efficiently. A global convergence result is established for the
proposed methods. Numerical results on both synthetic data and gene expression
data show that our methods typically solve problems with one million variables in
one to two minutes, and are usually five to thirty-five times faster than a
state-of-the-art Newton-CG proximal point algorithm.
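To make the splitting concrete, here is a compact NumPy sketch of one alternating direction scheme for the sparse-minus-low-rank problem: a generic three-block ADMM on min -logdet(R) + <Sigma, R> + alpha*||S||_1 + beta*tr(L), subject to R = S - L and L positive semidefinite. It is close in spirit to, but not identical to, either of the paper's two methods, and the synthetic data and parameters alpha, beta, rho are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 15

# Synthetic sample covariance from a random sparse SPD precision matrix.
W = rng.normal(size=(p, p)) * (rng.random((p, p)) < 0.2)
Sigma = np.linalg.inv(W @ W.T + np.eye(p))

alpha, beta, rho = 0.1, 0.5, 1.0
R, S = np.eye(p), np.eye(p)
L = np.zeros((p, p))
U = np.zeros((p, p))  # scaled dual variable for the constraint R - S + L = 0

def psd_proj(M):
    """Project a symmetric matrix onto the PSD cone by eigenvalue clipping."""
    w, Q = np.linalg.eigh((M + M.T)/2)
    return (Q * np.maximum(w, 0.0)) @ Q.T

for _ in range(300):
    # R-step: prox of -logdet(.) + <Sigma, .>, closed form via eigensystem;
    # the update keeps R positive definite by construction.
    B = S - L - U - Sigma/rho
    w, Q = np.linalg.eigh((B + B.T)/2)
    R = (Q * ((w + np.sqrt(w**2 + 4.0/rho))/2)) @ Q.T
    # S-step: entrywise soft-thresholding (prox of the l1 penalty).
    M = R + L + U
    S = np.sign(M) * np.maximum(np.abs(M) - alpha/rho, 0.0)
    # L-step: shift by the trace-penalty gradient, project onto the PSD cone.
    L = psd_proj(S - R - U - (beta/rho)*np.eye(p))
    # Dual ascent on the split constraint.
    U = U + (R - S + L)
```

Every step is a closed-form proximal operation (one eigendecomposition or one thresholding), which is why such methods scale to much larger problems than generic interior-point solvers.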