MM Algorithms for Geometric and Signomial Programming
This paper derives new algorithms for signomial programming, a generalization
of geometric programming. The algorithms are based on a generic principle for
optimization called the MM algorithm. In this setting, one can apply the
geometric-arithmetic mean inequality and a supporting hyperplane inequality to
create a surrogate function with parameters separated. Thus, unconstrained
signomial programming reduces to a sequence of one-dimensional minimization
problems. Simple examples demonstrate that the MM algorithm derived can
converge to a boundary point or to one point of a continuum of minimum points.
Conditions under which the minimum point is unique or occurs in the interior of
parameter space are proved for geometric programming. Convergence to an
interior point occurs at a linear rate. Finally, the MM framework easily
accommodates equality and inequality constraints of signomial type. For the
most important special case, constrained quadratic programming, the MM
algorithm involves very simple updates.
Comment: 16 pages, 1 figure
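The AM-GM separation step described in the abstract can be illustrated on a toy posynomial. The objective f(x1, x2) = x1*x2 + 4/x1 + 4/x2, the function names, and the closed-form updates below are our own illustrative choices (a minimal sketch of the technique, not the paper's algorithm):

```python
def f(x1, x2):
    """Posynomial objective: one coupling monomial plus two separable terms."""
    return x1 * x2 + 4.0 / x1 + 4.0 / x2

def mm_posynomial(x1, x2, iters=200):
    """Minimize f by MM.  At the current iterate (x1k, x2k), the
    arithmetic-geometric mean inequality majorizes the coupling term
    with the parameters separated:
        x1*x2 <= x2k/(2*x1k)*x1**2 + x1k/(2*x2k)*x2**2   (equality at x^k),
    so each coordinate of the surrogate is minimized in closed form:
        d/dx1 [x2k/(2*x1k)*x1**2 + 4/x1] = 0  =>  x1 = (4*x1k/x2k)**(1/3),
    and symmetrically for x2.  The RHS tuple below uses the old iterate
    for both updates, as MM requires."""
    for _ in range(iters):
        x1, x2 = (4.0 * x1 / x2) ** (1.0 / 3.0), (4.0 * x2 / x1) ** (1.0 / 3.0)
    return x1, x2
```

For this example the unique interior minimizer is (4**(1/3), 4**(1/3)), and the iterates approach it at a linear rate, consistent with the convergence behavior the abstract describes.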
Optimization Methods for Inverse Problems
Optimization plays an important role in solving many inverse problems.
Indeed, the task of inversion often either involves or is fully cast as a
solution of an optimization problem. In this light, the sheer non-linear,
non-convex, and large-scale nature of many of these inversions gives rise to
some very challenging optimization problems. The inverse problem community has
long been developing various techniques for solving such optimization tasks.
However, other, seemingly disjoint communities, such as that of machine
learning, have developed, almost in parallel, interesting alternative methods
which might have stayed under the radar of the inverse problem community. In
this survey, we aim to change that. In doing so, we first discuss current
state-of-the-art optimization methods widely used in inverse problems. We then
survey recent related advances in addressing similar challenges in problems
faced by the machine learning community, and discuss their potential advantages
for solving inverse problems. By highlighting the similarities among the
optimization challenges faced by the inverse problem and the machine learning
communities, we hope that this survey can serve as a bridge in bringing
together these two communities and encourage cross-fertilization of ideas.
Comment: 13 pages
Multiplicative Noise Removal Using Variable Splitting and Constrained Optimization
Multiplicative noise (also known as speckle noise) models are central to the
study of coherent imaging systems, such as synthetic aperture radar and sonar,
and ultrasound and laser imaging. These models introduce two additional layers
of difficulties with respect to the standard Gaussian additive noise scenario:
(1) the noise is multiplied by (rather than added to) the original image; (2)
the noise is not Gaussian, with Rayleigh and Gamma being commonly used
densities. These two features of multiplicative noise models preclude the
direct application of most state-of-the-art algorithms, which are designed for
solving unconstrained optimization problems where the objective has two terms:
a quadratic data term (log-likelihood), reflecting the additive and Gaussian
nature of the noise, plus a convex (possibly nonsmooth) regularizer (e.g., a
total variation or wavelet-based regularizer/prior). In this paper, we address
these difficulties by: (1) converting the multiplicative model into an additive
one by taking logarithms, as proposed by some other authors; (2) using variable
splitting to obtain an equivalent constrained problem; and (3) dealing with
this optimization problem using the augmented Lagrangian framework. A set of
experiments shows that the proposed method, which we name MIDAL (multiplicative
image denoising by augmented Lagrangian), yields state-of-the-art results both
in terms of speed and denoising performance.
Comment: 11 pages, 7 figures, 2 tables. To appear in the IEEE Transactions on
Image Processing
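The log-transform, variable-splitting, and augmented-Lagrangian recipe can be sketched on a 1-D toy problem. Note the assumptions: a quadratic smoothness penalty stands in for the paper's total-variation/wavelet regularizer, the data term is the per-sample log-domain likelihood for multiplicative Gamma noise with shape L, and midal_sketch is a hypothetical name; this is not the authors' MIDAL implementation:

```python
import numpy as np

def midal_sketch(y, lam=2.0, rho=1.0, L=4, iters=50):
    """Toy 1-D analogue of the paper's approach.  With z = log x, the
    multiplicative model becomes additive, and the Gamma data term per
    sample is L*(z + y*exp(-z)) up to constants (convex in z).
    Splitting z = u and penalizing lam/2*||D u||^2 (a quadratic
    stand-in for TV), the augmented Lagrangian is minimized by
    alternating z-, u-, and dual updates (ADMM, scaled form)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)         # first-difference operator
    A = lam * D.T @ D + rho * np.eye(n)    # u-subproblem system matrix
    z = np.log(y)
    u = z.copy()
    d = np.zeros(n)                        # scaled dual variable
    for _ in range(iters):
        # z-step: per-sample Newton on L*(z + y*exp(-z)) + rho/2*(z-u+d)^2
        for _ in range(20):
            g = L * (1.0 - y * np.exp(-z)) + rho * (z - u + d)
            h = L * y * np.exp(-z) + rho   # second derivative, > 0
            z = z - g / h
        # u-step: quadratic subproblem -> one linear solve
        u = np.linalg.solve(A, rho * (z + d))
        # dual update
        d = d + z - u
    return np.exp(u)                       # back from the log domain
```

Each z-subproblem is separable and convex, and each u-subproblem is a single small linear system, which is the computational pattern the variable-splitting step is designed to produce.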
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm is based on a standard
line search strategy, whereas the second one combines the global efficiency
estimates of the corresponding first-order methods with fast asymptotic
convergence rates. Furthermore, both methods are computationally attractive,
since each Newton iteration requires the approximate solution of a linear
system of usually small dimension.
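The forward-backward envelope is easy to write down for a concrete composite instance. The lasso objective below, the step size gamma, and the function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbe_lasso(x, A, b, lam, gamma):
    """Forward-backward envelope of F(x) = 0.5*||Ax-b||^2 + lam*||x||_1:
        FBE(x) = min_z f(x) + <grad f(x), z-x> + ||z-x||^2/(2*gamma) + g(z),
    whose minimizing z is the forward-backward step
        z = prox_{gamma*g}(x - gamma*grad f(x)).
    Returns the envelope value and z."""
    r = A @ x - b
    grad = A.T @ r
    z = soft_threshold(x - gamma * grad, gamma * lam)
    dz = z - x
    val = 0.5 * (r @ r) + grad @ dz + (dz @ dz) / (2.0 * gamma) \
        + lam * np.abs(z).sum()
    return val, z
```

For gamma in (0, 1/L_f], with L_f the Lipschitz constant of grad f, one has F(z) <= FBE(x) <= F(x), with equality at stationary points, which is what lets the envelope serve as a continuously differentiable merit function for Newton-type schemes of the kind the abstract describes.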