First order algorithms in variational image processing
Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches on tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \to \min_u$, where the functional $\mathcal{D}$ is a data fidelity term, also depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image $u$, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth and convex functionals such as the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splittings and augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing the different methods and illustrating their success in applications.
Comment: 60 pages, 33 figures
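As a minimal, generic sketch (not taken from the paper) of the splitting idea described above, the following applies forward-backward splitting to the model $\mathcal{D}(Ku) + \alpha \mathcal{R}(u)$ with $\mathcal{D}(Ku) = \frac{1}{2}\|Ku - f\|^2$ and $\mathcal{R}(u) = \|u\|_1$; the operator K, data f, and parameter alpha are placeholders chosen for illustration.

```python
# Minimal sketch of forward-backward splitting (proximal gradient / ISTA) for
# (1/2)||K u - f||^2 + alpha * ||u||_1; the l1 prox is soft-thresholding.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(K, f, alpha, n_iter=200):
    # Lipschitz constant of the gradient of the smooth term 0.5*||Ku - f||^2.
    L = np.linalg.norm(K, 2) ** 2
    tau = 1.0 / L                      # step size
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)       # forward (gradient) step on the smooth term
        u = soft_threshold(u - tau * grad, tau * alpha)  # backward (prox) step
    return u

# Toy usage: recover a sparse vector from a random linear operator.
rng = np.random.default_rng(0)
K = rng.standard_normal((40, 100))
u_true = np.zeros(100); u_true[[3, 27, 81]] = [1.5, -2.0, 0.7]
f = K @ u_true
u_hat = proximal_gradient(K, f, alpha=0.1)
```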
Assessing the reliability of general-purpose Inexact Restoration methods
Inexact Restoration methods have proved to be effective for solving constrained optimization problems in which some structure of the feasible set induces a natural way of recovering feasibility from arbitrary infeasible points. Sometimes natural ways of dealing with minimization over tangent approximations of the feasible set are also employed. A recent paper [Banihashemi and Kaya (2013)] suggests that the Inexact Restoration approach can be competitive with well-established nonlinear programming solvers when applied to certain control problems without any problem-oriented procedure for restoring feasibility. This result motivated us to revisit the idea of designing general-purpose Inexact Restoration methods, especially for large-scale problems. In this paper we introduce affordable algorithms of Inexact Restoration type for solving arbitrary nonlinear programming problems and we perform the first experiments that aim to assess their reliability. Initially, we define a purely local Inexact Restoration algorithm with quadratic convergence. Then, we modify the local algorithm in order to increase the chances of success of both the restoration and the optimization phases. This hybrid algorithm is intermediate between the local algorithm and a globally convergent one for which, under suitable assumptions, convergence to KKT points can be proved.
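The two-phase structure referred to above can be pictured with a rough skeleton, not the algorithms proposed in the paper: a restoration phase that moves toward feasibility, an optimization phase on an approximation of the feasible set around the restored point, and an acceptance test. The `restore` and `optimize_tangent` callables and the simplified merit comparison below are illustrative assumptions.

```python
# Hypothetical skeleton of a two-phase Inexact Restoration iteration; `restore`
# and `optimize_tangent` stand in for problem-dependent subproblem solvers, and
# the acceptance test is a simplified merit comparison, not the paper's rule.
import numpy as np

def inexact_restoration(x0, f, c, restore, optimize_tangent, theta=0.5, n_iter=50):
    """f: objective, c: constraint map (c(x) = 0 means feasible)."""
    x = np.asarray(x0, dtype=float)
    merit = lambda p: theta * f(p) + (1 - theta) * np.linalg.norm(c(p))
    for _ in range(n_iter):
        y = restore(x)               # restoration phase: reduce infeasibility ||c(x)||
        z = optimize_tangent(y)      # optimization phase: decrease f near the feasible set
        x = z if merit(z) <= merit(x) else y
    return x

# Toy usage: minimize x0^2 + x1^2 subject to x0 + x1 = 1.
f = lambda x: float(x @ x)
c = lambda x: np.array([x[0] + x[1] - 1.0])
restore = lambda x: x - (x[0] + x[1] - 1.0) / 2.0 * np.ones(2)   # project onto the line
d = np.array([1.0, -1.0]) / np.sqrt(2.0)                         # tangent direction
optimize_tangent = lambda y: y - 0.1 * (d @ (2 * y)) * d         # projected gradient step
x_star = inexact_restoration(np.array([2.0, -3.0]), f, c, restore, optimize_tangent)
```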
Economic inexact restoration for derivative-free expensive function minimization and applications
The Inexact Restoration approach has proved to be an adequate tool for
handling the problem of minimizing an expensive function within an arbitrary
feasible set by using different degrees of precision in the objective function.
The Inexact Restoration framework allows one to obtain suitable convergence and
complexity results for an approach that rationally combines low- and
high-precision evaluations. In the present research, it is recognized that many
problems with expensive objective functions are nonsmooth and, sometimes, even
discontinuous. Having this in mind, the Inexact Restoration approach is
extended to the nonsmooth or discontinuous case. Although optimization phases
that rely on smoothness cannot be used in this case, basic convergence and
complexity results are recovered. A derivative-free optimization phase is
defined and the subproblems that arise at this phase are solved using a
regularization approach that takes advantage of different notions of
stationarity. The new methodology is applied to the problem of reproducing a
controlled experiment that mimics the failure of a dam.
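The idea of rationally combining low- and high-precision evaluations of an expensive objective, as described above, can be illustrated with a toy sketch that is not the paper's algorithm: candidate points are screened with a cheap, inaccurate evaluation and only promising ones are re-evaluated at higher precision before acceptance. The evaluation interface, the coordinate-wise trial steps, and the two-stage test are assumptions made for this sketch.

```python
# Hypothetical sketch of mixing low- and high-precision evaluations of an
# expensive objective inside a derivative-free coordinate search.
import numpy as np

def mixed_precision_search(x0, f_eval, step=0.5, n_iter=100):
    """f_eval(x, precision): approximation of f(x); higher `precision` means a
    more accurate (and more expensive) evaluation."""
    x = np.asarray(x0, dtype=float)
    fx = f_eval(x, precision=1.0)          # accurate value at the incumbent
    for _ in range(n_iter):
        improved = False
        for i in range(x.size):
            for s in (+step, -step):
                trial = x.copy(); trial[i] += s
                if f_eval(trial, precision=0.1) < fx:     # cheap screening
                    ft = f_eval(trial, precision=1.0)     # confirm at high precision
                    if ft < fx:
                        x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5                    # shrink the stencil if no progress
    return x, fx

# Toy usage: a noisy quadratic whose evaluation error shrinks as precision grows.
rng = np.random.default_rng(2)
f_eval = lambda x, precision: float(x @ x) + rng.normal(scale=0.01 / precision)
x_best, f_best = mixed_precision_search(np.array([3.0, -2.0]), f_eval)
```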
On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
We consider a variable-metric, linesearch-based proximal gradient method for
the minimization of the sum of a smooth, possibly nonconvex function and a
convex, possibly nonsmooth term. We prove convergence of this iterative
algorithm to a critical point if the objective function satisfies the
Kurdyka-Łojasiewicz property at each point of its domain, under the assumption
that a limit point exists. The proposed method is applied to a wide collection
of image processing problems, and our numerical tests show that the algorithm
is flexible, robust, and competitive when compared to recently proposed
approaches able to address the optimization problems arising in the
considered applications.
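A minimal sketch, under standard assumptions, of a linesearch-based proximal gradient step for min f(x) + g(x) with f smooth (possibly nonconvex) and g = lam*||.||_1: the Armijo-type sufficient-decrease test, the backtracking parameters, and the l1 choice of g are generic illustrations, not the specific variable-metric rule analyzed in the paper.

```python
# Generic sketch of proximal gradient with backtracking linesearch for
# F(x) = f(x) + lam*||x||_1, with f smooth and possibly nonconvex.
import numpy as np

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linesearch_prox_grad(x0, f, grad_f, lam, n_iter=100, delta=0.5, beta=1e-4):
    x = np.asarray(x0, dtype=float)
    F = lambda z: f(z) + lam * np.abs(z).sum()
    for _ in range(n_iter):
        g = grad_f(x)
        t = 1.0
        while True:
            z = prox_l1(x - t * g, t * lam)    # candidate proximal step
            # Accept if the composite objective decreases sufficiently.
            if F(z) <= F(x) - (beta / t) * np.linalg.norm(z - x) ** 2:
                break
            t *= delta                          # backtrack the step size
            if t < 1e-12:                       # safeguard against stalling
                z = x
                break
        x = z
    return x

# Toy usage: f(x) = 0.5*||Ax - b||^2 (smooth), g(x) = 0.1*||x||_1.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 60)); b = rng.standard_normal(30)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
x_hat = linesearch_prox_grad(np.zeros(60), f, grad_f, lam=0.1)
```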
Large-scale Binary Quadratic Optimization Using Semidefinite Relaxation and Applications
In computer vision, many problems such as image segmentation, pixel labelling, and scene parsing can be formulated as binary quadratic programs (BQPs). For submodular problems, cut-based methods can be employed to solve large-scale instances efficiently. However, general nonsubmodular problems are significantly more challenging to solve. Finding a solution when the problem is large enough to be of practical interest, however, typically requires relaxation. Two standard relaxation methods are widely used for solving general BQPs: spectral methods and semidefinite programming (SDP), each with its own advantages and disadvantages. Spectral relaxation is simple and easy to implement, but its bound is loose. Semidefinite relaxation has a tighter bound, but its computational complexity is high, especially for large-scale problems. In this work, we present a new SDP formulation for BQPs with two desirable properties. First, it has a relaxation bound similar to that of conventional SDP formulations. Second, compared with conventional SDP methods, the new SDP formulation leads to a significantly more efficient and scalable dual optimization approach, which has the same degree of complexity as spectral methods. We then propose two solvers, namely quasi-Newton and smoothing Newton methods, for the dual problem. Both are significantly more efficient than standard interior-point methods. In practice, the smoothing Newton solver is faster than the quasi-Newton solver for dense or medium-sized problems, while the quasi-Newton solver is preferable for large sparse/structured problems. Our experiments on a few computer vision applications, including clustering, image segmentation, co-segmentation, and registration, show the potential of our SDP formulation for solving large-scale BQPs.
Comment: Fixed some typos. 18 pages. Accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence
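For contrast with the SDP approach described above, here is a minimal sketch of the simple spectral relaxation baseline the abstract mentions: the binary constraint x in {-1, 1}^n is relaxed to ||x||^2 = n, so the relaxed minimizer of x^T A x is the scaled eigenvector of the smallest eigenvalue, which is then rounded back to binary labels. This is a generic illustration, not the formulation proposed in the paper.

```python
# Spectral relaxation baseline for a BQP min_{x in {-1,1}^n} x^T A x: relax the
# binary constraint to ||x||^2 = n, solve via the eigenvector of the smallest
# eigenvalue of A, then round to {-1, 1}. Illustrative only.
import numpy as np

def spectral_relaxation_bqp(A):
    A = 0.5 * (A + A.T)                         # symmetrize the quadratic cost
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eigh(A)        # eigenvalues in ascending order
    v = eigvecs[:, 0]                           # eigenvector of the smallest eigenvalue
    x_relaxed = np.sqrt(n) * v                  # relaxed solution with ||x||^2 = n
    x_binary = np.where(x_relaxed >= 0, 1, -1)  # simple sign rounding
    return x_binary, x_relaxed

# Toy usage on a random symmetric cost matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
x, _ = spectral_relaxation_bqp(A)
```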