A two-phase gradient method for quadratic programming problems with a single linear constraint and bounds on the variables
We propose a gradient-based method for quadratic programming problems with a
single linear constraint and bounds on the variables. Inspired by the GPCG
algorithm for bound-constrained convex quadratic programming [J.J. Moré and
G. Toraldo, SIAM J. Optim. 1, 1991], our approach alternates between two phases
until convergence: an identification phase, which performs gradient projection
iterations until either a candidate active set is identified or no reasonable
progress is made, and an unconstrained minimization phase, which reduces the
objective function in a suitable space defined by the identification phase, by
applying either the conjugate gradient method or a recently proposed spectral
gradient method. The algorithm differs from GPCG not only because it deals
with a more general class of problems, but mainly in the way it stops the
minimization phase. The stopping rule compares a measure of optimality in the
reduced space with a measure of bindingness of the variables at their bounds,
defined by extending the concept of proportioning proposed by several authors
for box-constrained problems. If the
objective function is bounded, the algorithm converges to a stationary point
thanks to a suitable application of the gradient projection method in the
identification phase. For strictly convex problems, the algorithm converges to
the optimal solution in a finite number of steps, even in the case of degeneracy.
Extensive numerical experiments show the effectiveness of the proposed
approach.
Comment: 30 pages, 17 figures
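A core building block of any gradient projection phase for this problem class is the Euclidean projection onto the feasible set {x : aᵀx = b, l ≤ x ≤ u}. The sketch below is an illustrative implementation of that projection via bisection on the multiplier (the function name and tolerances are my own choices, not taken from the paper), exploiting the fact that x(λ) = clip(y + λa, l, u) makes aᵀx(λ) nondecreasing in λ:

```python
import numpy as np

def project(y, a, b, l, u, tol=1e-10):
    """Euclidean projection of y onto {x : a @ x = b, l <= x <= u}.

    Uses the KKT characterization x(lam) = clip(y + lam*a, l, u); the
    map g(lam) = a @ x(lam) - b is nondecreasing, so the multiplier can
    be found by bisection.  The feasible set is assumed nonempty.
    """
    def g(lam):
        return a @ np.clip(y + lam * a, l, u) - b

    # expand a bracket [lo, hi] with g(lo) <= 0 <= g(hi)
    lo, hi = -1.0, 1.0
    while g(lo) > 0:
        lo *= 2.0
    while g(hi) < 0:
        hi *= 2.0
    # bisection on the monotone function g
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return np.clip(y + 0.5 * (lo + hi) * a, l, u)
```

A gradient projection iteration would then take the form x_{k+1} = project(x_k − α_k ∇f(x_k), a, b, l, u) for some step length α_k.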
ACQUIRE: an inexact iteratively reweighted norm approach for TV-based Poisson image restoration
We propose a method, called ACQUIRE, for the solution of constrained
optimization problems modeling the restoration of images corrupted by Poisson
noise. The objective function is the sum of a generalized Kullback-Leibler
divergence term and a TV regularizer, subject to nonnegativity and possibly
other constraints, such as flux conservation. ACQUIRE is a line-search method
that considers a smoothed version of TV, based on a Huber-like function, and
computes the search directions by minimizing quadratic approximations of the
problem, built by exploiting some second-order information. A classical
second-order Taylor approximation is used for the Kullback-Leibler term and an
iteratively reweighted norm approach for the smoothed TV term. We prove that
the sequence generated by the method has a subsequence converging to a
minimizer of the smoothed problem, and that every limit point of the sequence
is such a minimizer.
Furthermore, if the problem is strictly convex, the whole sequence is
convergent. We note that convergence is achieved without requiring the exact
minimization of the quadratic subproblems; low accuracy in this minimization
can be used in practice, as shown by numerical results. Experiments on
reference test problems show that our method is competitive with
well-established methods for TV-based Poisson image restoration, in terms of
both computational efficiency and image quality.
Comment: 37 pages, 13 figures
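The iteratively reweighted norm idea for the smoothed TV term can be illustrated in one dimension: each Huber-smoothed absolute value is majorized at the current point t₀ by a quadratic with weight 1/max(|t₀|, δ). The sketch below (function names are mine; this shows only the reweighting principle, not ACQUIRE itself) constructs that majorant:

```python
import numpy as np

def huber(t, delta):
    # Huber-like smoothing of |t|: quadratic on [-delta, delta], linear outside
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= delta, t**2 / (2 * delta), np.abs(t) - delta / 2)

def irn_weight(t0, delta):
    # weight of the quadratic majorant of huber(., delta) at the current
    # point t0 -- the "iteratively reweighted norm" update
    return 1.0 / np.maximum(np.abs(t0), delta)

def irn_majorant(t, t0, delta):
    # quadratic model that touches huber at t0 and lies above it everywhere,
    # so minimizing it decreases the smoothed TV term (majorize-minimize)
    w = irn_weight(t0, delta)
    c = huber(t0, delta) - 0.5 * w * t0**2
    return 0.5 * w * np.asarray(t, dtype=float)**2 + c
```

Summing such quadratics over the discrete gradient components yields the weighted-norm approximation of the smoothed TV term that, together with the second-order Taylor model of the Kullback-Leibler term, defines the quadratic subproblem.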
Using gradient directions to get global convergence of Newton-type methods
The renewed interest in Steepest Descent (SD) methods following the work of
Barzilai and Borwein [IMA Journal of Numerical Analysis, 8 (1988)] has driven
us to consider a globalization strategy based on SD, which is applicable to any
line-search method. In particular, we combine Newton-type directions with
scaled SD steps to obtain suitable descent directions. Scaling the SD directions
with a suitable step length makes a significant difference with respect to
similar globalization approaches, in terms of both theoretical features and
computational behavior. We apply our strategy to Newton's method and the BFGS
method, with computational results that appear interesting compared with the
results of well-established globalization strategies devised ad hoc for those
methods.
Comment: 22 pages, 11 figures
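A minimal sketch of such a globalization scheme, under my own illustrative choices (a fixed descent-angle threshold, a safeguarded BB1 step length as the SD scaling, and Armijo backtracking; the authors' exact rules may differ):

```python
import numpy as np

def globalized_newton(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Newton's method safeguarded by scaled steepest descent steps.

    If the Newton direction is unavailable or is not a sufficiently
    good descent direction, fall back to the steepest descent direction
    scaled by a safeguarded Barzilai-Borwein (BB1) step length, then
    apply an Armijo backtracking line search.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    x_old = g_old = None
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        try:
            d = np.linalg.solve(hess(x), -g)
            descent = g @ d < -1e-6 * np.linalg.norm(g) * np.linalg.norm(d)
        except np.linalg.LinAlgError:
            descent = False
        if not descent:
            # scaled SD fallback: BB1 step length, safeguarded
            if x_old is not None:
                s, yv = x - x_old, g - g_old
                alpha = (s @ s) / max(s @ yv, 1e-12)
            else:
                alpha = 1.0 / max(np.linalg.norm(g), 1.0)
            d = -np.clip(alpha, 1e-8, 1e8) * g
        # Armijo backtracking line search
        t, fx, slope = 1.0, f(x), g @ d
        while t > 1e-14 and f(x + t * d) > fx + 1e-4 * t * slope:
            t *= 0.5
        x_old, g_old = x, g
        x = x + t * d
        g = grad(x)
    return x
```

The same wrapper applies to a BFGS direction by replacing the Hessian solve with the quasi-Newton step.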
AIRO 2016. 46th Annual Conference of the Italian Operational Research Society. Emerging Advances in Logistics Systems Trieste, September 6-9, 2016 - Abstracts Book
The AIRO 2016 book of abstracts collects the contributions of the conference participants.
The AIRO 2016 Conference is a special occasion for the Italian Operations Research community, as the AIRO annual conference reaches its 46th edition in 2016. To mark this occasion, the Programme and Organizing Committee, chaired by Walter Ukovich, prepared a high-quality Scientific Programme that includes the first initiative of AIRO Young, the new AIRO poster section, which aims to promote the work of students, PhD students, and postdocs with an interest in Operations Research.
The Scientific Programme of the Conference offers a broad spectrum of contributions covering the variety of OR topics and research areas, with an emphasis on “Emerging Advances in Logistics Systems”.
The event aims at stimulating the integration of existing methods and systems, fostering communication among different research groups, and laying the foundations for integrated OR research projects in the next decade.
Distinct thematic sessions are scheduled throughout the AIRO 2016 days, starting with an initial presentation of the objectives and features of the Conference. In addition, three internationally renowned invited speakers, Gianni Di Pillo, Frédéric Semet, and Stefan Nickel, will deliver Plenary Lectures, gathering the AIRO 2016 participants together for key presentations on the latest advances and developments in OR research.
On the regularizing behavior of the SDA and SDC gradient methods in the solution of linear ill-posed problems
We analyze the regularization properties of two recently proposed gradient methods, SDA and SDC, applied to discrete linear inverse problems. By studying their filter factors, we show that the tendency of these methods to eliminate first the eigencomponents of the gradient corresponding to large singular values makes it possible to reconstruct the most significant part of the solution, thus yielding a useful filtering effect. This behavior is confirmed by numerical experiments on some image restoration problems. Furthermore, the experiments show that, for severely ill-conditioned problems and high noise levels, the SDA and SDC methods can be competitive with the Conjugate Gradient (CG) method: although slightly slower than CG, they exhibit better semiconvergence behavior.
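The filter-factor mechanism behind semiconvergence can be illustrated with the simplest gradient iteration, fixed-step Landweber, whose filter factors have the closed form φᵢ(k) = 1 − (1 − τ sᵢ²)ᵏ (SDA and SDC use specially designed step lengths; this sketch, with a diagonal toy problem and parameters of my own choosing, only shows the mechanism the abstract describes):

```python
import numpy as np

# Diagonal ill-posed test problem, written directly in the SVD basis:
# components with large singular values are recovered first, while
# noise-dominated components with tiny singular values enter the
# reconstruction only after many more iterations.
n = 40
s = np.logspace(0, -6, n)                 # decaying singular values
x_true = np.ones(n)
noise = 1e-3 * np.cos(np.arange(n))       # small deterministic "noise"
beta = s * x_true + noise                 # data coefficients in the SVD basis
tau = 1.0                                 # fixed step, tau <= 1 / s.max()**2

def error_after(k):
    phi = 1.0 - (1.0 - tau * s**2) ** k   # Landweber filter factors at step k
    return np.linalg.norm(phi * beta / s - x_true)

ks = [0, 10, 10**3, 10**5, 10**7, 10**9, 10**11]
errors = [error_after(k) for k in ks]
# semiconvergence: the error first decreases as signal components are
# recovered, then grows as amplified noise components are included
```

Plotting `errors` against `ks` on a log scale makes the characteristic semiconvergence "U" shape visible; a regularizing stopping rule aims at the bottom of that curve.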