Risk Minimization, Regret Minimization and Progressive Hedging Algorithms
This paper begins with a study of the dual representations of risk and regret
measures and their impact on modeling multistage decision making under
uncertainty. A relationship between risk envelopes and regret envelopes is
established by using Lagrangian duality theory. Such a relationship opens the
door to a decomposition scheme, called progressive hedging, for solving
multistage risk minimization and regret minimization problems. In particular,
the classical progressive hedging algorithm is modified in order to handle a
new class of linkage constraints that arises from reformulations and other
applications of risk and regret minimization problems. Numerical results are
provided to show the efficiency of the progressive hedging algorithms.
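As a rough, hypothetical illustration of the decomposition idea (not the paper's formulation or its modified algorithm), the Python sketch below runs the classical progressive hedging iteration on a toy single-variable problem with quadratic scenario costs: each scenario subproblem is solved with a proximal penalty toward the averaged decision, and multipliers are updated to enforce nonanticipativity.

```python
import numpy as np

# Hypothetical toy problem (not the paper's formulation): choose one first-stage
# decision x minimizing sum_s p_s * 0.5 * (x - a_s)^2 over scenarios s.
# Progressive hedging keeps a copy x_s per scenario, penalizes deviation from the
# averaged decision x_bar, and updates multipliers w_s to enforce nonanticipativity.

def progressive_hedging(a, p, rho=1.0, iters=100, tol=1e-8):
    a = np.asarray(a, dtype=float)       # scenario targets a_s
    p = np.asarray(p, dtype=float)       # scenario probabilities p_s
    x = a.copy()                         # scenario copies x_s
    w = np.zeros_like(a)                 # nonanticipativity multipliers w_s
    x_bar = p @ x                        # implementable (averaged) decision
    for _ in range(iters):
        # Scenario subproblem: min_x 0.5*(x - a_s)^2 + w_s*x + (rho/2)*(x - x_bar)^2,
        # written in closed form for this quadratic toy cost.
        x = (a - w + rho * x_bar) / (1.0 + rho)
        x_bar_new = p @ x
        w = w + rho * (x - x_bar_new)    # multiplier update
        if np.max(np.abs(x - x_bar_new)) < tol:
            x_bar = x_bar_new
            break
        x_bar = x_bar_new
    return x_bar

# Three equally likely scenarios; the iterates converge to the mean of a_s.
print(progressive_hedging(a=[1.0, 2.0, 4.0], p=[1 / 3, 1 / 3, 1 / 3]))
```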
Lagrangian Relaxation for Mixed-Integer Linear Programming: Importance, Challenges, Recent Advancements, and Opportunities
Operations in areas of importance to society are frequently modeled as
Mixed-Integer Linear Programming (MILP) problems. While MILP problems suffer
from combinatorial complexity, Lagrangian Relaxation has been a beacon of hope
to resolve the associated difficulties through decomposition. Due to the
non-smooth nature of Lagrangian dual functions, the coordination aspect of the
method has posed serious challenges. This paper presents several significant
historical milestones (beginning with Polyak's pioneering work in 1967) toward
improving Lagrangian Relaxation coordination through improved optimization of
non-smooth functionals. Finally, this paper presents the most recent
developments in Lagrangian Relaxation for fast resolution of MILP problems. The
paper also briefly discusses the opportunities that Lagrangian Relaxation can
provide at this point in time.
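For context, the sketch below shows the basic scheme that such surveys build on: relax the coupling constraints, solve the now-separable subproblems, and update the multipliers with a Polyak-style subgradient step (damped here by 1/k for stability). The toy binary instance, the step scaling, and the target value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical toy instance of Lagrangian relaxation for a binary program
# (not from the paper): maximize c'x subject to Ax <= b, x in {0,1}^n.
# The coupling rows Ax <= b are relaxed with multipliers lam >= 0, so the
# relaxed problem separates over the variables and is solved by inspection.

def lagrangian_relaxation(c, A, b, iters=200):
    c, A, b = map(np.asarray, (c, A, b))
    lam = np.zeros(len(b))                      # one multiplier per relaxed row
    best_dual = np.inf                          # best (smallest) dual bound found
    target = 0.0                                # value of the feasible point x = 0
    for k in range(1, iters + 1):
        reduced = c - lam @ A                   # reduced costs under lam
        x = (reduced > 0).astype(float)         # maximizer of the relaxed problem
        dual_val = reduced @ x + lam @ b        # L(lam), an upper bound on the optimum
        best_dual = min(best_dual, dual_val)
        g = b - A @ x                           # subgradient of the dual function at lam
        if np.allclose(g, 0.0):
            break
        # Polyak-style step toward the target value, damped by 1/k for stability.
        step = (dual_val - target) / (g @ g) / k
        lam = np.maximum(0.0, lam - step * g)   # projected subgradient step
    return best_dual, lam

# Tiny knapsack-like instance: the returned value upper-bounds the true optimum (3).
ub, lam = lagrangian_relaxation(c=[3.0, 2.0], A=[[2.0, 1.0]], b=[2.0])
print(ub, lam)
```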
Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection, but numerous extensions have now emerged, such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present from a
general perspective optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted ℓ2-penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
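As one concrete instance of the proximal methods mentioned above, the following sketch runs plain ISTA (proximal gradient with soft-thresholding) on an ℓ1-penalized least-squares problem; the data, regularization weight, and iteration count are illustrative assumptions rather than settings from the paper's experiments.

```python
import numpy as np

# Minimal proximal-gradient (ISTA) sketch for the l1-penalized least-squares
# problem min_x 0.5*||A x - b||^2 + lam*||x||_1, one common sparsity-inducing
# formulation. The toy data below are illustrative only.

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, with L the gradient's Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)              # gradient of the smooth quadratic part
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy example: recover a sparse vector from a few noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, b, lam=0.1)
print(np.nonzero(np.abs(x_hat) > 1e-3)[0])    # indices of recovered nonzeros
```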
Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems
Optimization methods are at the core of many problems in signal/image
processing, computer vision, and machine learning. For a long time, it has been
recognized that looking at the dual of an optimization problem may drastically
simplify its solution. Deriving efficient strategies that jointly bring into
play the primal and the dual problems is, however, a more recent idea which has
generated many important new contributions in recent years. These novel
developments are grounded on recent advances in convex analysis, discrete
optimization, parallel processing, and non-smooth optimization with emphasis on
sparsity issues. In this paper, we aim at presenting the principles of
primal-dual approaches, while giving an overview of numerical methods which
have been proposed in different contexts. We show the benefits which can be
drawn from primal-dual algorithms both for solving large-scale convex
optimization problems and discrete ones, and we provide various application
examples to illustrate their usefulness.
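As a small, self-contained example of such a primal-dual scheme, the sketch below applies a Chambolle-Pock-type iteration to 1D total-variation denoising, alternating a projection step on the dual variable with a proximal step on the primal one; the splitting, step sizes, and test signal are illustrative assumptions, not an algorithm reproduced from the paper.

```python
import numpy as np

# Hedged sketch of a primal-dual (Chambolle-Pock-type) iteration for 1D total
# variation denoising: min_x 0.5*||x - b||^2 + lam*||D x||_1, with D the
# first-difference operator. The signal and parameters are illustrative only.

def tv_denoise_primal_dual(b, lam, iters=300):
    n = len(b)
    D = np.diff(np.eye(n), axis=0)             # first-difference operator (the K in the splitting)
    L = np.linalg.norm(D, 2)                   # operator norm, used to pick step sizes
    tau = sigma = 0.99 / L                     # step sizes with tau*sigma*||D||^2 < 1
    x = b.copy()
    x_bar = b.copy()
    y = np.zeros(n - 1)                        # dual variable attached to D x
    for _ in range(iters):
        # Dual step: prox of the conjugate of lam*||.||_1 is projection onto [-lam, lam].
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
        # Primal step: prox of tau*0.5*||. - b||^2.
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)
        x_bar = 2.0 * x_new - x                # over-relaxation (theta = 1)
        x = x_new
    return x

# Noisy piecewise-constant signal: the TV term should recover the two flat pieces.
rng = np.random.default_rng(1)
signal = np.concatenate([np.zeros(50), np.ones(50)])
noisy = signal + 0.2 * rng.standard_normal(100)
print(np.round(tv_denoise_primal_dual(noisy, lam=1.0)[[25, 75]], 2))
```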