Combinatorial Continuous Maximal Flows
Maximum flow (and minimum cut) algorithms have had a strong impact on
computer vision. In particular, graph cuts algorithms provide a mechanism for
the discrete optimization of an energy functional which has been used in a
variety of applications such as image segmentation, stereo, image stitching and
texture synthesis. Algorithms based on the classical formulation of max-flow
defined on a graph are known to exhibit metrication artefacts in the solution.
Therefore, a recent trend has been to instead employ a spatially continuous
maximum flow (or the dual min-cut problem) in these same applications to
produce solutions with no metrication errors. However, known fast continuous
max-flow algorithms have no stopping criteria or have not been proved to
converge. In this work, we revisit the continuous max-flow problem and show
that the analogous discrete formulation is different from the classical
max-flow problem. We then apply an appropriate combinatorial optimization
technique to this combinatorial continuous max-flow (CCMF) problem to find a
null-divergence solution that exhibits no metrication artefacts and may be
solved exactly by a fast, efficient algorithm with provable convergence.
Finally, by exhibiting the dual problem of our CCMF formulation, we clarify the
fact, already proved by Nozawa in the continuous setting, that the max-flow and
the total variation problems are not always equivalent.
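The graph-cuts machinery referenced above rests on classical discrete max-flow. As a minimal illustration of that classical formulation (a generic Edmonds-Karp sketch on an invented four-node graph, not the CCMF algorithm of this paper):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(capacity)
    # residual capacities, copied so the input is left untouched
    res = [row[:] for row in capacity]
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if res[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:  # no augmenting path: flow is maximal
            return flow
        # bottleneck capacity along the path found
        bottleneck = float("inf")
        v = sink
        while v != source:
            bottleneck = min(bottleneck, res[parent[v]][v])
            v = parent[v]
        # push flow, updating forward and reverse residual edges
        v = sink
        while v != source:
            u = parent[v]
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
            v = u
        flow += bottleneck

# toy example: source 0, sink 3
cap = [
    [0, 3, 2, 0],
    [0, 0, 1, 2],
    [0, 0, 0, 2],
    [0, 0, 0, 0],
]
print(max_flow(cap, 0, 3))  # 4
```

In imaging applications the graph is a pixel lattice with terminal links to source and sink; the metrication artefacts discussed above arise precisely because this discrete formulation depends on the lattice's edge structure.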
Near-optimal perfectly matched layers for indefinite Helmholtz problems
A new construction of an absorbing boundary condition for indefinite
Helmholtz problems on unbounded domains is presented. This construction is
based on a near-best uniform rational interpolant of the inverse square root
function on the union of a negative and positive real interval, designed with
the help of a classical result by Zolotarev. Using Krein's interpretation of a
Stieltjes continued fraction, this interpolant can be converted into a
three-term finite difference discretization of a perfectly matched layer (PML)
which converges exponentially fast in the number of grid points. The
convergence rate is asymptotically optimal for both propagative and evanescent
wave modes. Several numerical experiments and illustrations are included.
Comment: Accepted for publication in SIAM Review.
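The key step above converts a rational interpolant into a three-term recurrence via a Stieltjes continued fraction. As a toy illustration of evaluating a continued fraction bottom-up (using the classical periodic fraction for the square root of 2, not the Zolotarev-derived coefficients of the paper), note how accuracy improves exponentially in the truncation depth:

```python
def cf_eval(a0, partials):
    """Evaluate a0 + 1/(b1 + 1/(b2 + ...)) bottom-up from the last term."""
    x = 0.0
    for b in reversed(partials):
        x = 1.0 / (b + x)
    return a0 + x

# sqrt(2) = 1 + 1/(2 + 1/(2 + ...)): each extra term shrinks the error
# by a roughly constant factor, i.e. exponential convergence in depth
approx = cf_eval(1, [2] * 12)
print(abs(approx - 2 ** 0.5))  # below 1e-8
```

The exponential convergence of the truncated fraction in its depth mirrors the PML's exponential convergence in the number of grid points.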
Primal-dual interior-point algorithms for linear programs with many inequality constraints
Linear programs (LPs) are one of the most basic and important classes of constrained optimization problems, involving the optimization of linear objective functions over sets defined by linear equality and inequality constraints. LPs have applications to a broad range of problems in engineering and operations research, and often arise as subproblems for algorithms that solve more complex optimization problems.
``Unbalanced'' inequality-constrained LPs with many more inequality constraints than variables are an important subclass of LPs. Under a basic non-degeneracy assumption, only a small number of the constraints can be active at the solution--it is only this active set that is critical to the problem description. On the other hand, the additional constraints make the problem harder to solve. While modern ``interior-point'' algorithms have become recognized as some of the best methods for solving large-scale LPs, they may not be recommended for unbalanced problems, because their per-iteration work does not scale well with the number of constraints.
In this dissertation, we investigate ``constraint-reduced'' interior-point algorithms designed to efficiently solve unbalanced LPs. At each iteration, these methods construct search directions based only on a small working set of constraints, while ignoring the rest. In this way, they significantly reduce their per-iteration work and, hopefully, their overall running time.
In particular, we focus on constraint-reduction methods for the highly efficient primal-dual interior-point (PDIP) algorithms. We propose and analyze a convergent constraint-reduced variant of Mehrotra's predictor-corrector PDIP algorithm, the algorithm implemented in virtually every interior-point software package for linear (and convex-conic) programming. We prove global and local quadratic convergence of this algorithm under a very general class of constraint selection rules and under minimal assumptions. We also propose and analyze two regularized constraint-reduced PDIP algorithms (with similar convergence properties) designed to deal directly with a type of degeneracy that constraint-reduced interior-point algorithms are often subject to. Prior schemes for dealing with this degeneracy could end up negating the benefit of constraint reduction. Finally, we investigate the performance of our algorithms by applying them to several test and application problems, and show that our algorithms often outperform alternative approaches.
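A constraint-reduced method needs a rule for choosing its small working set at each iterate. One natural heuristic (a generic sketch with invented data, not necessarily the selection rule analyzed in this dissertation) is to keep the constraints with the smallest slacks, since under non-degeneracy only the nearly active constraints matter near the solution:

```python
def working_set(A, b, x, m_keep):
    """Pick the m_keep constraints of A x <= b with smallest slack
    b_i - a_i . x at the iterate x: the 'most nearly active' ones a
    constraint-reduced method would use to form its search direction."""
    slacks = []
    for i, (row, bi) in enumerate(zip(A, b)):
        s = bi - sum(aj * xj for aj, xj in zip(row, x))
        slacks.append((s, i))
    slacks.sort()
    return sorted(i for _, i in slacks[:m_keep])

# five inequality constraints a_i . x <= b_i in R^2, current iterate x
A = [[1, 0], [0, 1], [1, 1], [-1, 0], [0, -1]]
b = [1.0, 1.0, 1.5, 0.0, 0.0]
x = [0.7, 0.7]
print(working_set(A, b, x, 2))  # [0, 2]
```

The per-iteration linear algebra then involves only the selected rows, which is where the savings over a full PDIP step come from when constraints vastly outnumber variables.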
Super-Linear Convergence of Dual Augmented-Lagrangian Algorithm for Sparsity Regularized Estimation
We analyze the convergence behaviour of a recently proposed algorithm for
regularized estimation called Dual Augmented Lagrangian (DAL). Our analysis is
based on a new interpretation of DAL as a proximal minimization algorithm. We
theoretically show under some conditions that DAL converges super-linearly in a
non-asymptotic and global sense. Due to a special modelling of sparse
estimation problems in the context of machine learning, the assumptions we make
are milder and more natural than those made in conventional analysis of
augmented Lagrangian algorithms. In addition, the new interpretation enables us
to generalize DAL to wide varieties of sparse estimation problems. We
experimentally confirm our analysis in a large-scale ℓ1-regularized
logistic regression problem and extensively compare the efficiency of DAL
algorithm to previously proposed algorithms on both synthetic and benchmark
datasets.
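For ℓ1-type sparse estimation, the proximal sub-steps that appear in DAL-style methods reduce to the standard soft-thresholding operator. A minimal sketch of that generic operator (not the full DAL algorithm):

```python
def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1: shrink each coordinate
    toward zero by lam, setting small coordinates exactly to zero."""
    out = []
    for vi in v:
        if vi > lam:
            out.append(vi - lam)
        elif vi < -lam:
            out.append(vi + lam)
        else:
            out.append(0.0)
    return out

print(soft_threshold([3.0, -0.2, 1.0], 0.5))  # [2.5, 0.0, 0.5]
```

It is this exact sparsity-producing step, cheap and separable across coordinates, that makes proximal interpretations of augmented Lagrangian methods attractive for sparse estimation.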
An Inexact Primal-Dual Smoothing Framework for Large-Scale Non-Bilinear Saddle Point Problems
We develop an inexact primal-dual first-order smoothing framework to solve a
class of non-bilinear saddle point problems with primal strong convexity.
Compared with existing methods, our framework yields a significant improvement
over the primal oracle complexity, while it has competitive dual oracle
complexity. In addition, we consider the situation where the primal-dual
coupling term has a large number of component functions. To efficiently handle
this situation, we develop a randomized version of our smoothing framework,
which allows the primal and dual sub-problems in each iteration to be solved by
randomized algorithms inexactly in expectation. The convergence of this
framework is analyzed both in expectation and with high probability. In terms
of the primal and dual oracle complexities, this framework significantly
improves over its deterministic counterpart. As an important application, we
adapt both frameworks for solving convex optimization problems with many
functional constraints. To obtain an ε-optimal and ε-feasible solution,
both frameworks achieve the best-known oracle complexities (in terms of
their dependence on ε).
Interior point methods and simulated annealing for nonsymmetric conic optimization
This thesis explores four methods for convex optimization. The first two are an interior point method and a simulated annealing algorithm that share a theoretical foundation. This connection is due to the interior point method’s use of the so-called entropic barrier, whose derivatives can be approximated through sampling. Here, the sampling will be carried out with a technique known as hit-and-run. By carefully analyzing the properties of hit-and-run sampling, it is shown that both the interior point method and the simulated annealing algorithm can solve a convex optimization problem in the membership oracle setting. The number of oracle calls made by these methods is bounded by a polynomial in the input size. The third method is an analytic center cutting plane method that shows promising performance for copositive optimization. It outperforms the first two methods by a significant margin on the problem of separating a matrix from the completely positive cone. The final method is based on Mosek’s algorithm for nonsymmetric conic optimization. Using its scaling matrix, search direction, and neighborhood, we define a method that converges to a near-optimal solution in polynomial time.
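Hit-and-run generates each new point by drawing a random direction, intersecting the resulting line with the feasible set, and sampling uniformly on that chord. A minimal sketch for a bounded polytope {x : Ax ≤ b} (the unit square here is an invented example, and real analyses require many steps for mixing):

```python
import math, random

def hit_and_run(x, A, b, steps, rng):
    """Hit-and-run over a bounded polytope {x : A x <= b}: pick a random
    direction, find the feasible chord through x, jump to a uniform
    point on it. Assumes x is strictly feasible and the set is bounded."""
    n = len(x)
    for _ in range(steps):
        # random direction on the unit sphere (Gaussian normalization)
        d = [rng.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(di * di for di in d))
        d = [di / norm for di in d]
        # chord endpoints: t_lo <= 0 <= t_hi keeping x + t d feasible
        t_lo, t_hi = -float("inf"), float("inf")
        for row, bi in zip(A, b):
            ad = sum(ai * di for ai, di in zip(row, d))
            slack = bi - sum(ai * xi for ai, xi in zip(row, x))
            if ad > 1e-12:
                t_hi = min(t_hi, slack / ad)
            elif ad < -1e-12:
                t_lo = max(t_lo, slack / ad)
        t = rng.uniform(t_lo, t_hi)
        x = [xi + t * di for xi, di in zip(x, d)]
    return x

# unit square [0, 1]^2 written as A x <= b
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1.0, 0.0, 1.0, 0.0]
rng = random.Random(0)
sample = hit_and_run([0.5, 0.5], A, b, 200, rng)
print(sample)  # a point inside the unit square
```

Only a membership test (here, the inequalities Ax ≤ b) is needed per step, which is why this sampler fits the membership oracle setting discussed above.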