A D.C. Algorithm via Convex Analysis Approach for Solving a Location Problem Involving Sets
We study a location problem that involves a weighted sum of distances to
closed convex sets. As several of the weights might be negative, traditional
solution methods of convex optimization are not applicable. After obtaining
some existence theorems, we introduce a simple but effective algorithm for
solving the problem. Our method is based on the Pham Dinh-Le Thi algorithm
for d.c. programming and a generalized version of the Weiszfeld algorithm,
which works well for convex location problems.
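As a small illustration of the convex building block mentioned above, here is a sketch of the classical Weiszfeld iteration for the weighted Fermat point of finitely many points with positive weights. The function name, stopping rule, and parameter values are illustrative assumptions, not taken from the paper, which treats the harder case of sets and signed weights.

```python
import numpy as np

def weiszfeld(points, weights, x0, tol=1e-8, max_iter=1000):
    """Classical Weiszfeld iteration minimizing sum_i w_i * ||x - a_i||
    for positive weights w_i and anchor points a_i (rows of `points`)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.norm(points - x, axis=1)
        if np.any(d < 1e-12):             # iterate landed on an anchor point
            break
        c = weights / d                    # Weiszfeld coefficients w_i / ||x - a_i||
        x_new = (c[:, None] * points).sum(axis=0) / c.sum()
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x
```

For four equally weighted corners of the unit square, the iteration converges to the center, the geometric median of the configuration.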
The Boosted DC Algorithm for Clustering with Constraints
This paper aims to investigate the effectiveness of the recently proposed
Boosted Difference of Convex functions Algorithm (BDCA) when applied to
clustering with constraints and set clustering with constraints problems. This
is the first paper to apply BDCA to a problem with nonlinear constraints. We
present the mathematical basis for the BDCA and Difference of Convex functions
Algorithm (DCA), along with a penalty method based on distance functions. We
then develop algorithms for solving these problems and computationally
implement them, with publicly available implementations. We compare old
examples and provide new experiments to test the algorithms. We find that the
BDCA method converges in fewer iterations than the corresponding DCA-based
method. In addition, BDCA yields faster CPU running times in all tested
problems.
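The boosting idea can be sketched in a few lines: a plain DCA step followed by a backtracking line search along the DCA direction. The toy DC decomposition below, f(x) = (1/4)||x||^4 - (1/2)||x||^2 with g(x) = (1/4)||x||^4 and h(x) = (1/2)||x||^2, and all parameter values are illustrative assumptions, not the constrained clustering models from the paper.

```python
import numpy as np

def f(x):
    """Toy DC objective f = g - h with minimizers on the unit sphere."""
    n = np.linalg.norm(x)
    return 0.25 * n**4 - 0.5 * n**2

def dca_step(x):
    """DCA subproblem argmin_y (1/4)||y||^4 - <grad h(x), y>,
    which here has the closed-form solution x / ||x||^(2/3)."""
    return x / np.linalg.norm(x) ** (2.0 / 3.0)

def bdca(x0, iters=50, lam0=1.0, alpha=0.1, beta=0.5):
    """DCA step plus the BDCA backtracking search along d = y - x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = dca_step(x)
        d = y - x
        lam = lam0
        # accept y + lam*d only if it improves on f(y) sufficiently
        while lam > 1e-12 and f(y + lam * d) > f(y) - alpha * lam**2 * np.dot(d, d):
            lam *= beta
        x = y + lam * d if lam > 1e-12 else y
    return x
```

The line search costs only extra function evaluations, which is the mechanism behind BDCA's reduced iteration counts.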
Nonsmooth Algorithms and Nesterov's Smoothing Technique for Generalized Fermat-Torricelli Problems
We present algorithms for solving a number of new models of facility location which generalize the classical Fermat-Torricelli problem. Our first approach uses Nesterov's smoothing technique and the majorization-minimization principle to build smooth approximations that are convenient for applying smooth optimization schemes. Another approach uses subgradient-type algorithms to cope directly with the nondifferentiability of the cost functions. Convergence results for the algorithms are proved, and numerical tests are presented to show the effectiveness of the proposed algorithms.
Extensions to the Proximal Distance Method of Constrained Optimization
The current paper studies the problem of minimizing a loss f(x) subject to
constraints of the form Dx ∈ S, where S is a closed set, convex or not,
and D is a fusion matrix. Fusion constraints can capture smoothness,
sparsity, or more general constraint patterns. To tackle this generic
class of problems, we combine the Beltrami-Courant penalty method of
optimization with the proximal distance principle. The latter is driven by
minimization of penalized objectives f(x) + (ρ/2) dist(Dx, S)^2 involving
large tuning constants ρ and the squared Euclidean distance of Dx from S.
The next iterate x_{n+1} of the corresponding proximal distance algorithm
is constructed from the current iterate x_n by minimizing the majorizing
surrogate function f(x) + (ρ/2) ||Dx - P_S(Dx_n)||^2.
For fixed ρ and convex f(x) and S, we prove convergence,
provide convergence rates, and demonstrate linear convergence under stronger
assumptions. We also construct a steepest descent (SD) variant to avoid costly
linear system solves. To benchmark our algorithms, we adapt the alternating
direction method of multipliers (ADMM) and compare on extensive numerical tests
including problems in metric projection, convex regression, convex clustering,
total variation image denoising, and projection of a matrix to one that has a
good condition number. Our experiments demonstrate the superior speed and
acceptable accuracy of the steepest descent variant on high-dimensional problems. Julia
code to replicate all of our experiments can be found at
https://github.com/alanderos91/ProximalDistanceAlgorithms.jl (Comment: 35
pages (22 main text, 10 appendices, 3 references), 9 tables, 1 figure)
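A minimal sketch of the proximal distance iteration described in the abstract, specialized as an assumption (not one of the paper's benchmark problems) to D = I, a quadratic loss f(x) = (1/2)||x - b||^2, and S the unit sphere, a nonconvex set whose projection is explicit. In this case each surrogate is a strongly convex quadratic with a closed-form minimizer.

```python
import numpy as np

def proj_sphere(x):
    """Projection onto the unit sphere S (a nonconvex set)."""
    return x / np.linalg.norm(x)

def proximal_distance_projection(b, iters=60, rho=1.0, rate=1.5):
    """Proximal distance iteration for min (1/2)||x - b||^2 s.t. x in S,
    with D = I. The surrogate f(x) + (rho/2)||x - P_S(x_n)||^2 is
    minimized in closed form, and rho is ramped up each iteration."""
    b = np.asarray(b, dtype=float)
    x = b.copy()
    for _ in range(iters):
        x = (b + rho * proj_sphere(x)) / (1.0 + rho)
        rho *= rate                   # increase the tuning constant
    return x
```

As rho grows, the iterates are pushed onto S, recovering the metric projection of b onto the sphere.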
The Majorization Minimization Principle and Some Applications in Convex Optimization
The majorization-minimization (MM) principle is an important tool for developing algorithms to solve optimization problems. This thesis is devoted to the study of the MM principle and applications to convex optimization. Based on some recent research articles, we present a survey on the principle that includes the geometric ideas behind the principle as well as its convergence results. Then we demonstrate some applications of the MM principle in solving the feasible point, closest point, support vector machine, and smallest intersecting ball problems, along with sample MATLAB code to implement each solution. The thesis also contains new results on effective algorithms for solving the smallest intersecting ball problem.
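As one small illustration of the MM principle on a feasibility-type problem, the sketch below minimizes the sum of squared distances to closed convex sets: each dist(x, Omega_i)^2 is majorized by ||x - P_i(x_n)||^2 (with equality at x_n), so the surrogate's minimizer is simply the average of the projections. The half-plane sets and function names are illustrative assumptions, not the thesis's MATLAB code.

```python
import numpy as np

def proj_halfplane(x, a, beta):
    """Projection onto the half-plane {y : <a, y> <= beta}."""
    slack = np.dot(a, x) - beta
    return x if slack <= 0 else x - slack * a / np.dot(a, a)

def mm_feasible_point(halfplanes, x0, iters=500):
    """MM scheme for min_x sum_i (1/2) dist(x, Omega_i)^2:
    majorize each term by (1/2)||x - P_i(x_n)||^2, then minimize
    the surrogate, whose solution is the mean of the projections."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = np.mean([proj_halfplane(x, a, b) for a, b in halfplanes], axis=0)
    return x
```

When the sets have a common point, the objective's minimum value is zero and the iterates converge to a feasible point, which is the feasible point application mentioned in the abstract.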