
    A smoothing Newton method for minimizing a sum of Euclidean norms


    An interior-point method for the single-facility location problem with mixed norms using a conic formulation

    We consider the single-facility location problem with mixed norms, i.e. the problem of minimizing the sum of the distances from a point to a set of fixed points in R^n, where each distance can be measured according to a different p-norm. We show how this problem can be expressed in a structured conic format by decomposing the nonlinear components of the objective into a series of constraints involving three-dimensional cones. Using the availability of a self-concordant barrier for these cones, we present a polynomial-time algorithm (a long-step path-following interior-point scheme) to solve the problem up to a given accuracy. Finally, we report computational results for this algorithm and compare them with those of standard nonlinear optimization solvers applied to this problem.
    Keywords: nonsymmetric conic optimization, conic reformulation, convex optimization, sum-of-norms minimization, single-facility location problems, interior-point methods
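    As a point of reference, the generic-solver baseline that the paper compares against can be sketched in a few lines: hand the nonsmooth sum-of-distances objective to a standard nonlinear optimizer. The Python sketch below is only an illustration under assumed data (the points, the per-point norms, and the use of scipy are not from the paper); the paper itself solves a structured conic reformulation with an interior-point method.

        import numpy as np
        from scipy.optimize import minimize

        # Fixed points in R^n and the p-norm attached to each distance (assumed data).
        points = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0], [3.0, 3.0]])
        p_norms = [1, 2, 2, np.inf]

        def mixed_norm_cost(x):
            # Sum of distances from x to the fixed points, each in its own p-norm.
            return sum(np.linalg.norm(x - a, ord=p) for a, p in zip(points, p_norms))

        # Nelder-Mead copes with the nonsmoothness of the norms at the fixed points.
        result = minimize(mixed_norm_cost, x0=points.mean(axis=0), method="Nelder-Mead")
        print("facility location:", result.x, "total distance:", result.fun)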

    Majorization algorithms for inspecting circles, ellipses, squares, rectangles, and rhombi

    In several disciplines, as diverse as shape analysis, location theory, quality control, archaeology, and psychometrics, it can be of interest to fit a circle through a set of points. We use the result that it suffices to locate a center for which the variance of the distances from the center to a set of given points is minimal. In this paper, we propose a new algorithm based on iterative majorization to locate the center. This algorithm is guaranteed to yield a series of nonincreasing variances until a stationary point is obtained. In all practical cases, the stationary point turns out to be a local minimum. Numerical experiments show that the majorizing algorithm is stable and fast. In addition, we extend the method to fit other shapes, such as a square, an ellipse, a rectangle, and a rhombus, by making use of the class of l_p distances and dimension weighting. We also allow for rotations for shapes that might be rotated in the plane. We illustrate how this extended algorithm can be used as a tool for shape recognition.
    Keywords: iterative majorization; location; optimization; shape analysis
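    Setting the gradient of the distance variance to zero gives the stationarity condition c = mean(x_i) + r * mean(u_i), where r is the mean distance and u_i are the unit vectors from the points to the center. The Python sketch below iterates that condition directly as a fixed point; it is an illustration under assumed data, not the paper's majorization scheme (which additionally guarantees nonincreasing variances).

        import numpy as np

        def fit_circle_center(points, iters=200, tol=1e-10):
            # Fixed-point iteration for the center that minimizes the variance
            # of the distances from the center to the given points.
            x_bar = points.mean(axis=0)
            c = x_bar.copy()
            for _ in range(iters):
                diff = c - points                    # vectors from points to center
                d = np.linalg.norm(diff, axis=1)     # distances d_i
                c_new = x_bar + d.mean() * (diff / d[:, None]).mean(axis=0)
                if np.linalg.norm(c_new - c) < tol:
                    break
                c = c_new
            return c, np.linalg.norm(points - c, axis=1).mean()

        # Noisy points on a circle of radius 2 centered at (1, -1) (assumed data).
        rng = np.random.default_rng(0)
        t = rng.uniform(0, 2 * np.pi, 50)
        pts = np.c_[1 + 2 * np.cos(t), -1 + 2 * np.sin(t)] + 0.05 * rng.normal(size=(50, 2))
        center, radius = fit_circle_center(pts)
        print("center:", center, "radius:", radius)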

    Second-order Shape Optimization for Geometric Inverse Problems in Vision

    We develop a method for optimization in shape spaces, i.e., sets of surfaces modulo re-parametrization. Unlike previously proposed gradient flows, we achieve superlinear convergence rates through a subtle approximation of the shape Hessian, which is generally hard to compute and suffers from a series of degeneracies. Our analysis highlights the role of mean curvature motion in comparison with first-order schemes: instead of surface area, our approach penalizes deformation, either by its Dirichlet energy or by its total variation. The latter regularizer sparks the development of an alternating direction method of multipliers on triangular meshes. Therein, a conjugate-gradients solver enables us to bypass formation of the Gaussian normal equations appearing in the course of the overall optimization. We combine all of the aforementioned ideas in a versatile geometric variation-regularized Levenberg-Marquardt-type method applicable to a variety of shape functionals, depending on intrinsic properties of the surface, such as its normal field and curvature, as well as on its embedding into space. Promising experimental results are reported.
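    The point about bypassing the normal equations can be made concrete in general terms: a Levenberg-Marquardt step solves (J^T J + lambda*I) delta = -J^T r, and conjugate gradients only needs products with J and J^T, so J^T J is never assembled. The Python sketch below shows this on an assumed toy curve-fitting residual; the paper applies the idea to shape functionals on triangular meshes.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        # Toy residual r(x) = x0*exp(-x1*t) - y and its Jacobian (assumed model).
        t = np.linspace(0.0, 1.0, 100)
        y = 2.0 * np.exp(-1.5 * t)

        def residual(x):
            return x[0] * np.exp(-x[1] * t) - y

        def jacobian(x):
            return np.column_stack([np.exp(-x[1] * t), -x[0] * t * np.exp(-x[1] * t)])

        x, lam = np.array([1.0, 1.0]), 1e-2
        for _ in range(20):
            r, J = residual(x), jacobian(x)
            # Matrix-free operator v -> (J^T J + lam*I) v: only J @ v and J.T @ v
            # products are used, so the normal-equations matrix is never formed.
            A = LinearOperator((2, 2), matvec=lambda v, J=J: J.T @ (J @ v) + lam * v)
            delta, _ = cg(A, -J.T @ r)
            x = x + delta
        print("estimated parameters:", x)  # approaches (2.0, 1.5)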

    Total variation regularization of multi-material topology optimization

    This work is concerned with the determination of the diffusion coefficient from distributed data of the state. This problem is related to homogenization theory on the one hand and to regularization theory on the other. An approach is proposed which involves total variation regularization combined with a suitably chosen cost functional that promotes the diffusion coefficient taking prespecified values at each point of the domain. The main difficulty lies in the delicate functional-analytic structure of the resulting nondifferentiable optimization problem with pointwise constraints for functions of bounded variation, which makes the derivation of useful pointwise optimality conditions challenging. To cope with this difficulty, a novel reparametrization technique is introduced. Numerical examples using a regularized semismooth Newton method illustrate the structure of the obtained diffusion coefficient.
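    The paper's treatment in the space of functions of bounded variation is beyond a short snippet, but the role of the nondifferentiable total variation term can be illustrated on a 1-D model problem. The Python sketch below applies a generic ADMM scheme to min_u 0.5*||u - f||^2 + lam*||D u||_1 under assumed data; it is not the paper's reparametrization or semismooth Newton method.

        import numpy as np

        def tv_denoise_1d(f, lam=1.0, rho=2.0, iters=300):
            # ADMM for min_u 0.5*||u - f||^2 + lam*||D u||_1, with z = D u split off.
            n = len(f)
            D = np.diff(np.eye(n), axis=0)          # forward-difference matrix, (n-1) x n
            u, z, w = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
            M = np.eye(n) + rho * D.T @ D           # u-update system matrix
            for _ in range(iters):
                u = np.linalg.solve(M, f + rho * D.T @ (z - w))
                Du = D @ u
                z = np.sign(Du + w) * np.maximum(np.abs(Du + w) - lam / rho, 0.0)
                w = w + Du - z                      # scaled-multiplier update
            return u

        # Piecewise-constant coefficient plus noise: TV recovers the flat pieces.
        rng = np.random.default_rng(1)
        truth = np.r_[np.zeros(50), 2.0 * np.ones(50), np.ones(50)]
        u = tv_denoise_1d(truth + 0.3 * rng.normal(size=150), lam=1.0)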

    Self-concordant Smoothing for Convex Composite Optimization

    We introduce the notion of self-concordant smoothing for minimizing the sum of two convex functions: the first is smooth and the second may be nonsmooth. Our framework results naturally from the smoothing approximation technique referred to as partial smoothing, in which only a part of the nonsmooth function is smoothed. The key highlight of our approach lies in a natural property of the resulting problem's structure, which provides us with a variable-metric selection method and a step-length selection rule particularly suitable for proximal Newton-type algorithms. In addition, we efficiently handle specific structures promoted by the nonsmooth function, such as l_1-regularization and group-lasso penalties. We prove local quadratic convergence rates for two resulting algorithms: Prox-N-SCORE, a proximal Newton algorithm, and Prox-GGN-SCORE, a proximal generalized Gauss-Newton (GGN) algorithm. The Prox-GGN-SCORE algorithm highlights an important approximation procedure which helps to significantly reduce most of the computational overhead associated with the inverse Hessian. This approximation is especially useful for overparameterized machine learning models and in mini-batch settings. Numerical examples on both synthetic and real datasets demonstrate the efficiency of our approach and its superiority over existing approaches.
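    For reference, the composite structure handled here (smooth loss plus l_1 penalty) is the setting of the classic proximal gradient method, in which the proximal map of the l_1 norm is soft-thresholding. The Python sketch below is that baseline under assumed data; the paper's Prox-N-SCORE and Prox-GGN-SCORE methods instead take Newton-type steps in a self-concordance-informed variable metric.

        import numpy as np

        def ista(A, b, lam, iters=500):
            # Proximal gradient for min_x 0.5*||A x - b||^2 + lam*||x||_1.
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                v = x - A.T @ (A @ x - b) / L        # gradient step on the smooth part
                x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # l_1 prox
            return x

        # Sparse recovery toy problem (assumed data).
        rng = np.random.default_rng(2)
        A = rng.normal(size=(60, 200))
        x_true = np.zeros(200)
        x_true[[3, 77, 150]] = [1.5, -2.0, 1.0]
        x_hat = ista(A, A @ x_true, lam=0.1)
        print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))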

    Regularized Optimal Transport and the Rot Mover's Distance

    This paper presents a unified framework for smooth convex regularization of discrete optimal transport problems. In this context, the regularized optimal transport turns out to be equivalent to a matrix nearness problem with respect to Bregman divergences. Our framework thus naturally generalizes a previously proposed regularization based on the Boltzmann-Shannon entropy, related to the Kullback-Leibler divergence and solved with the Sinkhorn-Knopp algorithm. We call the regularized optimal transport distance the rot mover's distance, in reference to the classical earth mover's distance. We develop two generic schemes, which we respectively call the alternate scaling algorithm and the non-negative alternate scaling algorithm, to efficiently compute the regularized optimal plans depending on whether the domain of the regularizer lies within the non-negative orthant or not. These schemes are based on Dykstra's algorithm with alternate Bregman projections, and further exploit the Newton-Raphson method when applied to separable divergences. We enhance the separable case with a sparse extension to deal with high data dimensions. We also instantiate our proposed framework and discuss the inherent specificities for well-known regularizers and statistical divergences in the machine learning and information geometry communities. Finally, we demonstrate the merits of our methods with experiments using synthetic data to illustrate the effect of different regularizers and penalties on the solutions, as well as real-world data for a pattern recognition application to audio scene classification.
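    The entropic special case that this framework generalizes is solved by the Sinkhorn-Knopp iteration, which alternately rescales the rows and columns of the Gibbs kernel until both marginals are matched. A minimal Python sketch under assumed marginals and cost:

        import numpy as np

        def sinkhorn(a, b, C, eps=0.05, iters=500):
            # Sinkhorn-Knopp for min_P <P, C> - eps*H(P) s.t. P 1 = a, P^T 1 = b.
            K = np.exp(-C / eps)                # Gibbs kernel
            u = np.ones_like(a)
            for _ in range(iters):
                v = b / (K.T @ u)               # rescale to match column marginals
                u = a / (K @ v)                 # rescale to match row marginals
            return u[:, None] * K * v[None, :]  # regularized optimal plan

        # Two small histograms with a squared-distance ground cost (assumed data).
        x = np.linspace(0.0, 1.0, 50)
        a = np.exp(-((x - 0.3) ** 2) / 0.01); a /= a.sum()
        b = np.exp(-((x - 0.7) ** 2) / 0.02); b /= b.sum()
        C = (x[:, None] - x[None, :]) ** 2
        P = sinkhorn(a, b, C)
        print("entropic transport cost:", np.sum(P * C))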