127 research outputs found

    Efficient First Order Methods for Linear Composite Regularizers

    A wide class of regularization problems in machine learning and statistics employs a regularization term obtained by composing a simple convex function ω with a linear transformation. This setting includes Group Lasso methods, the Fused Lasso and other total variation methods, multi-task learning methods, and many more. In this paper, we present a general approach for computing the proximity operator of this class of regularizers, under the assumption that the proximity operator of the function ω is known in advance. Our approach builds on a recent line of research on optimal first order optimization methods and uses fixed point iterations for numerically computing the proximity operator. It is more general than current approaches and, as we show with numerical simulations, computationally more efficient than available first order methods which do not achieve the optimal rate. In particular, our method outperforms state-of-the-art O(1/T) methods for the overlapping Group Lasso and matches optimal O(1/T^2) methods for the Fused Lasso and tree-structured Group Lasso. Comment: 19 pages, 8 figures.
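
    The dual fixed-point idea can be made concrete with a short sketch. The snippet below is a minimal, generic illustration and not necessarily the exact scheme of the paper: it approximates the proximity operator of x ↦ ω(Bx) by running fixed-point (proximal-gradient) iterations on the Fenchel dual, using only the proximity operator of ω via the Moreau decomposition. The function names, the step-size choice, and the fused-lasso instantiation are illustrative assumptions.

```python
import numpy as np

def prox_linear_composite(y, B, prox_omega, n_iter=200):
    """Approximate prox of x -> omega(B x) at y, given only prox of omega.

    Runs fixed-point (proximal-gradient) iterations on the Fenchel dual
        min_z  0.5 * ||y - B^T z||^2 + omega^*(z)
    and returns the primal point x = y - B^T z. The prox of omega^* is
    obtained from the prox of omega via the Moreau decomposition.
    """
    step = 1.0 / (np.linalg.norm(B, 2) ** 2)      # 1 / ||B||^2
    z = np.zeros(B.shape[0])
    for _ in range(n_iter):
        u = z + step * (B @ (y - B.T @ z))        # gradient step on the dual
        # Moreau: prox_{t*omega^*}(u) = u - t * prox_{omega/t}(u/t), with t = step
        z = u - step * prox_omega(u / step, 1.0 / step)
    return y - B.T @ z

# Illustration: fused-lasso / 1D total-variation prox with omega = lam * ||.||_1,
# whose own prox (soft-thresholding) is assumed known, as in the paper's setting.
lam = 0.5
prox_l1 = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)
n = 10
B = np.diff(np.eye(n), axis=0)                    # first-difference operator
y = np.cumsum(np.random.randn(n))
x = prox_linear_composite(y, B, prox_l1)          # approximately piecewise constant
```

    In an accelerated setting, these inner fixed-point iterations would sit inside an optimal first-order (FISTA-type) outer loop; the sketch only shows the inner proximity computation.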

    Superiorization and Perturbation Resilience of Algorithms: A Continuously Updated Bibliography

    This document presents a (mostly) chronologically ordered bibliography of scientific publications on the superiorization methodology and perturbation resilience of algorithms, compiled and continuously updated by us at: http://math.haifa.ac.il/yair/bib-superiorization-censor.html. Since the beginnings of this topic we have tried to trace the work that has been published about it. To the best of our knowledge this bibliography represents all available publications on this topic to date; while the URL is continuously updated, we will revise this document and bring it up to date on arXiv approximately once a year. Abstracts of the cited works, and some links and downloadable files of preprints or reprints, are available on the above-mentioned Internet page. If you know of a related scientific work in any form that should be included here, kindly write to me at [email protected] with full bibliographic details, a DOI if available, and a PDF copy of the work if possible. The Internet page was initiated on March 7, 2015, and was last updated on March 12, 2020. Comment: Original report: June 13, 2015 contained 41 items. First revision: March 9, 2017 contained 64 items. Second revision: March 8, 2018 contained 76 items. Third revision: March 11, 2019 contains 90 items. Fourth revision: March 16, 2020 contains 112 items.

    A Neural-Network-Based Convex Regularizer for Image Reconstruction

    The emergence of deep-learning-based methods for solving inverse problems has enabled a significant increase in reconstruction quality. Unfortunately, these new methods often lack reliability and explainability, and there is growing interest in addressing these shortcomings while retaining the performance. In this work, this problem is tackled by revisiting regularizers that are the sum of convex-ridge functions. The gradient of such regularizers is parametrized by a neural network that has a single hidden layer with increasing and learnable activation functions. This neural network is trained within a few minutes as a multi-step Gaussian denoiser. Numerical experiments for denoising, CT, and MRI reconstruction show improvements over methods that offer similar reliability guarantees.
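
    As a rough illustration of the architecture described above, the hedged PyTorch sketch below parametrizes the gradient of a sum of convex ridges, R(x) = Σ_i ψ_i(w_iᵀx), as Wᵀσ(Wx), with each scalar activation σ_i constrained to be nondecreasing so that every ridge profile ψ_i (and hence R) is convex. The specific activation parametrization (a nonnegative combination of a linear term and tanh) is a simplified stand-in for the learnable increasing activations in the paper; all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexRidgeGradient(nn.Module):
    """Sketch: gradient field of R(x) = sum_i psi_i(w_i^T x), i.e.
    grad R(x) = W^T sigma(W x), with each sigma_i nondecreasing so that
    every ridge profile psi_i, and therefore R, is convex."""

    def __init__(self, dim, n_ridges):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_ridges, dim) / dim ** 0.5)
        # Raw parameters; softplus keeps both coefficients nonnegative,
        # which makes sigma_i(t) = a_i * t + b_i * tanh(t) nondecreasing.
        self.a = nn.Parameter(torch.zeros(n_ridges))
        self.b = nn.Parameter(torch.zeros(n_ridges))

    def forward(self, x):                              # x: (batch, dim)
        t = x @ self.W.T                               # ridge inputs W x
        sig = F.softplus(self.a) * t + F.softplus(self.b) * torch.tanh(t)
        return sig @ self.W                            # grad R(x), shape (batch, dim)

# One gradient step of the variational denoiser 0.5*||x - y||^2 + lam * R(x) is
#   x <- x - step * (x - y + lam * grad_R(x));
# unrolling a few such steps and training on Gaussian-noise pairs gives a
# multi-step denoiser in the spirit of the abstract.
```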

    Strong convergence of inertial extragradient algorithms for solving variational inequalities and fixed point problems

    The paper investigates two inertial extragradient algorithms for seeking a common solution to a variational inequality problem involving a monotone and Lipschitz continuous mapping and a fixed point problem with a demicontractive mapping in real Hilbert spaces. Our algorithms only need to calculate the projection onto the feasible set once in each iteration. Moreover, they work well without prior knowledge of the Lipschitz constant of the cost operator and do not contain any line search process. The strong convergence of the algorithms is established under suitable conditions. Some experiments are presented to illustrate the numerical efficiency of the suggested algorithms and to compare them with some existing ones. Comment: 25 pages, 12 figures.
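
    To make the iteration structure concrete, here is a hedged sketch of one member of this algorithm family: an inertial subgradient-extragradient step with an adaptive step size, applied to the variational inequality alone. The demicontractive fixed-point part and the paper's exact parameter rules are omitted, and all names and constants are illustrative. It uses a single projection onto the feasible set per iteration plus a cheap closed-form half-space projection.

```python
import numpy as np

def inertial_subgradient_extragradient(F, proj_C, x0, theta=0.3, tau0=1.0,
                                       mu=0.5, n_iter=500):
    """Sketch for the variational inequality VI(F, C): find x* in C with
    <F(x*), x - x*> >= 0 for all x in C. Only one projection onto C per
    iteration; the step size adapts itself, so no Lipschitz constant or
    line search is needed."""
    x_prev = x = np.asarray(x0, dtype=float)
    tau = tau0
    for _ in range(n_iter):
        w = x + theta * (x - x_prev)              # inertial extrapolation
        Fw = F(w)
        y = proj_C(w - tau * Fw)                  # the only projection onto C
        Fy = F(y)
        # Project w - tau*F(y) onto the half-space
        # {z : <w - tau*F(w) - y, z - y> <= 0} instead of a second projection onto C.
        a = w - tau * Fw - y
        p = w - tau * Fy
        z = p - (max(np.dot(a, p - y), 0.0) / (np.dot(a, a) + 1e-12)) * a
        # Adaptive step-size rule (mu in (0, 1)).
        d = np.linalg.norm(Fw - Fy)
        if d > 0.0:
            tau = min(tau, mu * np.linalg.norm(w - y) / d)
        x_prev, x = x, z
    return x

# Tiny usage example: a strongly monotone affine VI on the nonnegative orthant.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-1.0, -1.0])
x_star = inertial_subgradient_extragradient(lambda v: M @ v + q,
                                            lambda v: np.maximum(v, 0.0),
                                            np.zeros(2))
```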

    Superiorization: An optimization heuristic for medical physics

    Purpose: To describe and mathematically validate the superiorization methodology, which is a recently developed heuristic approach to optimization, and to discuss its applicability to medical physics problem formulations that specify the desired solution (of physically given or otherwise obtained constraints) by an optimization criterion. Methods: The underlying idea is that many iterative algorithms for finding such a solution are perturbation resilient in the sense that, even if certain kinds of changes are made at the end of each iterative step, the algorithm still produces a constraints-compatible solution. This property is exploited by using permitted changes to steer the algorithm to a solution that is not only constraints-compatible, but also desirable according to a specified optimization criterion. The approach is very general: it is applicable to many iterative procedures and optimization criteria used in medical physics. Results: The main practical contribution is a procedure for automatically producing, from any given iterative algorithm, its superiorized version, which will supply solutions that are superior according to a given optimization criterion. It is shown that if the original iterative algorithm satisfies certain mathematical conditions, then the output of its superiorized version is guaranteed to be as constraints-compatible as the output of the original algorithm, but superior to the latter according to the optimization criterion. This intuitive description is made precise in the paper and the stated claims are rigorously proved. Superiorization is illustrated on simulated computerized tomography data of a head cross-section and, in spite of its generality, is shown to be competitive with an optimization algorithm that is specifically designed to minimize total variation. Comment: Accepted for publication in Medical Physics.
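
    The general pattern described above, interleaving objective-reducing perturbations into a perturbation-resilient feasibility-seeking iteration, can be sketched as follows. This is a deliberately simplified template (one perturbation per sweep and no check that the target function actually decreases), not the automatic superiorization procedure of the paper; the function names and the toy constraints are placeholders.

```python
import numpy as np

def superiorized_iteration(basic_step, phi_grad, x0, a=0.99, n_iter=500):
    """Simplified superiorization template.

    basic_step: one sweep of a perturbation-resilient feasibility-seeking
                algorithm (e.g. sequential projections onto the constraints).
    phi_grad:   gradient (or subgradient) of the target function phi to be
                reduced, e.g. total variation of the image.
    """
    x = np.asarray(x0, dtype=float).copy()
    beta = 1.0
    for _ in range(n_iter):
        g = phi_grad(x)
        norm_g = np.linalg.norm(g)
        if norm_g > 0.0:
            x = x - beta * (g / norm_g)     # bounded, phi-reducing perturbation
        beta *= a                           # summable perturbation sizes
        x = basic_step(x)                   # feasibility-seeking step
    return x

# Toy usage: two half-space constraints; phi(x) = ||x||^2 steers the output
# toward a constraints-compatible point of small norm.
proj1 = lambda x: x - max(0.0, x[0] + x[1] - 2.0) / 2.0 * np.array([1.0, 1.0])
proj2 = lambda x: np.array([max(x[0], 0.5), x[1]])
basic = lambda x: proj2(proj1(x))
x_sup = superiorized_iteration(basic, lambda x: 2.0 * x, np.array([3.0, 3.0]))
```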

    Exterior-point Optimization for Nonconvex Learning

    In this paper we present the nonconvex exterior-point optimization solver (NExOS), a novel first-order algorithm tailored to constrained nonconvex learning problems. We consider the problem of minimizing a convex function over nonconvex constraints, where the projection onto the constraint set is single-valued around local minima. A wide range of nonconvex learning problems have this structure, including (but not limited to) sparse and low-rank optimization problems. By exploiting the underlying geometry of the constraint set, NExOS finds a locally optimal point by solving a sequence of penalized problems with strictly decreasing penalty parameters. NExOS solves each penalized problem by applying a first-order algorithm, which converges linearly to a local minimum of the corresponding penalized formulation under regularity conditions. Furthermore, the local minima of the penalized problems converge to a local minimum of the original problem as the penalty parameter goes to zero. We implement NExOS in the open-source Julia package NExOS.jl, which has been extensively tested on many instances from a wide variety of learning problems. We demonstrate that our algorithm, in spite of being general purpose, outperforms specialized methods on several examples of well-known nonconvex learning problems involving sparse and low-rank optimization. For sparse regression problems, NExOS finds locally optimal solutions which dominate glmnet in terms of support recovery, while its training loss is smaller by an order of magnitude. For low-rank optimization with real-world data, NExOS recovers solutions with a 3-fold training loss reduction and a proportion of explained variance that is 2 times better compared to the nuclear norm heuristic. Comment: 40 pages, 6 figures.
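
    The penalty-continuation idea can be sketched in a few lines. The snippet below is not the NExOS algorithm itself (which solves each penalized subproblem with a dedicated first-order splitting method); it is a hedged stand-in that minimizes f(x) + (1/(2μ)) dist(x, C)² by plain gradient steps for a decreasing sequence of μ, warm-starting each stage at the previous solution, with a cardinality-constrained least-squares problem as a toy instance. All names and constants are illustrative.

```python
import numpy as np

def exterior_point_sketch(grad_f, proj_C, x0, Lf, mus=(1.0, 0.1, 0.01, 1e-3),
                          inner_iter=300):
    """Penalty-continuation sketch: for each mu, approximately minimize
    f(x) + (1/(2*mu)) * dist(x, C)^2 by gradient steps, assuming the
    projection onto C is single-valued where the iterates live."""
    x = np.asarray(x0, dtype=float).copy()
    for mu in mus:
        step = 1.0 / (Lf + 1.0 / mu)              # step matched to the penalty's smoothness
        for _ in range(inner_iter):
            g = grad_f(x) + (x - proj_C(x)) / mu  # gradient of the penalized objective
            x = x - step * g
    return proj_C(x)                              # return a feasible (projected) point

# Toy instance: least squares with a cardinality constraint ||x||_0 <= k.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
k = 5

def proj_card(x, k=k):                            # keep the k largest-magnitude entries
    z = np.zeros_like(x)
    idx = np.argsort(-np.abs(x))[:k]
    z[idx] = x[idx]
    return z

x_hat = exterior_point_sketch(lambda x: A.T @ (A @ x - b), proj_card,
                              np.zeros(20), Lf=np.linalg.norm(A, 2) ** 2)
```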
    • …