
    Superiorization and Perturbation Resilience of Algorithms: A Continuously Updated Bibliography

    This document presents a (mostly) chronologically ordered bibliography of scientific publications on the superiorization methodology and perturbation resilience of algorithms, which we compile and continuously update at: http://math.haifa.ac.il/yair/bib-superiorization-censor.html. We have tried to trace the work published on this topic since its inception, and to the best of our knowledge this bibliography represents all available publications on the topic to date. While the URL is continuously updated, we revise this document and bring it up to date on arXiv approximately once a year. Abstracts of the cited works, along with some links and downloadable files of preprints or reprints, are available on the above-mentioned Internet page. If you know of a related scientific work in any form that should be included here, kindly write to [email protected] with full bibliographic details, a DOI if available, and a PDF copy of the work if possible. The Internet page was initiated on March 7, 2015, and was last updated on March 12, 2020.
    Comment: Original report (June 13, 2015) contained 41 items. First revision (March 9, 2017) contained 64 items. Second revision (March 8, 2018) contained 76 items. Third revision (March 11, 2019) contains 90 items. Fourth revision (March 16, 2020) contains 112 items.

    Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization

    Due to their simplicity and excellent performance, parallel asynchronous variants of stochastic gradient descent have become popular methods for solving a wide range of large-scale optimization problems on multi-core architectures. Yet, despite their practical success, support for nonsmooth objectives is still lacking, making them unsuitable for many problems of interest in machine learning, such as the Lasso, group Lasso, or empirical risk minimization with convex constraints. In this work, we propose and analyze ProxASAGA, a fully asynchronous sparse method inspired by SAGA, a variance-reduced incremental gradient algorithm. The proposed method is easy to implement and significantly outperforms the state of the art on several nonsmooth, large-scale problems. We prove that our method achieves a theoretical linear speedup with respect to the sequential version under assumptions on the sparsity of gradients and block-separability of the proximal term. Empirical benchmarks on a multi-core architecture illustrate practical speedups of up to 12x on a 20-core machine.
    Comment: Appears in Advances in Neural Information Processing Systems 30 (NIPS 2017), 28 pages.
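    To make the abstract's ingredients concrete, here is a minimal sketch of the sequential proximal SAGA iteration that ProxASAGA parallelizes, applied to the Lasso. This is not the paper's asynchronous implementation: the function names, the step-size choice, and the dense gradient table are illustrative assumptions, and the sparsity/asynchrony machinery that constitutes the paper's contribution is omitted.

    import numpy as np

    def prox_l1(v, t):
        """Soft-thresholding: proximal operator of t * ||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_saga(A, b, lam, step, n_epochs=50, seed=0):
        """Sequential proximal SAGA sketch for the Lasso:
        minimize (1/n) * sum_i 0.5*(a_i^T x - b_i)^2 + lam * ||x||_1."""
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        grad_mem = A * (A @ x - b)[:, None]   # n x d table of stored per-sample gradients
        grad_avg = grad_mem.mean(axis=0)      # running average of the table
        for _ in range(n_epochs * n):
            j = rng.integers(n)
            g_new = A[j] * (A[j] @ x - b[j])  # fresh gradient of f_j at the current x
            v = g_new - grad_mem[j] + grad_avg  # SAGA variance-reduced gradient estimate
            x = prox_l1(x - step * v, step * lam)  # proximal step handles the nonsmooth term
            grad_avg += (g_new - grad_mem[j]) / n  # update average before overwriting the slot
            grad_mem[j] = g_new
        return x

    # Example usage on synthetic sparse-regression data (illustrative only):
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50); x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    L = np.max(np.sum(A**2, axis=1))          # per-sample smoothness bound
    x_hat = prox_saga(A, b, lam=0.1, step=1.0 / (3 * L))

    ProxASAGA runs many such updates concurrently without locks; the block-separability assumption on the proximal term mentioned in the abstract is what lets each core apply the prox only to the coordinates its sparse gradient touches.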