99 research outputs found

    New Douglas-Rachford algorithmic structures and their convergence analyses

    In this paper we study new algorithmic structures with Douglas-Rachford (DR) operators to solve convex feasibility problems. We propose to embed the basic two-set-DR algorithmic operator into the String-Averaging Projections (SAP) and into the Block-Iterative Projection (BIP) algorithmic structures, thereby creating new DR algorithmic schemes that include the recently proposed cyclic Douglas-Rachford algorithm and the averaged DR algorithm as special cases. We further propose and investigate a new multiple-set-DR algorithmic operator. Convergence of all these algorithmic schemes is studied by using properties of strongly quasi-nonexpansive operators and firmly nonexpansive operators. Comment: SIAM Journal on Optimization, accepted for publication.
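    As a hedged sketch of the ingredients involved: the two-set DR operator can be written $T = I + P_B(2P_A - I) - P_A$, and string averaging runs strings of such operators sequentially and convexly combines the endpoints. The sets, projectors, and weights below are illustrative assumptions, not the paper's specific schemes.

```python
# Hedged sketch: two-set Douglas-Rachford (DR) operator and a string-averaged
# combination of DR operators. Sets, projectors, and weights are illustrative
# assumptions, not the specific schemes analysed in the paper.
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto the closed ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def dr_operator(x, proj_A, proj_B):
    """Two-set DR operator T = (I + R_B R_A)/2, i.e. x + P_B(2 P_A x - x) - P_A x."""
    pa = proj_A(x)
    return x + proj_B(2 * pa - x) - pa

def string_averaged_dr(x, strings, weights):
    """Apply DR operators sequentially along each string, then convexly average."""
    ends = []
    for string in strings:
        y = x
        for proj_A, proj_B in string:
            y = dr_operator(y, proj_A, proj_B)
        ends.append(y)
    return sum(w * y for w, y in zip(weights, ends))

# Toy usage: two overlapping balls; the DR governing sequence converges to a
# point whose projection onto the first ball lies in the intersection.
pA = lambda x: project_ball(x, np.array([0.0, 0.0]), 1.0)
pB = lambda x: project_ball(x, np.array([1.0, 0.0]), 1.0)
x = np.array([3.0, 2.0])
for _ in range(200):
    x = string_averaged_dr(x, [[(pA, pB)]], [1.0])
print(pA(x))  # approximately a point of the intersection
```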

    The Cyclic Douglas-Rachford Algorithm with r-sets-Douglas-Rachford Operators

    The Douglas-Rachford (DR) algorithm is an iterative procedure that uses sequential reflections onto convex sets and which has become popular for convex feasibility problems. In this paper we propose a structural generalization that allows the use of $r$-sets-DR operators in a cyclic fashion. We prove convergence and present numerical illustrations of the potential advantage of such operators with $r>2$ over the classical $2$-sets-DR operators in a cyclic algorithm. Comment: Accepted for publication in Optimization Methods and Software (OMS), July 17, 201
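    The $r$-sets-DR operator averages the identity with a composition of $r$ reflections, $T = (I + R_{C_r} \cdots R_{C_1})/2$, and the cyclic algorithm sweeps through blocks of such operators. A minimal sketch, with illustrative projectors and blocks that are assumptions rather than the paper's test problems:

```python
# Hedged sketch of an r-sets-DR operator applied cyclically over blocks of
# r sets. Projectors and blocks are illustrative assumptions.
import numpy as np

def r_sets_dr(x, projectors):
    """Average the identity with the composition of r reflections R_C = 2 P_C - I."""
    y = x
    for proj in projectors:
        y = 2 * proj(y) - y
    return 0.5 * (x + y)

def cyclic_r_sets_dr(x0, blocks, iters=200):
    """Cycle through the blocks, applying each block's r-sets-DR operator in turn."""
    x = x0
    for k in range(iters):
        x = r_sets_dr(x, blocks[k % len(blocks)])
    return x

def project_hyperplane(x, a, b):
    """Euclidean projection onto the hyperplane {y : <a, y> = b}."""
    return x - (a @ x - b) / (a @ a) * a

# Toy usage with one block of r = 3 hyperplanes meeting at (1, 1).
a1, a2, a3 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
block = [lambda x: project_hyperplane(x, a1, 1.0),
         lambda x: project_hyperplane(x, a2, 1.0),
         lambda x: project_hyperplane(x, a3, 2.0)]
print(cyclic_r_sets_dr(np.array([5.0, -3.0]), [block]))
```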

    Principled Analyses and Design of First-Order Methods with Inexact Proximal Operators

    Proximal operations are among the most common primitives appearing in both practical and theoretical (or high-level) optimization methods. This basic operation typically consists in solving an intermediary (hopefully simpler) optimization problem. In this work, we survey notions of inaccuracies that can be used when solving those intermediary optimization problems. Then, we show that worst-case guarantees for algorithms relying on such inexact proximal operations can be systematically obtained through a generic procedure based on semidefinite programming. This methodology is primarily based on the approach introduced by Drori and Teboulle (Mathematical Programming, 2014) and on convex interpolation results, and allows producing non-improvable worst-case analyses. In other words, for a given algorithm, the methodology generates both worst-case certificates (i.e., proofs) and problem instances on which those bounds are achieved. Relying on this methodology, we provide three new methods with conceptually simple proofs: (i) an optimized relatively inexact proximal point method, (ii) an extension of the hybrid proximal extragradient method of Monteiro and Svaiter (SIAM Journal on Optimization, 2013), and (iii) an inexact accelerated forward-backward splitting supporting backtracking line-search, and both (ii) and (iii) supporting possibly strongly convex objectives. Finally, we use the methodology for studying a recent inexact variant of the Douglas-Rachford splitting due to Eckstein and Yao (Mathematical Programming, 2018). We showcase and compare the different variants of the accelerated inexact forward-backward method on a factorization and a total variation problem. Comment: Minor modifications including acknowledgments and references. Code available at https://github.com/mathbarre/InexactProximalOperator
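    To make the notion of a relatively inexact proximal step concrete, here is a minimal sketch in which the prox subproblem is solved by inner gradient steps until a relative-error test holds. The test function, stepsizes, and stopping rule are illustrative assumptions; the paper's SDP-based worst-case analysis is not reproduced here.

```python
# Hedged sketch: an inexact proximal point iteration where prox_{lam f}(x) is
# computed approximately. The inner loop stops once the subproblem gradient is
# small relative to the step length (a common relative-error criterion).
import numpy as np

def inexact_prox(grad_f, x, lam, inner_lr, sigma=0.1, max_inner=1000):
    """Approximately minimise f(z) + ||z - x||^2 / (2 lam) by gradient descent."""
    z = x.copy()
    for _ in range(max_inner):
        g = grad_f(z) + (z - x) / lam           # gradient of the prox subproblem
        if np.linalg.norm(g) <= (sigma / lam) * np.linalg.norm(z - x):
            break                                # relative accuracy reached
        z = z - inner_lr * g
    return z

def inexact_proximal_point(grad_f, x0, lam=1.0, inner_lr=0.2, iters=50):
    """Outer proximal point loop built on the inexact prox oracle above."""
    x = x0
    for _ in range(iters):
        x = inexact_prox(grad_f, x, lam, inner_lr)
    return x

# Toy usage: minimise f(x) = 0.5 ||x||^2, whose exact prox is x / (1 + lam).
print(inexact_proximal_point(lambda z: z, np.array([4.0, -2.0])))
```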

    Anderson‐accelerated polarization schemes for fast Fourier transform‐based computational homogenization

    Classical solution methods in fast Fourier transform‐based computational micromechanics operate on, either, compatible strain fields or equilibrated stress fields. By contrast, polarization schemes are primal‐dual methods whose iterates are neither compatible nor equilibrated. Recently, it was demonstrated that polarization schemes may outperform the classical methods. Unfortunately, their computational power critically depends on a judicious choice of numerical parameters. In this work, we investigate the extension of polarization methods by Anderson acceleration and demonstrate that this combination leads to robust and fast general‐purpose solvers for computational micromechanics. We discuss the (theoretically) optimum parameter choice for polarization methods, describe how Anderson acceleration fits into the picture, and exhibit the characteristics of the newly designed methods for problems of industrial scale and interest.
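    As a hedged illustration of the acceleration ingredient: Anderson acceleration wraps any fixed-point map g by mixing the last m iterates with least-squares coefficients. The generic sketch below is not tied to the paper's polarization operator, which would supply g in the micromechanics setting.

```python
# Hedged sketch of generic Anderson acceleration with memory m for a
# fixed-point iteration x <- g(x). The map g and all parameters are
# illustrative; the paper's g would be one sweep of a polarization scheme.
import numpy as np

def anderson(g, x0, m=5, iters=100, tol=1e-12):
    x = x0
    G_hist, F_hist = [], []                    # histories of g(x) and residuals
    for _ in range(iters):
        gx = g(x)
        f = gx - x                             # fixed-point residual
        if np.linalg.norm(f) < tol:
            return gx
        G_hist.append(gx); F_hist.append(f)
        if len(F_hist) > m + 1:                # keep a sliding memory window
            G_hist.pop(0); F_hist.pop(0)
        if len(F_hist) == 1:
            x = gx                             # plain fixed-point step
        else:
            # Least-squares mixing coefficients from residual differences.
            dF = np.column_stack([F_hist[i+1] - F_hist[i] for i in range(len(F_hist)-1)])
            dG = np.column_stack([G_hist[i+1] - G_hist[i] for i in range(len(G_hist)-1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma                # mixed (accelerated) update
    return x

# Toy usage: accelerate the contraction x <- cos(x) toward its fixed point.
print(anderson(np.cos, np.array([1.0])))
```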

    Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems

    In this paper, we develop two new randomized block-coordinate optimistic gradient algorithms to approximate a solution of nonlinear equations in large-scale settings, which are called root-finding problems. Our first algorithm is non-accelerated with constant stepsizes, and achieves an $\mathcal{O}(1/k)$ best-iterate convergence rate on $\mathbb{E}[\Vert Gx^k\Vert^2]$ when the underlying operator $G$ is Lipschitz continuous and satisfies a weak Minty solution condition, where $\mathbb{E}[\cdot]$ is the expectation and $k$ is the iteration counter. Our second method is a new accelerated randomized block-coordinate optimistic gradient algorithm. We establish both $\mathcal{O}(1/k^2)$ and $o(1/k^2)$ last-iterate convergence rates on both $\mathbb{E}[\Vert Gx^k\Vert^2]$ and $\mathbb{E}[\Vert x^{k+1} - x^{k}\Vert^2]$ for this algorithm under the co-coerciveness of $G$. In addition, we prove that the iterate sequence $\{x^k\}$ converges to a solution almost surely, and $\Vert Gx^k\Vert^2$ attains an $o(1/k)$ almost sure convergence rate. Then, we apply our methods to a class of large-scale finite-sum inclusions, which covers prominent applications in machine learning, statistical learning, and network optimization, especially in federated learning. We obtain two new federated learning-type algorithms and their convergence rate guarantees for solving this problem class. Comment: 30 pages.
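    A minimal sketch of the non-accelerated update pattern: each iteration samples one coordinate block and applies the optimistic (past-extragradient) correction $2G(x^k) - G(x^{k-1})$ on that block only. The operator, blocking, and stepsize below are illustrative assumptions; for brevity the full operator is evaluated, whereas a large-scale implementation would evaluate only the sampled block.

```python
# Hedged sketch of a randomized block-coordinate optimistic gradient method
# for Gx = 0. Operator, blocks, and stepsize are illustrative assumptions.
import numpy as np

def rbc_optimistic_gradient(G, x0, blocks, eta=0.1, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    g_prev = G(x)                                 # G(x^{k-1}) memory
    for _ in range(iters):
        idx = blocks[rng.integers(len(blocks))]   # sample one coordinate block
        g = G(x)                                  # G(x^k)
        x[idx] -= eta * (2 * g[idx] - g_prev[idx])  # optimistic block update
        g_prev = g
    return x

# Toy usage: a strongly monotone linear operator G x = A x with a skew part.
A = np.array([[1.0, 2.0], [-2.0, 1.0]])
G = lambda x: A @ x
x = rbc_optimistic_gradient(G, np.array([3.0, -1.0]),
                            [np.array([0]), np.array([1])])
print(x, G(x))  # both should be near zero for a small enough stepsize
```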

    Uncertainty quantification for radio interferometric imaging: II. MAP estimation

    Uncertainty quantification is a critical missing component in radio interferometric imaging that will only become increasingly important as the big-data era of radio interferometry emerges. Statistical sampling approaches to perform Bayesian inference, like Markov Chain Monte Carlo (MCMC) sampling, can in principle recover the full posterior distribution of the image, from which uncertainties can then be quantified. However, for massive data sizes, like those anticipated from the Square Kilometre Array (SKA), it will be difficult if not impossible to apply any MCMC technique due to its inherent computational cost. We formulate Bayesian inference problems with sparsity-promoting priors (motivated by compressive sensing), for which we recover maximum a posteriori (MAP) point estimators of radio interferometric images by convex optimisation. Exploiting recent developments in the theory of probability concentration, we quantify uncertainties by post-processing the recovered MAP estimate. Three strategies to quantify uncertainties are developed: (i) highest posterior density credible regions; (ii) local credible intervals (cf. error bars) for individual pixels and superpixels; and (iii) hypothesis testing of image structure. These forms of uncertainty quantification provide rich information for analysing radio interferometric observations in a statistically robust manner. Our MAP-based methods are approximately $10^5$ times faster computationally than state-of-the-art MCMC methods and, in addition, support highly distributed and parallelised algorithmic structures. For the first time, our MAP-based techniques provide a means of quantifying uncertainties for radio interferometric imaging for realistic data volumes and practical use, and scale to the emerging big-data era of radio astronomy. Comment: 13 pages, 10 figures, see companion article in this arXiv listing
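    A minimal sketch of the computational pattern: a MAP estimate under an $\ell_1$ (sparsity-promoting) prior via forward-backward splitting, followed by a concentration-based highest-posterior-density (HPD) threshold of the kind used for post-processing uncertainty quantification (a Pereyra-style bound). The measurement operator, noise level, and regularisation weight are assumptions; this is not the paper's full interferometric pipeline.

```python
# Hedged sketch: sparse MAP estimation by proximal gradient iterations, plus
# an approximate HPD credible-region threshold from probability concentration.
# Phi, y, mu, and the exact bound constant are illustrative assumptions.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def map_estimate(Phi, y, mu, iters=500):
    """Minimise f(x) = mu ||x||_1 + 0.5 ||y - Phi x||^2 by forward-backward."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1 / Lipschitz constant of data term
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = Phi.T @ (Phi @ x - y)           # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, mu * step)
    return x

def hpd_threshold(f_map, N, alpha=0.01):
    """Approximate HPD level gamma_alpha = f(x_MAP) + sqrt(16 N log(3/alpha)) + N.
    A surrogate image x lies in the (1 - alpha) credible set iff f(x) <= gamma_alpha."""
    return f_map + np.sqrt(16.0 * N * np.log(3.0 / alpha)) + N

# Toy usage: recover a sparse vector from noisy random measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 50, 97]] = [2.0, -1.5, 1.0]
y = Phi @ x_true + 0.01 * rng.standard_normal(40)
x_map = map_estimate(Phi, y, mu=0.1)
f_map = 0.1 * np.abs(x_map).sum() + 0.5 * np.sum((y - Phi @ x_map) ** 2)
print(hpd_threshold(f_map, N=100))
```

    Hypothesis tests of image structure then amount to checking whether a modified surrogate image still satisfies $f(x) \le \gamma_\alpha$.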