
    Inner-outer Iterative Methods for Eigenvalue Problems - Convergence and Preconditioning

    Get PDF
    Many methods for computing eigenvalues of a large sparse matrix involve shift-invert transformations, which require the solution of a shifted linear system at each step. This thesis deals with shift-invert iterative techniques for solving eigenvalue problems in which the arising linear systems are solved inexactly by a second iterative technique, leading to an inner-outer type algorithm. We provide convergence results for the outer iterative eigenvalue computation as well as techniques for efficient inner solves. In particular, eigenvalue computations using inexact inverse iteration, the Jacobi-Davidson method without subspace expansion, and the shift-invert Arnoldi method as a subspace method are investigated in detail. A general convergence result for inexact inverse iteration for the non-Hermitian generalised eigenvalue problem is given under minimal assumptions. This result is obtained in two different ways: on the one hand, we use an equivalence between inexact inverse iteration applied to the generalised eigenproblem and a modified Newton's method; on the other hand, we use a splitting method that generalises the idea of orthogonal decomposition. Both approaches also yield a convergence theory for a version of the inexact Jacobi-Davidson method, in which equivalences between Newton's method, inverse iteration and the Jacobi-Davidson method are exploited. To improve the efficiency of the inner iterative solves we introduce a new tuning strategy that can be applied to any standard preconditioner. We give a detailed analysis of this preconditioning idea and show how the number of iterations of the inner iterative method, and hence the total number of iterations, can be reduced significantly by the tuning strategy. The analysis of the tuned preconditioner is carried out for both Hermitian and non-Hermitian eigenproblems.
We show how the preconditioner can be implemented efficiently and illustrate its performance on various numerical examples. An equivalence result between the preconditioned simplified Jacobi-Davidson method and inexact inverse iteration with the tuned preconditioner is given. Finally, we discuss the shift-invert Arnoldi method in both its standard and restarted forms. First, existing relaxation strategies for the outer iterative solves are extended to the implicitly restarted Arnoldi method. Second, we apply the idea of tuning the preconditioner to the inner iterative solves. As for inexact inverse iteration, the tuned preconditioner for the inexact Arnoldi method is shown to provide significant savings in the number of inner iterations. The theory in this thesis is supported by many numerical examples.
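The inner-outer structure described above can be sketched on a symmetric toy problem, using a fixed small number of conjugate-gradient steps as the inexact inner solve. This is only an illustration of the general scheme: the tuning strategy analysed in the thesis is not reproduced, and all names here are illustrative, not taken from the thesis.

```python
import numpy as np

def cg(A, b, maxiter, tol=1e-10):
    # Plain conjugate gradients, capped at `maxiter` steps: stopping the
    # inner solver early is exactly what makes the outer iteration "inexact".
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) <= tol * np.linalg.norm(b):
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inexact_inverse_iteration(A, sigma, n_outer=15, inner_iters=10):
    # Outer loop: inverse iteration with a fixed shift; each step solves
    # (A - sigma*I) y = x only approximately, with a few CG steps.
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    M = A - sigma * np.eye(n)          # shifted matrix (SPD in this toy case)
    for _ in range(n_outer):
        y = cg(M, x, inner_iters)
        x = y / np.linalg.norm(y)
    return x @ (A @ x), x              # Rayleigh quotient and eigenvector

# Toy problem: eigenvalues 1..10; the shift 0.5 targets the eigenvalue 1.
A = np.diag(np.arange(1.0, 11.0))
lam, x = inexact_inverse_iteration(A, 0.5)
```

A preconditioner (tuned or not) would enter in the inner CG solve; the point of the tuning strategy is to reduce `inner_iters` without changing the outer convergence.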

    Conditional Gradient Methods

    Full text link
    The purpose of this survey is to serve both as a gentle introduction to and a coherent overview of state-of-the-art Frank--Wolfe algorithms, also called conditional gradient algorithms, for function minimization. These algorithms are especially useful in convex optimization when linear optimization is cheaper than projection. The selection of the material has been guided by the principle of highlighting crucial ideas as well as presenting new approaches that we believe might become important in the future, with ample citations even of older works imperative to the development of newer methods. Yet our selection is sometimes biased and need not reflect the consensus of the research community, and we have certainly missed recent important contributions; after all, the research area of Frank--Wolfe is very active, making it a moving target. We apologize sincerely in advance for any such distortions, and we fully acknowledge: we stand on the shoulders of giants.
    Comment: 238 pages with many figures. The FrankWolfe.jl Julia package (https://github.com/ZIB-IOL/FrankWolfe.jl) provides state-of-the-art implementations of many Frank--Wolfe methods.
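The basic conditional gradient step the survey covers can be sketched for the probability simplex, where the linear minimization oracle is just a coordinate-wise argmin. This is a minimal illustration under those assumptions, not code from the survey or from FrankWolfe.jl.

```python
import numpy as np

def frank_wolfe_simplex(grad, dim, n_iters=2000):
    # Conditional gradient over the probability simplex: the linear
    # minimization oracle returns the vertex with the smallest gradient
    # entry, so no projection step is ever required.
    x = np.ones(dim) / dim
    for t in range(n_iters):
        g = grad(x)
        s = np.zeros(dim)
        s[np.argmin(g)] = 1.0          # LMO: argmin over simplex vertices
        gamma = 2.0 / (t + 2.0)        # classical open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy problem: minimize ||x - b||^2 over the simplex, i.e. project b onto it.
b = np.array([0.1, 0.6, 0.3, -0.2])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - b), dim=4)
```

Each iterate is a convex combination of vertices, so feasibility is automatic; this is the "linear optimization cheaper than projection" trade-off the survey emphasises.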

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    Get PDF
    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains the scientific program both in survey form and in full detail, together with information on the social program, the venue, special meetings, and more.

    Convex Optimization: Algorithms and Complexity

    Full text link
    This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms. Starting from the fundamental theory of black-box optimization, the material progresses towards recent advances in structural optimization and stochastic optimization. Our presentation of black-box optimization, strongly influenced by Nesterov's seminal book and Nemirovski's lecture notes, includes the analysis of cutting-plane methods as well as (accelerated) gradient descent schemes. We also pay special attention to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror descent, and dual averaging) and discuss their relevance in machine learning. We provide a gentle introduction to structural optimization with FISTA (to optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror prox (Nemirovski's alternative to Nesterov's smoothing), and a concise description of interior-point methods. In stochastic optimization we discuss stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms. We also briefly touch upon the convex relaxation of combinatorial problems, the use of randomness to round solutions, and random-walk based methods.
    Comment: A previous version of the manuscript was titled "Theory of Convex Optimization for Machine Learning".
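As a small illustration of the non-Euclidean setting mentioned above, mirror descent with the entropy mirror map on the simplex reduces to the exponentiated-gradient update. The sketch below uses a linear objective; the setup and names are illustrative, not taken from the monograph.

```python
import numpy as np

def exponentiated_gradient(grad, x0, eta, n_iters):
    # Mirror descent with the entropy mirror map on the simplex: the
    # update is multiplicative followed by renormalisation, so the
    # iterate stays a probability vector without any projection.
    x = x0.copy()
    for _ in range(n_iters):
        x = x * np.exp(-eta * grad(x))
        x /= x.sum()
    return x

# Toy problem: minimize <c, x> over the simplex; the optimum is the
# vertex corresponding to the smallest entry of c.
c = np.array([0.9, 0.2, 0.5])
x = exponentiated_gradient(lambda x: c, np.ones(3) / 3, eta=0.5, n_iters=300)
```

The same update viewed in the Euclidean geometry would require an explicit projection onto the simplex; the entropy geometry makes it free, which is the point of the non-Euclidean methods discussed in the monograph.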

    Collection of abstracts of the 24th European Workshop on Computational Geometry

    Get PDF
    The 24th European Workshop on Computational Geometry (EuroCG'08) was held at INRIA Nancy - Grand Est & LORIA on March 18-20, 2008. The present collection of abstracts contains the 63 scientific contributions as well as the three invited talks presented at the workshop.

    Data-driven reconstruction methods for photoacoustic tomography: Learning structures by structured learning

    Get PDF
    Photoacoustic tomography (PAT) is an imaging technique with potential applications in various fields of biomedicine. By visualising vascular structures, PAT could help in the detection and diagnosis of diseases related to their dysregulation. In PAT, tissue is illuminated by light. After entering the tissue, the light undergoes scattering and absorption. The absorbed energy is transformed into an initial pressure by the photoacoustic effect, which travels to ultrasound detectors outside the tissue. This thesis is concerned with the inverse problem of the described physical process: what was the initial pressure in the tissue that gave rise to the detected pressure outside? The answer to this question is difficult to obtain when light penetration in the tissue is insufficient, the measurements are corrupted, or only a small number of detectors can be used in a limited geometry. For decades, the field of variational methods has come up with new approaches to solve these kinds of problems: the combination of new theory and clever algorithms has led to improved numerical results in many image reconstruction problems. In the past five years, previously state-of-the-art results were greatly surpassed by combining variational methods with artificial neural networks, a form of artificial intelligence. In this thesis we investigate several ways of combining data-driven artificial neural networks with model-driven variational methods, bringing together the topics of photoacoustic tomography, inverse problems and artificial neural networks. Chapter 3 treats the variational problem in PAT and provides a framework in which hand-crafted regularisers can easily be compared. Both directional and higher-order total variation methods show improved results over direct methods for PAT with structures resembling vasculature. Chapter 4 provides a method to jointly solve the PAT reconstruction and segmentation problem for absorbing structures resembling vasculature.
Artificial neural networks are embedded in the algorithmic structure of primal-dual methods, which are a popular way to solve variational problems. It is shown that a diverse training set is of utmost importance for solving multiple problems with one learned algorithm. Chapter 5 provides a convergence analysis for data-consistent networks, which combine classical regularisation methods with artificial neural networks. Numerical results are shown for an inverse problem that couples the Radon transform with a saturation problem for biomedical images. Chapter 6 explores the idea of fully-learned reconstruction by connecting two nonlinear autoencoders. By enforcing a dimensionality reduction in the artificial neural network, a joint manifold for measurements and images is learned. The method, coined learned SVD, provides advantages over other fully-learned methods in terms of interpretability and generalisation. Numerical results show high-quality reconstructions, even in the case where no information on the forward process is used. In summary, several ways of combining model-based methods with data-driven artificial neural networks were investigated, and the resulting hybrid methods showed improved tomographic reconstructions. By allowing data to improve a structured method, deeper vascular structures could be imaged with photoacoustic tomography.
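The variational starting point of such reconstruction methods can be illustrated with the simplest possible instance: Tikhonov-regularised least squares solved by gradient descent. This is only a toy sketch; the thesis uses directional and higher-order total variation and learned components rather than this quadratic penalty, and the tiny forward operator below stands in for a real acoustic model.

```python
import numpy as np

def variational_reconstruction(A, y, lam, step, n_iters):
    # Gradient descent on the Tikhonov-regularised least-squares objective
    #   ||A x - y||^2 + lam * ||x||^2,
    # the most basic instance of the variational approach to inverse problems.
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x -= step * (2.0 * A.T @ (A @ x - y) + 2.0 * lam * x)
    return x

# Toy forward operator and data; in PAT, A would model acoustic propagation
# from the initial pressure to the detector measurements.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
y = np.array([1.0, 2.0, 3.0])
x_rec = variational_reconstruction(A, y, lam=0.1, step=0.1, n_iters=500)
```

Hybrid methods of the kind studied in the thesis keep this model-driven skeleton but replace the hand-crafted penalty, or parts of the iteration itself, with learned components.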

    Large bichromatic point sets admit empty monochromatic 4-gons

    No full text
    We consider a variation of a problem stated by Erdős and Szekeres in 1935 about the existence of a number f_ES(k) such that any set S of at least f_ES(k) points in general position in the plane has a subset of k points that are the vertices of a convex k-gon. In our setting the points of S are colored, and we say that a (not necessarily convex) spanned polygon is monochromatic if all its vertices have the same color. Moreover, a polygon is called empty if it does not contain any points of S in its interior. We show that any bichromatic set of n ≥ 5044 points in R^2 in general position determines at least one empty, monochromatic quadrilateral (and thus linearly many).