
    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    Get PDF
    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th, till Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both the hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1
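
    As a small illustration of the "sparsity paradigm" around which the workshop revolves (not material from the proceedings), the sketch below recovers a sparse vector from a few random linear measurements by solving an l1-regularized least-squares problem with the classical iterative shrinkage-thresholding algorithm (ISTA); the problem sizes and the regularization parameter lam are arbitrary placeholders.

```python
# Illustrative sketch (not from the proceedings): recover a sparse x from
# compressive measurements y = A x + noise by solving
#   min_x 0.5*||A x - y||^2 + lam*||x||_1
# with the iterative shrinkage-thresholding algorithm (ISTA).
import numpy as np

def ista(A, y, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                  # gradient of the smooth part
        z = x - grad / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return x

# Hypothetical test problem: 5-sparse signal, 50 random measurements of dimension 200
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, y, lam=0.1)
```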

    Computing Large-Scale Matrix and Tensor Decomposition with Structured Factors: A Unified Nonconvex Optimization Perspective

    Full text link
    This article aims to offer a comprehensive tutorial on the computational aspects of structured matrix and tensor factorization. Unlike existing tutorials that mainly focus on {\it algorithmic procedures} for a small set of problems, e.g., nonnegativity- or sparsity-constrained factorization, we take a {\it top-down} approach: we start with general optimization theory (e.g., inexact and accelerated block coordinate descent, stochastic optimization, and Gauss-Newton methods) that covers a wide range of factorization problems with diverse constraints and regularization terms of engineering interest. Then, we go `under the hood' to showcase specific algorithm designs under these introduced principles. We pay particular attention to recent algorithmic developments in structured tensor and matrix factorization (e.g., stochastic optimization based on random sketching and adaptive step sizes, and structure-exploiting second-order algorithms), which are the state of the art---yet much less touched upon in the literature compared to {\it block coordinate descent} (BCD)-based methods. We expect the article to have educational value in the field of structured factorization and hope to stimulate more research in this important and exciting direction.
    Comment: Final Version; to appear in IEEE Signal Processing Magazine; title revised to comply with the journal's rul
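
    To make the block coordinate descent framework mentioned above concrete, here is a minimal, hypothetical sketch (not taken from the article): nonnegative matrix factorization computed by alternating projected-gradient updates of the two factor blocks. The rank, step sizes, and iteration count are illustrative only.

```python
# Minimal BCD sketch (assumption: plain projected-gradient block updates, not the
# article's exact algorithms): factor X ~ W H with W, H >= 0 by alternating
# between the two blocks, each updated with one projected gradient step.
import numpy as np

def nmf_bcd(X, rank, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Block 1: update W with H fixed (step size 1/L, L = spectral norm of H H^T)
        L_W = np.linalg.norm(H @ H.T, 2)
        W = np.maximum(W - ((W @ H - X) @ H.T) / L_W, 0.0)
        # Block 2: update H with W fixed
        L_H = np.linalg.norm(W.T @ W, 2)
        H = np.maximum(H - (W.T @ (W @ H - X)) / L_H, 0.0)
    return W, H

# Illustrative usage on a random nonnegative matrix
X = np.abs(np.random.default_rng(1).standard_normal((30, 40)))
W, H = nmf_bcd(X, rank=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))   # relative fitting error
```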

    Stabilization Algorithms for Large-Scale Problems

    No full text

    Regularization techniques based on Krylov subspace methods for ill-posed linear systems

    Get PDF
    This thesis is focussed on the regularization of large-scale linear discrete ill-posed problems. Problems of this kind arise in a variety of applications and, in a continuous setting, they are often formulated as Fredholm integral equations of the first kind with a smooth kernel, modeling an inverse problem (i.e., the unknown of these equations is the cause of an observed effect). Upon discretization, one must solve linear systems whose coefficient matrix is ill-conditioned and whose right-hand-side vector is affected by perturbations (noise). In this setting, a straightforward solution of the available linear system is meaningless because the computed solution would be dominated by errors; moreover, for large-scale problems, solving the available system directly could be computationally infeasible. Therefore, in order to recover a meaningful approximation of the original solution, some regularization must be employed, i.e., the original linear system must be replaced by a nearby problem having better numerical properties.
    The first part of this thesis (Chapter 1) gives an overview of inverse problems and briefly describes their properties in the continuous setting; then, in a discrete setting, the most well-known regularization techniques relying on some factorization of the system matrix are surveyed. The remaining part of the thesis is concerned with iterative regularization strategies based on Krylov subspace methods, which are well suited for large-scale problems. More precisely, in Chapter 2, an extensive overview of the Krylov subspace methods most successfully employed for regularization purposes is presented: historically, the first methods to be used were related to the normal equations, and many issues linked to the analysis of their behavior have already been addressed. The situation is different for the methods based on the Arnoldi algorithm, whose regularizing properties are not yet well understood or widely accepted. Therefore, still in Chapter 2, a novel analysis of the approximation properties of the Arnoldi algorithm when employed to solve linear discrete ill-posed problems is presented, in order to provide some insight into the use of Arnoldi-based methods for regularization purposes.
    The core results of this thesis are related to the class of Arnoldi-Tikhonov methods, first introduced about ten years ago and described in Chapter 3. The Arnoldi-Tikhonov approach to regularization consists in solving a Tikhonov-regularized problem by means of an iterative strategy based on the Arnoldi algorithm. With respect to a purely iterative approach to regularization, Arnoldi-Tikhonov methods can deliver more accurate approximations by easily incorporating some information about the behavior of the solution into the reconstruction process. In connection with Arnoldi-Tikhonov methods, many open questions still remain, the most significant ones being the choice of the regularization parameters and the choice of the regularization matrices. The first issue is addressed in Chapter 4, where two new efficient and original parameter selection strategies to be employed with the Arnoldi-Tikhonov methods are derived and extensively tested; still in Chapter 4, a novel extension of the Arnoldi-Tikhonov method to the multi-parameter Tikhonov regularization case is described.
    Finally, in Chapter 5, two efficient and innovative schemes to approximate the solution of nonlinear regularized problems are presented: more precisely, the regularization terms originally defined by the 1-norm or by the Total Variation functional are approximated by adaptively updating suitable regularization matrices within the Arnoldi-Tikhonov iterations. Throughout this thesis, the results of many numerical experiments are presented in order to show the performance of the newly proposed methods and to compare them with already existing strategies.
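
    As a rough illustration of the Arnoldi-Tikhonov idea discussed above (this is not code from the thesis, which also treats general regularization matrices and adaptive parameter choice), the sketch below projects a standard-form Tikhonov problem onto a Krylov subspace built by the Arnoldi algorithm and solves the small projected problem; the test matrix, noise level, and regularization parameter lam are placeholders.

```python
# Minimal Arnoldi-Tikhonov sketch (standard-form regularization, fixed lambda):
#   min_x ||A x - b||^2 + lam*||x||^2,  with x restricted to K_m(A, b).
import numpy as np

def arnoldi_tikhonov(A, b, m=20, lam=1e-2):
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):                       # Arnoldi process (modified Gram-Schmidt)
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:              # breakdown: Krylov subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m + 1, :m]
    rhs = np.zeros(m + 1)
    rhs[0] = beta                            # b projected onto the subspace is beta*e1
    # Projected Tikhonov problem: min_y ||Hm y - beta*e1||^2 + lam*||y||^2
    y = np.linalg.solve(Hm.T @ Hm + lam * np.eye(m), Hm.T @ rhs)
    return V[:, :m] @ y

# Hypothetical test problem: a discretized Gaussian blurring operator (ill-conditioned)
rng = np.random.default_rng(0)
n = 200
i = np.arange(n)
A = np.exp(-np.subtract.outer(i, i) ** 2 / (2 * 4.0 ** 2))
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(np.linspace(0.0, np.pi, n))
b = A @ x_true + 1e-3 * rng.standard_normal(n)
x_reg = arnoldi_tikhonov(A, b, m=30, lam=1e-3)
```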

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    Get PDF
    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program, both in survey form and in full detail, as well as information on the social program, the venue, special meetings, and more.