
    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th, to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Accelerated algorithms for linearly constrained convex minimization

    Doctoral dissertation, Seoul National University Graduate School, Department of Mathematical Sciences, February 2014. Advisor: Myungjoo Kang.
    Linearly constrained convex minimization is used as a model for a variety of image processing problems. This thesis presents fast algorithms for solving such linearly constrained convex minimization problems. The proposed methods all build on the extrapolation technique used in Nesterov's accelerated proximal gradient method. Broadly, two kinds of algorithms are proposed. The first is an accelerated Bregman method; applied to compressed sensing problems, it is shown to be faster than the original Bregman method. The second extends the accelerated augmented Lagrangian method. The augmented Lagrangian method involves an inner subproblem that in general cannot be solved exactly, so conditions are given under which solving the inner subproblem inexactly, to a suitable accuracy, preserves the convergence rate obtained when it is solved exactly. A similar result is developed for an accelerated alternating direction method.
    Contents: Abstract; 1 Introduction; 2 Previous Methods (2.1 Mathematical Preliminary; 2.2 Algorithms for solving the linearly constrained convex minimization: 2.2.1 Augmented Lagrangian method, 2.2.2 Bregman methods, 2.2.3 Alternating direction method of multipliers; 2.3 Accelerated algorithms for the unconstrained convex minimization problem: 2.3.1 Fast inexact iterative shrinkage-thresholding algorithm, 2.3.2 Inexact accelerated proximal point method); 3 Proposed Algorithms (3.1 Proposed Algorithm 1: Accelerated Bregman method: 3.1.1 Equivalence to the accelerated augmented Lagrangian method, 3.1.2 Complexity of the accelerated Bregman method; 3.2 Proposed Algorithm 2: I-AALM; 3.3 Proposed Algorithm 3: I-AADMM; 3.4 Numerical Results: 3.4.1 Comparison of the Bregman method with the accelerated Bregman method, 3.4.2 Numerical results of the inexact accelerated augmented Lagrangian method using various subproblem solvers, 3.4.3 Comparison of the inexact accelerated augmented Lagrangian method with other methods, 3.4.4 Inexact accelerated alternating direction method of multipliers for multiplicative noise removal); 4 Conclusion; Abstract (in Korean).
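    The proposed methods all hinge on the Nesterov-style extrapolation step from the accelerated proximal gradient method. As a point of reference only, the sketch below shows that extrapolation step in FISTA applied to an l1-regularized least-squares problem in NumPy; it is a minimal illustration of the acceleration idea, not the thesis's accelerated Bregman, augmented Lagrangian, or ADMM algorithms, and the problem sizes and regularization weight are made-up values.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (entrywise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    """Accelerated proximal gradient (FISTA) for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    illustrating the extrapolation y = x_new + ((t - 1)/t_new)*(x_new - x)."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[rng.choice(200, 10, replace=False)] = 1.0
    b = A @ x_true
    print("recovery error:", np.linalg.norm(fista_lasso(A, b, lam=0.05) - x_true))
```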

    Rigorous optimization recipes for sparse and low rank inverse problems with applications in data sciences

    Many natural and man-made signals can be described as having a few degrees of freedom relative to their size due to natural parameterizations or constraints; examples include bandlimited signals, collections of signals observed from multiple viewpoints in a network of sensors, and per-flow traffic measurements of the Internet. Low-dimensional models (LDMs) mathematically capture the inherent structure of such signals via combinatorial and geometric data models, such as sparsity, unions-of-subspaces, low-rankness, manifolds, and mixtures of factor analyzers, and are emerging to revolutionize the way we treat inverse problems (e.g., signal recovery, parameter estimation, or structure learning) from dimensionality-reduced or incomplete data. Assuming our problem resides in an LDM space, in this thesis we investigate how to integrate such models into convex and non-convex optimization algorithms for significant gains in computational complexity. We mostly focus on two LDMs: (i) sparsity and (ii) low-rankness. We study trade-offs and their implications to develop efficient and provable optimization algorithms, and--more importantly--to exploit convex and combinatorial optimization in ways that enable cross-pollination of decades of research in both fields.
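    The two low-dimensional models emphasized above, sparsity and low-rankness, typically enter non-convex recovery algorithms through simple combinatorial projections. The sketch below is a hedged NumPy illustration of those building blocks rather than any algorithm from the thesis: the projection onto k-sparse vectors, the projection onto rank-r matrices via a truncated SVD, and their use in a basic iterative hard thresholding loop; the fixed step size and iteration count are illustrative assumptions.

```python
import numpy as np

def project_sparse(x, k):
    """Euclidean projection onto k-sparse vectors:
    keep the k largest-magnitude entries and zero out the rest."""
    out = x.copy()
    out[np.argsort(np.abs(x))[:-k]] = 0.0
    return out

def project_low_rank(X, r):
    """Euclidean projection onto matrices of rank at most r
    (truncated SVD, by the Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def iht(A, b, k, step, n_iter=300):
    """Iterative hard thresholding for min ||Ax - b||^2 s.t. x is k-sparse."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_sparse(x - step * (A.T @ (A @ x - b)), k)
    return x
```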

    Robust Algorithms for Low-Rank and Sparse Matrix Models

    Data in statistical signal processing problems is often inherently matrix-valued, and a natural first step in working with such data is to impose a model with structure that captures the distinctive features of the underlying data. Under the right model, one can design algorithms that can reliably tease weak signals out of highly corrupted data. In this thesis, we study two important classes of matrix structure: low-rankness and sparsity. In particular, we focus on robust principal component analysis (PCA) models that decompose data into the sum of low-rank and sparse (in an appropriate sense) components. Robust PCA models are popular because they are useful models for data in practice and because efficient algorithms exist for solving them. This thesis focuses on developing new robust PCA algorithms that advance the state-of-the-art in several key respects. First, we develop a theoretical understanding of the effect of outliers on PCA and the extent to which one can reliably reject outliers from corrupted data using thresholding schemes. We apply these insights and other recent results from low-rank matrix estimation to design robust PCA algorithms with improved low-rank models that are well-suited for processing highly corrupted data. On the sparse modeling front, we use sparse signal models like spatial continuity and dictionary learning to develop new methods with important adaptive representational capabilities. We also propose efficient algorithms for implementing our methods, including an extension of our dictionary learning algorithms to the online or sequential data setting. The underlying theme of our work is to combine ideas from low-rank and sparse modeling in novel ways to design robust algorithms that produce accurate reconstructions from highly undersampled or corrupted data. We consider a variety of application domains for our methods, including foreground-background separation, photometric stereo, and inverse problems such as video inpainting and dynamic magnetic resonance imaging.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143925/1/brimoor_1.pd
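    A standard baseline for the low-rank-plus-sparse decomposition described above is principal component pursuit, min ||L||_* + lam*||S||_1 subject to L + S = M, solved with an augmented Lagrangian that alternates singular value thresholding and entrywise soft thresholding. The sketch below follows that textbook scheme, not the improved robust PCA algorithms developed in the thesis; the default lam = 1/sqrt(max(m, n)) and the choice of the penalty mu are conventional assumptions.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Entrywise soft thresholding: proximal operator of tau*||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def robust_pca(M, lam=None, mu=None, n_iter=200):
    """Decompose M into low-rank L plus sparse S via an augmented
    Lagrangian scheme for principal component pursuit."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                      # dual variable
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)     # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)  # sparse update
        Y = Y + mu * (M - L - S)              # dual ascent on L + S = M
    return L, S
```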

    A Fast Reduced-Space Algorithmic Framework for Sparse Optimization

    Optimization is a crucial scientific tool used throughout applied mathematics. In optimization one typically seeks the lowest value of a chosen objective function from among a set of allowable inputs to that function, i.e., to compute a minimizer. Once an optimization problem is formulated for a particular task of interest, an algorithm for locating such a minimizer is employed. For many applications, the optimization problem may possess multiple solutions, some of which may not be desirable from the perspective of the application. In such settings, a popular approach is to augment the objective function through the use of regularization, which should be carefully chosen to ensure that solutions of the regularized optimization problem are useful to the application of interest. Perhaps the most popular type of regularization is l1-regularization, which has received special attention over the last two decades. The motivation for incorporating l1-regularization is its sparsity-inducing and shrinkage properties, which have proven useful for improving the interpretability and accuracy of model estimation in both theory and practice. Many methods have been proposed for solving l1-regularized problems. Roughly, there are first-order methods (i.e., those that use only first-order derivatives), which have low per-iteration computational cost but are often inefficient on realistic applications, and second-order methods (i.e., those that use first- and second-order derivatives), which have a higher per-iteration cost but are robust and efficient in terms of the number of iterations typically required. In this thesis we present a new second-order framework that aims to balance the strengths of first-order and second-order methods. Specifically, our framework uses a limited amount of second-derivative information by making use of a mechanism for predicting those variables that will be zero at a solution. In this manner, the per-iteration computational cost can be controlled. Moreover, by using second-derivative information within certain computed subspaces, our framework is highly efficient and robust in terms of the overall number of iterations typically required. We present numerical comparisons to other state-of-the-art first- and second-order methods that validate our approach, and we further investigate an implementation of our approach that uses parallel computation.
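    The framework's key mechanism is predicting which variables will be zero at the solution and spending second-derivative effort only on the remaining subspace. The sketch below illustrates that general idea for an l1-regularized least-squares objective, not the thesis's algorithmic framework: a proximal-gradient step predicts the support, and a Newton-like solve is then performed over the predicted nonzero variables with their signs held fixed. A practical method would add safeguards such as a line search and support corrections, which are omitted here; all names and parameters are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reduced_space_step(A, b, x, lam):
    """One illustrative iteration for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a first-order step predicts the nonzero variables, then second-order
    information is used only within that predicted subspace."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    # First-order (proximal gradient) step predicts the support.
    x_prox = soft_threshold(x - step * (A.T @ (A @ x - b)), step * lam)
    support = np.flatnonzero(x_prox)
    if support.size == 0:
        return x_prox
    # Reduced-space second-order step: with signs fixed, the l1 term is
    # linear, so the subproblem is a small regularized least-squares solve.
    As = A[:, support]
    signs = np.sign(x_prox[support])
    H = As.T @ As                      # reduced Hessian
    g = As.T @ b - lam * signs
    x_new = np.zeros_like(x)
    # lstsq handles a possibly rank-deficient reduced Hessian.
    x_new[support] = np.linalg.lstsq(H, g, rcond=None)[0]
    return x_new
```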