13,033 research outputs found

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, to automatically select a simple model from a large collection of candidates. In signal processing, sparse coding consists of representing data as linear combinations of a few dictionary elements. The corresponding tools have since been widely adopted by several scientific communities, such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
    Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
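    The sparse-coding idea summarized above, writing a signal as a linear combination of a few dictionary atoms, can be sketched with a greedy pursuit. The dictionary and signal below are synthetic illustrations, not from the monograph, and orthogonal matching pursuit stands in here for the learned-dictionary pipelines the text surveys:

    ```python
    import numpy as np

    def omp(D, x, k):
        """Orthogonal matching pursuit: greedily approximate x as a
        k-sparse combination of the columns (atoms) of dictionary D."""
        residual = x.copy()
        support = []
        coef = np.zeros(D.shape[1])
        for _ in range(k):
            # Pick the atom most correlated with the current residual.
            j = int(np.argmax(np.abs(D.T @ residual)))
            if j not in support:
                support.append(j)
            # Re-fit the coefficients on the selected support by least squares.
            sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            coef[:] = 0.0
            coef[support] = sol
            residual = x - D @ coef
        return coef

    rng = np.random.default_rng(0)
    D = rng.standard_normal((20, 50))
    D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
    x_true = np.zeros(50)
    x_true[[3, 17, 42]] = [1.5, -2.0, 0.8]    # 3-sparse ground truth
    x = D @ x_true
    est = omp(D, x, k=3)
    print("recovered support:", np.flatnonzero(np.abs(est) > 1e-8))
    ```

    In a dictionary-learning setting, a sparse-coding step like this alternates with an update of D itself so that the atoms adapt to the data.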

    Simultaneous Bayesian Sparse Approximation with Structured Sparse Models

    Sparse approximation is key to many signal processing, image processing, and machine learning applications. If multiple signals maintain some degree of dependency, for example when their support sets are statistically related, it is generally advantageous to jointly estimate the sparse representation vectors from the measurement vectors rather than to solve for each signal individually. In this paper, we propose simultaneous sparse Bayesian learning (SBL) for joint sparse approximation with two structured sparse models (SSMs): one is row-sparse with embedded element-sparse, and the other is row-sparse plus element-sparse. While SBL has attracted much attention as a means of dealing with a single sparse approximation problem, it is not obvious how to extend SBL to SSMs. By capitalizing on a dual-space view of existing convex methods for SSMs, we introduce the precision component model and the covariance component model for SSMs, where both models involve a common hyperparameter and an innovation hyperparameter that together control the prior variance of each coefficient. The statistical perspective of precision component vs. covariance component models reveals the intrinsic mechanism of SSMs, and also leads to our development of SBL-inspired cost functions for SSMs. Centralized algorithms, including ℓ1 and ℓ2 reweighting algorithms, and consensus-based decentralized algorithms are developed for simultaneous sparse approximation with SSMs. In addition, theoretical analysis provides valuable insights into the proposed approach, including global minima analysis of the SBL-inspired nonconvex cost functions and convergence analysis of the proposed ℓ1 reweighting algorithms for SSMs. Superior performance of the proposed algorithms is demonstrated by numerical experiments.
    This is the author accepted manuscript. The final version is available from IEEE at http://dx.doi.org/10.1109/TSP.2016.2605067
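    One building block mentioned above, ℓ1 reweighting, can be illustrated in isolation on a single sparse approximation problem (the paper's versions couple several signals through SSM hyperparameters). A minimal sketch with made-up problem sizes, using a plain ISTA inner solver as an assumption, not the paper's algorithm:

    ```python
    import numpy as np

    def weighted_ista(A, y, w, lam=0.05, n_iter=200):
        """Proximal gradient (ISTA) for min 0.5*||Ax - y||^2 + lam * sum_i w_i |x_i|."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - step * (A.T @ (A @ x - y))
            # Per-coordinate soft threshold: larger weight => stronger shrinkage.
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
        return x

    def reweighted_l1(A, y, lam=0.05, eps=1e-2, n_rounds=5):
        """Each round re-solves a weighted l1 problem; coordinates that were
        small in the previous solution get large weights and are pushed to zero."""
        w = np.ones(A.shape[1])
        for _ in range(n_rounds):
            x = weighted_ista(A, y, w, lam=lam)
            w = 1.0 / (np.abs(x) + eps)
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 60)) / np.sqrt(30)
    x_true = np.zeros(60)
    x_true[[5, 20, 33]] = [2.0, -1.0, 1.5]       # 3-sparse ground truth
    y = A @ x_true
    x_hat = reweighted_l1(A, y)
    print("nonzeros in estimate:", np.count_nonzero(x_hat))
    ```

    In the paper's SSM setting, the analogous reweighting is derived from the SBL-inspired cost function and the weights couple the row-level (common) and element-level (innovation) hyperparameters across signals.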

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international - Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur, Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both the hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": sparsity-driven data sensing and processing; union of low-dimensional subspaces; beyond linear and convex inverse problems; matrix/manifold/graph sensing and processing; blind inverse problems and dictionary learning; sparsity and computational neuroscience; information theory, geometry, and randomness; complexity/accuracy tradeoffs in numerical methods; sparsity: what's next?; sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1