45 research outputs found

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
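    As a concrete illustration of the forward-backward proximal splitting scheme discussed above, the sketch below applies it to the ℓ¹ (sparsity) prior, i.e. the lasso problem min_x ½‖Ax − y‖² + λ‖x‖₁. The problem sizes, λ, and iteration count are illustrative assumptions, not values from the chapter:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, y, lam, n_iter=2000):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by forward-backward splitting."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                          # forward (explicit gradient) step
        x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step
    return x

# Noiseless sparse recovery: 3 active coefficients out of 100, 40 measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 60]] = [2.0, -1.5, 1.0]
x_hat = forward_backward(A, A @ x_true, lam=0.1)
```

    For small λ the iterates typically settle on the true support after finitely many iterations, the model (manifold) identification phenomenon that the chapter analyzes.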

    State of the art in 2D content representation and compression

    Deliverable D1.3 of the ANR PERSEE project. This report was produced within the ANR PERSEE project (no. ANR-09-BLAN-0170). Specifically, it corresponds to deliverable D3.1 of the project.

    Medical Image Compression using Lifting based New Wavelet Transforms

    In this paper, the construction of new lifting-based wavelets by a new method of calculating lifting coefficients is presented. First, new basis functions are used to realize new orthogonal traditional wavelets. Then, by decomposing the polyphase matrix, the lifting steps are calculated using a simplified method. The interesting feature of the lifting scheme is that the construction of the wavelet is carried out in the spatial domain only; hence the difficulty in the design of traditional wavelets is avoided. The lifting scheme was used to generate second-generation wavelets, which are not necessarily translations and dilations of one particular function. Short and sharp basis functions are chosen so as to match the non-uniform nature of usual image classes. The implemented wavelets are applied to a number of medical images. It was found that the compression ratio (CR) and Peak Signal to Noise Ratio (PSNR) are far ahead of those obtained with the popular traditional wavelets as well as the successful 5/3 and 9/7 lifting-based wavelets. Set Partitioning in Hierarchical Trees (SPIHT) is used to perform compression. DOI: http://dx.doi.org/10.11591/ijece.v4i5.596
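    The lifting construction mentioned above can be made concrete with the standard 5/3 (LeGall) wavelet, whose predict and update steps operate entirely in the spatial domain and are trivially invertible, giving perfect reconstruction by design. This is a generic sketch of one decomposition level with periodic boundary handling, not the new wavelets proposed in the paper:

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the 5/3 (LeGall) wavelet via lifting: split, predict, update."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2].copy(), x[1::2].copy()   # split into even/odd samples
    odd -= 0.5 * (even + np.roll(even, -1))      # predict: detail coefficients
    even += 0.25 * (odd + np.roll(odd, 1))       # update: approximation coefficients
    return even, odd

def lifting_53_inverse(even, odd):
    """Invert by running the lifting steps backwards with opposite signs."""
    even = even - 0.25 * (odd + np.roll(odd, 1))
    odd = odd + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```

    Because each lifting step only adds a function of the other channel, inversion never requires designing inverse filters; this is the spatial-domain advantage the paper builds on.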

    Learning Theory and Approximation

    The main goal of this workshop – the third one of this type at the MFO – has been to blend mathematical results from statistical learning theory and approximation theory to strengthen both disciplines and use synergistic effects to work on current research questions. Learning theory aims at modeling unknown function relations and data structures from samples in an automatic manner. Approximation theory is naturally used for, and closely connected to, the further development of learning theory, in particular for the exploration of new useful algorithms and for the theoretical understanding of existing methods. Conversely, the study of learning theory also gives rise to interesting theoretical problems for approximation theory, such as the approximation and sparse representation of functions or the construction of rich reproducing kernel Hilbert spaces on general metric spaces. This workshop has concentrated on the following recent topics: pitchfork bifurcation of dynamical systems arising from mathematical foundations of cell development; regularized kernel-based learning in the Big Data situation; deep learning; convergence rates of learning and online learning algorithms; numerical refinement algorithms for learning; statistical robustness of regularized kernel-based learning.
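    As a minimal instance of the regularized kernel-based learning mentioned above, the sketch below fits kernel ridge regression with a Gaussian kernel: regression in a reproducing kernel Hilbert space reduces to a linear solve in the kernel matrix. The kernel width and regularization strength are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(X, Z, gamma):
    # K[i, j] = exp(-gamma * ||X[i] - Z[j]||^2)
    sq = np.sum((X[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq)

def kernel_ridge_fit(X, y, gamma=10.0, lam=1e-3):
    """Solve (K + lam*I) alpha = y; the RKHS minimizer is f = sum_i alpha_i k(x_i, .)."""
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_test, gamma=10.0):
    return gaussian_kernel(X_test, X_train, gamma) @ alpha

# Recover a smooth function from samples.
X = np.linspace(0.0, 1.0, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
alpha = kernel_ridge_fit(X, y)
pred = kernel_ridge_predict(X, alpha, X)
```

    The regularization parameter lam trades data fit against the RKHS norm of the estimate, which is exactly the kind of bias/smoothness balance that approximation theory quantifies.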

    An introduction to continuous optimization for imaging

    A large number of imaging problems reduce to the optimization of a cost function with typical structural properties. The aim of this paper is to describe the state of the art in continuous optimization methods for such problems, and to present the most successful approaches and their interconnections. We place particular emphasis on optimal first-order schemes that can deal with typical non-smooth and large-scale objective functions used in imaging problems. We illustrate and compare the different algorithms using classical non-smooth problems in imaging, such as denoising and deblurring. Moreover, we present applications of the algorithms to more advanced problems, such as magnetic resonance imaging, multilabel image segmentation, optical flow estimation, stereo matching, and classification.
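    To make the structure of such first-order schemes concrete, the sketch below runs plain gradient descent on a smooth quadratic denoising model, min_u ½‖u − f‖² + (λ/2)‖Du‖² with a forward-difference operator D; the paper's actual examples (e.g. total-variation denoising) replace the quadratic penalty with a non-smooth one and call for the proximal machinery surveyed there. Signal size, λ, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

def difference_matrix(n):
    # Forward differences with a Neumann-type boundary (last row zero).
    D = np.eye(n, k=1) - np.eye(n)
    D[-1, :] = 0.0
    return D

def denoise_h1(f, lam=5.0, step=0.05, n_iter=2000):
    """Gradient descent on min_u 0.5*||u - f||^2 + (lam/2)*||D u||^2."""
    D = difference_matrix(f.size)
    u = f.copy()
    for _ in range(n_iter):
        grad = (u - f) + lam * (D.T @ (D @ u))   # gradient of the smooth objective
        u -= step * grad
    return u

# Denoise a noisy step signal.
rng = np.random.default_rng(2)
f = np.concatenate([np.zeros(20), np.ones(20)]) + 0.2 * rng.standard_normal(40)
u = denoise_h1(f)
```

    The objective is 1-strongly convex with a (1 + λ‖D‖²)-Lipschitz gradient, so any step size below 2/(1 + λ‖D‖²) converges linearly to the unique minimizer (I + λDᵀD)⁻¹f.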

    Wavelet and Multiscale Methods

    [no abstract available]

    Riemannian Flows for Supervised and Unsupervised Geometric Image Labeling

    In this thesis we focus on the image labeling problem, which is used as a subroutine in many image processing applications. Our work is based on the assignment flow which was recently introduced as a novel geometric approach to the image labeling problem. This flow evolves over time on the manifold of row-stochastic matrices, whose elements represent label assignments as assignment probabilities. The strict separation of assignment manifold and feature space enables the data to lie in any metric space, while a smoothing operation on the assignment manifold results in an unbiased and spatially regularized labeling. The first part of this work focuses on theoretical statements about the asymptotic behavior of the assignment flow. We show under weak assumptions on the parameters that the assignment flow for data in general position converges towards integral probabilities and thus ensures unique assignment decisions. Furthermore, we investigate the stability of possible limit points depending on the input data and parameters. For stable limits, we derive conditions that allow early evidence of convergence towards these limits and thus provide convergence guarantees. In the second part, we extend the assignment flow approach in order to impose global convex constraints on the labeling results based on linear filter statistics of the assignments. The corresponding filters are learned from examples using an eigendecomposition. The effectiveness of the approach is numerically demonstrated in several academic labeling scenarios. In the last part of this thesis we consider the situation in which no labels are given and therefore these prototypical elements have to be determined from the data as well. To this end we introduce an additional flow on the feature manifold, which is coupled to the assignment flow. The resulting flow adapts the prototypes in time to the assignment probabilities. 
The simultaneous adaptation and assignment of prototypes not only provides suitable prototypes but also improves the resulting image segmentation, as demonstrated by experiments. For this approach it is assumed that the data lie on a Riemannian manifold. We elaborate the approach for a range of manifolds that occur in applications and evaluate the resulting approaches in numerical experiments.
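    A heavily simplified numerical sketch of the assignment-flow idea described above: the spatial similarity/averaging step is omitted (the lifted likelihoods are used directly as the driving field), and the geometric integration is reduced to a normalized multiplicative Euler update on each probability simplex. This is not the thesis's exact scheme; the distance matrix D and step size h are illustrative:

```python
import numpy as np

def assignment_flow_step(W, D, h=0.5):
    """One multiplicative Euler step on the assignment manifold.
    W: (n, c) row-stochastic assignments; D: (n, c) feature-to-label distances."""
    L = W * np.exp(-D)                        # likelihood lifting of the distances
    L /= L.sum(axis=1, keepdims=True)         # back onto the simplex
    W_new = W * np.exp(h * L)                 # replicator-type multiplicative update
    W_new /= W_new.sum(axis=1, keepdims=True)
    return W_new

# Four 'pixels', three labels: the flow drives each row to an integral assignment.
D = np.array([[0.1, 1.0, 2.0],
              [2.0, 0.1, 1.0],
              [1.0, 2.0, 0.1],
              [0.5, 3.0, 1.0]])
W = np.full((4, 3), 1.0 / 3.0)
for _ in range(1000):
    W = assignment_flow_step(W, D)
```

    Each row stays row-stochastic throughout and, for rows with a unique nearest label, concentrates on that label, mirroring the convergence-to-integral-assignments behavior analyzed in the first part of the thesis.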