
    Learning the Morphological Diversity

    This article proposes a new method for separating an image into a linear combination of morphological components. Sparsity in global dictionaries is used to extract the cartoon and oscillating content of the image, while complicated texture patterns are extracted by learning adapted local dictionaries that sparsify patches of the image. These global and local sparsity priors, together with the data fidelity, define a non-convex energy, and the separation is obtained as a stationary point of this energy. The variational optimization extends to more general inverse problems such as inpainting. A new adaptive morphological component analysis algorithm is derived to find a stationary point of the energy. Using adapted dictionaries learned from the data circumvents some difficulties faced by fixed dictionaries, and numerical results demonstrate that this adaptivity is indeed crucial for capturing complex texture patterns.
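    To make the local-dictionary idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of learning a patch dictionary that sparsifies image patches and resynthesizing a texture component from the sparse codes; the patch size, number of atoms, and sparsity weight are illustrative assumptions.

```python
# Hedged sketch of the "adapted local dictionary" idea: learn a dictionary
# that sparsifies the patches of an image and rebuild a texture component
# from the sparse codes. All hyper-parameters below are illustrative
# assumptions, not the paper's settings.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def texture_component(image, patch_size=(8, 8), n_atoms=64, alpha=1.0):
    patches = extract_patches_2d(image, patch_size)
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)          # per-patch DC offsets belong to the cartoon part
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha, batch_size=256)
    codes = dico.fit_transform(X)               # sparse code of every patch
    approx = codes @ dico.components_           # sparse approximation of every patch
    return reconstruct_from_patches_2d(approx.reshape(patches.shape), image.shape)
```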

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter reviews recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as the natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
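    As an illustration of item (iii), here is a minimal sketch of forward-backward proximal splitting applied to the ℓ¹-regularized least-squares problem min_x ½‖Ax − y‖² + λ‖x‖₁ (i.e., ISTA); the step size follows the standard 1/L rule, and the operator, regularization weight, and iteration count are illustrative assumptions.

```python
# Hedged sketch of forward-backward splitting (ISTA) for
#   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2                # 1/L, with L the Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                          # forward (gradient) step on the smooth data term
        x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step on the l1 prior
    return x
```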

    A Megavoltage CT Image Enhancement Method for Image-Guided and Adaptive Helical TomoTherapy

    Purpose: To propose a novel method that improves mega-voltage CT (MVCT) image quality for helical TomoTherapy while maintaining stable dose calculation.

    Materials and Methods: The Block-Matching 3D-transform (BM3D) and Discriminative Feature Representation (DFR) methods were combined into a novel BM3D + DFR method to exploit their respective advantages. A phantom (Catphan504) and three series of clinical MVCT images (head & neck, chest, and pelvis) from 30 patients were acquired using the helical TomoTherapy system. The contrast-to-noise ratio (CNR) and the Canny edge detection algorithm were employed to compare image quality between the original and BM3D + DFR enhanced MVCT. A simulated rectangular field of 6 MV X-ray beams was delivered vertically on the original and post-processed MVCT series of the same CT density phantom, and the dose curves of both series were compared to test the effect of image enhancement on dose calculation accuracy.

    Results: In total, 466 transversal MVCT slices were acquired and processed by both the BM3D and the proposed BM3D + DFR methods. Compared to the original MVCT images, the BM3D + DFR method yielded a remarkable improvement in soft tissue contrast and noise reduction. For the phantom image, the CNR of the region of interest (ROI) improved from 1.70 to 4.03. The average CNR of ROIs for the 10 patients in each anatomical group increased significantly: from 1.45 ± 1.51 to 2.09 ± 1.68 for the head & neck (p < 0.001), from 0.92 ± 0.78 to 1.36 ± 0.85 for the chest (p < 0.001), and from 1.12 ± 1.22 to 1.76 ± 1.31 for the pelvis (p < 0.001). The Canny edge detection operator showed that BM3D + DFR provided clearer organ boundaries with less clutter. The root-mean-square dosimetry differences on the iso-center horizontal dose profile curves and vertical percentage depth dose curves were only 0.09% and 0.06%, respectively.

    Conclusions: The proposed BM3D + DFR method improves soft tissue contrast in the original MVCT images while preserving dose calculation accuracy and without compromising resolution. After integration into the clinical workflow, the post-processed MVCT may be better applied to image-guided and adaptive helical TomoTherapy.
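    For reference, a common form of the contrast-to-noise ratio used in such comparisons is CNR = |μ_ROI − μ_bg| / σ_bg; the study's exact definition is not stated in the abstract, so the sketch below only illustrates this common form, with boolean region masks as an assumption.

```python
# Hedged sketch of a CNR computation of the kind used to compare original
# and enhanced MVCT slices. The exact definition used in the study is an
# assumption; this is the common form |mu_roi - mu_bg| / sigma_bg.
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """image: 2-D array; roi_mask, bg_mask: boolean arrays of the same shape."""
    mu_roi = image[roi_mask].mean()
    mu_bg = image[bg_mask].mean()
    sigma_bg = image[bg_mask].std()
    return abs(mu_roi - mu_bg) / sigma_bg
```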

    An introduction to continuous optimization for imaging

    A large number of imaging problems reduce to the optimization of a cost function with typical structural properties. The aim of this paper is to describe the state of the art in continuous optimization methods for such problems and to present the most successful approaches and their interconnections. We place particular emphasis on optimal first-order schemes that can deal with the typically non-smooth and large-scale objective functions used in imaging problems. We illustrate and compare the different algorithms on classical non-smooth problems in imaging, such as denoising and deblurring. Moreover, we present applications of the algorithms to more advanced problems, such as magnetic resonance imaging, multilabel image segmentation, optical flow estimation, stereo matching, and classification.
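    As one example from this family of optimal first-order schemes, here is a hedged sketch of the Chambolle-Pock primal-dual algorithm applied to total-variation (ROF) denoising, min_u ‖∇u‖₁ + (λ/2)‖u − f‖²; the step sizes and iteration count are illustrative assumptions.

```python
# Hedged sketch of primal-dual (Chambolle-Pock) TV denoising of a noisy
# image f. Step sizes satisfy tau * sigma * ||grad||^2 <= 1 (||grad||^2 <= 8
# for this forward-difference discretization).
import numpy as np

def _grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def _div(px, py):  # negative adjoint of _grad
    dx = np.zeros_like(px)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy = np.zeros_like(py)
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_denoise(f, lam=10.0, n_iter=200):
    tau = sigma = 1.0 / np.sqrt(8.0)
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = _grad(u_bar)                       # dual ascent on the TV term
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.hypot(px, py))    # project dual variable onto the unit ball
        px /= norm; py /= norm
        u_old = u                                   # primal step: prox of the quadratic data term
        u = (u + tau * _div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2.0 * u - u_old                     # over-relaxation
    return u
```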

    Underwater image restoration: super-resolution and deblurring via sparse representation and denoising by means of marine snow removal

    Underwater imaging is widely used as a tool in many fields; however, a major issue is the quality of the resulting images and videos. Due to light's interaction with water and its constituents, acquired underwater images and videos often suffer from a significant amount of scatter (blur, haze) and noise. In light of these issues, this thesis considers the problems of low-resolution, blurred, and noisy underwater images and proposes several approaches to improve the quality of such images and video frames. Quantitative and qualitative experiments validate the success of the proposed algorithms.
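    The abstract does not detail the individual algorithms, so the following is only a hedged baseline for one of the named subproblems: marine snow behaves like sparse bright impulses, for which median filtering is a classical first step; the kernel size and outlier threshold here are illustrative assumptions, not the thesis's method.

```python
# Hedged baseline for marine snow removal: suppress small bright outliers
# with a median filter, keeping unaffected pixels untouched. The 3x3 kernel
# and the outlier threshold are illustrative assumptions.
import numpy as np
from scipy.ndimage import median_filter

def remove_marine_snow(frame, size=3):
    """frame: 2-D float array (one grayscale video frame)."""
    filtered = median_filter(frame, size=size)
    mask = frame > filtered + 0.1 * frame.std()   # heuristic: pixel is a bright local outlier
    out = frame.copy()
    out[mask] = filtered[mask]
    return out
```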

    A Review of Adaptive Image Representations


    Learning Convolution Trees for Sparse Representation (Apprentissage d'arbres de convolutions pour la représentation parcimonieuse)

    The dictionary learning (DL) problem has received increasing attention over the last ten years. DL is an adaptive approach to sparse data representation, and many state-of-the-art DL methods perform well on problems such as approximation, denoising, and inverse problems. However, their numerical complexity restricts their use to small image patches; dictionary learning therefore does not capture large features and is not a viable option for many applications handling large images, such as those encountered in remote sensing. In this thesis, we propose and study a new model for dictionary learning, combining convolutional sparse coding with dictionaries defined by convolutional tree structures. The aim of this model is to provide efficient algorithms for large images, avoiding the decomposition of these images into patches. In the first part, we study the optimization of a composition of convolutions with sparse kernels to reach a target atom (such as a cosine, wavelet, or curvelet); this is a non-convex matrix factorization problem. We propose a resolution method based on a Gauss-Seidel scheme, which produces good approximations of target atoms and whose complexity is linear with respect to the image size. Moreover, numerical experiments show that it is possible to find a global minimum. In the second part, we introduce a dictionary structure based on convolutional trees and propose a dictionary update algorithm adapted to this structure, whose complexity remains linear with respect to the image size. Finally, a sparse coding step is added to the algorithm in the last part. At each stage of the method's development, numerical experiments illustrate its approximation abilities.
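    To illustrate the first part's idea in the simplest setting, here is a hedged 1-D sketch of approximating a target atom by a composition of convolutions of short kernels, updating one kernel at a time by least squares in the spirit of the Gauss-Seidel scheme mentioned above; the kernel sizes, the random initialization, and the omission of explicit sparsity constraints on the kernels are simplifying assumptions.

```python
# Hedged 1-D sketch: approximate a target atom by a composition of
# convolutions of short kernels. The composition is linear in each kernel,
# so each sub-problem is least squares and kernels are updated one at a
# time (Gauss-Seidel-style sweeps). Sparsity constraints on the kernels,
# present in the thesis, are omitted here for brevity.
import numpy as np

def compose(kernels, length):
    atom = np.zeros(length); atom[0] = 1.0          # start from a Dirac
    for k in kernels:
        atom = np.convolve(atom, k)[:length]        # causal truncation is order-independent
    return atom

def conv_matrix(h, m, n):
    """Matrix C such that C @ k == np.convolve(h, k)[:n] for kernels k of size m."""
    C = np.zeros((n, m))
    for j in range(m):
        seg = h[:n - j]
        C[j:j + len(seg), j] = seg
    return C

def gauss_seidel_sweeps(target, n_kernels=3, m=5, sweeps=20, seed=0):
    rng = np.random.default_rng(seed)
    n = len(target)
    kernels = [0.1 * rng.standard_normal(m) for _ in range(n_kernels)]
    for _ in range(sweeps):
        for i in range(n_kernels):
            # composition of all kernels except the i-th one
            rest = compose([k for j, k in enumerate(kernels) if j != i], n)
            C = conv_matrix(rest, m, n)
            kernels[i], *_ = np.linalg.lstsq(C, target, rcond=None)
    return kernels, compose(kernels, n)             # kernels and the resulting atom
```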