A Theoretically Guaranteed Deep Optimization Framework for Robust Compressive Sensing MRI
Magnetic Resonance Imaging (MRI) is one of the most dynamic and safe imaging
techniques available for clinical applications. However, the rather slow speed
of MRI acquisitions limits the patient throughput and potential indications.
Compressive Sensing (CS) has proven to be an efficient technique for
accelerating MRI acquisition. The most widely used CS-MRI model, founded on the
premise of reconstructing an image from an incompletely filled k-space, leads
to an ill-posed inverse problem. In recent years, many efforts have been
made to optimize the CS-MRI model efficiently. Inspired by deep learning
techniques, some preliminary works have tried to incorporate deep architectures
into the CS-MRI process. Unfortunately, these deeply trained optimization
methods still lack convergence guarantees (owing to their experience-based
network designs) and robustness (i.e., real-world noise modeling). In
this work, we develop a new paradigm to integrate designed numerical solvers
and data-driven architectures for CS-MRI. By introducing an
optimality-condition checking mechanism, we prove the convergence of our
established deep CS-MRI optimization scheme. Furthermore, we explicitly
formulate the Rician noise distributions within our framework and obtain an
extended CS-MRI network to handle the real-world noises in the MRI process.
Extensive experimental results verify that the proposed paradigm outperforms
the existing state-of-the-art techniques both in reconstruction accuracy and
efficiency, as well as in robustness to noise in real scenes.
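The abstract does not detail the solver it builds on, but the classical CS-MRI model it refers to can be sketched with a minimal ISTA iteration for l1-regularized reconstruction from undersampled k-space. All names and parameters below are illustrative, and the image itself stands in for a wavelet-sparse representation:

```python
import numpy as np

def ista_cs_mri(y, mask, lam=0.05, step=1.0, iters=100):
    """Minimal ISTA sketch for min_x 0.5*||M F x - y||^2 + lam*||x||_1,
    where F is the 2D Fourier transform and M a binary k-space
    sampling mask (a stand-in for the classical CS-MRI model)."""
    x = np.zeros(y.shape)
    for _ in range(iters):
        # gradient of the data-fidelity term: F^H M (M F x - y)
        resid = mask * np.fft.fft2(x, norm="ortho") - y
        grad = np.fft.ifft2(mask * resid, norm="ortho").real
        z = x - step * grad
        # soft-thresholding = proximal operator of the l1 norm
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

With full sampling and a tiny threshold, the iteration recovers a sparse image almost exactly; the paper's contribution is precisely to replace such hand-designed steps with checked, data-driven ones.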
Combining local regularity estimation and total variation optimization for scale-free texture segmentation
Texture segmentation constitutes a standard image processing task, crucial to
many applications. The present contribution focuses on the particular subset of
scale-free textures and its originality resides in the combination of three key
ingredients. First, texture characterization relies on the concept of local
regularity. Second, local regularity is estimated from new multiscale
quantities referred to as wavelet leaders. Third, segmentation from local
regularity faces a fundamental bias-variance trade-off: by nature, local
regularity estimates show high variability that impairs the detection of
changes, while a posteriori smoothing of these estimates precludes correctly
locating the changes. Instead, the present contribution proposes several
variational problem formulations based on total variation and proximal
resolutions that effectively circumvent this trade-off. Estimation and
segmentation performance for the proposed procedures are quantified and
compared on synthetic as well as real-world textures.
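The exact proximal formulations are not given in the abstract; as a rough illustration of total-variation smoothing of noisy local-regularity estimates, here is a sketch that minimizes a Huber-smoothed 1D TV objective by gradient descent (the smoothing, step size, and iteration count are assumptions, not the paper's algorithm):

```python
import numpy as np

def tv_smooth(h, lam=0.3, eps=1e-3, step=0.01, iters=3000):
    """Gradient descent on 0.5*||u - h||^2 + lam * sum sqrt((Du)^2 + eps),
    a smoothed 1D total-variation denoising of noisy regularity
    estimates h: the result is near piecewise-constant, so change
    points between textures stay sharp instead of being blurred."""
    u = h.astype(float).copy()
    for _ in range(iters):
        d = np.diff(u)
        w = d / np.sqrt(d * d + eps)   # derivative of smoothed |d|
        div = np.zeros_like(u)
        div[:-1] -= w                  # each difference d_k = u_{k+1}-u_k
        div[1:] += w                   # contributes -w_k and +w_k
        u -= step * ((u - h) + lam * div)
    return u
```

On a step signal, this reduces within-region variability while leaving the jump essentially intact, which is the trade-off the paper's variational formulations target.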
Flexible Multi-layer Sparse Approximations of Matrices and Applications
The computational cost of many signal processing and machine learning
techniques is often dominated by the cost of applying certain linear operators
to high-dimensional vectors. This paper introduces an algorithm aimed at
reducing the complexity of applying linear operators in high dimension by
approximately factorizing the corresponding matrix into a few sparse factors.
The approach relies on recent advances in non-convex optimization. It is first
explained and analyzed in detail, and then demonstrated experimentally on
various problems, including dictionary learning for image denoising and the
approximation of large matrices arising in inverse problems.
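A concrete instance of the idea (not the paper's learned factorization, which is obtained by non-convex optimization) is the exact butterfly factorization of a Hadamard matrix: the 2^n x 2^n matrix splits into n factors with only two nonzeros per row, so matrix-vector products cost O(N log N) instead of O(N^2):

```python
import numpy as np

def hadamard_butterfly_factors(n):
    """Exact multi-layer sparse factorization of the 2^n x 2^n
    Hadamard matrix: n factors, each with 2 nonzeros per row."""
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    N = 2 ** n
    return [np.kron(np.eye(2 ** i), np.kron(H2, np.eye(N // 2 ** (i + 1))))
            for i in range(n)]

def apply_factors(factors, x):
    """Apply the product factors[0] @ ... @ factors[-1] to x cheaply."""
    for F in reversed(factors):
        x = F @ x
    return x
```

The paper generalizes this picture: it searches for such sparse multi-layer factorizations of arbitrary dense operators, trading exactness for approximation quality.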
CMB map restoration
Estimating the cosmic microwave background (CMB) is of utmost importance for
cosmology. However, its estimation from full-sky surveys such as WMAP or, more
recently, Planck is challenging: CMB maps are generally estimated via
source separation techniques, which never fully prevent the final
map from being contaminated with noise and foreground residuals. These spurious
contaminations, whether noise or foreground residuals, are well known to
plague most cosmologically relevant tests and evaluations, including CMB
lensing reconstruction and searches for non-Gaussian signatures. Noise
reduction is
generally performed by applying a simple Wiener filter in spherical harmonics;
however this does not account for the non-stationarity of the noise. Foreground
contamination is usually tackled by masking the most intense residuals detected
in the map, which makes CMB evaluation harder to perform. In this paper, we
introduce a novel noise reduction framework coined LIW-Filtering, for Linear
Iterative Wavelet Filtering, which accounts for the spatial variability of the
noise through a wavelet-based model while keeping the highly desirable
linearity of the Wiener filter. We further show that the same filtering
technique can effectively reduce foreground contamination, thus providing a
globally cleaner CMB map. Numerical results on simulated but realistic Planck
data are provided.
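The baseline the paper improves on, a Wiener filter applied mode by mode in harmonic space, can be sketched in one dimension with the Fourier transform standing in for spherical harmonics (signal and noise power spectra here are illustrative inputs, not Planck values):

```python
import numpy as np

def harmonic_wiener(d, cl_signal, nl_noise):
    """Baseline harmonic-space Wiener filter: each mode of the data
    d = s + n is scaled by C_l / (C_l + N_l), the MMSE weight under
    stationary noise. LIW-Filtering relaxes exactly this stationarity
    assumption while keeping the filter linear."""
    D = np.fft.rfft(d)
    w = cl_signal / (cl_signal + nl_noise)  # per-mode Wiener weight
    return np.fft.irfft(w * D, n=len(d))
```

Because the weight depends only on the power spectra, the filter is linear in the data, which is the property the paper's wavelet-based extension is designed to preserve.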
Graph Signal Processing: Overview, Challenges and Applications
Research in Graph Signal Processing (GSP) aims to develop tools for
processing data defined on irregular graph domains. In this paper we first
provide an overview of core ideas in GSP and their connection to conventional
digital signal processing. We then summarize recent developments in basic GSP
tools, including methods for sampling, filtering, and graph learning.
Next, we review progress in several application areas using GSP, including
processing and analysis of sensor network data, biological data, and
applications to image processing and machine learning. We finish by providing a
brief historical perspective to highlight how concepts recently developed in
GSP build on top of prior research in other areas.
Comment: To appear in Proceedings of the IEEE.
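The core GSP construction the overview covers, a graph Fourier transform built from the Laplacian eigenbasis, fits in a few lines; the function name and the hard low-pass choice below are illustrative, not from the paper:

```python
import numpy as np

def graph_lowpass(A, x, keep=2):
    """Graph Fourier low-pass filter: project the signal x onto the
    `keep` Laplacian eigenvectors with smallest eigenvalues, i.e.,
    the smoothest graph frequencies."""
    L = np.diag(A.sum(axis=1)) - A   # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)       # eigenvectors = graph Fourier basis
    xhat = U.T @ x                   # forward graph Fourier transform
    xhat[keep:] = 0.0                # zero out high graph frequencies
    return U @ xhat                  # inverse graph Fourier transform
```

On a path graph this reduces to ordinary DCT-like smoothing, which is the connection to conventional digital signal processing the overview emphasizes.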
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics
and Vision.
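Sparse coding, representing data as a combination of a few dictionary atoms, is commonly solved by greedy pursuit; as one standard illustration (Orthogonal Matching Pursuit, a classical algorithm, not a method specific to this monograph):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k dictionary atoms
    (columns of D, assumed unit-norm) most correlated with the current
    residual, re-fitting all selected coefficients by least squares."""
    support, r = [], y.copy()
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ sol
    coef[support] = sol
    return coef
```

Dictionary learning, the monograph's focus, alternates such sparse coding steps with updates of D itself so that the dictionary adapts to the data.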