Fast Multi-Scale Detail Decomposition via Accelerated Iterative Shrinkage
We present a fast solution for performing multi-scale detail decomposition. The proposed method is based on an accelerated iterative shrinkage algorithm and can process high-definition color images in real time on modern GPUs. Our strategy for accelerating the smoothing process relies on first-order proximal operators. We use the approximation both to design suitable shrinkage operators and to derive a proper warm-start solution. The method supports full color filtering and can be implemented efficiently and easily on both the CPU and the GPU. We demonstrate the performance of the proposed approach on fast multi-scale detail manipulation of low and high dynamic range images and show that it produces good-quality results with reduced processing time.
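As a rough illustration of the accelerated-shrinkage idea this abstract describes (not the authors' implementation; the toy problem, function names, and parameters below are my own), here is a minimal FISTA-style solver for the prototypical $\ell_1$-regularized least-squares problem, where each iteration combines a gradient step on the smooth term, a shrinkage (soft-thresholding) step, and a momentum extrapolation:

```python
import numpy as np

def soft_threshold(v, t):
    # shrinkage: proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    """Accelerated iterative shrinkage (FISTA-style) for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)                       # forward (gradient) step
        x_new = soft_threshold(z - grad / L, lam / L)  # backward (shrinkage) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# tiny demo: recover a sparse vector from noiseless linear measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[:5] = 3.0
x_hat = fista(A, A @ x_true, lam=0.1)
```

The momentum sequence is what turns plain iterative shrinkage (ISTA) into its accelerated variant, improving the worst-case rate from $O(1/k)$ to $O(1/k^2)$.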
PEAR: PEriodic And fixed Rank separation for fast fMRI
In functional MRI (fMRI), faster acquisition via undersampling of data can
improve the spatial-temporal resolution trade-off and increase statistical
robustness through increased degrees-of-freedom. High quality reconstruction of
fMRI data from undersampled measurements requires proper modeling of the data.
We present an fMRI reconstruction approach based on modeling the fMRI signal as
a sum of periodic and fixed rank components, for improved reconstruction from
undersampled measurements. We decompose the fMRI signal into a component which
has a fixed rank and a component consisting of a sum of periodic signals that
is sparse in the temporal Fourier domain. Data reconstruction is performed by
solving a constrained problem that enforces a fixed, moderate rank on one of
the components, and a limited number of temporal frequencies on the other. Our
approach is coined PEAR - PEriodic And fixed Rank separation for fast fMRI.
Experimental results include purely synthetic simulation, a simulation with
real timecourses and retrospective undersampling of a real fMRI dataset.
Evaluation was performed both quantitatively and visually versus ground truth,
comparing PEAR to two additional recent methods for fMRI reconstruction from
undersampled measurements. Results demonstrate that PEAR improves estimation of
the timecourses and activation maps over the compared methods at acceleration
ratios of R=8,16 (for simulated data) and R=6.66,10 (for real data). PEAR
results in reconstruction with higher fidelity than when using a
fixed-rank based model or a conventional Low-rank+Sparse algorithm. We have
shown that splitting the functional information between the components leads to
better modeling of fMRI than state-of-the-art methods.
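The separation principle stated above, a fixed, moderate rank on one component and a limited number of temporal frequencies on the other, can be conveyed with a toy alternating-projection scheme. This is an illustrative sketch under my own assumptions (names, demo data, and the simple alternation), not the PEAR solver:

```python
import numpy as np

def separate_periodic_lowrank(X, rank, n_freqs, n_iter=50):
    """Toy alternating-projection separation of X (voxels x time) into a
    fixed-rank component L and a component P that is sparse in the
    temporal Fourier domain. Illustrative only; not the PEAR algorithm."""
    L = np.zeros_like(X)
    P = np.zeros_like(X)
    for _ in range(n_iter):
        # enforce a fixed, moderate rank on one component (truncated SVD)
        U, s, Vt = np.linalg.svd(X - P, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # enforce a limited number of temporal frequencies on the other
        F = np.fft.fft(X - L, axis=1)
        energy = np.abs(F).sum(axis=0)          # total energy per frequency bin
        keep = np.argsort(energy)[-n_freqs:]
        mask = np.zeros(X.shape[1], dtype=bool)
        mask[keep] = True
        P = np.fft.ifft(F * mask, axis=1).real
    return L, P

# demo: rank-1 background plus a spatially weighted sinusoid
rng = np.random.default_rng(0)
n_vox, n_t = 30, 64
t = np.arange(n_t)
background = np.outer(rng.standard_normal(n_vox), rng.standard_normal(n_t))
periodic = np.outer(rng.standard_normal(n_vox), np.sin(2 * np.pi * 8 * t / n_t))
X = background + periodic
L, P = separate_periodic_lowrank(X, rank=1, n_freqs=2)
```

Each half-step is a projection (truncated SVD, frequency masking), so the residual norm is non-increasing; the real reconstruction additionally accounts for the undersampled measurement operator.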
First order algorithms in variational image processing
Variational methods in imaging are nowadays developing towards a quite
universal and flexible tool, allowing for highly successful approaches on tasks
like denoising, deblurring, inpainting, segmentation, super-resolution,
disparity, and optical flow estimation. The overall structure of such
approaches is of the form $D(Ku) + \alpha R(u) \to \min_u$; where the
functional $D$ is a data fidelity term also depending on some input data $f$
and measuring the deviation of $Ku$ from such, and $R$ is a regularization
functional. Moreover, $K$ is a (often linear) forward operator modeling the
dependence of data on an underlying image $u$, and $\alpha$ is a positive
regularization parameter. While $D$ is often
smooth and (strictly) convex, the current practice almost exclusively uses
nonsmooth regularization functionals. The majority of successful techniques is
using nonsmooth and convex functionals like the total variation and
generalizations thereof or $\ell_1$-norms of coefficients arising from scalar
products with some frame system. The efficient solution of such variational
problems in imaging demands for appropriate algorithms. Taking into account the
specific structure as a sum of two very different terms to be minimized,
splitting algorithms are a quite canonical choice. Consequently this field has
revived the interest in techniques like operator splittings or augmented
Lagrangians. Here we shall provide an overview of methods currently developed
and recent results as well as some computational studies providing a comparison
of different methods and also illustrating their success in applications.
Comment: 60 pages, 33 figures
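To make the splitting idea concrete, here is a minimal ADMM (augmented Lagrangian) sketch for the prototypical problem $\min_x \frac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1$. The instance and names are my own; the point is how the smooth fidelity and the nonsmooth regularizer, the "two very different terms" mentioned above, are handled in separate subproblems:

```python
import numpy as np

def admm_l1(A, b, lam, rho=1.0, n_iter=500):
    """Augmented-Lagrangian splitting (ADMM) for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
    separating the smooth fidelity from the nonsmooth regularizer
    through the consensus constraint x = z."""
    n = A.shape[1]
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached x-update solve
    Atb = A.T @ b
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        x = Q @ (Atb + rho * (z - u))              # smooth (quadratic) subproblem
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # shrinkage step
        u = u + x - z                              # scaled multiplier update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[:5] = 3.0
x_hat = admm_l1(A, A @ x_true, lam=0.1)
```

Each subproblem is easy on its own: a linear solve for the smooth part and a closed-form proximal (shrinkage) step for the nonsmooth part, which is exactly why splitting algorithms are the canonical choice here.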
Robust Rotation Synchronization via Low-rank and Sparse Matrix Decomposition
This paper deals with the rotation synchronization problem, which arises in
global registration of 3D point-sets and in structure from motion. The problem
is formulated in an unprecedented way as a "low-rank and sparse" matrix
decomposition that handles both outliers and missing data. A minimization
strategy, dubbed R-GoDec, is also proposed and evaluated experimentally against
state-of-the-art algorithms on simulated and real data. The results show that
R-GoDec is the fastest among the robust algorithms.
Comment: The material contained in this paper is part of a manuscript
submitted to CVI
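A plain GoDec-style alternation conveys the "low-rank and sparse" decomposition this formulation builds on. This is a simplified sketch under my own toy setup; R-GoDec itself additionally handles missing data and the rotation-specific structure of the problem:

```python
import numpy as np

def godec(X, rank, card, n_iter=30):
    """GoDec-style alternating decomposition X ~ L + S, with L of fixed
    rank and S containing at most `card` nonzero (outlier) entries."""
    L = np.zeros_like(X)
    for _ in range(n_iter):
        # sparse step: keep the `card` largest-magnitude residual entries
        R = X - L
        thresh = np.sort(np.abs(R), axis=None)[-card]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
        # low-rank step: truncated SVD of the outlier-corrected matrix
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return L, S

# demo: rank-2 matrix corrupted by a few large outliers
rng = np.random.default_rng(0)
L0 = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
S0 = np.zeros((20, 20))
idx = rng.choice(400, size=5, replace=False)
S0.flat[idx] = 50.0
L, S = godec(L0 + S0, rank=2, card=5)
```

In the rotation-synchronization setting, the low-rank part models the consistent block of pairwise relative rotations while the sparse part absorbs outlier measurements.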
GPU-Accelerated Algorithms for Compressed Signals Recovery with Application to Astronomical Imagery Deblurring
Compressive sensing promises to enable bandwidth-efficient on-board
compression of astronomical data by lifting the encoding complexity from the
source to the receiver. The signal is recovered off-line, exploiting GPUs'
parallel computation capabilities to speed up the reconstruction process.
However, inherent GPU hardware constraints limit the size of the recoverable
signal and the speedup practically achievable. In this work, we design parallel
algorithms that exploit the properties of circulant matrices for efficient
GPU-accelerated sparse signals recovery. Our approach reduces the memory
requirements, allowing us to recover very large signals with limited memory. In
addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc
parallelization of matrix-vector multiplications and matrix inversions.
Finally, we practically demonstrate our algorithms in a typical application of
circulant matrices: deblurring a sparse astronomical image in the compressed
domain.
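The key property exploited here is that circulant matrices are diagonalized by the DFT, so both matrix-vector products and inversions can run in the Fourier domain without ever materializing the matrix, which is precisely how the memory requirements stay O(n). A minimal NumPy sketch of this standard identity (names and demo are illustrative, not the paper's GPU code):

```python
import numpy as np

def circulant_matvec(c, x):
    """y = C x for the circulant matrix C whose first column is c, via the
    diagonalization C = F^H diag(F c) F: O(n log n) time, O(n) memory."""
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

def circulant_solve(c, b):
    """Solve C x = b by dividing in the Fourier domain (assumes the
    eigenvalues fft(c) have no zero entries)."""
    return np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real

# check both against an explicitly materialized circulant matrix
rng = np.random.default_rng(1)
n = 16
c = rng.standard_normal(n)
x = rng.standard_normal(n)
C = np.column_stack([np.roll(c, j) for j in range(n)])  # C[i, j] = c[(i-j) mod n]
```

On a GPU the two FFTs and the pointwise product map directly onto batched library primitives, which is where the reported speedup comes from.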
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of $\ell_2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, that is particularly well suited to solve the
corresponding large-scale regularized optimization problem.
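The forward-backward proximal splitting scheme of point (iii) can be sketched for the group-sparsity prior mentioned above: a gradient (forward) step on the fidelity followed by the proximal (backward) step of the regularizer, which for group sparsity is block soft-thresholding. The problem instance and names below are illustrative assumptions:

```python
import numpy as np

def prox_group(x, groups, t):
    """Proximal operator of t * sum_g ||x_g||_2: block soft-thresholding,
    which shrinks each group radially or sets it exactly to zero."""
    out = x.copy()
    for g in groups:
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= t else (1.0 - t / nrm) * x[g]
    return out

def forward_backward(A, b, lam, groups, n_iter=1000):
    """Forward-backward splitting for
    min_x 0.5 * ||A x - b||^2 + lam * sum_g ||x_g||_2."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1 / Lipschitz const.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox_group(x - gamma * (A.T @ (A @ x - b)), groups, gamma * lam)
    return x

# demo: signal supported on one of five groups of four coefficients
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
groups = [np.arange(4 * i, 4 * i + 4) for i in range(5)]
x_true = np.zeros(20)
x_true[groups[1]] = 2.0
x_hat = forward_backward(A, A @ x_true, lam=0.1, groups=groups)
```

Because the proximal step zeroes entire groups exactly, the iterates identify the active low-dimensional model in finitely many steps, the manifold-identification behavior the review analyzes.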