An Efficient Algorithm for Video Super-Resolution Based On a Sequential Model
In this work, we propose a novel procedure for video super-resolution, that
is the recovery of a sequence of high-resolution images from its low-resolution
counterpart. Our approach is based on a "sequential" model (i.e., each
high-resolution frame is supposed to be a displaced version of the preceding
one) and considers the use of sparsity-enforcing priors. Both the recovery of
the high-resolution images and the estimation of the motion fields relating
them are tackled. This
leads to a large-dimensional, non-convex and non-smooth problem. We propose an
algorithmic framework to address the latter. Our approach relies on fast
gradient evaluation methods and modern optimization techniques for
non-differentiable/non-convex problems. Unlike some previous works, we
show that there exists a provably-convergent method with a complexity linear in
the problem dimensions. We assess the proposed optimization method on several
video benchmarks and emphasize its good performance with respect to the state
of the art.
Comment: 37 pages, SIAM Journal on Imaging Sciences, 201
Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints
Regularization of ill-posed linear inverse problems via ℓ1 penalization
has been proposed for cases where the solution is known to be (almost) sparse.
One way to obtain the minimizer of such an ℓ1-penalized functional is via
an iterative soft-thresholding algorithm. We propose an alternative
implementation for ℓ1-constraints, using a gradient method with
projection on ℓ1-balls. The corresponding algorithm uses again iterative
soft-thresholding, now with a variable thresholding parameter. We also propose
accelerated versions of this iterative method, using ingredients of the
(linear) steepest descent method. We prove convergence in norm for one of these
projected gradient methods, without and with acceleration.
Comment: 24 pages, 5 figures. v2: added reference, some amendments, 27 pages
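The projected gradient iteration described above can be sketched as follows (an illustrative reimplementation in plain Python, not the authors' code; the ℓ1-ball projection uses the standard sort-based method and is itself a soft-thresholding with a data-dependent threshold):

```python
# Sketch: projected gradient for min ||A x - y||^2 s.t. ||x||_1 <= radius.

def soft_threshold(v, t):
    """Componentwise soft-thresholding S_t(v)."""
    return [max(abs(x) - t, 0.0) * (1 if x >= 0 else -1) for x in v]

def project_l1_ball(v, radius):
    """Euclidean projection onto {x : ||x||_1 <= radius} (sort-based)."""
    if sum(abs(x) for x in v) <= radius:
        return list(v)
    u = sorted((abs(x) for x in v), reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        t = (cumsum - radius) / i
        if ui > t:          # the active set is a prefix of the sorted |v|
            theta = t
    return soft_threshold(v, theta)

def projected_gradient(A, y, radius, step, iters=200):
    """Gradient step on ||A x - y||^2 followed by l1-ball projection."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = project_l1_ball([x[j] - step * g[j] for j in range(n)], radius)
    return x
```

The fixed step above stands in for the variable step sizes and acceleration the paper develops; convergence requires the step to be small relative to the spectral norm of A.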
Light Field Super-Resolution Via Graph-Based Regularization
Light field cameras capture the 3D information in a scene with a single
exposure. This special feature makes light field cameras very appealing for a
variety of applications: from post-capture refocus, to depth estimation and
image-based rendering. However, light field cameras suffer by design from
strong limitations in their spatial resolution, which should therefore be
augmented by computational methods. On the one hand, off-the-shelf single-frame
and multi-frame super-resolution algorithms are not ideal for light field data,
as they do not consider its particular structure. On the other hand, the few
super-resolution algorithms explicitly tailored for light field data exhibit
significant limitations, such as the need to estimate an explicit disparity map
at each view. In this work we propose a new light field super-resolution
algorithm meant to address these limitations. We adopt an approach akin to
multi-frame super-resolution, where the complementary information in the different
light field views is used to augment the spatial resolution of the whole light
field. We show that coupling the multi-frame approach with a graph regularizer
that enforces the light field structure via nonlocal self-similarities makes it
possible to avoid the costly and challenging disparity estimation step for all
the views. Extensive experiments show that the new algorithm compares favorably to
the other state-of-the-art methods for light field super-resolution, both in
terms of PSNR and visual quality.
Comment: This new version includes more material. In particular, we added: a
new section on the computational complexity of the proposed algorithm,
experimental comparisons with a CNN-based super-resolution algorithm, and new
experiments on a third dataset
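The graph regularizer idea can be illustrated schematically (our simplified construction, not the paper's exact one): nonlocal similarity weights between patches define a graph Laplacian L = D − W, and the quadratic form x^T L x penalizes intensity differences between pixels whose surrounding patches look alike.

```python
import math

def similarity_weights(patches, sigma=1.0):
    """w[i][j] = exp(-||patch_i - patch_j||^2 / sigma^2), zero diagonal."""
    n = len(patches)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = sum((a - b) ** 2 for a, b in zip(patches[i], patches[j]))
                w[i][j] = math.exp(-d2 / sigma ** 2)
    return w

def laplacian_quadratic(x, w):
    """x^T L x = 0.5 * sum_ij w_ij (x_i - x_j)^2 for Laplacian L = D - W."""
    n = len(x)
    return 0.5 * sum(w[i][j] * (x[i] - x[j]) ** 2
                     for i in range(n) for j in range(n))
```

In a super-resolution objective this term would be added to the multi-frame data-fidelity term, coupling pixels across views without an explicit disparity map.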
SNR Enhancement in Brillouin Microspectroscopy using Spectrum Reconstruction
Brillouin imaging suffers from intrinsically low signal-to-noise ratios
(SNR). Such low SNRs can render common data analysis protocols unreliable.
In this work we exploit two denoising
algorithms, namely maximum entropy reconstruction (MER) and wavelet analysis
(WA), to improve the accuracy and precision in the determination of Brillouin
shifts and linewidths. Algorithm performance is quantified using Monte Carlo
simulations and benchmarked against the Cramér-Rao lower bound. Superior
estimation results are demonstrated even at low SNRs. Denoising was
furthermore applied to experimental Brillouin spectra of distilled water at
room temperature, allowing the speed of sound in water to be extracted.
Experimental and theoretical values were found to be consistent at unity SNR.
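A minimal wavelet-analysis denoising step can be sketched as follows (illustrative only; the WA pipeline in the paper is more involved): a one-level Haar transform, soft-thresholding of the detail coefficients, and the inverse transform.

```python
SQRT2 = 2 ** 0.5

def haar_denoise(signal, threshold):
    """One-level Haar denoising of an even-length signal."""
    approx = [(a + b) / SQRT2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / SQRT2 for a, b in zip(signal[::2], signal[1::2])]
    # Soft-thresholding shrinks small (noise-dominated) detail coefficients.
    detail = [max(abs(d) - threshold, 0.0) * (1 if d >= 0 else -1)
              for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / SQRT2, (a - d) / SQRT2]
    return out
```

In practice several decomposition levels and a noise-calibrated threshold would be used before fitting the Brillouin peak position and linewidth.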
Development Of A High Performance Mosaicing And Super-Resolution Algorithm
In this dissertation, a high-performance mosaicing and super-resolution algorithm is described. The scale-invariant feature transform (SIFT)-based mosaicing algorithm builds an initial mosaic, which is iteratively updated by the robust super-resolution algorithm to achieve the final high-resolution mosaic. Two different types of datasets are used for testing: high-altitude balloon data and unmanned aerial vehicle data. To evaluate our algorithm, five performance metrics are employed: mean square error, peak signal-to-noise ratio, singular value decomposition, slope of the reciprocal singular value curve, and cumulative probability of blur detection. Extensive testing shows that the proposed algorithm is effective in improving the captured aerial data and that the performance metrics provide an accurate quantitative evaluation of the algorithm.
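Two of the metrics listed above have standard textbook definitions, shown here for 8-bit images (these are the generic formulas, not code from the dissertation):

```python
import math

def mse(ref, img):
    """Mean square error between two equal-size pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(ref, img)
    return math.inf if e == 0 else 10.0 * math.log10(peak ** 2 / e)
```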
Collaborative patch-based super-resolution for diffusion-weighted images
In this paper, a new single image acquisition super-resolution method is proposed to increase the image resolution of diffusion-weighted (DW) images. Based on a nonlocal patch-based strategy, the proposed method uses a non-diffusion image (b0) to constrain the reconstruction of DW images. An extensive validation is presented with a gold standard built on averaging 10 high-resolution DW acquisitions. A comparison with classical interpolation methods such as trilinear and B-spline demonstrates the competitive results of our proposed approach in terms of improvements in image reconstruction, fractional anisotropy (FA) estimation, generalized FA and angular reconstruction for tensor and high angular resolution diffusion imaging (HARDI) models. Besides, first results of reconstructed ultra-high-resolution DW images are presented at 0.6 × 0.6 × 0.6 mm^3 and 0.4 × 0.4 × 0.4 mm^3, using our gold standard based on the average of 10 acquisitions, and on a single acquisition. Finally, fiber tracking results show the potential of the proposed super-resolution approach to accurately analyze white matter brain architecture.

We thank the reviewers for their useful comments that helped improve the paper. We also want to thank Pr Louis Collins for proofreading this paper and his fruitful comments. Finally, we want to thank Martine Bordessoules for her help during image acquisition of the DWI used to build the phantom. This work has been supported by the French grant "HR-DTI" ANR-10-LABX-57 funded by TRAIL from the French Agence Nationale de la Recherche within the context of the Investments for the Future program. This work has also been partially supported by the French National Agency for Research (Project MultImAD; ANR-09-MNPS-015-01) and by the Spanish grant TIN2011-26727 from the Ministerio de Ciencia e Innovacion. This work benefited from the use of FSL (http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/), FiberNavigator (code.google.com/p/fibernavigator/), MRtrix software (http://www.brain.org.au/software/mrtrix/) and ITKsnap (www.itk.org).

Coupé, P.; Manjón Herrera, JV.; Chamberland, M.; Descoteaux, M.; Hiba, B. (2013). Collaborative patch-based super-resolution for diffusion-weighted images. NeuroImage. 83:245-261. https://doi.org/10.1016/j.neuroimage.2013.06.030
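The guided nonlocal idea can be sketched in 1-D (a heavy simplification of the paper's collaborative scheme, kept only to convey the role of the b0 image): similarity weights computed on a guide signal drive a weighted average of the target signal.

```python
import math

def guided_nonlocal_average(target, guide, radius=2, h=1.0):
    """Each output sample is a similarity-weighted mean of its neighbours,
    with weights computed on the guide (the b0 image's role)."""
    n = len(target)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = math.exp(-((guide[i] - guide[j]) ** 2) / h ** 2)
            num += w * target[j]
            den += w
        out.append(num / den)
    return out
```

With a constant guide this reduces to a plain moving average; across an edge in the guide the weights collapse, so structure in the b0 image is transferred to the reconstructed DW signal.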
Super-Resolution of Positive Sources: the Discrete Setup
In single-molecule microscopy it is necessary to locate with high precision
point sources from noisy observations of the spectrum of the signal at
frequencies capped by a cut-off, which is just about the frequency of natural
light. This paper rigorously establishes that this super-resolution problem can
be solved via linear programming in a stable manner. We prove that the quality
of the reconstruction crucially depends on the Rayleigh regularity of the
support of the signal; that is, on the maximum number of sources that can occur
within a square of sufficiently small side length. The theoretical performance
guarantee is complemented with a converse result showing that our simple convex
program is nearly optimal. Finally, numerical experiments illustrate our
methods.
Comment: 31 pages, 7 figures
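In schematic form (our notation; the paper's precise statement differs in its details), the convex program seeks a nonnegative source vector whose low-frequency Fourier coefficients match the data:

```latex
\text{find } x \in \mathbb{R}^N,\; x \ge 0,
\quad \text{such that} \quad
\bigl|(\mathcal{F}x)_m - y_m\bigr| \le \delta
\quad \text{for all observed frequencies } m,
```

where $\mathcal{F}$ denotes the discrete Fourier transform, $y$ the noisy spectral observations, and $\delta$ a bound on the noise level. Positivity of $x$ takes the place of an explicit sparsity penalty, and the feasibility problem can be cast as a linear program.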