Laplace deconvolution and its application to Dynamic Contrast Enhanced imaging
In the present paper we consider the problem of Laplace deconvolution with
noisy discrete observations. The study is motivated by Dynamic Contrast
Enhanced imaging using a bolus of contrast agent, a procedure which allows
considerable improvement in evaluating the quality of a vascular network and
its permeability and is widely used in medical assessment of brain flows or
cancerous tumors. Although the study is motivated by a medical imaging
application, we obtain a solution of the general problem of Laplace
deconvolution based on noisy data, which appears in many different contexts. We
propose a new
method for Laplace deconvolution which is based on expansions of the
convolution kernel, the unknown function, and the observed signal over a
Laguerre function basis. The expansion results in a small system of linear equations
with the matrix of the system being triangular and Toeplitz. The number of
terms in the expansion of the estimator is controlled via a complexity
penalty. The advantage of this methodology is that it leads to very fast
computations, does not require exact knowledge of the kernel, and produces no
boundary effects due to extension at zero and cut-off at the end of the
observation interval. The technique leads to an estimator whose risk is within
a logarithmic factor of the oracle risk under no assumptions on the model, and
within a constant factor of
the oracle risk under mild assumptions. The methodology is illustrated by a
finite-sample simulation study which includes an example of a kernel obtained
in real-life DCE experiments. Simulations confirm that the proposed
technique is fast, efficient, accurate, usable from a practical point of view,
and competitive.
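The computational heart of the method, as described, is a lower-triangular Toeplitz linear system relating the Laguerre coefficients of the kernel, the unknown function, and the observed signal. The sketch below illustrates only that structure, not the authors' estimator: the assumption that the matrix's first column is given directly by the kernel coefficients is a simplification (the paper derives the exact entries), and the complexity-penalized choice of the expansion length is omitted.

```python
import numpy as np
from scipy.linalg import solve_toeplitz  # Levinson-type Toeplitz solver

def laguerre_deconvolve(g_coef, q_coef, m):
    """Recover the first m Laguerre coefficients of the unknown function f
    from the coefficients of the kernel g and the observed signal q,
    assuming (for illustration) that the system G f = q is lower-triangular
    Toeplitz with first column g_coef[:m]."""
    col = np.asarray(g_coef[:m], dtype=float)
    row = np.zeros(m)
    row[0] = col[0]              # zero first-row tail => lower-triangular
    return solve_toeplitz((col, row), np.asarray(q_coef[:m], dtype=float))

# Self-check on synthetic coefficients: build q = G f, then invert.
rng = np.random.default_rng(0)
m = 16
g = rng.normal(size=m)
g[0] = 1.0                       # nonzero diagonal => invertible system
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(m)]
              for i in range(m)])
f_true = rng.normal(size=m)
f_hat = laguerre_deconvolve(g, G @ f_true, m)
assert np.allclose(f_hat, f_true)
```

Because the system is triangular with a nonzero diagonal, the solve is stable and the first m coefficients do not change when more terms are added, which is what makes a single penalized choice of m natural.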
Understanding Kernel Size in Blind Deconvolution
Most blind deconvolution methods pre-define a large kernel size to
guarantee the support domain. Blur kernel estimation error is likely to be
introduced, yielding severe artifacts in deblurring results. In this paper, we
first analyze, theoretically and experimentally, the mechanism behind the
estimation error of oversized kernels, and show that it persists even on blurry
images without noise. To suppress this adverse effect, we propose a low-rank
regularization on the blur kernel that exploits the structural information in
degraded kernels, by which the oversized-kernel effect can be effectively
suppressed, and we
propose an efficient optimization algorithm to solve it. Experimental results
on benchmark datasets show that the proposed method is comparable with the
state of the art when a proper kernel size is set, and performs much better,
both quantitatively and qualitatively, in handling larger kernel sizes.
Deblurring results on real-world blurry images further validate the
effectiveness of the proposed method. Comment: Accepted by WACV 201
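One way to read the low-rank regularization is as a projection of the estimated blur kernel onto its leading singular components between optimization steps. The sketch below is an illustration under that reading, not the paper's algorithm; the rank value, the nonnegativity/normalization steps, and the projection-in-the-loop placement are all assumptions.

```python
import numpy as np

def low_rank_project(kernel, rank=3):
    """Project a 2-D blur-kernel estimate onto a rank-`rank` approximation,
    then restore nonnegativity and unit mass (standard kernel constraints).
    A hypothetical building block for low-rank kernel regularization."""
    U, s, Vt = np.linalg.svd(kernel, full_matrices=False)
    s[rank:] = 0.0                        # keep only leading singular values
    k = (U * s) @ Vt                      # rank-truncated reconstruction
    k = np.clip(k, 0.0, None)             # blur kernels are nonnegative
    return k / k.sum()                    # and integrate to one

# Example: an oversized estimate of a small motion blur keeps its structure
# after projection, while high-rank noise across the support is damped.
k_est = np.zeros((31, 31))
k_est[15, 10:21] = 1.0                    # true horizontal motion blur
k_est += 0.01 * np.abs(np.random.default_rng(1).normal(size=k_est.shape))
k_reg = low_rank_project(k_est, rank=2)
```

The intuition matches the abstract: real blur kernels (motion streaks, defocus disks) have strong structure and hence low numerical rank, while the spurious content that an oversized support invites does not survive the truncation.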
Laplace deconvolution on the basis of time domain data and its application to Dynamic Contrast Enhanced imaging
In the present paper we consider the problem of Laplace deconvolution with
noisy discrete non-equally spaced observations on a finite time interval. We
propose a new method for Laplace deconvolution which is based on expansions of
the convolution kernel, the unknown function, and the observed signal over a
Laguerre function basis (which acts as a surrogate eigenfunction basis of the
Laplace convolution operator) in a regression setting. The expansion results
in a small system of linear equations with the matrix of the system being
triangular and Toeplitz. Due to this triangular structure, a single number of
terms controls all of the function expansions, and it is selected via a
complexity penalty. The advantage of this methodology is that it leads to very
fast computations, produces no boundary effects due to extension at zero and
cut-off at the end of the observation interval, and provides an estimator whose
risk is within a logarithmic
factor of the oracle risk. We emphasize that, in the present paper, we consider
the true observational model with possibly non-equispaced observations which
are available only on an interval of finite length, a setting which appears in
many different contexts, and we account for the bias associated with this model
(a bias which is not present when the observation interval is infinite). The
study is motivated by perfusion imaging
using a short injection of contrast agent, a procedure which is applied for
medical assessment of micro-circulation within tissues such as cancerous
tumors. The presence of a tuning parameter allows one to choose the most
advantageous time units, so that both the kernel and the unknown right-hand
side of the equation are well represented for the deconvolution. The
methodology is illustrated by an extensive simulation study and a real data
example which confirms that the proposed technique is fast, efficient,
accurate, usable from a practical point of view, and very competitive. Comment: 36 pages, 9 figures. arXiv admin note: substantial text overlap with
arXiv:1207.223
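The regression step, estimating the Laguerre coefficients of the observed signal from noisy, non-equispaced samples, can be pictured with a small sketch. The scaled Laguerre functions below use one common convention, phi_k(t) = sqrt(2a) L_k(2at) exp(-at); whether this matches the paper's exact scaling of the tuning parameter is an assumption, and the toy signal and noise level are invented for illustration.

```python
import numpy as np
from scipy.special import eval_laguerre  # Laguerre polynomials L_k

def laguerre_basis(t, m, a=1.0):
    """Evaluate the first m scaled Laguerre functions at times t.
    phi_k(t) = sqrt(2a) * L_k(2at) * exp(-at); the tuning parameter a sets
    the time units, as described in the abstract."""
    t = np.asarray(t, dtype=float)
    return np.column_stack(
        [np.sqrt(2 * a) * eval_laguerre(k, 2 * a * t) * np.exp(-a * t)
         for k in range(m)])

# Regression estimate of the observed signal's Laguerre coefficients from
# noisy, non-equispaced observations (t_i, y_i) on a finite interval:
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, size=200))        # non-equispaced times
y = np.exp(-t) * t + 0.05 * rng.normal(size=t.size)  # toy signal + noise
Phi = laguerre_basis(t, m=12, a=0.5)
q_coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # least-squares fit
```

Varying a rescales the whole basis at once, which is why a single well-chosen value can make both the kernel and the right-hand side compactly representable.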
Rapid deconvolution of low-resolution time-of-flight data using Bayesian inference
The deconvolution of low-resolution time-of-flight data has numerous advantages, including the ability to extract additional information from the experimental data. We augment the well-known Lucy-Richardson deconvolution algorithm with various Bayesian prior distributions and show that a prior on the second differences of the signal outperforms the standard Lucy-Richardson algorithm, accelerating the rate of convergence by more than a factor of four while preserving the peak amplitude ratios of a similar fraction of the total peaks. A novel stopping criterion and a boosting mechanism are implemented to ensure that these methods converge to a similar final entropy and that local minima are avoided. A factor-of-two improvement in mass resolution allows more accurate quantification of the spectra. The general method is demonstrated in this paper through the deconvolution of fragmentation peaks of the 2,5-dihydroxybenzoic acid matrix and the benzyltriphenylphosphonium thermometer ion, following femtosecond ultraviolet laser desorption.
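The baseline the paper builds on, the Richardson-Lucy iteration, is compact enough to state directly. Below is a minimal 1-D sketch of the standard, unregularized multiplicative update; the Bayesian second-difference prior, stopping criterion, and boosting described in the abstract are not reproduced here.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Standard 1-D Richardson-Lucy deconvolution.
    observed : measured spectrum (nonnegative)
    psf      : instrument response, normalized here to sum to 1
    Returns the deconvolved estimate after n_iter multiplicative updates."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                       # adjoint of the blur operator
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)   # data / model
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate
```

The multiplicative form keeps the estimate nonnegative at every step; with a known instrument response a few dozen iterations already sharpen overlapping peaks, and the paper's contribution is to regularize this iteration with a prior and to decide when to stop it.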
On the Inversion of High Energy Proton
Inversion of the K-fold stochastic autoconvolution integral equation is an
elementary nonlinear problem, yet there are no de facto methods to solve it
with finite statistics. To address this problem, we introduce a novel inverse
algorithm based on a combination of relative-entropy minimization, the Fast
Fourier Transform, and a recursive version of Efron's bootstrap. This gives us
the power to obtain new perspectives on non-perturbative high-energy QCD, such as
probing the ab initio principles underlying the approximately negative binomial
distributions of observed charged particle final state multiplicities, related
to multiparton interactions, the fluctuating structure and profile of the
proton, and diffraction. As a proof of concept, we apply the algorithm to ALICE
proton-proton charged particle multiplicity measurements done at different
center-of-mass energies and fiducial pseudorapidity intervals at the LHC,
available on HEPData. A strong double peak structure emerges from the
inversion, barely visible without it. Comment: 29 pages, 10 figures, v2: extended analysis (re-projection ratios,
2D
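The forward model here, a K-fold autoconvolution, is diagonal in Fourier space, which is what makes an FFT-based inversion natural: the characteristic function of the K-fold convolution is the K-th power of the original. Below is a minimal sketch of that forward map together with a naive spectral K-th-root inverse; the relative-entropy minimization and recursive bootstrap that the paper actually uses are not shown, and the toy distribution is invented.

```python
import numpy as np

def autoconvolve_k(p, k):
    """K-fold autoconvolution of a discrete distribution p via the FFT:
    the transform of the K-fold convolution is fft(p) ** k."""
    n = k * (len(p) - 1) + 1          # support size of the k-fold result
    ph = np.fft.rfft(p, n)
    q = np.fft.irfft(ph ** k, n)
    return np.clip(q, 0.0, None)      # clip tiny negative FFT round-off

def spectral_root_inverse(q, k, m):
    """Naive K-th-root inverse: |Q|^(1/k) with unwrapped phase / k.
    Works approximately for smooth, noise-free q; with finite statistics it
    is unstable, which is why the paper replaces this step with
    relative-entropy minimization plus a recursive bootstrap."""
    qh = np.fft.rfft(q)
    mag = np.abs(qh) ** (1.0 / k)
    phase = np.unwrap(np.angle(qh)) / k
    p = np.fft.irfft(mag * np.exp(1j * phase), len(q))[:m]
    p = np.clip(p, 0.0, None)
    return p / p.sum()

# Round trip on a smooth toy multiplicity distribution (noise-free case):
p = np.exp(-0.5 * ((np.arange(30) - 8.0) / 4.0) ** 2)
p /= p.sum()
q = autoconvolve_k(p, k=3)
p_rec = spectral_root_inverse(q, k=3, m=30)   # approximately recovers p
```

With sampled histograms instead of exact distributions, the high-frequency part of Q is dominated by statistical noise and the root amplifies it, which is exactly the finite-statistics difficulty the abstract refers to.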
Fast evaluation of the Rayleigh integral and applications to inverse acoustics
In this paper we present a fast evaluation of the Rayleigh integral, which leads to fast and robust solutions in inverse acoustics. The method commonly used to reconstruct acoustic sources on a plane in space is Planar Nearfield Acoustic Holography (PNAH). Some of the most important recent improvements in PNAH address the alleviation of spatial windowing effects that arise due to the application of a Fast Fourier Transform to a finite spatial measurement grid. Although these improvements have increased the accuracy of the method, errors such as leakage and edge degradation cannot be removed completely. Such errors do not occur when numerical models such as the Boundary Element Method (BEM) are used. Moreover, the forward models involved converge to the exact solution as the number of elements tends to infinity. However, the time and computer memory needed to solve these problems to an acceptable accuracy are large. We present a fast (O(n log n) per iteration) and memory-efficient (O(n)) solution to the planar acoustic problem by exploiting the fact that the transfer matrix associated with a numerical implementation of the Rayleigh integral is Toeplitz. In this paper we address both the fundamentals of the method and its application in inverse acoustics. Special attention is paid to a comparison between experimental results from PNAH, IBEM, and the proposed method.
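The O(n log n) cost comes from a standard trick: a Toeplitz matrix-vector product can be computed by embedding the Toeplitz matrix in a circulant matrix, which the FFT diagonalizes. A minimal 1-D sketch follows; the paper's transfer matrix is a 2-D (block-Toeplitz) version of the same idea.

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply the n x n Toeplitz matrix with first column `col` and first
    row `row` (row[0] == col[0]) by x in O(n log n), by embedding it in a
    circulant matrix of size 2n - 1 and diagonalizing with the FFT."""
    n = len(x)
    # First column of the circulant embedding: col, then the reversed tail
    # of row, so that the top-left n x n block of the circulant is Toeplitz.
    c = np.concatenate([col, row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, len(c)))
    return y[:n].real   # for real col/row/x the imaginary part is round-off

# Check against the dense product:
from scipy.linalg import toeplitz
rng = np.random.default_rng(0)
col, row, x = rng.normal(size=64), rng.normal(size=64), rng.normal(size=64)
row[0] = col[0]
assert np.allclose(toeplitz(col, row) @ x, toeplitz_matvec(col, row, x))
```

Only the defining column and row are stored, which is the O(n) memory claim; an iterative solver built on this matvec then never forms the transfer matrix explicitly.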
Truncated decompositions and filtering methods with Reflective/Anti-Reflective boundary conditions: a comparison
The paper analyzes and compares some spectral filtering methods as truncated
singular/eigen-value decompositions and Tikhonov/Re-blurring regularizations in
the case of the recently proposed Reflective [M.K. Ng, R.H. Chan, and W.C.
Tang, A fast algorithm for deblurring models with Neumann boundary conditions,
SIAM J. Sci. Comput., 21 (1999), no. 3, pp.851-866] and Anti-Reflective [S.
Serra Capizzano, A note on anti-reflective boundary conditions and fast
deblurring models, SIAM J. Sci. Comput., 25-3 (2003), pp. 1307-1325] boundary
conditions. We give numerical evidence that spectral decompositions (SDs)
provide good image restoration quality, in particular for the Anti-Reflective
SD, despite the loss of orthogonality in the associated transform. The related
computational cost is comparable with that of previously known spectral
decompositions and substantially lower than that of the singular value
decomposition. The extension of the model to the cross-channel blurring
phenomenon of color images is also considered, and the related spectral
filtering methods are suitably adapted. Comment: 22 pages, 10 figures
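For the Reflective (Neumann) boundary conditions of Ng, Chan, and Tang, the blurring matrix of a symmetric PSF is diagonalized by the discrete cosine transform, so Tikhonov-filtered deblurring reduces to pointwise operations on DCT coefficients. Below is a minimal 1-D sketch under that assumption (symmetric PSF, reflective padding); the Anti-Reflective transform is more involved and not shown, and the regularization parameter is arbitrary.

```python
import numpy as np
from scipy.fft import dct, idct

def blur_reflective(x, psf):
    """Blur a 1-D signal with a symmetric PSF under reflective (Neumann)
    boundary conditions, implemented via symmetric padding."""
    m = len(psf) // 2
    xp = np.pad(x, m, mode="symmetric")
    return np.convolve(xp, psf, mode="valid")

def dct_eigenvalues(psf, n):
    """Eigenvalues of the reflective-BC blur matrix: for a symmetric PSF it
    is diagonalized by the orthonormal DCT-II, so the eigenvalues can be
    read off from the transform of its first column."""
    e1 = np.zeros(n)
    e1[0] = 1.0
    a1 = blur_reflective(e1, psf)          # first column of the blur matrix
    return dct(a1, norm="ortho") / dct(e1, norm="ortho")

def tikhonov_deblur(b, psf, alpha=1e-3):
    """Tikhonov spectral filtering in the DCT domain:
    x = C^T [ lam / (lam^2 + alpha) * (C b) ]."""
    lam = dct_eigenvalues(psf, len(b))
    bh = dct(b, norm="ortho")
    return idct(lam / (lam**2 + alpha) * bh, norm="ortho")

# Toy check: a symmetric PSF, a piecewise-constant signal.
psf = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
psf /= psf.sum()
x = np.repeat([0.0, 1.0, 0.3], 40)
b = blur_reflective(x, psf)
x_hat = tikhonov_deblur(b, psf, alpha=1e-4)
```

The filter factor lam/(lam^2 + alpha) is exactly the spectral-filtering viewpoint the paper compares across decompositions: truncated SVD/eigen-decompositions replace it with a hard cutoff, while Tikhonov/Re-blurring damps small eigenvalues smoothly.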