Blind Deconvolution of Ultrasonic Signals Using High-Order Spectral Analysis and Wavelets
Defect detection by the ultrasonic method is limited by the pulse width.
Resolution can be improved through a deconvolution process, using either a priori
information about the pulse or an estimate of it. In this paper, a regularization
of the Wiener filter using wavelet shrinkage is presented for the estimation of
the reflectivity function. The final result shows an improved signal-to-noise
ratio with better axial resolution.
Comment: 8 pages, CIARP 2005, LNCS 377
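The Wiener-filter-plus-wavelet-shrinkage idea described above can be sketched in a few lines of NumPy. This is a toy 1D illustration, not the authors' implementation: the pulse shape, noise-to-signal ratio, and threshold are assumed values, and a single-level Haar transform stands in for a full wavelet decomposition.

```python
import numpy as np

def wiener_deconvolve(y, pulse, nsr=1e-2):
    """Frequency-domain Wiener filter: conj(H) / (|H|^2 + NSR)."""
    H = np.fft.fft(pulse, n=len(y))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(y) * W))

def haar_shrink(x, thresh):
    """One-level Haar transform, soft-threshold the detail band, invert."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)               # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)               # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft shrinkage
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

# Toy reflectivity: two reflectors, blurred by a Gaussian pulse, plus noise.
rng = np.random.default_rng(0)
n = 64
r = np.zeros(n)
r[20], r[40] = 1.0, -0.7
pulse = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
y = np.real(np.fft.ifft(np.fft.fft(r) * np.fft.fft(pulse, n)))  # circular blur
y += 0.01 * rng.standard_normal(n)

est = haar_shrink(wiener_deconvolve(y, pulse), thresh=0.05)
```

The shrinkage step suppresses the noise that the Wiener inverse amplifies near the pulse's spectral roll-off, which is what allows a less conservative regularization constant and hence better axial resolution.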
Inverse Problems in Image Processing: Blind Image Restoration
Blind image restoration is the estimation of the degradation in an image, without any prior knowledge of the degradation system, and the use of this estimate to help restore the original image, i.e., the image before it was degraded. In this thesis, after estimating the degradation system in the form of Gaussian blur and noise, we employ deconvolution to help restore the original image.
We use a redundant-wavelet-based technique to estimate the blur from high-frequency information in the image itself. The Lipschitz exponent, a measure of the local regularity of signals, is computed from the evolution of the wavelet coefficients of singularities across scales. It has been shown that this exponent is related to the blur in the image, and we use it here to estimate the standard deviation of the Gaussian blur. The properties of wavelets also enable us to compute the noise variance in the image. We then employ two deconvolution schemes, a strictly Fourier-domain regularized iterative Wiener filtering approach and a Fourier-wavelet cascaded approach with regularized iterative Wiener filtering, each computing an estimate of the restored image from the blur and noise-variance estimates obtained earlier.
The estimated standard deviation of the blur yielded robust deconvolution estimates. The results show that Fourier-domain regularized iterative Wiener filtering provides a more stable output estimate than the Iterative Filtering with Additive Correction methods, especially as the number of iterations grows. The performance of the Fourier-wavelet cascaded deconvolution appears to be image dependent, although it outperforms the strictly Fourier-domain approach in some cases, as gauged by visual quality and mean squared error.
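The wavelet-based noise-variance estimate mentioned above is commonly computed from the median absolute deviation of the finest-scale detail coefficients. A minimal sketch follows; the ramp image and the noise level are arbitrary choices, and the thesis's Lipschitz-exponent blur estimation is not reproduced here.

```python
import numpy as np

def noise_sigma_mad(img):
    """Robust noise estimate from finest-scale Haar detail coefficients:
    sigma ~ median(|d|) / 0.6745 (the standard MAD rule)."""
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)  # horizontal details
    return np.median(np.abs(d)) / 0.6745

# Smooth ramp image plus Gaussian noise of known std 0.1.
rng = np.random.default_rng(1)
clean = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))
noisy = clean + 0.1 * rng.standard_normal((64, 64))
sigma_hat = noise_sigma_mad(noisy)
```

Because image structure is mostly absent from the finest detail band while white noise spreads evenly across all bands, the median of the details is dominated by noise and gives a nearly unbiased estimate of its standard deviation.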
Convolutional Deblurring for Natural Imaging
In this paper, we propose a novel design of image deblurring in the form of
one-shot convolution filtering that can directly convolve with naturally
blurred images for restoration. Optical blurring is a common drawback in
many imaging applications that suffer from optical imperfections. Although
numerous deconvolution methods blindly estimate blurring in either inclusive
or exclusive forms, they are often impractical due to high computational cost
and low image reconstruction quality. Both high accuracy and high speed are
prerequisites for
high-throughput imaging platforms in digital archiving. In such platforms,
deblurring is required after image acquisition before being stored, previewed,
or processed for high-level interpretation. Therefore, on-the-fly correction of
such images is important to avoid possible time delays, mitigate computational
expenses, and increase image perception quality. We bridge this gap by
synthesizing a deconvolution kernel as a linear combination of Finite Impulse
Response (FIR) even-derivative filters that can be directly convolved with
blurry input images to boost the frequency fall-off of the Point Spread
Function (PSF) associated with the optical blur. We employ a Gaussian low-pass
filter to decouple the image denoising problem for image edge deblurring.
Furthermore, we propose a blind approach to estimate the PSF statistics for two
models, Gaussian and Laplacian, that are common in many imaging pipelines.
Thorough experiments are designed to test and validate the efficiency of the
proposed method using 2054 naturally blurred images across six imaging
applications and seven state-of-the-art deconvolution methods.
Comment: 15 pages, for publication in IEEE Transactions on Image Processing
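To illustrate the even-derivative idea: for a Gaussian PSF, the inverse frequency response expands as 1/G(ω) ≈ 1 + (σ²/2)ω² + (σ⁴/8)ω⁴, and ω² and ω⁴ correspond to second- and fourth-derivative FIR stencils. The sketch below is a simplified stand-in for the paper's kernel synthesis; the PSF width and the truncation order are assumptions.

```python
import numpy as np

def even_derivative_deblur_kernel(sigma):
    """Truncated Taylor inverse of a Gaussian PSF realized with
    even-derivative FIR filters: w^2 <-> -[1,-2,1], w^4 <-> [1,-4,6,-4,1]."""
    delta = np.array([0, 0, 1, 0, 0], float)
    d2 = np.array([0, 1, -2, 1, 0], float)        # second-derivative stencil
    d4 = np.array([1, -4, 6, -4, 1], float)       # fourth-derivative stencil
    return delta - (sigma ** 2 / 2) * d2 + (sigma ** 4 / 8) * d4

sigma = 0.8
x = np.arange(-6, 7)
g = np.exp(-0.5 * (x / sigma) ** 2)
g /= g.sum()                                       # discrete Gaussian PSF

signal = np.zeros(64)
signal[32] = 1.0                                   # ideal point source
blurred = np.convolve(signal, g, mode="same")
sharpened = np.convolve(blurred, even_derivative_deblur_kernel(sigma), mode="same")
```

Convolving once with this short kernel boosts the frequency fall-off of the PSF directly on the blurred image, which is the one-shot property the abstract emphasizes.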
An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
We consider linear inverse problems where the solution is assumed to have a
sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that
replacing the usual quadratic regularizing penalties by weighted l^p-penalties
on the coefficients of such expansions, with 1 ≤ p ≤ 2, still
regularizes the problem. If p < 2, regularized solutions of such l^p-penalized
problems will have sparser expansions, with respect to the basis under
consideration. To compute the corresponding regularized solutions we propose an
iterative algorithm that amounts to a Landweber iteration with thresholding (or
nonlinear shrinkage) applied at each iteration step. We prove that this
algorithm converges in norm. We also review some potential applications of this
method.
Comment: 30 pages, 3 figures; this is version 2 - changes with respect to v1:
small correction in proof (but not statement of) lemma 3.15; description of
Besov spaces in intro and app A clarified (and corrected); smaller pointsize
(making 30 instead of 38 pages)
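For p = 1, the proposed iteration is a Landweber step followed by soft thresholding. A minimal sketch on a toy sparse-recovery problem follows; the matrix size, sparsity pattern, and threshold tau are arbitrary choices.

```python
import numpy as np

def soft(x, t):
    """Soft thresholding (nonlinear shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def landweber_shrink(A, y, tau, n_iter=2000):
    """Iterative thresholding for the l^1-penalized problem (p = 1):
    x <- S_tau(x + A^T (y - A x)); convergence requires ||A|| < 1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + A.T @ (y - A @ x), tau)
    return x

# Toy problem: recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= 1.1 * np.linalg.norm(A, 2)        # rescale so that ||A|| < 1
x_true = np.zeros(100)
x_true[[5, 30, 70]] = [1.0, -0.8, 0.5]
y = A @ x_true
x_hat = landweber_shrink(A, y, tau=1e-3)
```

Each iteration is a plain Landweber (gradient) step toward data consistency, and the shrinkage pushes small coefficients to exactly zero, which is how the sparse expansion emerges.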
Rapid, Robust, and Reliable Blind Deconvolution via Nonconvex Optimization
We study the question of reconstructing two signals f and g from their
convolution y = f * g. This problem, known as blind deconvolution,
pervades many areas of science and technology, including astronomy, medical
imaging, optics, and wireless communications. A key challenge of this intricate
non-convex optimization problem is that it might exhibit many local minima. We
present an efficient numerical algorithm that is guaranteed to recover the
exact solution, when the number of measurements is (up to log-factors) slightly
larger than the information-theoretical minimum, and under reasonable
conditions on f and g. The proposed regularized gradient descent algorithm
converges at a geometric rate and is provably robust in the presence of noise.
To the best of our knowledge, our algorithm is the first blind deconvolution
algorithm that is numerically efficient, robust against noise, and comes with
rigorous recovery guarantees under certain subspace conditions. Moreover,
numerical experiments not only provide empirical verification of our theory,
but also demonstrate that our method yields excellent performance even in
situations beyond our theoretical framework.
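A stripped-down illustration of gradient descent on the bilinear objective ||w * x - y||^2 (circular convolution). This is a toy stand-in, not the paper's algorithm: without its subspace constraints, spectral initialization, and regularizer, the factorization is ambiguous at each Fourier mode, so only the data-fit residual is checked here; all sizes and the ground-truth pair are toy choices.

```python
import numpy as np

def cconv(a, b):
    """Circular convolution via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Circular correlation, the adjoint of convolution with a."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def blind_gd(y, step=0.02, n_iter=5000, seed=0):
    """Plain gradient descent on F(w, x) = ||cconv(w, x) - y||^2 over
    both unknowns, started from a small random point."""
    rng = np.random.default_rng(seed)
    n = len(y)
    w = 0.1 * rng.standard_normal(n)
    x = 0.1 * rng.standard_normal(n)
    losses = []
    for _ in range(n_iter):
        r = cconv(w, x) - y
        losses.append(np.sum(r * r))
        # gradients: dF/dw = 2 C_x^T r, dF/dx = 2 C_w^T r
        w, x = w - 2 * step * ccorr(x, r), x - 2 * step * ccorr(w, r)
    return w, x, losses

# Ground truth: a smooth filter and a sparse signal (arbitrary toy choices).
n = 32
w0 = np.exp(-0.5 * (np.arange(n) - 5.0) ** 2)
x0 = np.zeros(n)
x0[[3, 12, 25]] = [1.0, -0.5, 0.8]
y = cconv(w0, x0)
w_hat, x_hat, losses = blind_gd(y)
```

Even this naive scheme drives the residual to essentially zero on a generic instance; the paper's contribution is proving that, with the right initialization and regularization, the iterates also recover the true pair up to the inherent scaling ambiguity.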
A hybrid algorithm for spatial and wavelet domain image restoration
The recent algorithm ForWaRD is based on two steps: (i) Fourier-domain deblurring and (ii) wavelet-domain denoising, and it shows better restoration results than traditional image restoration methods. In this paper, we study other deblurring schemes within ForWaRD and demonstrate that such a two-step approach is effective for image restoration.
SPIE Conference on Visual Communications and Image Processing 2005, Beijing, China, 12-15 July 2005. In Proceedings of SPIE - The International Society for Optical Engineering, 2005, v. 5960 n. 4, p. 59605V-1 - 59605V-
An Iterative Shrinkage Approach to Total-Variation Image Restoration
The problem of restoration of digital images from their degraded measurements
plays a central role in a multitude of practically important applications. A
particularly challenging instance of this problem occurs in the case when the
degradation phenomenon is modeled by an ill-conditioned operator. In such a
case, the presence of noise makes it impossible to recover a valuable
approximation of the image of interest without using some a priori information
about its properties. Such a priori information is essential for image
restoration, rendering it stable and robust to noise. Particularly, if the
original image is known to be a piecewise smooth function, one of the standard
priors used in this case is defined by the Rudin-Osher-Fatemi model, which
results in total variation (TV) based image restoration. The current arsenal of
algorithms for TV-based image restoration is vast. In the present paper, a
different approach to the solution of the problem is proposed based on the
method of iterative shrinkage (aka iterated thresholding). In the proposed
method, the TV-based image restoration is performed through a recursive
application of two simple procedures, viz. linear filtering and soft
thresholding. Therefore, the method can be identified as belonging to the group
of first-order algorithms which are efficient in dealing with images of
relatively large sizes. Another valuable feature of the proposed method
is that it works directly with the TV functional rather than with its
smoothed versions. Moreover, the method provides a single solution for both
isotropic and anisotropic definitions of the TV functional, thereby
establishing a useful connection between the two formulae.
Comment: The paper was submitted to the IEEE Transactions on Image Processing
on October 22nd, 200
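One common first-order realization of this linear-filtering-plus-shrinkage idea is projected gradient on the dual of the TV problem, where the shrinkage appears as a pointwise clip (the dual face of soft thresholding). The 1D sketch below handles the pure-denoising case (identity blur) to stay short, with toy choices of lam and step; it is a simplified relative of, not, the authors' exact algorithm.

```python
import numpy as np

def tv_denoise_1d(y, lam, step=0.25, n_iter=1000):
    """Projected gradient on the dual of min_x 0.5||x-y||^2 + lam*TV(x):
    each sweep is a linear difference filtering of the current estimate
    followed by a pointwise clip of the dual variable to [-lam, lam]."""
    p = np.zeros(len(y) - 1)              # dual variable, one per edge
    for _ in range(n_iter):
        # primal estimate x = y - D^T p (D = forward difference operator)
        x = y + np.diff(np.concatenate(([0.0], p, [0.0])))
        # dual ascent step, then projection onto the box [-lam, lam]
        p = np.clip(p + step * np.diff(x), -lam, lam)
    return y + np.diff(np.concatenate(([0.0], p, [0.0])))

# Noisy step edge: a piecewise-constant signal, the prior TV favors.
rng = np.random.default_rng(2)
clean = np.concatenate((np.zeros(50), np.ones(50)))
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy, lam=0.3)
```

Because the iteration works with the exact TV functional (through its dual box constraint) rather than a smoothed surrogate, the recovered signal keeps a sharp step instead of a blurred ramp.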