Compressive Imaging via Approximate Message Passing with Image Denoising
We consider compressive imaging problems, where images are reconstructed from
a reduced number of linear measurements. Our objective is to improve over
existing compressive imaging algorithms in terms of both reconstruction error
and runtime. To pursue our objective, we propose compressive imaging algorithms
that employ the approximate message passing (AMP) framework. AMP is an
iterative signal reconstruction algorithm that performs scalar denoising at
each iteration; in order for AMP to reconstruct the original input signal well,
a good denoiser must be used. We apply two wavelet-based image denoisers within
AMP. The first denoiser is the "amplitude-scale-invariant Bayes estimator"
(ABE), and the second is an adaptive Wiener filter; we call our AMP-based
algorithms for compressive imaging AMP-ABE and AMP-Wiener. Numerical results
show that both AMP-ABE and AMP-Wiener significantly improve over the state of
the art in terms of runtime. In terms of reconstruction quality, AMP-Wiener
offers lower mean square error (MSE) than existing compressive imaging
algorithms. In contrast, AMP-ABE has higher MSE, because ABE does not denoise
as well as the adaptive Wiener filter.
Comment: 15 pages; 2 tables; 7 figures; to appear in IEEE Trans. Signal Process.
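The AMP iteration described above (scalar denoising of a pseudo-data vector at each step, plus an Onsager correction on the residual) can be sketched as follows. This is a minimal illustration, not the paper's AMP-ABE or AMP-Wiener: a plain soft-threshold denoiser stands in for ABE and the adaptive Wiener filter, and the threshold schedule `alpha * tau` is a common heuristic, not the authors' choice.

```python
import numpy as np

def soft_threshold(v, lam):
    # Scalar soft-threshold denoiser; stands in for ABE / adaptive Wiener.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def amp(A, y, alpha=1.5, iters=50):
    """Approximate message passing: each iteration denoises the
    pseudo-data x + A^T z and adds the Onsager term to the residual."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = np.linalg.norm(z) / np.sqrt(m)       # effective noise estimate
        pseudo = x + A.T @ z                       # pseudo-data for the denoiser
        x_new = soft_threshold(pseudo, alpha * tau)
        # Onsager correction: residual scaled by the average denoiser derivative
        onsager = (z / m) * np.sum(np.abs(pseudo) > alpha * tau)
        z = y - A @ x_new + onsager
        x = x_new
    return x
```

The Onsager term is what distinguishes AMP from plain iterative thresholding: it decorrelates the effective noise in the pseudo-data across iterations, which is why a scalar denoiser suffices.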
Gamma regularization based reconstruction for low dose CT
Reducing the radiation dose in computerized tomography is today a major concern in radiology. Low dose computerized tomography (LDCT) offers a sound way to deal with this problem. However, more severe noise is observed in the reconstructed CT images under low dose scan protocols (e.g. lowered tube current or voltage values). In this paper we propose a Gamma regularization based algorithm for LDCT image reconstruction. This solution provides a good balance between the regularizations based on the l0-norm and the l1-norm. We evaluate the proposed approach using projection data from simulated phantoms and scanned Catphan phantoms. Qualitative and quantitative results show that the Gamma regularization based reconstruction performs better in both edge preservation and noise suppression when compared with other regularizations using integer norms.
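The general recipe behind such methods is penalized least-squares reconstruction: minimize a data-fidelity term plus a regularizer that interpolates between l0-like and l1-like behavior. The sketch below uses a log penalty `log(1 + |x|/eps)` purely as a stand-in, since the paper's actual Gamma regularizer is defined there; the gradient-descent solver and all parameter values are illustrative assumptions.

```python
import numpy as np

def reconstruct(A, y, lam=0.05, eps=0.1, iters=500, step=0.1):
    """Penalized least-squares reconstruction sketch:
    minimize 0.5*||A x - y||^2 + lam * sum(log(1 + |x|/eps)).
    The log penalty is a stand-in for the paper's Gamma regularizer;
    both bridge l0- and l1-like behavior."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad_fid = A.T @ (A @ x - y)                     # data-fidelity gradient
        grad_pen = lam * np.sign(x) / (eps + np.abs(x))  # penalty gradient
        x -= step * (grad_fid + grad_pen)
    return x
```

In the CT setting, `A` would be the (much larger) system matrix mapping the image to projection data; here it is just a generic linear operator.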
Resolution Improvement for Optical Coherence Tomography based on Sparse Continuous Deconvolution
We propose an image resolution improvement method for optical coherence
tomography (OCT) based on sparse continuous deconvolution. Traditional
deconvolution techniques such as Lucy-Richardson deconvolution suffer from the
artifact convergence problem after a small number of iterations, which limits
practical applications. In this work, we take advantage of prior knowledge
about the sample's sparsity and continuity to constrain the deconvolution
iterations. Sparsity is used to achieve the resolution improvement through the
resolution-preserving regularization term, and the continuity, based on the
correlation of the grayscale values in different directions, is introduced
through the continuity regularization term to mitigate excessive image sparsity
and noise reduction. The Bregman splitting technique is then used to solve the
resulting optimization problem. Both the numerical simulation study and the
experimental study on phantoms and biological samples show that our method can
suppress the artifacts of traditional deconvolution techniques effectively.
Meanwhile, a clear resolution improvement is demonstrated: the method achieved
a nearly twofold, quantitatively evaluated resolution improvement for the
phantom bead images.
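The core idea of sparsity-regularized deconvolution can be illustrated in 1-D. The sketch below uses plain ISTA (iterative soft thresholding) on an FFT-based circular convolution model; it is a deliberately simpler stand-in for the paper's split-Bregman solver and omits the continuity term entirely, and the PSF and parameter choices are assumptions for illustration.

```python
import numpy as np

def ista_deconv(y, psf, lam=0.005, iters=300):
    """Sparsity-regularized deconvolution sketch (ISTA):
    minimize 0.5*||h * x - y||^2 + lam*||x||_1,
    with the convolution applied circularly via the FFT."""
    n = len(y)
    H = np.fft.fft(psf, n)                      # frequency response of the PSF
    step = 1.0 / np.max(np.abs(H)) ** 2         # step size from the Lipschitz bound
    x = np.zeros(n)
    for _ in range(iters):
        r = np.fft.ifft(H * np.fft.fft(x)).real - y             # residual h*x - y
        grad = np.fft.ifft(np.conj(H) * np.fft.fft(r)).real     # gradient of data term
        v = x - step * grad
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft threshold
    return x
```

For OCT A-scans, the PSF would come from the measured axial point-spread function; promoting sparsity sharpens the reconstructed reflectors, which is the mechanism the paper's resolution-preserving term exploits.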
Performance Analysis on Stereo Matching Algorithms Based on Local and Global Methods for 3D Images Application
Stereo matching is one of the methods in computer vision and image processing. Numerous algorithms have been developed that relate disparity maps to ground truth data. Stereo matching algorithms are applied to obtain high accuracy of depth as well as to reduce the computational cost of processing the stereo image or video. The smoother the disparity depth map, the better the triangulation results that can be achieved. The selection of an appropriate set of stereo data is very important because stereo pairs have different characteristics. This paper discusses the performance analysis of stereo matching algorithms through Peak Signal-to-Noise Ratio (PSNR, in dB), Structural Similarity (SSIM), the effect of window size, and execution time for different types of techniques such as Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), Normalized Cross-Correlation (NCC), Block Matching (BM), Global Error Energy Minimization by Smoothing Functions, Adapting BP, and Dynamic Programming (DP). The dataset of stereo images used for the experiments was obtained from the Middlebury Stereo Datasets.
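Of the local methods the paper compares, SAD block matching is the simplest to sketch: for each pixel in the left image, slide a window along the same row of the right image and pick the disparity with the lowest sum of absolute differences. This brute-force version is for illustration only (real implementations vectorize the cost volume); the block size and disparity range are arbitrary choices.

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=16):
    """Local stereo matching via Sum of Absolute Differences (SAD).
    For each left-image block, search disparities d = 0..max_disp along
    the epipolar line (same row) in the right image."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for i in range(half, h - half):
        for j in range(half, w - half):
            patch = left[i - half:i + half + 1, j - half:j + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, j - half) + 1):
                cand = right[i - half:i + half + 1,
                             j - d - half:j - d + half + 1]
                cost = np.abs(patch - cand).sum()   # SAD cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[i, j] = best_d
    return disp
```

Replacing the SAD cost with a squared difference gives SSD, and with a normalized correlation score (maximized instead of minimized) gives NCC, which is the sense in which the paper treats these as variants of one local framework.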