Convolutional Deblurring for Natural Imaging
In this paper, we propose a novel design of image deblurring in the form of
one-shot convolution filtering that can directly convolve with naturally
blurred images for restoration. Optical blurring is a common drawback in many
imaging applications that suffer from optical imperfections. Although numerous
deconvolution methods estimate the blur blindly, in either inclusive or
exclusive form, they remain practically challenging due to high computational
cost and low image reconstruction quality. High accuracy and high speed are
both prerequisites for high-throughput imaging platforms in digital archiving.
In such platforms, deblurring is required after image acquisition, before
images are stored, previewed, or processed for high-level interpretation.
Therefore, on-the-fly correction of
such images is important to avoid possible time delays, mitigate computational
expenses, and increase image perception quality. We bridge this gap by
synthesizing a deconvolution kernel as a linear combination of Finite Impulse
Response (FIR) even-derivative filters that can be directly convolved with
blurry input images to boost the frequency fall-off of the Point Spread
Function (PSF) associated with the optical blur. We employ a Gaussian low-pass
filter to decouple the image denoising problem from edge deblurring.
Furthermore, we propose a blind approach to estimate the PSF statistics for
Gaussian and Laplacian models, two forms that are common in many imaging
pipelines.
Thorough experiments are designed to test and validate the efficiency of the
proposed method using 2054 naturally blurred images across six imaging
applications and seven state-of-the-art deconvolution methods.
Comment: 15 pages, for publication in IEEE Transactions on Image Processing
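The core construction can be illustrated with a short sketch: the inverse of a Gaussian PSF expands into a series of even-derivative (Laplacian) terms, yielding a small FIR kernel that is convolved once with the blurry image. This is a minimal sketch under a Gaussian-PSF assumption, not the authors' actual filter synthesis; the 5x5 support and truncation order are illustrative choices.

```python
import numpy as np
from scipy.ndimage import convolve

def even_derivative_deblur_kernel(sigma, order=2):
    """Truncated-series inverse of a Gaussian PSF as a small FIR kernel.

    In frequency, 1/G(w) = exp(sigma^2 w^2 / 2) ~ 1 + sigma^2 w^2 / 2 + ...
    The discrete Laplacian L has symbol -w^2, so the series maps to
    delta - (sigma^2 / 2) L + (sigma^4 / 8) L^2 in the spatial domain.
    """
    delta = np.zeros((5, 5))
    delta[2, 2] = 1.0
    lap = np.zeros((5, 5))
    lap[1:4, 1:4] = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
    kernel = delta - (sigma ** 2 / 2.0) * lap
    if order >= 4:
        lap2 = convolve(lap, lap, mode='constant')  # L applied twice
        kernel += (sigma ** 4 / 8.0) * lap2
    return kernel  # one-shot restoration: convolve(blurry, kernel)
```

Because every Laplacian term integrates to zero, the kernel's DC gain stays at 1, so flat regions are left untouched while the high frequencies attenuated by the PSF are boosted.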
Learning sparse representations of depth
This paper introduces a new method for learning and inferring sparse
representations of depth (disparity) maps. The proposed algorithm relaxes the
usual assumption of the stationary noise model in sparse coding. This enables
learning from data corrupted with spatially varying noise or uncertainty,
typically obtained by laser range scanners or structured light depth cameras.
Sparse representations are learned from the Middlebury database disparity maps
and then exploited in a two-layer graphical model for inferring depth from
stereo, by including a sparsity prior on the learned features. Since they
capture higher-order dependencies in the depth structure, these priors can
complement smoothness priors commonly used in depth inference based on Markov
Random Field (MRF) models. Inference on the proposed graph is achieved using an
alternating iterative optimization technique, where the first layer is solved
using an existing MRF-based stereo matching algorithm, then held fixed as the
second layer is solved using the proposed non-stationary sparse coding
algorithm. This leads to a general method for improving solutions of
state-of-the-art MRF-based depth estimation algorithms. Our experimental
results first show that depth inference using learned representations leads to
state-of-the-art denoising of depth maps obtained from laser range scanners and
a time-of-flight camera. Furthermore, we show that adding sparse priors
improves the
results of two depth estimation methods: the classical graph cut algorithm by
Boykov et al. and the more recent algorithm of Woodford et al.
Comment: 12 pages
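The non-stationary noise model can be made concrete with a generic weighted sparse-coding solver: the data term of the usual lasso objective is weighted per element by a confidence (e.g., inverse noise variance), so unreliable depth samples influence the code less. This is a sketch of weighted ISTA under those assumptions, not the paper's algorithm; the dictionary and weights are illustrative.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def weighted_ista(D, x, w, lam=0.1, n_iter=500):
    """Minimize 0.5 * ||sqrt(w) * (x - D a)||^2 + lam * ||a||_1 via ISTA.

    w holds per-element confidences (e.g., inverse noise variances), so
    entries of x measured under heavy noise are softly ignored.
    """
    L = np.linalg.norm(D.T @ (w[:, None] * D), 2)  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (w * (D @ a - x))
        a = soft(a - grad / L, lam / L)
    return a
```

Setting all weights equal recovers standard (stationary) sparse coding, which makes the non-stationary case a drop-in generalization.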
Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras
In this paper we present a new denoising method for the depth images of a 3D imaging sensor based on the time-of-flight principle. We propose novel ways to use luminance-like information produced by a time-of-flight camera along with the depth images. First, we propose a wavelet-based method for estimating the noise level in depth images using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio at every pixel of the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within depth images. Second, we use luminance information to better restore object boundaries masked by noise in the depth images. Information from the luminance images enters the estimation formula through fuzzy membership functions. In particular, we take into account the correlation between the measured depth and luminance, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. Results on real 3D images show a significant improvement over the state of the art in the field. (C) 2010 Optical Society of America
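A minimal sketch of the luminance-guided idea (with illustrative functional forms, not the paper's exact estimators): a one-level Haar transform, a per-pixel noise proxy that decreases with luminance, and a fuzzy membership built from luminance details that protects coincident edges from shrinkage.

```python
import numpy as np

def haar2_fwd(img):
    """One-level 2D Haar transform (averaging/differencing along both axes)."""
    lo, hi = (img[0::2] + img[1::2]) / 2, (img[0::2] - img[1::2]) / 2
    split = lambda c: ((c[:, 0::2] + c[:, 1::2]) / 2, (c[:, 0::2] - c[:, 1::2]) / 2)
    (ll, lh), (hl, hh) = split(lo), split(hi)
    return ll, lh, hl, hh

def haar2_inv(ll, lh, hl, hh):
    """Invert haar2_fwd exactly."""
    def merge_cols(a, d):
        c = np.empty((a.shape[0], a.shape[1] * 2))
        c[:, 0::2], c[:, 1::2] = a + d, a - d
        return c
    lo, hi = merge_cols(ll, lh), merge_cols(hl, hh)
    img = np.empty((lo.shape[0] * 2, lo.shape[1]))
    img[0::2], img[1::2] = lo + hi, lo - hi
    return img

def fuzzy_wavelet_denoise(depth, lum, k=2.0):
    """Shrink depth detail coefficients, guided by luminance.

    Assumptions (illustrative): noise std falls with luminance as
    1 / (1 + lum), and a fuzzy edge membership m = d_l^2 / (d_l^2 + T)
    built from luminance details d_l lowers the threshold at edges.
    """
    ll, lh, hl, hh = haar2_fwd(depth)
    _, lh_l, hl_l, hh_l = haar2_fwd(lum)
    sigma = 1.0 / (1.0 + lum[0::2, 0::2])  # per-pixel noise proxy on the detail grid
    bands = []
    for d, dl in ((lh, lh_l), (hl, hl_l), (hh, hh_l)):
        m = dl ** 2 / (dl ** 2 + np.median(dl ** 2) + 1e-12)  # fuzzy edge membership
        t = k * sigma * (1.0 - m)  # edges (m ~ 1) are barely thresholded
        bands.append(np.sign(d) * np.maximum(np.abs(d) - t, 0.0))
    return haar2_inv(ll, *bands)
```

The shape of the membership function and the luminance-to-noise mapping are exactly the kind of design choices the paper tunes; the sketch only shows where they enter the shrinkage rule.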
Dual modality optical coherence tomography : Technology development and biomedical applications
Optical coherence tomography (OCT) is a cross-sectional imaging modality that is widely used in clinical ophthalmology and interventional cardiology. It is highly promising for in situ characterization of tumor tissues. OCT has high spatial resolution and high imaging speed to assist clinical decision making in real-time.
OCT can be used for both structural imaging and mechanical characterization. Malignant tumor tissue exhibits altered morphology, which structural imaging can capture; however, structural OCT imaging has limited tissue differentiation capability because of the complex and noisy nature of the OCT signal. Moreover, the contrast of the structural OCT signal, derived from the tissue's light scattering properties, has little chemical specificity. Hence, interrogating additional tissue properties with OCT would improve the outcome of its clinical applications. In addition to morphological differences, pathological tissue such as cancerous breast tissue usually possesses higher stiffness than normal healthy tissue, which motivates combining structural OCT imaging with stiffness assessment in a dual-modality OCT system for breast cancer characterization.
This dissertation seeks to integrate structural OCT imaging with optical coherence elastography (OCE) for breast cancer tissue characterization. OCE is a functional extension of OCT that measures the mechanical response (deformation, resonant frequency, elastic wave propagation) of biological tissue under external or internal mechanical stimulation and extracts the mechanical properties related to its pathological and physiological processes. Conventional OCE techniques (e.g., compression, surface acoustic wave, and magnetomotive OCE) measure the strain field, so their results differ under different loading conditions, and inconsistencies are observed between OCE characterization results from different measurement sessions. A robust mechanical characterization therefore requires force/stress quantification. A quantitative optical coherence elastography (qOCE) technique that tracks both force and displacement is proposed and developed at NJIT. The qOCE instrument is based on a fiber-optic probe integrated with a Fabry-Perot force sensor, and the miniature probe can be delivered to arbitrary locations within an animal or human body.
In this dissertation, the principle of the qOCE technology is described. Experimental results demonstrate the capability of qOCE to characterize the elasticity of biological tissue. Moreover, a handheld optical instrument is developed to allow in vivo real-time OCE characterization, based on an adaptive Doppler analysis algorithm that accurately tracks the motion of the sample under compression.
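For context, phase-resolved Doppler analysis converts the phase change between repeated complex A-lines into axial displacement via d = lambda * dphi / (4 * pi * n). The sketch below uses hypothetical parameter values (1.3 um center wavelength, refractive index 1.38) and does not reproduce the adaptive part of the dissertation's algorithm.

```python
import numpy as np

def doppler_displacement(a_prev, a_curr, wavelength=1.3e-6, n_ref=1.38):
    """Axial displacement from the phase difference of complex OCT A-lines.

    d = wavelength * dphi / (4 * pi * n_ref). The conjugate product keeps
    dphi wrapped to (-pi, pi], limiting unambiguous motion per A-line pair
    to about lambda / (4 n). wavelength and n_ref are hypothetical values.
    """
    dphi = np.angle(a_curr * np.conj(a_prev))
    return wavelength * dphi / (4.0 * np.pi * n_ref)
```

The factor 4 (rather than 2) accounts for the double pass of light to the sample and back.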
For the development of the dual-modality OCT system, the structural OCT images exhibit additive and multiplicative noise that degrades image quality. To suppress this noise, a noise-adaptive wavelet thresholding (NAWT) algorithm is developed to remove speckle noise in OCT images. NAWT characterizes the speckle noise adaptively in the wavelet domain and removes it while preserving the sample structure. Furthermore, a novel denoising algorithm is developed that adaptively eliminates the additive noise from the complex OCT signal using Doppler variation analysis.
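In a similar spirit (though not the dissertation's NAWT algorithm itself), a subband-adaptive wavelet threshold can be computed BayesShrink-style: estimate the noise std robustly from the detail coefficients, then set the threshold from the noise and signal variances. For speckle, a log transform is typically applied first so the multiplicative noise becomes approximately additive. A sketch:

```python
import numpy as np

def bayes_shrink_threshold(detail):
    """Adaptive threshold for one wavelet detail subband.

    Noise std via the median absolute deviation (robust to sparse signal
    coefficients); threshold t = sigma_n^2 / sigma_x follows BayesShrink.
    """
    sigma_n = np.median(np.abs(detail)) / 0.6745             # robust noise std
    sigma_y2 = np.mean(detail ** 2)                          # observed variance
    sigma_x = np.sqrt(max(sigma_y2 - sigma_n ** 2, 1e-12))   # signal std
    return sigma_n ** 2 / sigma_x

def soft_threshold(c, t):
    """Shrink coefficients toward zero by t; coefficients below t vanish."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```

Because the threshold is recomputed per subband, noisy subbands with little signal are thresholded aggressively while structured subbands are preserved, which is the adaptivity NAWT exploits.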