
    Convolutional Deblurring for Natural Imaging

    In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can directly convolve with naturally blurred images for restoration. Optical blurring is a common drawback in many imaging applications that suffer from optical imperfections. Despite numerous deconvolution methods that blindly estimate blurring in either inclusive or exclusive forms, they are practically challenging due to high computational cost and low image reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving. In such platforms, deblurring is required after image acquisition and before images are stored, previewed, or processed for high-level interpretation. Therefore, on-the-fly correction of such images is important to avoid possible time delays, mitigate computational expenses, and increase image perception quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for two PSF models, Gaussian and Laplacian, that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods. Comment: 15 pages; for publication in IEEE Transactions on Image Processing.
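    The following is a minimal sketch of the idea described above, not the authors' exact kernel synthesis: a one-shot deblurring kernel built as a linear combination of even-derivative FIR filters (identity, Laplacian, biharmonic), which roughly corresponds to a truncated Taylor expansion of an inverse Gaussian blur. The weights alpha and beta are hypothetical tuning parameters.

```python
import numpy as np
from scipy.signal import convolve2d

def even_derivative_kernel(alpha=0.5, beta=0.05):
    """Synthesize delta - alpha * Laplacian + beta * biharmonic on a 5x5 support."""
    lap3 = np.array([[0, 1, 0],
                     [1, -4, 1],
                     [0, 1, 0]], dtype=float)        # 2nd-derivative (Laplacian) FIR filter
    bih5 = convolve2d(lap3, lap3)                    # 4th-derivative (biharmonic), 5x5
    delta5 = np.zeros((5, 5)); delta5[2, 2] = 1.0    # identity (zeroth derivative)
    lap5 = np.zeros((5, 5)); lap5[1:4, 1:4] = lap3   # zero-pad the Laplacian to 5x5
    # For a Gaussian PSF, exp(sigma^2 |w|^2 / 2) ~ 1 + (sigma^2/2)|w|^2 + (sigma^4/8)|w|^4,
    # which in the spatial domain is delta - alpha*Laplacian + beta*biharmonic.
    return delta5 - alpha * lap5 + beta * bih5

def deblur(image, alpha=0.5, beta=0.05):
    """One-shot correction: convolve the blurry image directly with the synthesized kernel."""
    kernel = even_derivative_kernel(alpha, beta)
    return convolve2d(image, kernel, mode="same", boundary="symm")
```

    In practice, alpha and beta would be tied to the estimated PSF statistics (e.g. the Gaussian sigma); here they are left as free parameters.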

    Development Of A High Performance Mosaicing And Super-Resolution Algorithm

    In this dissertation, a high-performance mosaicing and super-resolution algorithm is described. The scale-invariant feature transform (SIFT)-based mosaicing algorithm builds an initial mosaic, which is iteratively updated by the robust super-resolution algorithm to achieve the final high-resolution mosaic. Two different types of datasets are used for testing: high-altitude balloon data and unmanned aerial vehicle data. To evaluate the algorithm, five performance metrics are employed: mean square error, peak signal-to-noise ratio, singular value decomposition, slope of the reciprocal singular value curve, and cumulative probability of blur detection. Extensive testing shows that the proposed algorithm is effective in improving the captured aerial data and that the performance metrics accurately quantify that improvement.
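    Below is a minimal sketch, assuming OpenCV (cv2 >= 4.4), of the SIFT-based registration step that such a mosaicing pipeline repeats for every new frame; it is an illustration of the general technique, not the dissertation's code. The function names and the RANSAC threshold are chosen for the example.

```python
import cv2
import numpy as np

def register_frame(mosaic_gray, frame_gray, ratio=0.75):
    """Estimate the homography mapping frame_gray onto mosaic_gray via SIFT + RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(mosaic_gray, None)
    kp2, des2 = sift.detectAndCompute(frame_gray, None)
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]  # Lowe's ratio test
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def warp_into_mosaic(mosaic, frame, H):
    """Warp the new frame into the mosaic's coordinate system (same canvas size)."""
    return cv2.warpPerspective(frame, H, (mosaic.shape[1], mosaic.shape[0]))
```

    The warped frames would then feed the iterative robust super-resolution update mentioned in the abstract.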

    ANALYSIS OF IMAGE ENHANCEMENT ALGORITHMS FOR HYPERSPECTRAL IMAGES

    This thesis presents an application of image enhancement techniques for color and panchromatic imagery to hyperspectral imagery. A combination of previously used algorithms for multi-channel images is applied in a novel way to incorporate multiple bands within a single hyperspectral image. The steps of the image enhancement include image degradation, image correlation grouping, low-resolution image fusion, and fused image interpolation. Image degradation is accomplished through Gaussian noise addition in each band along with image down-sampling. Image grouping is done through the use of two-dimensional correlation coefficients to match bands within the hyperspectral image. For image fusion, a discrete wavelet frame transform (DWFT) is used. For the interpolation, three methods are used to increase the resolution of the image: linear minimum mean-squared error (LMMSE), a maximum entropy algorithm, and a regularized algorithm. These algorithms are then used in combination with principal component analysis (PCA), which provides data compression. This saves time at the expense of increasing the error between the true image and the estimated hyperspectral image after PCA. Finally, a cost function is used to find the level of compression that minimizes the error while also decreasing computational time.
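    As an illustration of the band-grouping step described above, here is a minimal sketch (assumed formulation, not the thesis code) that groups hyperspectral bands by their two-dimensional correlation coefficients. The cube layout (rows, cols, bands) and the grouping threshold are assumptions for the example.

```python
import numpy as np

def band_correlation(a, b):
    """Two-dimensional correlation coefficient between two image bands."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def group_bands(cube, threshold=0.95):
    """Greedily assign each band to the first group whose reference band it correlates with."""
    groups = []  # each group is a list of band indices; groups[i][0] is its reference band
    for k in range(cube.shape[2]):
        for g in groups:
            if band_correlation(cube[:, :, g[0]], cube[:, :, k]) >= threshold:
                g.append(k)
                break
        else:
            groups.append([k])   # no sufficiently correlated group found; start a new one
    return groups
```

    Each resulting group of correlated bands would then be fused with the DWFT and interpolated as described in the abstract.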

    Active Wavelength Selection for Chemical Identification Using Tunable Spectroscopy

    Spectrometers are the cornerstone of analytical chemistry. Recent advances in micro-optics manufacturing provide lightweight and portable alternatives to traditional spectrometers. In this dissertation, we developed a spectrometer based on Fabry-Perot interferometers (FPIs). An FPI is a tunable optical filter that can scan only one wavelength at a time. However, compared to traditional counterparts such as FTIR (Fourier transform infrared spectroscopy), FPIs provide lower resolution and a lower signal-to-noise ratio (SNR). Wavelength selection can help alleviate these drawbacks: eliminating uninformative wavelengths not only speeds up the sensing process but also helps improve accuracy by avoiding nonlinearity and noise. Traditional wavelength selection algorithms follow a training-validation process, and thus they are only optimal for the target analyte. For chemical identification, however, the identities are unknown. To address this issue, this dissertation proposes active sensing algorithms that select wavelengths online while sensing and thereby generate analyte-dependent wavelength sequences. We envision these algorithms deployed on a portable chemical gas sensing platform with low-cost sensors and limited computational resources. We develop three algorithms focusing on three different aspects of the chemical identification problem.

    First, we consider the problem of single-chemical identification. We formulate it as a typical classification problem where each chemical is considered a distinct class. We use Bayesian risk as the utility function for wavelength selection, which calculates the misclassification cost between classes (chemicals), and we select the wavelength with the maximum reduction in the risk. We evaluate this approach on both synthesized and experimental data. The results suggest that active sensing outperforms the passive method, especially in a noisy environment.

    Second, we consider the problem of chemical mixture identification. Since the number of potential chemical mixtures grows exponentially with the number of components, it is intractable to formulate all potential mixtures as classes. To circumvent this combinatorial explosion, we developed a multi-modal non-negative least squares (MM-NNLS) method that searches multiple near-optimal solutions as an approximation of the full solution set. We project the solutions onto spectral space, calculate the variance of the projected spectra at each wavelength, and select the next wavelength using the variance as guidance. We validate this approach on synthesized and experimental data. The results suggest that active approaches are superior to their passive counterparts, especially when the condition number of the mixture grows larger (the analytes consist of more components, or the constituent spectra are very similar to each other).

    Third, we consider improving the computational speed of chemical mixture identification. MM-NNLS scales poorly as the chemical mixture becomes more complex. Therefore, we develop a wavelength selection method based on Gaussian process regression (GPR). GPR aims to reconstruct the spectrum rather than solve the mixture problem, so its computational cost is a function of the number of wavelengths. We evaluate the approach on both synthesized and experimental data. The results again demonstrate more accurate and robust performance in contrast to passive algorithms.
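    The sketch below illustrates, under stated assumptions, a GPR-style active wavelength selection loop in the spirit of the third algorithm: fit a Gaussian process to the wavelengths measured so far, then query the wavelength whose reconstructed spectrum value is most uncertain. It assumes scikit-learn; `measure` is a hypothetical callback that tunes the FPI and returns one intensity reading, and the RBF length scale is a placeholder.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def active_scan(wavelengths, measure, n_queries=20, length_scale=10.0):
    """Actively choose which wavelengths to measure; returns the measured (wavelength, intensity) pairs."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    X, y = [], []
    next_wl = wavelengths[0]                        # seed with an arbitrary first wavelength
    for _ in range(n_queries):
        X.append([next_wl])
        y.append(measure(next_wl))                  # drive the tunable FPI, read one intensity
        gpr = GaussianProcessRegressor(kernel=RBF(length_scale=length_scale))
        gpr.fit(np.array(X), np.array(y))
        _, std = gpr.predict(wavelengths.reshape(-1, 1), return_std=True)
        next_wl = wavelengths[int(np.argmax(std))]  # most uncertain wavelength is queried next
    return np.array(X).ravel(), np.array(y)
```

    Because already-measured wavelengths have near-zero predictive uncertainty, the loop naturally spreads its queries over the informative parts of the spectrum.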

    Applying neural networks for improving the MEG inverse solution

    Magnetoencephalography (MEG) and electroencephalography (EEG) are appealing non-invasive methods for recording brain activity with high temporal resolution. However, locating the brain source currents from recordings picked up by sensors on the scalp introduces an ill-posed inverse problem; the MEG inverse problem is one of the most difficult inverse problems in medical imaging. The current standard in approximating the MEG inverse problem is to use multiple distributed inverse solutions – namely dSPM, sLORETA and L2 MNE – to estimate the source current distribution in the brain. This thesis investigates whether these inverse solutions can be "post-processed" by a neural network to provide improved accuracy on source locations. Recently, deep neural networks have been used to approximate other ill-posed inverse medical imaging problems with accuracy comparable to current state-of-the-art inverse reconstruction algorithms. Neural networks are powerful tools for approximating problems with limited prior knowledge or problems that require high levels of abstraction. In this thesis a special case of a deep convolutional network, the U-Net, is applied to approximate the MEG inverse problem using the standard inverse solutions (dSPM, sLORETA and L2 MNE) as inputs. The U-Net is capable of learning non-linear relationships between the inputs and produces predictions of the site of single-dipole activation with higher accuracy than the L2 minimum-norm based inverse solutions, as measured by the following resolution metrics: dipole localization error (DLE), spatial dispersion (SD) and overall amplitude (OA). The U-Net model is stable and performs better in the aforesaid resolution metrics than the inverse solutions on multi-dipole data previously unseen by the U-Net.
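    As a small worked example of the first resolution metric mentioned above, here is a minimal sketch (an assumed, commonly used formulation, not the thesis code) of dipole localization error: the Euclidean distance between the true dipole position and the source-space position carrying the largest estimated amplitude. `src_positions` (n_sources x 3, in metres or millimetres) and `estimate` (per-source amplitudes from an inverse solution or from the U-Net output) are assumed inputs.

```python
import numpy as np

def dipole_localization_error(src_positions, estimate, true_position):
    """DLE: distance from the true dipole location to the peak of the estimated source amplitudes."""
    peak = int(np.argmax(np.abs(estimate)))              # index of the strongest estimated source
    return float(np.linalg.norm(src_positions[peak] - true_position))
```
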