4 research outputs found
Image interpolation using Shearlet based iterative refinement
This paper proposes an image interpolation algorithm exploiting sparse
representation for natural images. It involves three main steps: (a) obtaining
an initial estimate of the high resolution image using linear methods like FIR
filtering, (b) promoting sparsity in a selected dictionary through iterative
thresholding, and (c) extracting high frequency information from the
approximation to refine the initial estimate. For the sparse modeling, a
shearlet dictionary is chosen to yield a multiscale directional representation.
The proposed algorithm is compared to several state-of-the-art methods to
assess its objective and subjective performance. Compared to the cubic
spline interpolation method, an average PSNR gain of around 0.8 dB is observed
over a dataset of 200 images.
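The three steps above can be sketched as follows. A true shearlet transform requires a dedicated library, so this illustrative sketch substitutes an orthonormal DCT as the sparsifying dictionary; the threshold `tau`, the iteration count, and the back-projection step that re-imposes the observed low-resolution pixels are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import zoom

def interpolate_iterative(lr, factor=2, iters=20, tau=5.0):
    """Sketch of the three-step scheme: linear initial estimate,
    sparsity promotion by thresholding, and refinement."""
    # (a) initial high-resolution estimate via a linear method
    hr = zoom(lr.astype(float), factor, order=1)
    for _ in range(iters):
        # (b) promote sparsity in the (stand-in DCT) dictionary
        #     by hard thresholding the transform coefficients
        coeffs = dctn(hr, norm='ortho')
        coeffs[np.abs(coeffs) < tau] = 0.0
        hr = idctn(coeffs, norm='ortho')
        # (c) refine: re-impose consistency with the observed
        #     low-resolution samples (simple back-projection)
        hr[::factor, ::factor] = lr
    return hr
```

A real implementation would replace the DCT with a shearlet decomposition to obtain the multiscale directional representation the paper relies on.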
Sparse and Redundant Representations for Inverse Problems and Recognition
Sparse and redundant representation of data enables the
description of signals as linear combinations of a few atoms from
a dictionary. In this dissertation, we study applications of
sparse and redundant representations in inverse problems and
object recognition. Furthermore, we propose two novel imaging
modalities based on the recently introduced theory of Compressed
Sensing (CS).
This dissertation consists of four major parts. In the first part
of the dissertation, we study a new type of deconvolution
algorithm that is based on estimating the image from a shearlet
decomposition. Shearlets provide a multi-directional and
multi-scale decomposition that has been mathematically shown to
represent distributed discontinuities such as edges better than
traditional wavelets. We develop a deconvolution algorithm that
allows the approximate inversion operator to be controlled
on a multi-scale and multi-directional basis. Furthermore, we
develop a method for the automatic determination of the threshold
values for the noise shrinkage for each scale and direction
without explicit knowledge of the noise variance using a
generalized cross validation method.
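The generalized cross validation idea can be sketched as follows, assuming a Jansen-style GCV score for soft thresholding of the coefficients in one scale/direction subband; the candidate grid and the soft-shrinkage rule are illustrative choices, not the dissertation's exact procedure.

```python
import numpy as np

def gcv_threshold(coeffs, candidates):
    """Pick a soft-threshold value by generalized cross validation,
    without explicit knowledge of the noise variance."""
    c = np.asarray(coeffs).ravel()
    n = c.size
    best_t, best_score = candidates[0], np.inf
    for t in candidates:
        # soft shrinkage with threshold t
        shrunk = np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
        n_zero = np.count_nonzero(shrunk == 0)
        if n_zero == 0:
            continue  # GCV score undefined when nothing is shrunk to zero
        # GCV(t) = (residual energy / n) / (fraction zeroed)^2
        score = (np.sum((c - shrunk) ** 2) / n) / (n_zero / n) ** 2
        if score < best_score:
            best_t, best_score = t, score
    return best_t
```

In the deconvolution setting described above, this selection would be run independently per scale and direction of the shearlet decomposition.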
In the second part of the dissertation, we study a reconstruction
method that recovers highly undersampled images assumed to have a
sparse representation in a gradient domain by using partial
measurement samples that are collected in the Fourier domain. Our
method makes use of a robust generalized Poisson solver that
yields significantly improved performance over similar
proposed methods. We demonstrate by experiments that this
new technique works more flexibly with either random or
restricted sampling scenarios than its competitors.
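The Poisson-solver building block of such a gradient-domain reconstruction can be sketched as follows, assuming periodic boundary conditions and exact (noise-free) gradient fields; the full method described above additionally handles undersampled Fourier measurements, which this sketch omits.

```python
import numpy as np

def poisson_solve(gx, gy):
    """Recover an image (up to a constant) from gradient fields gx, gy
    by solving the Poisson equation in the Fourier domain."""
    h, w = gx.shape
    # divergence of the gradient field (backward differences,
    # adjoint of the periodic forward-difference gradient)
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    # DFT eigenvalues of the periodic discrete Laplacian
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    denom = (2 * np.cos(2 * np.pi * fx) - 2) + (2 * np.cos(2 * np.pi * fy) - 2)
    denom[0, 0] = 1.0  # avoid division by zero; DC term is unconstrained
    out = np.real(np.fft.ifft2(np.fft.fft2(div) / denom))
    out -= out.min()  # fix the free additive constant
    return out
```

With forward differences `gx = roll(img, -1, 1) - img` and `gy = roll(img, -1, 0) - img`, this recovers the image exactly up to an additive constant.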
In the third part of the dissertation, we introduce a novel
Synthetic Aperture Radar (SAR) imaging modality which can provide
a high resolution map of the spatial distribution of targets and
terrain using a significantly reduced number of transmitted
and/or received electromagnetic waveforms. We demonstrate that
this new imaging scheme requires no new hardware components and
allows the aperture to be compressed. Also, it
presents many new applications and advantages which include strong
resistance to countermeasures and interception, imaging much
wider swaths and reduced on-board storage requirements.
The last part of the dissertation deals with object recognition
based on learning dictionaries for simultaneous sparse signal
approximations and feature extraction. A dictionary is learned
for each object class from given training examples by minimizing
the representation error subject to a sparseness constraint. A
novel test image is then projected onto the span of the atoms in
each learned dictionary. The residual vectors along with the
coefficients are then used for recognition. Applications to
illumination robust face recognition and automatic target
recognition are presented.
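The projection-and-residual classification step can be sketched with a plain least-squares projection onto each class dictionary's span; in the actual method a learned, possibly overcomplete dictionary and a sparse coder would replace `np.linalg.lstsq`, and using only the residual (not the coefficients as well) is a simplification.

```python
import numpy as np

def classify_by_residual(y, dictionaries):
    """Project a test signal onto the span of each class dictionary
    and classify by the smallest representation residual."""
    residuals = []
    for D in dictionaries:  # D has shape (signal_dim, n_atoms)
        coeffs, *_ = np.linalg.lstsq(D, y, rcond=None)
        residuals.append(float(np.linalg.norm(y - D @ coeffs)))
    return int(np.argmin(residuals)), residuals
```

A signal lying in (or near) the span of one class's atoms yields a near-zero residual for that class and a larger one elsewhere, which is what drives the recognition decision.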
Smart Nanoscopy: A Review of Computational Approaches to Achieve Super-Resolved Optical Microscopy
The field of optical nanoscopy, a paradigm referring to the recent cutting-edge developments aimed at surpassing the widely acknowledged 200 nm diffraction limit in traditional optical microscopy, has gained prominence and traction in the 21st century. Numerous optical implementations allowing a new frontier in traditional confocal laser scanning fluorescence microscopy to be explored (termed super-resolution fluorescence microscopy) have been realized through the development of techniques such as stimulated emission depletion (STED) microscopy, photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), amongst others. Nonetheless, it would be apt to mention at this juncture that optical nanoscopy has been explored since the mid-to-late 20th century through several computational techniques, such as deblurring and deconvolution algorithms. In this review, we take a step back in the field, evaluating the various in silico methods used to achieve optical nanoscopy today, ranging from traditional deconvolution algorithms (such as the nearest-neighbors algorithm) to the latest developments in the field of computational nanoscopy founded on artificial intelligence (AI). An insight is provided into some of the commercial applications of AI-based super-resolution imaging, prior to delving into the potentially promising future implications of computational nanoscopy. This is facilitated by recent advancements in the fields of AI, deep learning (DL) and convolutional neural network (CNN) architectures, coupled with the growing size of data sources and rapid improvements in computing hardware, such as multi-core CPUs and GPUs, low-latency RAM and hard-drive capacities.
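As an illustration of the classical computational route the review mentions, a nearest-neighbor deconvolution pass over a focal stack can be sketched as follows; the Gaussian blur stands in for the microscope's defocus point-spread function, and the weight `c` and blur width `sigma` are illustrative assumptions rather than calibrated values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nearest_neighbor_deblur(stack, c=0.45, sigma=2.0):
    """Nearest-neighbor deconvolution sketch for a 3-D focal stack:
    subtract blurred copies of the adjacent planes from each plane
    to suppress out-of-focus light."""
    out = np.empty_like(stack, dtype=float)
    n = stack.shape[0]
    for j in range(n):
        # blur the neighboring planes (edge planes reuse themselves)
        above = gaussian_filter(stack[max(j - 1, 0)].astype(float), sigma)
        below = gaussian_filter(stack[min(j + 1, n - 1)].astype(float), sigma)
        out[j] = stack[j] - c * (above + below)
    return np.clip(out, 0, None)  # intensities cannot be negative
```

This single-pass scheme is fast but approximate, which is why iterative deconvolution and, more recently, learning-based methods discussed in the review supersede it when accuracy matters.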