17 research outputs found

    Underwater image restoration: super-resolution and deblurring via sparse representation and denoising by means of marine snow removal

    Underwater imaging is widely used as a tool in many fields; however, a major issue is the quality of the resulting images and videos. Because of light's interaction with water and its constituents, acquired underwater images and videos often suffer from significant scatter (blur, haze) and noise. In light of these issues, this thesis considers the problems of low-resolution, blurred, and noisy underwater images and proposes several approaches to improve the quality of such images and video frames. Quantitative and qualitative experiments validate the success of the proposed algorithms.
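
    The title points to sparse representation as the super-resolution engine. A common formulation of that idea (not taken from the thesis itself) codes each low-resolution patch over a learned low-resolution dictionary and reuses the coefficients with a coupled high-resolution dictionary; the sketch below illustrates it with an ISTA solver and random stand-in dictionaries D_l and D_h.

```python
# Minimal sketch of sparse-representation super-resolution for a single patch,
# assuming coupled low-/high-resolution dictionaries D_l and D_h have already
# been learned (the thesis' actual dictionaries and parameters are not given here).
import numpy as np

def ista_sparse_code(y, D, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def super_resolve_patch(lr_patch, D_l, D_h, lam=0.1):
    """Sparse-code the low-res patch over D_l, then synthesise the
    high-res patch with the same coefficients over D_h."""
    a = ista_sparse_code(lr_patch.ravel(), D_l, lam)
    return D_h @ a                          # caller reshapes to the HR patch size

# Toy usage with random dictionaries (stand-ins for learned ones).
rng = np.random.default_rng(0)
D_l = rng.standard_normal((9, 64));  D_l /= np.linalg.norm(D_l, axis=0)
D_h = rng.standard_normal((36, 64)); D_h /= np.linalg.norm(D_h, axis=0)
hr = super_resolve_patch(rng.standard_normal(9), D_l, D_h)
```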

    Online Super-Resolution For Fibre-Bundle-Based Confocal Laser Endomicroscopy

    Probe-based Confocal Laser Endomicroscopy (pCLE) produces microscopic images enabling real-time in vivo optical biopsy. However, the miniaturisation of the optical hardware, specifically the reliance on an optical fibre bundle as an imaging guide, fundamentally limits image quality by producing artefacts, noise, and relatively low contrast and resolution. The reconstruction approaches in clinical pCLE products do not fully alleviate these problems. Consequently, image quality remains a barrier that curbs the full potential of pCLE. Enhancing the image quality of pCLE in real time remains a challenge, and the research in this thesis is a response to this need. I have developed dedicated online super-resolution methods that account for the physics of the image acquisition process. These methods have the potential to replace existing reconstruction algorithms without interfering with the fibre design or the hardware of the device. In this thesis, novel processing pipelines are proposed for enhancing the image quality of pCLE. First, I explored a learning-based super-resolution method that relies on a mapping from the low- to the high-resolution space. Due to the lack of high-resolution pCLE images, I proposed to simulate high-resolution data, based on the pCLE acquisition physics, and use it as ground truth. However, pCLE images are reconstructed from irregularly distributed fibre signals, and grid-based Convolutional Neural Networks are not designed to take irregular data as input. To alleviate this problem, I designed a new trainable layer that embeds Nadaraya-Watson regression. Finally, I proposed a novel blind super-resolution approach by deploying unsupervised zero-shot learning accompanied by a down-sampling kernel crafted for pCLE. I evaluated these new methods in two ways: a robust image quality assessment and a perceptual quality test assessed by clinical experts. The results demonstrate that the proposed super-resolution pipelines are superior to the current reconstruction algorithm in terms of image quality and clinician preference.
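
    As a rough illustration of the kind of layer the abstract describes, the sketch below implements Nadaraya-Watson kernel regression as a trainable module that interpolates irregularly placed fibre signals onto a pixel grid; the Gaussian kernel and learnable bandwidth are assumptions, not the exact layer used in the thesis.

```python
# Hedged sketch of a Nadaraya-Watson interpolation layer that maps irregular
# fibre-bundle signals onto a regular pixel grid. The learnable bandwidth and
# Gaussian kernel are illustrative choices, not the thesis' actual design.
import torch
import torch.nn as nn

class NadarayaWatsonLayer(nn.Module):
    def __init__(self, init_bandwidth=1.0):
        super().__init__()
        # Optimised jointly with the rest of the network during training.
        self.log_bw = nn.Parameter(torch.log(torch.tensor(init_bandwidth)))

    def forward(self, fibre_xy, fibre_val, grid_xy):
        # fibre_xy: (N, 2) fibre-core positions, fibre_val: (N,) signals,
        # grid_xy: (M, 2) target pixel positions.
        bw = torch.exp(self.log_bw)
        d2 = torch.cdist(grid_xy, fibre_xy) ** 2        # (M, N) squared distances
        w = torch.exp(-d2 / (2 * bw ** 2))              # Gaussian kernel weights
        return (w @ fibre_val) / (w.sum(dim=1) + 1e-8)  # kernel-weighted average

# Toy usage: 600 irregular fibre samples interpolated onto a 64x64 grid.
fibre_xy = torch.rand(600, 2) * 64
fibre_val = torch.rand(600)
ys, xs = torch.meshgrid(torch.arange(64.), torch.arange(64.), indexing="ij")
grid_xy = torch.stack([xs.flatten(), ys.flatten()], dim=1)
img = NadarayaWatsonLayer(2.0)(fibre_xy, fibre_val, grid_xy).reshape(64, 64)
```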

    New methods for deep dictionary learning and for image completion

    Digital imaging plays an essential role in many aspects of our daily life. However, due to the hardware limitations of imaging devices, image measurements are usually impaired and require further processing to enhance the quality of the raw images and enable applications on the user side. Image enhancement aims to improve the information content of image measurements by exploiting the properties of the target image and the forward model of the imaging device. In this thesis, we tackle two specific image enhancement problems: single image super-resolution and image completion. First, we present a new Deep Analysis Dictionary Model (DeepAM) for single image super-resolution, which consists of multiple layers of analysis dictionaries with associated soft-thresholding operators and a single synthesis dictionary layer. To achieve an effective deep model, each analysis dictionary is designed to be composed of an Information Preserving Analysis Dictionary (IPAD), which passes essential information from the input signal to the output, and a Clustering Analysis Dictionary (CAD), which generates a discriminative feature representation. The parameters of the deep analysis dictionary model are optimized using a layer-wise learning strategy. We demonstrate that both the proposed deep dictionary design and the learning algorithm are effective. Simulation results show that the proposed method achieves performance comparable to Deep Neural Networks and other existing methods. We then generalize DeepAM to a Deep Convolutional Analysis Dictionary Model (DeepCAM) by learning convolutional dictionaries instead of unstructured dictionaries. The convolutional dictionary is more suitable for processing high-dimensional signals like images and has only a small number of free parameters. By exploiting the properties of a convolutional dictionary, we present an efficient convolutional analysis dictionary learning algorithm. The IPAD and CAD parts are learned using variations of the proposed convolutional analysis dictionary learning algorithm. We demonstrate that DeepCAM is an effective multi-layer convolutional model and achieves better performance than DeepAM while using a smaller number of parameters. Finally, we present an image completion algorithm based on dense correspondence between the input image and an exemplar image retrieved from the Internet that was taken at a similar position. The dense correspondence, estimated using a hierarchical PatchMatch algorithm, is usually noisy and contains a large occlusion area corresponding to the region to be completed. By modelling the dense correspondence as a smooth field, an Expectation-Maximization (EM) based method is presented to interpolate a smooth field over the occlusion area, which is then used to transfer image content from the exemplar image to the input image. Color correction is further applied to diminish possible color differences between the input image and the exemplar image. Numerical results demonstrate that the proposed image completion algorithm achieves photo-realistic completion results.
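
    To make the DeepAM structure concrete, the following sketch shows a plausible forward pass: a cascade of analysis dictionaries with element-wise soft-thresholding followed by a linear synthesis dictionary. The dictionary sizes, thresholds, and the single-matrix stand-in for the IPAD/CAD split are illustrative assumptions, not the learned model.

```python
# Minimal sketch of the DeepAM forward pass described above: each layer applies
# an analysis dictionary followed by element-wise soft-thresholding, and a final
# synthesis dictionary maps features to the high-resolution output. In the thesis
# each analysis dictionary combines an IPAD part (information preserving) and a
# CAD part (discriminative features); here a single random matrix stands in for both.
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def deepam_forward(x, analysis_dicts, thresholds, synthesis_dict):
    """x: low-resolution feature vector; analysis_dicts: list of Omega_i matrices;
    thresholds: matching lambda_i; synthesis_dict: final D mapping to the HR patch."""
    z = x
    for Omega, lam in zip(analysis_dicts, thresholds):
        z = soft_threshold(Omega @ z, lam)   # analysis dictionary + non-linearity
    return synthesis_dict @ z                # linear synthesis layer

# Toy usage with two random analysis layers (placeholders for learned dictionaries).
rng = np.random.default_rng(0)
dicts = [rng.standard_normal((64, 16)), rng.standard_normal((128, 64))]
lams = [0.05, 0.05]
D = rng.standard_normal((36, 128))
hr_patch = deepam_forward(rng.standard_normal(16), dicts, lams, D)
```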

    Blind Image Denoising using Supervised and Unsupervised Learning

    Image denoising is an important problem in image processing and computer vision. In real-world applications, denoising is often a pre-processing step (a so-called low-level vision task) before image segmentation, object detection, and recognition at higher levels. Traditional image denoising algorithms often make idealistic assumptions about the noise (e.g., additive white Gaussian or Poisson). However, the noise in real-world images, such as high-ISO photos and microscopic fluorescence images, is more complex. Accordingly, the performance of those traditional approaches degrades rapidly on real-world data. Such blind image denoising has remained an open problem in the literature. In this project, we report two competing approaches to blind image denoising: supervised and unsupervised learning. We report the principles, performance, differences, merits, and technical potential of a few blind denoising algorithms. The supervised approach trains a regression model, such as a CNN, on a large number of pairs of corrupted and clean images; this feed-forward convolutional neural network separates noise from the image. The reasons for using a CNN are its deep architecture for exploiting image characteristics, the possibility of parallel computation on modern powerful GPUs, and advances in regularization and learning methods for training. The integration of residual learning and batch normalization is effective in speeding up training and improving denoising performance. Here we apply basic statistical reasoning to signal reconstruction to map corrupted observations to clean targets. Recently, a few deep learning algorithms have been investigated that do not require ground-truth training images. Noise2Noise is an unsupervised training method created for various applications, including denoising with Gaussian and Poisson noise. In the N2N model, we observe that we can often learn to turn bad images into good images just by looking at bad images. An experimental study is conducted on the practical properties of noisy-target training, which reaches performance levels close to those obtained using clean target data. Further, Noise2Void (N2V) is a self-supervised method that goes one step further: it requires neither clean images nor paired noisy images for training. It is trained directly on the single image to be denoised, which other methods cannot do. This is useful for datasets where neither a noisy pair nor a clean reference image is available for training, e.g., biomedical image data.
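
    The supervised branch described above (a feed-forward CNN with residual learning and batch normalization, in the spirit of DnCNN) can be sketched as follows; the depth, width, and training targets shown are assumptions rather than the report's exact configuration.

```python
# Hedged sketch of a residual-learning denoiser: the network predicts the noise
# map and subtracts it from the input. Depth, width, and training details are
# illustrative, not the report's exact settings.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),       # speeds up and stabilises training
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)                # residual learning: predict the noise

# Training-step sketch: with clean/noisy pairs (supervised) the target is the clean
# image; in a Noise2Noise setup the target would be a second noisy realisation.
model = ResidualDenoiser()
noisy = torch.rand(4, 1, 64, 64)
target = torch.rand(4, 1, 64, 64)                      # stand-in target
loss = nn.functional.mse_loss(model(noisy), target)
loss.backward()
```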

    Single image super resolution for spatial enhancement of hyperspectral remote sensing imagery

    Hyperspectral Imaging (HSI) has emerged as a powerful tool for capturing detailed spectral information across various applications, such as remote sensing, medical imaging, and material identification. However, the limited spatial resolution of acquired HSI data poses a challenge due to hardware and acquisition constraints. Enhancing the spatial resolution of HSI is crucial for improving image processing tasks, such as object detection and classification. This research focuses on utilizing Single Image Super Resolution (SISR) techniques to enhance HSI, addressing four key challenges: the efficiency of 3D Deep Convolutional Neural Networks (3D-DCNNs) in HSI enhancement, minimizing spectral distortions, tackling data scarcity, and improving state-of-the-art performance. The thesis establishes a solid theoretical foundation and conducts an in-depth literature review to identify trends, gaps, and future directions in the field of HSI enhancement. Four chapters present novel research targeting each of the aforementioned challenges. All experiments are performed using publicly available datasets, and the results are evaluated both qualitatively and quantitatively using various commonly used metrics. The findings of this research contribute to the development of a novel 3D-CNN architecture known as 3D Super Resolution CNN 333 (3D-SRCNN333). This architecture demonstrates the capability to enhance HSI with minimal spectral distortions while maintaining acceptable computational cost and training time. Furthermore, a Bayesian-optimized hybrid spectral spatial loss function is devised to improve the spatial quality and minimize spectral distortions, combining the best characteristics of both domains. Addressing the challenge of data scarcity, this thesis conducts a thorough study on Data Augmentation techniques and their impact on the spectral signature of HSI. A new Data Augmentation technique called CutMixBlur is proposed, and various combinations of Data Augmentation techniques are evaluated to address the data scarcity challenge, leading to notable enhancements in performance. Lastly, the 3D-SRCNN333 architecture is extended to the frequency domain and wavelet domain to explore their advantages over the spatial domain. The experiments reveal promising results with the 3D Complex Residual SRCNN (3D-CRSRCNN), surpassing the performance of 3D-SRCNN333. The findings presented in this thesis have been published in reputable conferences and journals, indicating their contribution to the field of HSI enhancement. Overall, this thesis provides valuable insights into the field of HSI-SISR, offering a thorough understanding of the advancements, challenges, and potential applications. The developed algorithms and methodologies contribute to the broader goal of improving the spatial resolution and spectral fidelity of HSI, paving the way for further advancements in scientific research and practical implementations.
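
    As a hedged illustration of the ingredients named above, the sketch below shows an SRCNN-style 3D convolutional network operating on a hyperspectral cube, plus a spectral-angle term of the kind a hybrid spectral-spatial loss might use; the layer sizes, kernel shapes, and loss weighting are assumptions, not the thesis' 3D-SRCNN333 or its Bayesian-optimized loss.

```python
# Hedged sketch of an SRCNN-style 3D-CNN for hyperspectral super-resolution plus
# a spectral-angle loss term. Layer sizes, kernel shapes, and the loss weighting
# are assumptions; this is not the thesis' 3D-SRCNN333 or its Bayesian-optimized
# hybrid spectral-spatial loss.
import torch
import torch.nn as nn

class HSISuperRes3D(nn.Module):
    def __init__(self, features=32):
        super().__init__()
        # Input: a bicubically upsampled cube shaped (N, 1, bands, H, W);
        # the network maps it to a spatially refined cube of the same shape.
        self.net = nn.Sequential(
            nn.Conv3d(1, features, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(features, 1, kernel_size=3, padding=1),
        )

    def forward(self, cube):
        return self.net(cube)

def spectral_angle_loss(pred, target, eps=1e-8):
    # A plausible spectral term for a hybrid spatial/spectral loss:
    # mean angle between predicted and reference spectra at each pixel.
    p = pred.flatten(3).squeeze(1)   # (N, bands, H*W)
    t = target.flatten(3).squeeze(1)
    cos = (p * t).sum(dim=1) / (p.norm(dim=1) * t.norm(dim=1) + eps)
    return torch.acos(cos.clamp(-1 + eps, 1 - eps)).mean()

# Toy usage: a 31-band, 64x64 patch; total loss = spatial MSE + weighted spectral angle.
model = HSISuperRes3D()
up, ref = torch.rand(2, 1, 1, 31, 64, 64).unbind(0)
sr = model(up)
loss = nn.functional.mse_loss(sr, ref) + 0.1 * spectral_angle_loss(sr, ref)
```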

    A review of spatial enhancement of hyperspectral remote sensing imaging techniques

    Remote sensing technology has undeniable importance in various industrial applications, such as mineral exploration, plant detection, defect detection in aerospace and shipbuilding, and optical gas imaging, to name a few. Remote sensing technology has been continuously evolving, offering a range of image modalities that can facilitate the aforementioned applications. One such modality is Hyperspectral Imaging (HSI). Unlike Multispectral Images (MSI) and natural images, HSI consist of hundreds of bands. Despite their high spectral resolution, HSI suffer from low spatial resolution in comparison to their MSI counterparts, which hinders the utilization of their full potential. Therefore, spatial enhancement, or Super Resolution (SR), of HSI is a classical problem that has been gaining rapid attention over the past two decades. The literature is rich with SR algorithms that enhance the spatial resolution of HSI while preserving their spectral fidelity. This paper reviews and discusses the most important algorithms relevant to this area of research between 2002 and 2022, along with the most frequently used datasets, HSI sensors, and quality metrics. A meta-analysis is drawn from the aforementioned information and used as a foundation that summarizes the state of the field in a way that bridges the past and the present, identifies the current gaps, and recommends possible future directions.