Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations
Our study aims to review and analyze the most relevant studies in the image dehazing field. Several aspects are covered to provide a broad understanding of the surveyed literature: the datasets used, the challenges researchers have faced, their motivations, and recommendations for diminishing the reported obstacles. A systematic protocol is employed to search all relevant articles on image dehazing, with variations in keywords, in addition to searching for evaluation and benchmark studies. The search is conducted on three online databases, namely IEEE Xplore, Web of Science (WOS), and ScienceDirect (SD), covering 2008 to 2021; these indices are selected because they are sufficient in terms of coverage. After applying the inclusion and exclusion criteria, we include 152 articles in the final set. A total of 55 of the 152 articles focused on various studies that conducted image dehazing, and 13 of the 152 were review papers based on scenarios and general overviews. Finally, most of the included articles (84/152) centered on the development of image dehazing algorithms for real-time scenarios. Image dehazing removes unwanted visual effects and is often considered an image enhancement technique; it requires a fully automated algorithm that works in real-time outdoor applications, a reliable evaluation method, and datasets covering different weather conditions. Many relevant studies have been conducted to meet these critical requirements. We also conducted an experimental comparison of various image dehazing algorithms using objective image quality assessment. In conclusion, unlike other review papers, our study distinctly reflects different observations on image dehazing areas.
We believe that the results of this study can serve as a useful guideline for practitioners looking for a comprehensive view of image dehazing.
Adaptive Deep Learning Detection Model for Multi-Foggy Images
Fog has different features and effects in every environment. Detecting whether there is fog in an image is a challenge, and identifying the type of fog has a substantial effect on image defogging. Foggy scenes come in different types, such as scenes based on fog density level and scenes based on fog type. Machine learning techniques have contributed significantly to the detection of foggy scenes. However, most existing detection models are based on traditional machine learning, and only a few studies have adopted deep learning models. Furthermore, most existing machine learning detection models target fog density-level scenes; to the best of our knowledge, no detection model based on multi-fog-type scenes has been presented yet. Therefore, the main goal of our study is to propose an adaptive deep learning model for the detection of multi-fog types of images. Moreover, due to the lack of a publicly available dataset for inhomogeneous, homogeneous, dark, and sky foggy scenes, a dataset for multi-fog scenes is presented in this study (https://github.com/Karrar-H-Abdulkareem/Multi-Fog-Dataset). Experiments were conducted in three stages. First, in the data collection phase, eight resources were used to obtain the multi-fog scene dataset. Second, a classification experiment was conducted based on the ResNet-50 deep learning model to obtain detection results. Third, in the evaluation phase, the performance of the ResNet-50 detection model was compared against three different models. Experimental results show that the proposed model presents stable classification performance for different foggy images, with a 96% score for each of Classification Accuracy Rate (CAR), Recall, Precision, and F1-Score, which has specific theoretical and practical significance. Our proposed model is suitable as a pre-processing step and might be considered in different real-time applications.
Fast single image defogging with robust sky detection
Haze, usually caused by atmospheric conditions, is a source of unreliability for computer vision applications in outdoor scenarios. The Dark Channel Prior (DCP) has shown remarkable results in image defogging, but has three main limitations: 1) high time consumption, 2) artifact generation, and 3) sky-region over-saturation. Therefore, current work has focused on improving processing time without losing restoration quality and on avoiding image artifacts during defogging. Hence, in this research, a novel methodology based on depth approximations through DCP, local Shannon entropy, and the Fast Guided Filter is proposed for reducing artifacts and improving image recovery in sky regions with low computation time. The proposed method's performance is assessed using more than 500 images from three datasets: the Hybrid Subjective Testing Set from Realistic Single Image Dehazing (HSTS-RESIDE), the Synthetic Objective Testing Set from RESIDE (SOTS-RESIDE), and HazeRD. Experimental results demonstrate that the proposed approach outperforms state-of-the-art methods in the reviewed literature, validated qualitatively and quantitatively through Peak Signal-to-Noise Ratio (PSNR), Naturalness Image Quality Evaluator (NIQE), and Structural SIMilarity (SSIM) index on retrieved images, considering different visual ranges under distinct illumination and contrast conditions. Analyzing images with various resolutions, the proposed method shows the lowest processing time under similar software and hardware conditions. This work was supported in part by the Centro de Investigaciones en Óptica (CIO) and the Consejo Nacional de Ciencia y Tecnología (CONACYT), and in part by the Barcelona Supercomputing Center.
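For readers unfamiliar with the prior this method builds on, a minimal sketch of the dark channel computation (illustrative only, not the authors' implementation): for each pixel, take the minimum intensity over the color channels, then take the minimum over a local patch. In haze-free outdoor images this value tends toward zero, so larger values indicate denser haze.

```python
def dark_channel(img, patch=3):
    """Dark channel prior sketch.

    img: H x W x 3 nested lists with intensities in [0, 1].
    Returns an H x W map: per-pixel min over RGB, followed by a
    local minimum filter over a patch x patch window (clipped at borders).
    """
    h, w = len(img), len(img[0])
    # Per-pixel minimum over the three color channels.
    min_rgb = [[min(img[y][x]) for x in range(w)] for y in range(h)]
    r = patch // 2
    dark = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - r), min(h, y + r + 1))
            xs = range(max(0, x - r), min(w, x + r + 1))
            # Local minimum filter over the clipped window.
            dark[y][x] = min(min_rgb[yy][xx] for yy in ys for xx in xs)
    return dark
```

A real implementation would refine the result with an edge-preserving filter (e.g. the Fast Guided Filter mentioned above) rather than use the raw patch minimum.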
Visibility recovery on images acquired in attenuating media. Application to underwater, fog, and mammographic imaging
When acquired in attenuating media, digital images often suffer from a particularly complex degradation that reduces their visual quality, hindering their suitability for further computational applications, or simply decreasing the visual pleasantness for the user. In these cases, mathematical image processing reveals itself as an ideal tool to recover some of the information lost during the degradation process. In this dissertation, we deal with three such practical scenarios in which this problem is especially relevant, namely, underwater image enhancement, fog removal, and mammographic image processing. In the case of digital mammograms, X-ray beams traverse human tissue, and electronic detectors capture them as they reach the other side. However, the superposition on a bidimensional image of three-dimensional structures produces low-contrast images in which structures of interest suffer from diminished visibility, obstructing diagnosis tasks. Regarding fog removal, the loss of contrast is produced by the atmospheric conditions, and white colour takes over the scene uniformly as distance increases, also reducing visibility. For underwater images, there is an added difficulty, since colour is not lost uniformly; instead, red colours decay the fastest, and green and blue colours typically dominate the acquired images. To address all these challenges, in this dissertation we develop new methodologies that rely on: a) physical models of the observed degradation, and b) the calculus of variations. Equipped with this powerful machinery, we design novel theoretical and computational tools, including image-dependent functional energies that capture the particularities of each degradation model. These energies are composed of different integral terms that are simultaneously minimized by means of efficient numerical schemes, producing a clean, visually-pleasant and useful output image, with better contrast and increased visibility. In every considered application, we provide comprehensive qualitative (visual) and quantitative experimental results to validate our methods, confirming that the developed techniques outperform other existing approaches in the literature.
Fast Video Dehazing Using Per-Pixel Minimum Adjustment
To reduce computational complexity while maintaining the dehazing effect, a fast and accurate video dehazing method is presented. A preliminary transmission map is estimated from the minimum channel of each pixel, and an adjustment parameter is designed to correct the transmission map and reduce color distortion in sky areas. We propose a new quad-tree method to estimate the atmospheric light. In the video dehazing stage, we keep the atmospheric light unchanged within the same scene by a simple but efficient parameter that describes the similarity of inter-frame image content; this effectively eliminates unexpected flickers. Experimental results show that the proposed algorithm greatly improves the efficiency of video dehazing and avoids halos and block effects.
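Transmission-from-minimum-channel methods like this one typically rest on the atmospheric scattering model I = J·t + A·(1 − t), where I is the observed pixel, J the scene radiance, t the transmission, and A the atmospheric light. A rough sketch of the two core steps, with illustrative parameter names (omega and the floor t0 are conventional choices, not values from this paper):

```python
def estimate_transmission(min_channel, A, omega=0.95):
    """Estimate transmission from the per-pixel minimum channel under
    the scattering model I = J*t + A*(1 - t).

    min_channel: H x W map of per-pixel minima over RGB.
    A: scalar atmospheric light estimate.
    omega keeps a small amount of haze for natural-looking results.
    """
    return [[1.0 - omega * m / A for m in row] for row in min_channel]


def recover(I, t, A, t0=0.1):
    """Invert the scattering model per channel: J = (I - A) / max(t, t0) + A.

    The floor t0 avoids dividing by near-zero transmission in dense haze.
    I: H x W x 3 nested lists; t: H x W transmission map; A: scalar.
    """
    return [[[(c - A) / max(t[y][x], t0) + A for c in I[y][x]]
             for x in range(len(I[0]))]
            for y in range(len(I))]
```

Keeping A fixed across frames of one scene, as the abstract describes, prevents frame-to-frame flicker in the recovered video.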
Impact of dehazing on underwater marker detection for augmented reality
Underwater augmented reality is a very challenging task, and among its several issues, one of the most crucial aspects is real-time tracking. Particles present in water, combined with the uneven absorption of light, decrease visibility in the underwater environment. Dehazing methods are used in many areas to improve the quality of digital image data degraded by environmental influence. This paper describes the visibility conditions affecting underwater scenes and reviews existing dehazing techniques that successfully improve the quality of underwater images. Four underwater dehazing methods are selected for evaluation of their capability to improve the success of square marker detection in underwater videos. Two of the reviewed methods represent image restoration approaches: Multi-Scale Fusion and Bright Channel Prior. The other two evaluated methods, Automatic Color Enhancement and the Screened Poisson Equation, are image enhancement methods. The evaluation uses a diverse test data set covering different environmental conditions. The results show an increased number of successful marker detections in videos pre-processed by dehazing algorithms and quantify the performance of each compared method. The Screened Poisson method performs slightly better than the other methods across the various tested environments, while Bright Channel Prior and Automatic Color Enhancement show similarly positive results.
Uni-Removal: A Semi-Supervised Framework for Simultaneously Addressing Multiple Degradations in Real-World Images
Removing multiple degradations, such as haze, rain, and blur, from real-world images poses a challenging and ill-posed problem. Recently, unified models that can handle different degradations have been proposed and yield promising results. However, these approaches focus on synthetic images and experience a significant performance drop when applied to real-world images. In this paper, we introduce Uni-Removal, a two-stage semi-supervised framework for addressing the removal of multiple degradations in real-world images using a unified model and parameters. In the knowledge transfer stage, Uni-Removal leverages a supervised multi-teacher and student architecture to facilitate learning from pretrained teacher networks specialized in different degradation types, and a multi-grained contrastive loss is introduced to enhance learning from feature and image spaces. In the domain adaptation stage, unsupervised fine-tuning is performed by incorporating an adversarial discriminator on real-world images. The integration of an extended multi-grained contrastive loss and a generative adversarial loss enables the adaptation of the student network from synthetic to real-world domains. Extensive experiments on real-world degraded datasets demonstrate the effectiveness of our proposed method. We compare our Uni-Removal framework with state-of-the-art supervised and unsupervised methods, showcasing its promising results in real-world image dehazing, deraining, and deblurring simultaneously.