11 research outputs found

    A Study of Atmospheric Particles Removal in a Low Visibility Outdoor Single Image

    Get PDF
    According to the International Commission on Illumination, the maximum range of unaided human visibility is 1000 m. Cameras used outdoors for navigation, monitoring, remote sensing and robotics may therefore produce images degraded by haze, fog, smoke, steam and water droplets. Fog consists of fine water droplets suspended and moving randomly in the air, and typically occurs in the early morning. This degradation leaves the observed image with low contrast and blurred detail, making targets difficult to identify. Analyzing the degraded image makes it possible to restore content damaged by atmospheric particles or water droplets at the time of observation. In general, an image affected by atmospheric particles contains a homogeneous texture, such as the overall brightness, and a heterogeneous texture corresponding to the objects present in the scene. Pre-processing based on the dark channel prior, a statistical measure built from contrast and prior knowledge, still produces good image quality but is less effective against the halo (ring of light) artifact and strong lighting. This study aims to propel the development of the machine vision industry toward navigation and monitoring for ground, air and sea transportation.
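    The dark channel prior mentioned above is the standard statistical pre-processing step for this kind of haze removal. Below is a minimal sketch, assuming the usual formulation; the patch size, the 0.1% brightest-pixel rule for atmospheric light, and the omega parameter are illustrative choices, not values taken from this abstract.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(image, patch=15):
        """Per-pixel minimum over color channels, then over a local patch."""
        min_rgb = image.min(axis=2)                  # min over R, G, B
        return minimum_filter(min_rgb, size=patch)   # min over the local window

    def estimate_atmospheric_light(image, dark, top_fraction=0.001):
        """Average the pixels with the brightest dark-channel values."""
        flat = dark.ravel()
        n = max(1, int(flat.size * top_fraction))
        idx = np.argpartition(flat, -n)[-n:]
        return image.reshape(-1, 3)[idx].mean(axis=0)

    def estimate_transmission(image, A, omega=0.95, patch=15):
        """t(x) = 1 - omega * dark_channel(I / A); omega keeps a trace of haze."""
        normalized = image / np.maximum(A, 1e-6)
        return 1.0 - omega * dark_channel(normalized, patch)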

    Using User Generated Online Photos to Estimate and Monitor Air Pollution in Major Cities

    Full text link
    With the rapid development of China's economy over the past decade, air pollution has become an increasingly serious problem in major cities and has caused grave public health concerns in China. Recently, a number of studies have dealt with air quality and air pollution. Among them, some attempt to predict and monitor air quality from different sources of information, ranging from deployed physical sensors to social media. These methods are either too expensive or unreliable, prompting us to search for a novel and effective way to sense air quality. In this study, we propose to employ state-of-the-art computer vision techniques to analyze photos that can be easily acquired from online social media. Next, we establish the correlation between the haze level computed directly from the photos and the official PM 2.5 record of the city where, and the time when, each photo was taken. Our experiments based on both synthetic and real photos have shown the promise of this image-based approach to estimating and monitoring air pollution. Comment: ICIMCS '1
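    The pipeline described here reduces to two steps: compute a scalar haze level per photo and correlate it with the matching PM 2.5 readings. A hedged sketch, assuming a dark-channel-mean haze score (the paper's actual haze estimator is not given in the abstract):

    import numpy as np
    from scipy.ndimage import minimum_filter
    from scipy.stats import pearsonr

    def haze_level(image, patch=15):
        """Crude haze score in [0, 1]: mean of the dark channel."""
        dark = minimum_filter(image.min(axis=2), size=patch)
        return float(dark.mean())

    def correlate_with_pm25(photos, pm25_records):
        """Pearson correlation between per-photo haze scores and PM 2.5 values."""
        scores = np.array([haze_level(p) for p in photos])
        r, p_value = pearsonr(scores, np.asarray(pm25_records))
        return r, p_value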

    Contrast restoration of road images taken in foggy weather

    Full text link

    Semantic Understanding of Foggy Scenes with Purely Synthetic Data

    Full text link
    This work addresses the problem of semantic scene understanding under foggy road conditions. Although marked progress has been made in semantic scene understanding over recent years, it has mainly concentrated on clear-weather outdoor scenes. Extending semantic segmentation methods to adverse weather conditions like fog is crucially important for outdoor applications such as self-driving cars. In this paper, we propose a novel method which uses purely synthetic data to improve performance on unseen real-world foggy scenes captured in the streets of Zurich and its surroundings. Our results highlight the potential and power of photo-realistic synthetic images for training and especially fine-tuning deep neural nets. Our contributions are threefold: 1) we create a purely synthetic, high-quality foggy dataset of 25,000 unique outdoor scenes, which we call Foggy Synscapes and plan to release publicly; 2) we show that with this data we outperform previous approaches on real-world foggy test data; and 3) we show that a combination of our data and previously used data can further improve performance on real-world foggy data. Comment: independent class IoU scores corrected for BiSiNet architectur
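    Synthetic fog of this kind is typically rendered onto a clear image with per-pixel depth using the standard optical (Koschmieder) model. A minimal sketch, assuming that model with illustrative values for the attenuation coefficient beta and the atmospheric light; the abstract does not specify how Foggy Synscapes renders its fog.

    import numpy as np

    def add_synthetic_fog(clear_rgb, depth_m, beta=0.02, atmospheric_light=0.9):
        """I = J * t + L * (1 - t), with transmission t = exp(-beta * depth)."""
        t = np.exp(-beta * depth_m)[..., None]       # per-pixel transmission
        return clear_rgb * t + atmospheric_light * (1.0 - t)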

    Markov Random Field model for single image defogging

    Full text link
    Fog reduces contrast and thus the visibility of vehicles and obstacles for drivers. Each year, this causes traffic accidents. Fog is caused by a high concentration of very fine water droplets in the air. When light hits these droplets, it is scattered, which results in a dense white background called the atmospheric veil. As pointed out in [1], Advanced Driver Assistance Systems (ADAS) based on the display of defogged images from a camera may help the driver by improving object visibility in the image and thus may lead to a decrease in fatality and injury rates. In the last few years, the problem of single image defogging has attracted attention in the image processing community. As it is an ill-posed problem, several methods have been proposed. However, few of these methods are dedicated to the processing of road images. One of the first exceptions is the method in [2], [1], where a planar constraint is introduced to improve the restoration of the road area, assuming an approximately flat road. Since the single image defogging problem is ill-posed, a Bayesian approach is well suited to casting it as an inference problem. A first Markov Random Field (MRF) approach to the problem has been proposed recently in [3]. However, this method is not dedicated to road images. In this paper, we propose a novel MRF model of the single image defogging problem which applies to all kinds of images but can also easily be refined to obtain better results on road images using the planar constraint. A comparative study and quantitative evaluation with several state-of-the-art algorithms is presented. This evaluation demonstrates that the proposed MRF model allows us to derive a new algorithm which produces better quality results, in particular on noisy input images.
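    An MRF formulation of this kind typically combines a data term on the atmospheric veil V with a pairwise smoothness term. The abstract does not give the paper's potentials, so the energy below is only an illustrative sketch: the min-channel upper bound on the veil, the reward for a large veil, and the weight lam are our assumptions.

    import numpy as np

    def veil_energy(V, image, lam=0.1):
        """E(V) = data term + lam * smoothness term, to be minimized over V."""
        bound = image.min(axis=2)                    # the veil cannot exceed the min color channel
        data = np.sum(np.maximum(V - bound, 0.0) ** 2) - np.sum(V)  # feasible yet as large as possible
        smooth = np.sum(np.abs(np.diff(V, axis=0))) + np.sum(np.abs(np.diff(V, axis=1)))
        return data + lam * smooth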

    Adaptive Deep Learning Detection Model for Multi-Foggy Images

    Get PDF
    Fog has different features and effects in every environment. Detecting whether an image contains fog is challenging, and identifying the type of fog has a substantial effect on image defogging. Foggy scenes can be categorized either by fog density level or by fog type. Machine learning techniques have made a significant contribution to the detection of foggy scenes. However, most existing detection models are based on traditional machine learning, and only a few studies have adopted deep learning models. Furthermore, most existing machine learning detection models address fog density-level scenes; to the best of our knowledge, no detection model based on multi-fog-type scenes has been presented yet. Therefore, the main goal of our study is to propose an adaptive deep learning model for the detection of multi-fog types in images. Moreover, due to the lack of a publicly available dataset for inhomogeneous, homogeneous, dark, and sky foggy scenes, a dataset for multi-fog scenes is presented in this study (https://github.com/Karrar-H-Abdulkareem/Multi-Fog-Dataset). Experiments were conducted in three stages. First, a data collection phase drew on eight resources to obtain the multi-fog scene dataset. Second, a classification experiment was conducted with the ResNet-50 deep learning model to obtain detection results. Third, an evaluation phase compared the performance of the ResNet-50 detection model against three different models. Experimental results show that the proposed model delivers stable classification performance for different foggy images, with a 96% score for each of Classification Accuracy Rate (CAR), Recall, Precision, and F1-Score, which has both theoretical and practical significance. Our proposed model is suitable as a pre-processing step and might be considered in different real-time applications.
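    A minimal sketch of the classification backbone described above: a ResNet-50 with its final layer replaced for the four fog-type classes (inhomogeneous, homogeneous, dark, sky). The ImageNet initialization, optimizer and learning rate are assumptions for illustration; only the backbone and the number of classes come from the abstract.

    import torch
    import torch.nn as nn
    from torchvision import models

    def build_fog_type_model(num_classes=4):
        """ResNet-50 backbone with a new classification head for fog types."""
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        model.fc = nn.Linear(model.fc.in_features, num_classes)
        return model

    model = build_fog_type_model()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()   # standard loss for multi-class fog-type labels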

    Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations

    Get PDF
    Our study aims to review and analyze the most relevant studies in the image dehazing field. Several aspects have been deemed necessary to provide a broad understanding of the studies examined in the existing literature: the datasets that have been used, the challenges researchers have faced, their motivations, and recommendations for diminishing the obstacles reported in the literature. A systematic protocol is employed to search all relevant articles on image dehazing, with variations in keywords, in addition to searching for evaluation and benchmark studies. The search covers three online databases, namely IEEE Xplore, Web of Science (WOS), and ScienceDirect (SD), from 2008 to 2021. These indices are selected because their coverage is sufficient. After defining the inclusion and exclusion criteria, we include 152 articles in the final set. A total of 55 of the 152 articles focus on studies that conducted image dehazing, and 13 of the 152 cover review papers based on scenarios and general overviews. Finally, most of the included articles (84/152) center on the development of image dehazing algorithms for real-time scenarios. Image dehazing removes unwanted visual effects and is often considered an image enhancement technique, which requires a fully automated algorithm that works in real-time outdoor applications, a reliable evaluation method, and datasets based on different weather conditions. Many relevant studies have been conducted to meet these critical requirements. We also conducted an experimental comparison of various image dehazing algorithms based on objective image quality assessment. In conclusion, unlike other review papers, our study distinctly reflects different observations on image dehazing areas. We believe that the results of this study can serve as a useful guideline for practitioners looking for a comprehensive view of image dehazing.
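    Objective image quality assessment of dehazing results is usually full-reference, scoring each dehazed output against its haze-free ground truth. A hedged sketch, assuming PSNR and SSIM as the metrics; the abstract does not name the metrics the review actually used.

    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_dehazing(ground_truth, dehazed):
        """Full-reference quality scores for one dehazed image (pixel values in [0, 1])."""
        psnr = peak_signal_noise_ratio(ground_truth, dehazed, data_range=1.0)
        ssim = structural_similarity(ground_truth, dehazed, channel_axis=-1, data_range=1.0)
        return psnr, ssim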

    Improved Visibility of Road Scene Images under Heterogeneous Fog

    No full text
    One source of accidents when driving a vehicle is the presence of homogeneous and heterogeneous fog. Fog fades colors and reduces the contrast of observed objects according to their distance. Various camera-based Advanced Driver Assistance Systems (ADAS) can be improved if efficient algorithms are designed for visibility enhancement of road images. The visibility enhancement algorithm proposed in [1] is not dedicated to road images and thus yields results of limited quality on images of this kind. In this paper, we interpret the algorithm in [1] as the inference of the local atmospheric veil subject to two constraints. From this interpretation, we propose an extended algorithm which better handles road images by taking into account that a large part of the image can be assumed to be a planar road. The advantages of the proposed local algorithm are its speed, its ability to handle both color and gray-level images, and its small number of parameters. A comparative study and quantitative evaluation against other state-of-the-art algorithms is presented on synthetic images with several types of generated fog. This evaluation demonstrates that the new algorithm produces results of similar quality under homogeneous fog and deals better with heterogeneous fog.
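    Once the local atmospheric veil V(x) has been inferred, the foggy image is restored by inverting the fog model. The sketch below follows the common veil-based restoration formula and is an assumption on our part; the abstract does not spell out the exact restoration step or the sky intensity normalization.

    import numpy as np

    def restore_from_veil(image, veil, sky_intensity=1.0):
        """R(x) = (I(x) - V(x)) / (1 - V(x) / Is), clipped to the valid range."""
        t = np.clip(1.0 - veil / sky_intensity, 1e-3, 1.0)[..., None]
        return np.clip((image - veil[..., None]) / t, 0.0, 1.0)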