
    Adaptive Deep Learning Detection Model for Multi-Foggy Images

    Fog has different features and effects in every environment. Detecting whether there is fog in an image is a challenge, and identifying the type of fog has a substantial effect on image defogging. Foggy scenes come in different types, such as scenes classified by fog density level and scenes classified by fog type. Machine learning techniques have contributed significantly to the detection of foggy scenes. However, most existing detection models are based on traditional machine learning, and only a few studies have adopted deep learning models. Furthermore, most existing machine learning detection models address fog density-level scenes; to the best of our knowledge, no detection model based on multi-fog-type scenes has been presented yet. Therefore, the main goal of our study is to propose an adaptive deep learning model for the detection of multiple fog types in images. Moreover, due to the lack of a publicly available dataset for inhomogeneous, homogeneous, dark, and sky foggy scenes, a dataset for multi-fog scenes is presented in this study (https://github.com/Karrar-H-Abdulkareem/Multi-Fog-Dataset). Experiments were conducted in three stages. First, in the data collection phase, eight resources were used to obtain the multi-fog scene dataset. Second, a classification experiment was conducted based on the ResNet-50 deep learning model to obtain detection results. Third, in the evaluation phase, the performance of the ResNet-50 detection model was compared against three different models. Experimental results show that the proposed model delivers stable classification performance across different foggy images, with a 96% score for each of Classification Accuracy Rate (CAR), Recall, Precision, and F1-Score, which has both theoretical and practical significance. Our proposed model is suitable as a pre-processing step and might be considered in different real-time applications.
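    The abstract reports 96% for each of CAR, Recall, Precision, and F1-Score. A minimal sketch of how these four metrics might be computed for a multi-class fog-type classifier (macro averaging is assumed here; the abstract does not state the averaging scheme, and the function names are illustrative):

```python
def classification_metrics(y_true, y_pred, labels):
    """Overall accuracy (CAR) plus macro-averaged precision, recall, F1."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    car = correct / len(y_true)  # Classification Accuracy Rate

    precisions, recalls = [], []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)

    precision = sum(precisions) / len(labels)
    recall = sum(recalls) / len(labels)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return car, precision, recall, f1

# The four fog-scene classes named in the abstract.
FOG_TYPES = ["inhomogeneous", "homogeneous", "dark", "sky"]
```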

    Dark Model Adaptation: Semantic Image Segmentation from Daytime to Nighttime

    This work addresses the problem of semantic image segmentation of nighttime scenes. Although considerable progress has been made in semantic image segmentation, it mainly concerns daytime scenarios. This paper proposes a novel method to progressively adapt semantic models trained on daytime scenes, along with the large-scale annotations therein, to nighttime scenes via the bridge of twilight time -- the time between dawn and sunrise, or between sunset and dusk. The goal of the method is to alleviate the cost of human annotation for nighttime images by transferring knowledge from standard daytime conditions. In addition to the method, a new dataset of road scenes is compiled; it consists of 35,000 images ranging from daytime to twilight time and to nighttime. Also, a subset of the nighttime images is densely annotated for method evaluation. Our experiments show that our method is effective for model adaptation from daytime scenes to nighttime scenes, without using extra human annotation. Comment: Accepted to the International Conference on Intelligent Transportation Systems (ITSC 2018).
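    The progressive adaptation described above can be sketched as a self-training loop over increasingly dark domains. The toy below is purely illustrative: a nearest-centroid classifier on 1-D brightness features stands in for the paper's segmentation model, and the data values are invented for demonstration.

```python
def train_centroids(samples):
    """Fit one centroid per class from (feature, label) pairs."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda y: abs(centroids[y] - x))

def adapt_progressively(labeled_day, unlabeled_domains):
    """Self-training over the twilight bridge: pseudo-label each
    progressively darker domain with the current model, then refit."""
    data = list(labeled_day)
    centroids = train_centroids(data)
    for domain in unlabeled_domains:  # e.g. twilight first, then night
        pseudo = [(x, predict(centroids, x)) for x in domain]
        data.extend(pseudo)
        centroids = train_centroids(data)
    return centroids
```

The key design point mirrored here is that the model never sees nighttime labels: the twilight domain is close enough to daytime that pseudo-labels are reliable, and the refit model is in turn close enough to label nighttime.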

    Guided Curriculum Model Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation

    Most progress in semantic segmentation is reported on daytime images taken under favorable illumination conditions. We instead address the problem of semantic segmentation of nighttime images and improve the state of the art by adapting daytime models to nighttime without using nighttime annotations. Moreover, we design a new evaluation framework to address the substantial uncertainty of semantics in nighttime images. Our central contributions are: 1) a curriculum framework to gradually adapt semantic segmentation models from day to night via labeled synthetic images and unlabeled real images, both for progressively darker times of day, which exploits cross-time-of-day correspondences for the real images to guide the inference of their labels; 2) a novel uncertainty-aware annotation and evaluation framework and metric for semantic segmentation, designed for adverse conditions and including image regions beyond human recognition capability in the evaluation in a principled fashion; 3) the Dark Zurich dataset, which comprises 2416 unlabeled nighttime and 2920 unlabeled twilight images with correspondences to their daytime counterparts, plus a set of 151 nighttime images with fine pixel-level annotations created with our protocol, which serves as a first benchmark for our novel evaluation. Experiments show that our guided curriculum adaptation significantly outperforms state-of-the-art methods on real nighttime sets, both for standard metrics and for our uncertainty-aware metric. Furthermore, our uncertainty-aware evaluation reveals that selective invalidation of predictions can lead to better results on data with ambiguous content, such as our nighttime benchmark, and benefit safety-oriented applications that involve invalid inputs. Comment: ICCV 2019 camera-ready.
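    The uncertainty-aware evaluation idea above can be illustrated with a simplified per-class IoU that excludes image regions annotated as beyond human recognition capability. This is a sketch of the general idea only, not the paper's exact UIoU metric:

```python
def masked_iou(pred, gt, valid):
    """Per-class IoU computed only over pixels marked valid.

    pred, gt: flat lists of class ids; valid: flat list of bools marking
    regions within human recognition capability. A simplified sketch of
    uncertainty-aware evaluation, not the paper's actual metric.
    """
    classes = sorted({g for g, v in zip(gt, valid) if v})
    ious = {}
    for c in classes:
        inter = union = 0
        for p, g, v in zip(pred, gt, valid):
            if not v:
                continue  # invalid regions are excluded from scoring
            hit_p, hit_g = p == c, g == c
            inter += hit_p and hit_g
            union += hit_p or hit_g
        ious[c] = inter / union if union else 0.0
    return ious
```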

    A Comprehensive Review on Fog Removal Techniques in Single Images

    Haze is formed by two major phenomena: atmospheric attenuation and airlight. This paper presents a review of the diverse techniques for removing fog from images captured in hazy environments, in order to recover better, enhanced-quality haze-free images. Images of outdoor scenes often contain degradation due to haze, resulting in contrast reduction and color fading. Haze removal, also called visibility restoration, refers to the various methods that aim to reduce or remove the degradation that occurred while the digital image was being acquired. This paper reviews the various fog removal algorithms; these haze removal techniques recover the color and contrast of the scene. DOI: 10.17762/ijritcc2321-8169.15052

    Background lighting clutters: How do they affect visual saliency of urban objects?

    The current study aims to create general guidance for designers to better understand the impact of background lighting in their designs and, as a result, minimize its effect on the visual saliency of urban objects. There are few studies about how lighting clutters can affect and decrease the visual saliency of illuminated urban objects at night. The lack of information in this area has led to increasing luminance being recognized as one of the main tools to enhance the saliency of urban objects at night. To address this matter, a study was performed to investigate the effect of the proximity of lighting clutters on the visual saliency of urban objects. A forced-choice pair comparison method was employed, in which two test images of an urban object under different conditions of luminance contrast and proximity of light patterns were compared. Test participants reported in which image the target appeared more salient. Results show a progressive increase in saliency value with an increasing gap between the target and the background lighting when the luminance contrast of the target is three or higher. However, the critical area around the object with the highest effect lies between 0.5° and 1° visual angle; removing light patterns beyond that point has a negligible effect. The findings of this study could inform the development of future models of visual recognition in the road environment, models which can address the important effects of environmental context in addition to the photometric variables (luminance and contrast) that are the only factors considered in traditional models of 'Visibility Level'.
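    The two quantities the study works with, luminance contrast and visual angle, are standard photometric constructs. A small sketch of how the contrast-three threshold and the 0.5°-1° critical band might be checked for a given geometry (Weber contrast is assumed here, since the abstract does not state the contrast definition):

```python
import math

def weber_contrast(l_target, l_background):
    """Weber luminance contrast C = (Lt - Lb) / Lb, luminances in cd/m^2."""
    return (l_target - l_background) / l_background

def visual_angle_deg(size_m, distance_m):
    """Visual angle (degrees) subtended by an extent at a viewing distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

def in_critical_zone(gap_m, distance_m, low=0.5, high=1.0):
    """Does the target-to-clutter gap fall inside the 0.5-1 degree band
    the study identifies as having the strongest effect on saliency?"""
    return low <= visual_angle_deg(gap_m, distance_m) <= high
```

For example, a 1 m gap viewed from about 57.3 m subtends roughly 1° of visual angle, the outer edge of the critical band reported above.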

    Image Enhancement in Foggy Images using Dark Channel Prior and Guided Filter

    Haze is very apparent in images shot during periods of bad weather (fog). The image's clarity and readability are both diminished as a result. In this work, we propose a method for improving the quality of a hazy image and for identifying objects hidden inside it. To address this, we use the image enhancement techniques of Dark Channel Prior and Guided Filter. A saliency map is then used to segment the enhanced image and identify passing vehicles. Lastly, we describe our method for calculating the actual distance from a camera-equipped vehicle to an object (another vehicle). Our proposed solution can warn the driver based on this distance to help prevent an accident. Our suggested technique improves images and accurately detects vehicles nearly 100% of the time.
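    The Dark Channel Prior named above rests on the observation that, in haze-free outdoor images, most local patches contain some pixel with a very low value in at least one color channel. A minimal sketch of computing the dark channel (the patch size and plain-list image representation are illustrative; this is not the paper's full dehazing pipeline, which also estimates atmospheric light and refines the transmission with a guided filter):

```python
def dark_channel(image, patch=3):
    """Dark channel prior: per-pixel minimum over the RGB channels,
    followed by a local minimum filter over a patch x patch window.
    `image` is a 2-D grid of (r, g, b) tuples in [0, 1]; haze-free
    regions yield dark-channel values near 0, hazy regions higher ones."""
    h, w = len(image), len(image[0])
    min_rgb = [[min(px) for px in row] for row in image]
    r = patch // 2
    dark = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [min_rgb[y][x]
                      for y in range(max(0, i - r), min(h, i + r + 1))
                      for x in range(max(0, j - r), min(w, j + r + 1))]
            dark[i][j] = min(window)
    return dark
```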

    RGBT Salient Object Detection: A Large-scale Dataset and Benchmark

    Salient object detection in complex scenes and environments is a challenging research topic. Most works focus on RGB-based salient object detection, which limits performance in real-life applications when confronted with adverse conditions such as dark environments and complex backgrounds. Taking advantage of RGB and thermal infrared images has recently become a new research direction for detecting salient objects in complex scenes, as thermal infrared spectrum imaging provides complementary information and has been applied to many computer vision tasks. However, current research on RGBT salient object detection is limited by the lack of a large-scale dataset and a comprehensive benchmark. This work contributes such an RGBT image dataset, named VT5000, comprising 5000 spatially aligned RGBT image pairs with ground truth annotations. VT5000 has 11 challenges collected in different scenes and environments for exploring the robustness of algorithms. With this dataset, we propose a powerful baseline approach that extracts multi-level features within each modality and aggregates the features of all modalities with an attention mechanism for accurate RGBT salient object detection. Extensive experiments show that the proposed baseline approach outperforms state-of-the-art methods on the VT5000 dataset and two other public datasets. In addition, we carry out a comprehensive analysis of different RGBT salient object detection algorithms on the VT5000 dataset, draw several valuable conclusions, and provide some potential research directions for RGBT salient object detection. Comment: 12 pages, 10 figures. https://github.com/lz118/RGBT-Salient-Object-Detectio
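    The attention-based aggregation of modalities described above can be illustrated at its simplest: each modality's feature vector receives a softmax weight, and the fused feature is the weighted sum. The scalar gate here (mean activation) is an illustrative stand-in for the learned attention in the paper, not its actual mechanism:

```python
import math

def attention_fuse(rgb_feat, thermal_feat):
    """Fuse per-modality feature vectors with attention-style weighting:
    each modality gets a softmax weight from a scalar score (here, its
    mean activation -- a hypothetical stand-in for a learned gate)."""
    def score(v):
        return sum(v) / len(v)
    s_rgb, s_t = score(rgb_feat), score(thermal_feat)
    m = max(s_rgb, s_t)  # subtract max for numerical stability
    e_rgb, e_t = math.exp(s_rgb - m), math.exp(s_t - m)
    w_rgb, w_t = e_rgb / (e_rgb + e_t), e_t / (e_rgb + e_t)
    return [w_rgb * a + w_t * b for a, b in zip(rgb_feat, thermal_feat)]
```

The design point this mirrors is that neither modality is trusted unconditionally: in dark scenes the thermal features can dominate the fusion, while in well-lit scenes the RGB features can.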