
    Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather

    The fusion of multimodal sensor streams, such as camera, lidar, and radar measurements, plays a critical role in object detection for autonomous vehicles, which base their decision making on these inputs. While existing methods exploit redundant information in good environmental conditions, they fail in adverse weather where the sensory streams can be asymmetrically distorted. These rare "edge-case" scenarios are not represented in available datasets, and existing fusion architectures are not designed to handle them. To address this challenge, we present a novel multimodal dataset acquired over more than 10,000 km of driving in northern Europe. Although this dataset is the first large multimodal dataset in adverse weather, with 100k labels for lidar, camera, radar, and gated NIR sensors, it does not facilitate training, as extreme weather is rare. To this end, we present a deep fusion network for robust fusion without a large corpus of labeled training data covering all asymmetric distortions. Departing from proposal-level fusion, we propose a single-shot model that adaptively fuses features, driven by measurement entropy. We validate the proposed method, trained on clean data, on our extensive validation dataset. Code and data are available at https://github.com/princeton-computational-imaging/SeeingThroughFog
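
    The abstract describes adaptive feature fusion driven by measurement entropy. As a rough illustration only, the sketch below computes a per-sensor entropy from the raw measurement histogram and uses it to weight feature maps before summing them; the sensor names, array shapes, and weighting rule are assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch of entropy-driven adaptive feature fusion.
# Assumption: the paper's exact fusion architecture differs; sensor names,
# shapes, and the weighting rule here are illustrative only.
import numpy as np

def measurement_entropy(measurement: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of a sensor measurement's intensity histogram."""
    hist, _ = np.histogram(measurement, bins=bins)
    p = hist.astype(np.float64) / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_weighted_fusion(features: dict, measurements: dict) -> np.ndarray:
    """Scale each sensor's feature map by its normalized measurement entropy
    and sum, so a low-entropy (e.g. fog-washed) stream contributes less."""
    entropies = {k: measurement_entropy(m) for k, m in measurements.items()}
    total = sum(entropies.values()) or 1.0
    weights = {k: e / total for k, e in entropies.items()}
    return sum(weights[k] * features[k] for k in features)

# Toy usage: the camera dominates when the lidar return is nearly uniform.
rng = np.random.default_rng(0)
measurements = {"camera": rng.integers(0, 256, (64, 64)),
                "lidar": np.full((64, 64), 5)}
features = {"camera": rng.standard_normal((8, 16, 16)),
            "lidar": rng.standard_normal((8, 16, 16))}
print(entropy_weighted_fusion(features, measurements).shape)  # (8, 16, 16)
```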

    Haze Removal in Color Images Using Hybrid Dark Channel Prior and Bilateral Filter

    Haze formation is the combination of airlight and attenuation: attenuation decreases contrast, while airlight increases whiteness in the scene. Atmospheric conditions created by floating particles, such as fog and haze, severely degrade image quality. Removing haze from a single image of a weather-degraded scene is difficult because the haze depends on unknown depth information, yet haze removal benefits many vision applications. Most existing techniques neglect important issues, and no single technique is accurate in all circumstances; in particular, noise and uneven illumination persist in the output images of existing haze removal algorithms. This dissertation proposes a new haze removal technique, HDCP, which integrates the dark channel prior with CLAHE to remove haze from color images and uses a bilateral filter to reduce noise. Poor visibility not only degrades perceptual image quality but also affects the performance of computer vision algorithms used in surveillance, object detection, tracking, and segmentation. The proposed algorithm is designed and implemented in MATLAB, and a comparison with the dark channel prior, based on standard parameters, shows that the proposed algorithm produces quite effective results.
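
    For orientation, the sketch below chains the three ingredients the abstract names: dark-channel-prior dehazing, CLAHE on the luminance channel, and a bilateral filter for noise. It relies on OpenCV and NumPy; the parameter values and the exact way HDCP combines these steps are assumptions, since the dissertation itself is implemented in MATLAB and its details are not given here.

```python
# Sketch of a dehazing pipeline: dark channel prior + CLAHE + bilateral filter.
# Assumption: patch sizes, omega, clip limits, and filter parameters are
# illustrative defaults, not the dissertation's HDCP settings.
import cv2
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Per-pixel minimum over color channels, followed by a minimum filter."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def dehaze(img_bgr: np.ndarray, omega: float = 0.95, t_min: float = 0.1) -> np.ndarray:
    img = img_bgr.astype(np.float64) / 255.0
    dark = dark_channel(img)
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    n_top = max(int(dark.size * 0.001), 1)
    idx = np.argpartition(dark.ravel(), -n_top)[-n_top:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate from the dark channel of the normalized image.
    t = np.clip(1.0 - omega * dark_channel(img / A), t_min, 1.0)[..., None]
    # Recover scene radiance and clamp back to 8-bit range.
    return (np.clip((img - A) / t + A, 0.0, 1.0) * 255).astype(np.uint8)

def enhance(img_bgr: np.ndarray) -> np.ndarray:
    """Dehaze, apply CLAHE to the L channel, then bilateral-filter the result."""
    lab = cv2.cvtColor(dehaze(img_bgr), cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    out = cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)
    return cv2.bilateralFilter(out, d=9, sigmaColor=75, sigmaSpace=75)

# Usage: result = enhance(cv2.imread("hazy.jpg"))
```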

    Temporal behavior and processing of the LiDAR signal in fog

    The interest in LiDAR imaging systems has recently increased in outdoor ground-based applications related to computer vision, in fields such as autonomous vehicles. However, for the technology to mature fully, obstacles related to outdoor performance remain, and use in adverse weather conditions is among the most challenging. When working in bad weather, the data shown in point clouds are unreliable and their temporal behavior is unknown. We have designed, constructed, and tested a scanning pulsed LiDAR imaging system with optoelectronic modifications that, in particular, allow digitization of each returned pulse. The system was tested in a macro-scale fog chamber and, using the collected data, two relevant phenomena were identified: the backscattering signal of light that first interacts with the medium, and false-positive points that appear due to the scattering properties of the medium. Digitization of the complete signal can be used to develop algorithms that identify and remove both. Our contribution concerns the digitization, analysis, and characterization of the acquired signal when steering toward a target under foggy conditions, as well as the proposal of different strategies to improve the point clouds generated in these conditions. This work was supported by the Spanish Ministry of Science and Innovation (MICINN) under project PID2020-119484RB-I00. The first author gratefully acknowledges the Universitat Politècnica de Catalunya and Banco Santander for the financial support of her predoctoral research grant.
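
    As a rough illustration of how a digitized return waveform can be used to reject near-range fog backscatter and scattering-induced false positives, the sketch below gates out early samples and keeps the strongest late echo. The waveform model, threshold, and peak rule are assumptions for illustration, not the authors' processing strategy.

```python
# Sketch: pick a target range from a digitized LiDAR return in fog.
# Assumption: the near-range gate, 5-sigma threshold, and "last run" peak
# rule are illustrative choices, not the paper's algorithm.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def target_range(waveform: np.ndarray, dt: float,
                 noise_sigma: float, min_range: float = 5.0) -> float | None:
    """Range (m) of the strongest echo in the last above-threshold run
    beyond a near-range gate; returns None if no credible echo is found."""
    threshold = 5.0 * noise_sigma
    # Sample i corresponds to two-way travel time i*dt, i.e. range 0.5*c*i*dt.
    ranges = 0.5 * C * dt * np.arange(waveform.size)
    candidates = np.flatnonzero((waveform > threshold) & (ranges > min_range))
    if candidates.size == 0:
        return None
    cand = set(candidates.tolist())
    # Walk back to the start of the last contiguous above-threshold run
    # (the target echo) and take its strongest sample.
    start = int(candidates[-1])
    while start - 1 in cand:
        start -= 1
    peak = start + int(np.argmax(waveform[start:int(candidates[-1]) + 1]))
    return float(ranges[peak])

# Toy usage: a broad fog backscatter hump near the sensor plus a target at ~40 m.
dt = 1e-9                                   # 1 ns sampling -> 0.15 m per sample
r = 0.5 * C * dt * np.arange(512)
wave = 0.4 * np.exp(-((r - 2.0) / 2.0) ** 2) + np.exp(-((r - 40.0) / 0.5) ** 2)
wave += 0.01 * np.random.default_rng(1).standard_normal(r.size)
print(target_range(wave, dt, noise_sigma=0.01))  # prints a value close to 40
```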