
    Physical-based optimization for non-physical image dehazing methods

    Images captured under hazy conditions (e.g., fog, air pollution) usually present faded colors and loss of contrast. To improve their visibility, a process called image dehazing can be applied. Some of the most successful image dehazing algorithms are based on image processing methods but do not follow any physical image formation model, which limits their performance. In this paper, we propose a post-processing technique to alleviate this handicap by enforcing the original method to be consistent with a popular physical model for image formation under haze. Our results improve upon those of the original methods qualitatively and according to several metrics, and they have also been validated via psychophysical experiments. These results are particularly striking in terms of avoiding over-saturation and reducing color artifacts, which are the most common shortcomings faced by image dehazing methods.
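The "popular physical model for image formation under haze" referenced in this abstract is commonly the Koschmieder-style atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)). A minimal sketch of that forward model (the function name and array shapes are illustrative assumptions, not details from the paper):

```python
import numpy as np

def apply_haze_model(J, t, A):
    """Atmospheric scattering (haze formation) model:
    I(x) = J(x) * t(x) + A * (1 - t(x)),
    where J is the haze-free scene radiance (H x W x 3),
    t the transmission map in [0, 1] (H x W), and A the
    global atmospheric light (scalar or per-channel)."""
    if t.ndim == 2:
        t = t[..., np.newaxis]  # broadcast transmission over color channels
    return J * t + A * (1.0 - t)
```

A post-processing step like the one proposed can then check a dehazed output for consistency with this model: given estimates of t and A, re-synthesizing the hazy input from the dehazed image should approximately reproduce the observed I.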

    Transmission Map and Atmospheric Light Guided Iterative Updater Network for Single Image Dehazing

    Hazy images obscure content visibility and hinder several subsequent computer vision tasks. For dehazing in a wide variety of hazy conditions, an end-to-end deep network jointly estimating the dehazed image along with suitable transmission map and atmospheric light for guidance could prove effective. To this end, we propose an Iterative Prior Updated Dehazing Network (IPUDN) based on a novel iterative update framework. We present a novel convolutional architecture to estimate channel-wise atmospheric light, which along with an estimated transmission map are used as priors for the dehazing network. Use of channel-wise atmospheric light allows our network to handle color casts in hazy images. In our IPUDN, the transmission map and atmospheric light estimates are updated iteratively using corresponding novel updater networks. The iterative mechanism is leveraged to gradually modify the estimates toward those appropriately representing the hazy condition. These updates occur jointly with the iterative estimation of the dehazed image using a convolutional neural network with LSTM-driven recurrence, which introduces inter-iteration dependencies. Our approach is qualitatively and quantitatively found effective for synthetic and real-world hazy images depicting varied hazy conditions, and it outperforms the state-of-the-art. Thorough analyses of IPUDN through additional experiments and detailed ablation studies are also presented.
    Comment: First two authors contributed equally. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Project Website: https://aupendu.github.io/iterative-dehaz
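The iterative-update structure described above can be sketched generically. Every callable below is a stand-in for one of the paper's learned networks, and the additive-correction form of the updaters is an assumption made for illustration only:

```python
import numpy as np

def iterative_dehaze(I, t0, A0, updater_t, updater_A, dehaze_step, n_iters=3):
    """Sketch of an iterative prior-update loop: initial transmission t0
    and atmospheric light A0 are refined each iteration by updater
    functions, and the dehazed estimate is recomputed from the current
    priors. In IPUDN these roles are played by learned networks with
    LSTM-driven recurrence; here they are plain callables."""
    t, A = t0, A0
    J = dehaze_step(I, t, A)                          # initial dehazed estimate
    for _ in range(n_iters):
        t = np.clip(t + updater_t(I, J, t), 1e-3, 1.0)  # refine transmission
        A = A + updater_A(I, J, A)                       # refine atmospheric light
        J = dehaze_step(I, t, A)                         # re-estimate dehazed image
    return J, t, A
```

With perfect priors and zero updater corrections, `dehaze_step` reduces to inverting the scattering model, J = (I − A·(1 − t)) / t.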

    A Study of Atmospheric Particles Removal in a Low Visibility Outdoor Single Image

    According to the International Commission on Illumination, the maximum limit of unaided human visibility is 1000 m. Cameras used outdoors for navigation, monitoring, remote sensing, and robotics may yield images degraded by haze, fog, smoke, steam, or water drops. Fog, the random suspension of water droplets in the air, typically occurs in the early morning. Such interference leaves the observed image with low contrast and obscured targets that are difficult to identify. Analysis of the degraded image can restore detail lost to atmospheric particles or water droplets during observation. Generally, images containing atmospheric particles comprise a homogeneous texture, such as overall brightness, and a heterogeneous texture corresponding to the objects in the scene. Pre-processing based on the dark channel prior, a statistical measure built on contrast and prior knowledge, still produces good image quality but is less effective at overcoming halo (ring-light) artifacts and strong lighting. This study aims to advance machine vision for navigation and monitoring in ground, air, and sea transportation.
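The dark channel prior mentioned in this abstract is a simple statistic: for each pixel, the minimum intensity over all color channels within a local patch, which tends toward zero in haze-free outdoor images. A straightforward, unoptimized sketch, assuming a float RGB image in [0, 1]:

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel prior (He et al.): per-pixel minimum over color
    channels, followed by a minimum filter over a local patch.
    High dark-channel values indicate haze."""
    min_rgb = image.min(axis=2)              # per-pixel channel minimum
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

The halo artifacts the abstract mentions arise precisely from this patch-wise minimum: near depth discontinuities, the patch straddles objects at different depths, so the estimated transmission bleeds across edges.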

    Non-aligned supervision for Real Image Dehazing

    Removing haze from real-world images is challenging due to unpredictable weather conditions, resulting in misaligned hazy and clear image pairs. In this paper, we propose a non-aligned supervision framework that consists of three networks - dehazing, airlight, and transmission. In particular, we explore a non-alignment setting by utilizing a clear reference image that is not aligned with the hazy input image to supervise the dehazing network through a multi-scale reference loss that compares the features of the two images. Our setting makes it easier to collect hazy/clear image pairs in real-world environments, even under conditions of misalignment and shifted views. To demonstrate this, we have created a new hazy dataset called "Phone-Hazy", which was captured using mobile phones in both rural and urban areas. Additionally, we present a mean and variance self-attention network to model the infinite airlight using the dark channel prior as position guidance, and employ a channel attention network to estimate the three-channel transmission. Experimental results show that our framework outperforms current state-of-the-art methods in real-world image dehazing. Phone-Hazy and code will be available at https://github.com/hello2377/NSDNet
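For contrast with this paper's learned three-channel transmission, the classical single-channel transmission estimate derived from the dark channel prior can be sketched as follows (the patch minimum is omitted for brevity; `omega` and the formula are from the standard dark-channel method, not from this paper):

```python
import numpy as np

def estimate_transmission(I, A, omega=0.95):
    """Classical dark-channel transmission estimate:
    t(x) = 1 - omega * min_c( I_c(x) / A_c ),
    a single scalar transmission per pixel. omega < 1 deliberately
    retains a trace of haze so distant scenes still look natural.
    Learned approaches like the one above instead predict a
    per-channel transmission."""
    normalized = I / np.asarray(A, dtype=float)   # normalize by atmospheric light
    return 1.0 - omega * normalized.min(axis=2)   # per-pixel channel minimum
```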

    Measuring atmospheric scattering from digital images of urban scenery using temporal polarization-based vision

    Suspended atmospheric particles (particulate matter) are a form of air pollution that visually degrades urban scenery and is hazardous to human health and the environment. Current environmental monitoring devices are limited in their capability of measuring average particulate matter (PM) over large areas. Quantifying the visual effects of haze in digital images of urban scenery and correlating these effects to PM levels is a vital step toward more practical environmental monitoring. Current image haze extraction algorithms remove all the haze from the scene and hence produce unnatural scenes for the sole purpose of enhancing vision. We present two algorithms which bridge the gap between image haze extraction and environmental monitoring. We provide a means of measuring atmospheric scattering from images of urban scenery by incorporating temporal knowledge. In doing so, we also present a method of recovering an accurate depthmap of the scene and recovering the scene without the visual effects of haze. We compare our algorithm to three known haze removal methods from the perspective of measuring atmospheric scattering, measuring depth, and dehazing. The algorithms are composed of an optimization over a model of haze formation in images and an optimization using the constraint of constant depth over a sequence of images taken over time. These algorithms not only measure atmospheric scattering, but also recover a more accurate depthmap and dehazed image. The measurements of atmospheric scattering this research produces can be directly correlated to PM levels and therefore pave the way to monitoring the health of the environment by visual means. Accurate atmospheric sensing from digital images is a challenging and under-researched problem. This work provides an important step towards a more practical and accurate visual means of measuring PM from digital images.
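The link between measuring atmospheric scattering and recovering depth rests on the Beer-Lambert relation t(x) = exp(−β·d(x)): under a homogeneous atmosphere, transmission decays exponentially with depth, so knowing the scattering coefficient β (the quantity this line of work correlates with PM levels) lets depth be read off directly. A minimal sketch under that homogeneity assumption (our simplification, not necessarily this paper's exact formulation):

```python
import numpy as np

def depth_from_transmission(t, beta):
    """Invert the Beer-Lambert attenuation model
    t(x) = exp(-beta * d(x))  =>  d(x) = -ln(t(x)) / beta,
    where beta is the atmospheric scattering coefficient.
    Clipping avoids log(0) for fully opaque haze."""
    t = np.clip(t, 1e-6, 1.0)
    return -np.log(t) / beta
```

Conversely, repeated images of the same scene over time (constant depth, varying β) give one equation per frame, which is the constraint the temporal optimization described above exploits.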