
    Fast single image defogging with robust sky detection

    Haze, usually caused by atmospheric conditions, is a source of unreliability for computer vision applications in outdoor scenarios. The Dark Channel Prior (DCP) has shown remarkable results in image defogging, but it has three main limitations: 1) high time consumption, 2) artifact generation, and 3) sky-region over-saturation. Current work has therefore focused on improving processing time without losing restoration quality and on avoiding image artifacts during defogging. Hence, in this research a novel methodology based on depth approximations through the DCP, local Shannon entropy, and the Fast Guided Filter is proposed to reduce artifacts and improve image recovery in sky regions with low computation time. The performance of the proposed method is assessed on more than 500 images from three datasets: the Hybrid Subjective Testing Set from Realistic Single Image Dehazing (HSTS-RESIDE), the Synthetic Objective Testing Set from RESIDE (SOTS-RESIDE), and HazeRD. Experimental results demonstrate that the proposed approach outperforms state-of-the-art methods from the reviewed literature, which is validated qualitatively and quantitatively through the Peak Signal-to-Noise Ratio (PSNR), the Naturalness Image Quality Evaluator (NIQE), and the Structural SIMilarity (SSIM) index on recovered images, considering different visual ranges under distinct illumination and contrast conditions. On images of various resolutions, the proposed method shows the lowest processing time under comparable software and hardware conditions. This work was supported in part by the Centro de Investigaciones en Óptica (CIO) and the Consejo Nacional de Ciencia y Tecnología (CONACYT), and in part by the Barcelona Supercomputing Center.
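
    As a rough illustration of the dark-channel step this abstract builds on, the sketch below computes a dark channel and the standard DCP transmission estimate. The window size, the omega factor, and the atmospheric-light heuristic are generic assumptions, not values taken from the paper, and the paper's entropy-based sky handling and Fast Guided Filter refinement are only noted in comments.

```python
# Minimal sketch of the Dark Channel Prior (DCP) transmission estimate.
# Window size, omega, and the atmospheric-light heuristic are illustrative
# defaults, not the paper's settings. Input image is float RGB in [0, 1].
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, window=15):
    """Per-pixel minimum over RGB channels, then a local minimum filter."""
    min_rgb = img.min(axis=2)
    return minimum_filter(min_rgb, size=window)

def estimate_transmission(img, omega=0.95, window=15):
    """DCP transmission estimate t(x) = 1 - omega * dark_channel(I / A)."""
    dark = dark_channel(img, window)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    t = 1.0 - omega * dark_channel(img / A, window)
    # In the paper this raw estimate would be refined (e.g. with a Fast Guided
    # Filter) and corrected in sky regions before recovering scene radiance.
    return np.clip(t, 0.1, 1.0), A
```

    Scene recovery would then follow J = (I - A) / t + A, with the entropy-guided sky treatment described in the abstract layered on top of this baseline.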

    Dehazed Image Quality Evaluation: From Partial Discrepancy to Blind Perception

    Image dehazing aims to restore spatial details from hazy images. A number of image dehazing algorithms have emerged, designed to increase the visibility of hazy images. However, much less work has focused on evaluating the visual quality of dehazed images. In this paper, we propose a Reduced-Reference dehazed image quality evaluation approach based on Partial Discrepancy (RRPD) and then extend it to a No-Reference quality assessment metric with Blind Perception (NRBP). Specifically, inspired by the hierarchical way humans perceive dehazed images, we introduce three groups of features: luminance discrimination, color appearance, and overall naturalness. In the proposed RRPD, the combined distance between a set of sender and receiver features is adopted to quantify the perceptual quality of dehazed images. By integrating global and local channels from dehazed images, RRPD is converted to NRBP, which does not rely on any information from the references. Extensive experimental results on several dehazed image quality databases demonstrate that our proposed methods outperform state-of-the-art full-reference, reduced-reference, and no-reference quality assessment models. Furthermore, we show that the proposed dehazed image quality evaluation methods can be effectively applied to tune parameters of image dehazing algorithms.
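
    To make the reduced-reference idea concrete, the following toy sketch extracts three stand-in feature groups on the sender and receiver sides and pools a weighted distance into one score. The feature extractors and weights are placeholders of my own, not the paper's RRPD features; only the overall structure (grouped features, combined distance) follows the abstract.

```python
# Hedged sketch of a reduced-reference quality measure: extract feature groups
# on both sides, compute per-group distances, and pool them with weights.
# Feature definitions and weights below are illustrative placeholders.
import numpy as np

def feature_groups(img):
    """Toy stand-ins for luminance, color-appearance, and naturalness features.

    img: float RGB array in [0, 1], shape (H, W, 3).
    """
    gray = img.mean(axis=2)
    luminance = np.histogram(gray, bins=32, range=(0.0, 1.0), density=True)[0]
    color = np.array([img[..., c].std() for c in range(3)])
    naturalness = np.array([gray.mean(), gray.std()])
    return [luminance, color, naturalness]

def rr_quality(sender_img, receiver_img, weights=(0.4, 0.3, 0.3)):
    """Combined (weighted) distance between sender and receiver feature groups."""
    s, r = feature_groups(sender_img), feature_groups(receiver_img)
    dists = [np.linalg.norm(a - b) for a, b in zip(s, r)]
    return float(np.dot(weights, dists))  # smaller distance = closer to reference side
```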

    Visibility and distortion measurement for no-reference dehazed image quality assessment via complex contourlet transform

    Most recent dehazed image quality assessment (DQA) methods focus mainly on estimating the remaining haze, omitting the distortions introduced as side effects of dehazing algorithms, which limits their performance. To address this problem, we propose VDA-DQA, a no-reference (NR) dehazed image quality assessment method that learns both Visibility and Distortion Aware features. Visibility-aware features are exploited to characterize the clarity optimization after dehazing, including brightness, contrast, and sharpness aware features extracted by the complex contourlet transform (CCT). Distortion-aware features are then employed to measure the distortion artifacts of images, including the normalized histogram of the local binary pattern (LBP) of the reconstructed dehazed image and the statistics of the CCT sub-bands corresponding to the chroma and saturation maps. Finally, all of these features are mapped to quality scores by support vector regression (SVR). Extensive experimental results on six public DQA datasets verify the superiority of the proposed VDA-DQA in terms of consistency with subjective visual perception; it outperforms the state-of-the-art methods. The source code of VDA-DQA is available at https://github.com/li181119/VDA-DQA
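
    The final regression stage the abstract describes (hand-crafted features mapped to quality scores with SVR) can be sketched as below. The RBF kernel and hyperparameters are common defaults assumed for illustration, not the paper's settings, and the feature extraction itself is taken as given.

```python
# Sketch of the SVR stage: map per-image feature vectors to subjective quality
# scores. Kernel and hyperparameters are assumed defaults, not the paper's.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def train_quality_regressor(features, mos):
    """features: (n_images, n_features) array; mos: subjective quality scores."""
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(features, mos)
    return model

# Usage sketch:
#   model = train_quality_regressor(train_feats, train_mos)
#   predicted_scores = model.predict(test_feats)
```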

    Measuring atmospheric scattering from digital images of urban scenery using temporal polarization-based vision

    Suspended atmospheric particles (particulate matter) are a form of air pollution that visually degrades urban scenery and is hazardous to human health and the environment. Current environmental monitoring devices are limited in their capability to measure average particulate matter (PM) over large areas. Quantifying the visual effects of haze in digital images of urban scenery and correlating these effects to PM levels is a vital step toward more practical environmental monitoring. Current image haze extraction algorithms remove all the haze from the scene, producing unnatural scenes, for the sole purpose of enhancing vision. We present two algorithms which bridge the gap between image haze extraction and environmental monitoring. We provide a means of measuring atmospheric scattering from images of urban scenery by incorporating temporal knowledge. In doing so, we also recover an accurate depth map of the scene and a rendition of the scene without the visual effects of haze. We compare our algorithms to three known haze removal methods from the perspective of measuring atmospheric scattering, measuring depth, and dehazing. The algorithms are composed of an optimization over a model of haze formation in images and an optimization using the constraint of constant depth over a sequence of images taken over time. These algorithms not only measure atmospheric scattering, but also recover a more accurate depth map and dehazed image. The measurements of atmospheric scattering that this research produces can be directly correlated to PM levels and therefore pave the way to monitoring the health of the environment by visual means. Accurate atmospheric sensing from digital images is a challenging and under-researched problem. This work provides an important step towards a more practical and accurate visual means of measuring PM from digital images.
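
    For orientation, the sketch below writes out the standard haze-formation (Koschmieder) model that optimizations like the one described above are typically built around: I = J·t + A·(1 - t) with t = exp(-beta·depth). The paper's actual temporal, polarization-based formulation differs; this only illustrates how the scattering coefficient beta ties observed images to depth, which is what makes beta a proxy for PM levels.

```python
# Standard haze-formation model, shown for context only; not the paper's
# polarization-based temporal formulation. Images are float RGB in [0, 1].
import numpy as np

def hazy_image(J, depth, A, beta):
    """Synthesize a hazy observation from clear radiance J and per-pixel depth."""
    t = np.exp(-beta * depth)[..., None]          # transmission, shape (H, W, 1)
    return J * t + A * (1.0 - t)

def recover_radiance(I, depth, A, beta, t_min=0.05):
    """Invert the model once the scattering coefficient beta has been estimated."""
    t = np.clip(np.exp(-beta * depth)[..., None], t_min, 1.0)
    return (I - A * (1.0 - t)) / t
```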

    WaterFlow: Heuristic Normalizing Flow for Underwater Image Enhancement and Beyond

    Underwater images suffer from light refraction and absorption, which impair visibility and interfere with subsequent applications. Existing underwater image enhancement methods focus mainly on image quality improvement, ignoring their effect on practical applications. To balance visual quality and applicability, we propose a heuristic normalizing flow for detection-driven underwater image enhancement, dubbed WaterFlow. Specifically, we first develop an invertible mapping to translate between a degraded image and its clear counterpart. Considering differentiability and interpretability, we incorporate a heuristic prior into the data-driven mapping procedure, where the ambient light and the medium transmission coefficient support credible generation. Furthermore, we introduce a detection perception module to transmit implicit semantic guidance into the enhancement procedure, so that the enhanced images hold more detection-favorable features and promote detection performance. Extensive experiments prove the superiority of our WaterFlow against state-of-the-art methods, both quantitatively and qualitatively.
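
    As a generic illustration of the invertible mapping a normalizing flow is built from, the sketch below implements a single affine coupling block in PyTorch. This is not the WaterFlow architecture: channel counts, the multi-scale layout, and the conditioning on the heuristic priors (ambient light, medium transmission) are omitted or assumed.

```python
# Toy affine coupling block: an invertible transform with a tractable log-det,
# the basic ingredient of a normalizing flow. Generic sketch, not WaterFlow.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, channels, hidden=64):
        super().__init__()
        # Small conv net predicts a per-pixel scale and shift for half the channels.
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)                  # keep scales well-behaved
        y2 = x2 * torch.exp(log_s) + t
        log_det = log_s.flatten(1).sum(-1)         # per-sample log-determinant
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, x2], dim=1)
```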