52 research outputs found

    Towards Artifacts-free Image Defogging

    In this paper we present a novel defogging technique, named CurL-Defog, aimed at minimizing the creation of unwanted artifacts during the defogging process. The majority of learning-based defogging approaches rely on paired data (i.e., the same images with and without fog), where fog is artificially added to clear images: this often provides good results on mildly fogged images but does not generalize well to real difficult cases. On the other hand, models trained with real unpaired data (e.g., CycleGAN) can provide visually impressive results, but they often produce unwanted artifacts. In this paper we propose a curriculum learning strategy coupled with an enhanced CycleGAN model in order to reduce the number of produced artifacts, while maintaining state-of-the-art performance in terms of contrast enhancement and image reconstruction. We also introduce a new metric, called HArD (Hazy Artifact Detector), to numerically quantify the amount of artifacts in the defogged images, thus avoiding the tedious and subjective manual inspection of the results. The proposed approach compares favorably with state-of-the-art techniques on both real and synthetic datasets.

    Artifact-free single image defogging

    In this paper, we present a novel defogging technique, named CurL-Defog, with the aim of minimizing the insertion of artifacts while maintaining good contrast restoration and visibility enhancement. Many learning-based defogging approaches rely on paired data, where fog is artificially added to clear images; this usually provides good results on mildly fogged images but is not effective for difficult cases. On the other hand, models trained with real data can produce visually impressive results, but unwanted artifacts are often present. We propose a curriculum learning strategy and an enhanced CycleGAN model to reduce the number of produced artifacts, where both synthetic and real data are used in the training procedure. We also introduce a new metric, called HArD (Hazy Artifact Detector), to numerically quantify the number of artifacts in the defogged images, thus avoiding the tedious and subjective manual inspection of the results. HArD is then combined with other defogging indicators to produce a solid metric that is not deceived by the presence of artifacts. The proposed approach compares favorably with state-of-the-art techniques on both real and synthetic datasets.
    Graffieti G.; Maltoni D.

    Incident Light Frequency-based Image Defogging Algorithm

    Considering the problem of color distortion caused by defogging algorithms based on the dark channel prior, an improved algorithm is proposed that calculates the transmittance of each channel separately. First, the effect of incident light frequency on the transmittance of the color channels was analyzed according to the Beer-Lambert law, from which a proportion among the channel transmittances was derived; next, images were preprocessed by down-sampling to refine the transmittance and then restored to their original size, improving the running efficiency of the algorithm; finally, the transmittance of each color channel was obtained from this proportion, and the corresponding transmittance was used for image restoration in each channel. The experimental results show that, compared with the existing algorithm, the improved defogging algorithm makes image colors more natural, solves the problem of slightly excessive color saturation caused by the existing algorithm, and shortens the running time by a factor of four to nine.
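    The channel-wise restoration this family of algorithms performs is an inversion of the atmospheric scattering model, I = J·t + A·(1 − t). A minimal per-pixel sketch follows; the per-channel transmittance ratios are illustrative placeholders, not the proportion derived in the paper.

    ```python
    def restore_pixel(I, A, t, t_min=0.1):
        """Recover the scene radiance of one channel value I given
        atmospheric light A and transmittance t (clamped to t_min to
        avoid amplifying noise where t is near zero)."""
        t = max(t, t_min)
        return (I - A) / t + A

    def defog_pixel_rgb(pixel, A, t_red, ratios=(1.0, 0.95, 0.9)):
        """Restore an (r, g, b) pixel with a per-channel transmittance
        proportional to the red-channel estimate; the ratios here are
        hypothetical stand-ins for the Beer-Lambert-derived proportion."""
        return tuple(
            restore_pixel(c, a, t_red * r)
            for c, a, r in zip(pixel, A, ratios)
        )
    ```

    Clamping the transmittance before division is the standard guard against the model blowing up in dense-fog regions.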

    Style Transfer with Generative Adversarial Networks

    This dissertation is focused on trying to use concepts from style transfer and image-to-image translation to address the problem of defogging. Defogging (or dehazing) is the ability to remove fog from an image, restoring it as if the photograph had been taken during optimal weather conditions. The task of defogging is of particular interest in many fields, such as surveillance or self-driving cars. In this thesis an unpaired approach to defogging is adopted, trying to translate a foggy image to the corresponding clear picture without having pairs of foggy and ground-truth haze-free images during training. This approach is particularly significant, due to the difficulty of gathering an image collection of exactly the same scenes with and without fog. Many of the models and techniques used in this dissertation already existed in the literature, but they are extremely difficult to train, and it is often highly problematic to obtain the desired behavior. Our contribution was a systematic implementation and experimental effort, conducted with the aim of attaining a comprehensive understanding of how these models work, and of the role of datasets and training procedures in the final results. We also analyzed metrics and evaluation strategies, in order to assess the quality of the presented model in the most correct and appropriate manner. First, the feasibility of an unpaired approach to defogging was analyzed, using the CycleGAN model. Then, the base model was enhanced with a cycle perceptual loss, inspired by style transfer techniques. Next, the role of the training set was investigated, showing that improving the quality of data is at least as important as the use of more powerful models. Finally, our approach is compared with state-of-the-art defogging methods, showing that the quality of our results is in line with preexisting approaches, even though our model was trained using unpaired data.
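    The cycle-consistency idea the thesis builds on, and the perceptual variant of it, can be sketched in a few lines. The feature extractor `phi` below is a stand-in for whatever pretrained-network activations the dissertation actually uses; inputs are flat lists of floats for simplicity.

    ```python
    def l1(a, b):
        """Mean absolute difference between two flat feature lists."""
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    def cycle_loss(x, G, F):
        """CycleGAN cycle-consistency term: translate x with G (e.g.
        foggy -> clear), map it back with F, and penalize the
        reconstruction error ||F(G(x)) - x||_1."""
        return l1(F(G(x)), x)

    def cycle_perceptual_loss(x, G, F, phi):
        """Cycle perceptual variant: compare x and its reconstruction
        in a feature space phi (a hypothetical stand-in for pretrained
        network activations) instead of pixel space."""
        return l1(phi(F(G(x))), phi(x))
    ```

    With perfectly inverse generators both terms vanish; training pushes G and F toward that regime without ever seeing paired images.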

    Holistic Attention-Fusion Adversarial Network for Single Image Defogging

    Adversarial learning-based image defogging methods have been extensively studied in computer vision due to their remarkable performance. However, most existing methods have limited defogging capabilities for real cases because they are trained on paired clear and synthesized foggy images of the same scenes. In addition, they have limitations in preserving vivid color and rich texture details when defogging. To address these issues, we develop a novel generative adversarial network, called the holistic attention-fusion adversarial network (HAAN), for single image defogging. HAAN consists of a Fog2Fogfree block and a Fogfree2Fog block. In each block, there are three learning-based modules, namely fog removal, color-texture recovery, and fog synthesis, which constrain each other to generate high-quality images. HAAN is designed to exploit the self-similarity of texture and structure information by learning the holistic channel-spatial feature correlations between a foggy image and several of its derived images. Moreover, in the fog synthesis module, we utilize the atmospheric scattering model to guide generation, improving output quality by focusing on atmospheric light optimization with a novel sky segmentation network. Extensive experiments on both synthetic and real-world datasets show that HAAN outperforms state-of-the-art defogging methods in terms of quantitative accuracy and subjective visual quality.
    Comment: 13 pages, 10 figures
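    The atmospheric scattering model used to guide fog synthesis here takes the standard form I = J·t + A·(1 − t), with transmittance t = exp(−β·d) for scattering coefficient β and scene depth d. A minimal sketch of synthesizing fog for one pixel value (parameter values are illustrative, not those of the paper):

    ```python
    import math

    def synthesize_fog(J, depth, A=0.95, beta=1.0):
        """Add homogeneous fog to a clear radiance value J at a given
        scene depth, following I = J*t + A*(1 - t) with the
        Beer-Lambert transmittance t = exp(-beta * depth)."""
        t = math.exp(-beta * depth)
        return J * t + A * (1.0 - t)
    ```

    At zero depth the pixel is unchanged; as depth grows, the value converges to the atmospheric light A, which is why atmospheric light estimation (here aided by sky segmentation) dominates the appearance of distant regions.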

    Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding

    This work addresses the problem of semantic scene understanding under dense fog. Although considerable progress has been made in semantic scene understanding, it is mainly related to clear-weather scenes. Extending recognition methods to adverse weather conditions such as fog is crucial for outdoor applications. In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both synthetic and real foggy data. In addition, we present three other main stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) the Foggy Zurich dataset, comprising 3808 real foggy images, with pixel-level semantic annotations for 16 images with dense fog. Our experiments show that 1) our fog simulation slightly outperforms a state-of-the-art competing simulation with respect to the task of semantic foggy scene understanding (SFSU); 2) CMAda significantly improves the performance of state-of-the-art models for SFSU by leveraging unlabeled real foggy data. The datasets and code are publicly available.
    Comment: final version, ECCV 2018

    Estimation of Parameters in Atmospheric Scattering Dehazing Model in Accordance with Visual Characteristics

    In view of the often unsatisfactory restoration produced by daytime defogging algorithms, especially the over-enhancement and color distortion in and near the sky, a new parameter estimation method for the atmospheric scattering model is proposed. First, the sky and non-sky areas are segmented. Then, the atmospheric light is estimated at their junction, and the transmittance value is constrained according to the change in the depth of field. Finally, the transmittance is optimized by context-based regularization, so that the final dehazed image better matches human visual characteristics. Subjective comparison and analysis against existing mainstream algorithms show that the proposed method produces low noise and good color recovery, especially in the sky; the restoration around the sky boundary is the best, recovering details that other algorithms fail to restore, with true and natural colors.

    Contrast enhancement and exposure correction using a structure-aware distribution fitting

    Contrast enhancement and exposure correction are useful in domestic and technical applications, the latter as a preprocessing step for other techniques or for aiding human observation. Often, a locally adaptive transformation is more suitable for the task than a global one. For example, objects and regions may have very different levels of illumination, physical phenomena may compromise the contrast in some regions but not in others, or it may be desirable to have high visibility of details in all parts of the image. For such cases, local image enhancement methods are preferable. Although there are many contrast enhancement and exposure correction methods available in the literature, there is no definitive solution that provides a satisfactory result in all situations, and new methods emerge each year. In particular, traditional methods based on adaptive histogram equalization suffer from checkerboard and staircase effects and from over-enhancement. This dissertation proposes a method for contrast enhancement and exposure correction in images named Structure-Aware Distribution Stretching (SADS). The method fits a parametric probability distribution model to the image regionally while respecting the image structure and the edges between regions. This is done using regional versions of the classical expressions for estimating the parameters of the distribution, obtained by replacing the sample mean in the original expressions with an edge-preserving smoothing filter. After fitting the distribution, the cumulative distribution function (CDF) of the fitted model and the inverse of the CDF of the desired distribution are applied. A structure-aware heuristic that detects smooth regions is proposed and used to attenuate the transformations in flat regions. SADS was compared with other methods from the literature using objective no-reference and full-reference image quality assessment (IQA) metrics in the task of simultaneous contrast enhancement and exposure correction and in the task of defogging/dehazing. The experiments indicate a superior overall performance of SADS with respect to the compared methods for the image sets used, according to the IQA metrics adopted.
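    Stripped of its regional, structure-aware machinery, the core fit-then-remap step can be sketched globally: fit a parametric model (a normal distribution here, chosen for illustration) and push each sample through the fitted CDF, which maps the data toward the desired output distribution (uniform, in this simplification).

    ```python
    import math

    def normal_cdf(x, mu, sigma):
        """CDF of a normal distribution, via the error function."""
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    def stretch(values):
        """Global (non-regional) simplification of distribution
        stretching: fit a normal model to the samples, then map each
        value through the fitted CDF, yielding an approximately
        uniform output in [0, 1]."""
        n = len(values)
        mu = sum(values) / n
        var = sum((v - mu) ** 2 for v in values) / n
        sigma = max(math.sqrt(var), 1e-9)  # guard against flat input
        return [normal_cdf(v, mu, sigma) for v in values]
    ```

    SADS differs in that the mean (and hence the fitted parameters) is computed with an edge-preserving smoothing filter per region rather than globally, so the remapping adapts locally without crossing edges.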