16 research outputs found

    Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution

    Full text link
    Visibility in hazy nighttime scenes is frequently reduced by multiple factors, including low light, intense glow, light scattering, and the presence of multicolored light sources. Existing nighttime dehazing methods often struggle to handle glow or low-light conditions, producing either excessively dark results or outputs with unsuppressed glow. In this paper, we enhance the visibility of a single nighttime haze image by suppressing glow and enhancing low-light regions. To handle glow effects, our framework learns from rendered glow pairs: a light-source-aware network detects the light sources in night images, followed by APSF (Angular Point Spread Function)-guided glow rendering. Our framework is then trained on the rendered images, resulting in glow suppression. Moreover, we utilize gradient-adaptive convolution to capture edges and textures in hazy scenes; by leveraging the extracted edges and textures, we enhance the contrast of the scene without losing important structural details. To boost low-light intensity, our network learns an attention map that is then adjusted by gamma correction. This attention has high values in low-light regions and low values in haze and glow regions. Extensive evaluation on real nighttime haze images demonstrates the effectiveness of our method: it achieves a PSNR of 30.38 dB, outperforming state-of-the-art methods by 13% on the GTA5 nighttime haze dataset. Our data and code are available at https://github.com/jinyeying/nighttime_dehaze. Comment: Accepted to ACM MM 2023, https://github.com/jinyeying/nighttime_dehaze
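    A minimal sketch of two ideas mentioned in this abstract, not the paper's actual pipeline: glow is approximated by convolving detected light-source regions with an isotropic kernel standing in for the learned APSF, and low-light boosting is approximated by a gamma-corrected blend weighted by an attention map. The function names, the Gaussian kernel, and the toy light-source and attention maps are all assumptions made for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def apsf_like_kernel(size=63, sigma=9.0):
    """Isotropic Gaussian stand-in for an APSF glow kernel (assumption:
    the real APSF depends on atmospheric scattering parameters)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def render_glow(image, light_mask, kernel):
    """Add glow by convolving the detected light-source regions with the kernel."""
    glow = np.stack(
        [fftconvolve(image[..., c] * light_mask, kernel, mode="same") for c in range(3)],
        axis=-1,
    )
    return np.clip(image + glow, 0.0, 1.0)

def boost_low_light(image, attention, gamma=0.6):
    """Brighten dark regions: high attention -> stronger gamma correction,
    low attention (haze/glow regions) -> image left mostly unchanged."""
    brightened = np.power(np.clip(image, 1e-6, 1.0), gamma)
    return attention[..., None] * brightened + (1 - attention[..., None]) * image

# Toy usage with random data standing in for a real nighttime image.
img = np.random.rand(256, 256, 3).astype(np.float32)
mask = (img.mean(-1) > 0.95).astype(np.float32)  # stand-in light-source map
attn = 1.0 - img.mean(-1)                        # stand-in attention: darker -> higher
glowy = render_glow(img, mask, apsf_like_kernel())
enhanced = boost_low_light(glowy, attn)
```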

    Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding

    Full text link
    This work addresses the problem of semantic scene understanding under dense fog. Although considerable progress has been made in semantic scene understanding, it mainly concerns clear-weather scenes. Extending recognition methods to adverse weather conditions such as fog is crucial for outdoor applications. In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both synthetic and real foggy data. In addition, we present three other stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) the Foggy Zurich dataset, comprising 3808 real foggy images, with pixel-level semantic annotations for 16 images with dense fog. Our experiments show that 1) our fog simulation slightly outperforms a state-of-the-art competing simulation with respect to the task of semantic foggy scene understanding (SFSU), and 2) CMAda significantly improves the performance of state-of-the-art models for SFSU by leveraging unlabeled real foggy data. The datasets and code are publicly available. Comment: final version, ECCV 2018
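    The paper's fog simulation refines transmittance using semantic input; the sketch below shows only the basic homogeneous optical model that such simulations build on, together with an illustrative curriculum of increasing fog densities in the spirit of CMAda. The function name and parameter values are assumptions, not the paper's implementation.

```python
import numpy as np

def add_synthetic_fog(clear_rgb, depth_m, beta=0.03, airlight=0.9):
    """Standard homogeneous optical model used for fog simulation:
        I(x) = J(x) * t(x) + A * (1 - t(x)),  with  t(x) = exp(-beta * d(x)).
    clear_rgb: HxWx3 floats in [0, 1]; depth_m: HxW scene depth in metres.
    Larger beta means denser fog and shorter visibility."""
    t = np.exp(-beta * depth_m)[..., None]   # transmittance map
    return clear_rgb * t + airlight * (1.0 - t)

# Illustrative curriculum: adapt in steps from light to dense synthetic fog.
betas_light_to_dense = [0.005, 0.01, 0.02, 0.03]

clear = np.random.rand(128, 128, 3)                      # stand-in clear-weather image
depth = np.random.uniform(5.0, 300.0, size=(128, 128))   # stand-in depth map
foggy_versions = [add_synthetic_fog(clear, depth, beta=b) for b in betas_light_to_dense]
```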

    The state-of-the-art progress in cloud detection, identification, and tracking approaches: a systematic review

    Get PDF
    A cloud is a mass of water vapor floating in the atmosphere. It is visible from the ground and can remain at a variable height for some time. Clouds are very important because their interaction with the rest of the atmosphere has a decisive influence on weather, for instance by occluding sunlight or by bringing rain. Weather denotes the behavior of the atmosphere and is determinant in several human activities, such as agriculture or energy capture. Cloud detection is therefore an important process for which several methods have been investigated and published in the literature. The aim of this paper is to review a selection of such proposals; the papers analyzed and discussed can, in general, be classified into three types. The first is devoted to the analysis and explanation of clouds and their types, and to existing imaging systems. The second deals with cloud detection, where diverse methods are analyzed, namely those based on the analysis of satellite images and those based on the analysis of images from cameras located on Earth. The last part is devoted to cloud forecasting and tracking. Cloud detection from both kinds of systems relies on thresholding techniques and a few machine-learning algorithms. To compute the cloud motion vectors used for cloud tracking, correlation-based methods are commonly employed. A few machine-learning methods for cloud tracking are also available in the literature and are discussed in this paper as well.
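    The review notes that cloud detection commonly relies on thresholding and that cloud motion vectors for tracking are typically computed with correlation-based methods. The sketch below illustrates both ideas in a toy form; the red/blue-ratio threshold, block size, and function names are assumptions rather than methods taken from any specific surveyed paper.

```python
import numpy as np

def cloud_mask_rb_ratio(rgb, threshold=0.75):
    """Threshold-based cloud detection for ground-based sky images:
    clouds scatter red and blue roughly equally while clear sky is much bluer,
    so a high red/blue ratio indicates cloud pixels."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float) + 1e-6
    return (r / b) > threshold

def cloud_motion_vector(prev, curr, block=32, search=8):
    """Correlation-style block matching: find the displacement of a central block
    between two consecutive sky images (toy single-block version)."""
    h, w = prev.shape
    y0, x0 = h // 2 - block // 2, w // 2 - block // 2
    ref = prev[y0:y0 + block, x0:x0 + block]
    best, best_dxy = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y0 + dy:y0 + dy + block, x0 + dx:x0 + dx + block]
            score = np.sum((ref - ref.mean()) * (cand - cand.mean()))  # correlation score
            if score > best:
                best, best_dxy = score, (dx, dy)
    return best_dxy

# Toy usage: shift a random "sky" image and recover the displacement.
prev = np.random.rand(128, 128)
curr = np.roll(prev, shift=(3, 5), axis=(0, 1))   # simulate cloud drift
print(cloud_motion_vector(prev, curr))            # expected to recover roughly (5, 3)
```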

    Patch-Wise Blind Image Deblurring via Michelson Channel Prior

    Get PDF

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    Full text link
    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, including computer vision (CV), speech recognition, and natural language processing. While remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent developments in the DL field that can be used for DL in RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities relating to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing
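    As a small illustration of item (vi), transfer learning, the sketch below fine-tunes only the classification head of an ImageNet-pretrained backbone on a stand-in remote-sensing dataset; the backbone choice, class count, and hyperparameters are assumptions, not recommendations from the survey.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning sketch: freeze pretrained features, retrain a new head.
num_classes = 10                                   # placeholder number of RS classes
model = models.resnet18(weights="IMAGENET1K_V1")   # downloads ImageNet weights
for p in model.parameters():
    p.requires_grad = False                        # freeze the pretrained features
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new RS classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on random tensors standing in for RS image patches.
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, num_classes, (4,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```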