
    A new technique based on mini-UAS for estimating water and bottom radiance contributions in optically shallow waters

    The mapping of nearshore bathymetry from spaceborne radiometers is commonly used for quality control of ocean colour products in littoral waters. However, the accuracy of these estimates is relatively poor with respect to those derived from Lidar systems, due in part to the large uncertainties of bottom depth retrievals caused by changes in bottom reflectivity. Here, we present a method based on mini unmanned aerial system (UAS) images for discriminating bottom-reflected and water radiance components by taking advantage of shadows created by different structures sitting on the bottom boundary. Aerial surveys were conducted with a Draganfly X4P drone on October 1, 2013, at low tide, in optically shallow waters of the Saint Lawrence Estuary. Colour images with a spatial resolution of 3 mm were obtained with an Olympus EPM-1 camera at 10 m height. Preliminary results showed an increase in the relative difference between bright and dark pixels (dP) toward the red wavelengths of the camera's receiver. This suggests that dP values can potentially be used as a quantitative proxy of bottom reflectivity after removing artefacts related to Fresnel reflection and bottom adjacency effects.
    Peer Reviewed. Postprint (published version)
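    As a rough sketch of how such a per-channel dP proxy might be computed: the abstract does not give the exact formula, so the relative-difference definition below (and the toy channel means) are assumptions.

    ```python
    import numpy as np

    def relative_brightness_difference(bright_px, dark_px):
        """Relative difference dP between sunlit and shadowed pixel values,
        computed per colour channel. Illustrative definition only: we assume
        dP = (bright - dark) / bright, which is not stated in the abstract."""
        bright = np.asarray(bright_px, dtype=float)
        dark = np.asarray(dark_px, dtype=float)
        return (bright - dark) / bright

    # Toy channel means sampled outside and inside a shadow (assumed values,
    # ordered blue, green, red):
    bright = np.array([120.0, 140.0, 160.0])
    dark = np.array([100.0, 105.0, 104.0])
    dP = relative_brightness_difference(bright, dark)
    # With these values dP grows toward the red channel, mirroring the
    # reported trend.
    ```
    
    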

    Mask-ShadowGAN: Learning to Remove Shadows from Unpaired Data

    This paper presents a new method for shadow removal using unpaired data, enabling us to avoid tedious annotations and obtain more diverse training samples. However, directly employing adversarial learning and cycle-consistency constraints is insufficient to learn the underlying relationship between the shadow and shadow-free domains, since the mapping between shadow and shadow-free images is not simply one-to-one. To address the problem, we formulate Mask-ShadowGAN, a new deep framework that automatically learns to produce a shadow mask from the input shadow image and then takes the mask to guide the shadow generation via re-formulated cycle-consistency constraints. In particular, the framework simultaneously learns to produce shadow masks and to remove shadows, to maximize the overall performance. Also, we prepared an unpaired dataset for shadow removal and demonstrated the effectiveness of Mask-ShadowGAN in various experiments, even though it was trained on unpaired data.
    Comment: Accepted to ICCV 201
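    A minimal sketch of the mask-derivation idea: shadow regions are roughly where the generated shadow-free image is brighter than the input shadow image, so a mask can be obtained by thresholding their difference. The median threshold used here is our simplification, not the paper's exact procedure.

    ```python
    import numpy as np

    def shadow_mask(shadow_img, shadow_free_img, thresh=None):
        """Binary shadow mask from an (input, generated shadow-free) pair.
        Assumption: thresholding the brightness difference at its median;
        Mask-ShadowGAN's actual binarization scheme may differ."""
        diff = shadow_free_img.astype(float) - shadow_img.astype(float)
        if thresh is None:
            thresh = np.median(diff)
        return (diff > thresh).astype(np.uint8)

    # Toy single-channel example: a dark 2x2 patch inside a bright scene.
    shadow = np.full((6, 6), 200, dtype=np.uint8)
    shadow[2:4, 2:4] = 80                        # shadowed patch
    free = np.full((6, 6), 200, dtype=np.uint8)  # generator output: shadow removed
    mask = shadow_mask(shadow, free)             # 1 on the patch, 0 elsewhere
    ```
    
    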

    Direction-aware Spatial Context Features for Shadow Detection

    Shadow detection is a fundamental and challenging task, since it requires an understanding of global image semantics and shadows appear against varied backgrounds. This paper presents a novel network for shadow detection that analyzes image context in a direction-aware manner. To achieve this, we first formulate a direction-aware attention mechanism in a spatial recurrent neural network (RNN) by introducing attention weights when aggregating spatial context features in the RNN. By learning these weights through training, we can recover direction-aware spatial context (DSC) for detecting shadows. This design is developed into the DSC module and embedded in a CNN to learn DSC features at different levels. Moreover, a weighted cross-entropy loss is designed to make the training more effective. We employ two common shadow detection benchmark datasets and perform various experiments to evaluate our network. Experimental results show that our network outperforms state-of-the-art methods, achieving 97% accuracy and a 38% reduction in balance error rate.
    Comment: Accepted for oral presentation at CVPR 2018. The journal version of this paper is arXiv:1805.0463
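    A weighted cross-entropy of the kind described can be sketched as below. The inverse-frequency class weights are an assumption (shadow pixels are usually the rarer class); the paper's exact weighting may differ.

    ```python
    import numpy as np

    def weighted_bce(pred, target, eps=1e-7):
        """Class-balanced binary cross entropy for shadow masks.
        Assumed scheme: each class weighted by the inverse of its pixel
        frequency, so sparse shadow pixels are not swamped by background."""
        pred = np.clip(np.asarray(pred, float), eps, 1 - eps)
        target = np.asarray(target, float)
        n_pos = target.sum()
        n_neg = target.size - n_pos
        w_pos = target.size / (2.0 * max(n_pos, 1))
        w_neg = target.size / (2.0 * max(n_neg, 1))
        loss = -(w_pos * target * np.log(pred)
                 + w_neg * (1 - target) * np.log(1 - pred))
        return loss.mean()

    # One shadow pixel among four: a confident correct prediction scores
    # a lower loss than an uninformative one.
    target = np.array([1.0, 0.0, 0.0, 0.0])
    loss_good = weighted_bce(np.array([0.9, 0.1, 0.1, 0.1]), target)
    loss_bad = weighted_bce(np.array([0.5, 0.5, 0.5, 0.5]), target)
    ```
    
    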

    Cloud Shadow Detection and Removal from Aerial Photo Mosaics Using Light Detection and Ranging (LIDAR) Reflectance Images

    The process of creating aerial photo mosaics can be severely affected by clouds and the shadows they create. In the CZMIL project discussed in this work, the aerial survey aircraft flies below the clouds, but shadows cast by clouds above the aircraft give the resultant mosaic sub-optimal quality. Large intensity variations, caused both by cloud shadow within a single image and by the juxtaposition of shadowed and unshadowed areas during the image-stitching process, produce an image that may be of limited use to researchers. Ideally, we would like to detect such distortions and correct for them, effectively removing the effects of cloud shadow from the mosaic. In this work, we present a method for identifying areas of cloud shadow within the image mosaic process using supervised classification, and subsequently correcting these areas via several image-matching and color-correction techniques. Although the available data contained many extreme circumstances, we show that, in general, our decision to use LIDAR reflectance images to classify cloud and non-cloud pixels has been very successful, and it is the fundamental basis for any color correction used to remove the cloud shadows. We also implement and discuss several color transformation methods used to correct the cloud-shadow-covered pixels, with the goal of producing a mosaic image that is free from cloud shadow effects.
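    One plausible member of the "several color transformation methods" family is a Reinhard-style statistics transfer: shift and scale shadowed pixel values so their mean and standard deviation match a shadow-free reference region. The sketch below assumes this simple variant; it is not necessarily the correction the authors used.

    ```python
    import numpy as np

    def match_statistics(shadow_vals, reference_vals):
        """Map shadowed pixel values so their mean/std match a shadow-free
        reference region (per channel in practice; 1-D here for brevity)."""
        shadow_vals = np.asarray(shadow_vals, float)
        reference_vals = np.asarray(reference_vals, float)
        s_mean, s_std = shadow_vals.mean(), shadow_vals.std()
        r_mean, r_std = reference_vals.mean(), reference_vals.std()
        return (shadow_vals - s_mean) * (r_std / max(s_std, 1e-9)) + r_mean

    # Toy intensities: dark cloud-shadow pixels corrected toward a sunlit
    # reference patch.
    shadowed = np.array([50.0, 60.0, 70.0])
    reference = np.array([150.0, 160.0, 170.0, 180.0])
    corrected = match_statistics(shadowed, reference)
    ```
    
    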

    ORGB: Offset Correction in RGB Color Space for Illumination-Robust Image Processing

    Single materials have colors which form straight lines in RGB space. However, in severe shadow cases, those lines do not intersect the origin, which is inconsistent with the description in most of the literature. This paper is concerned with the detection and correction of the offset between the intersection and the origin. First, we analyze the reason this offset forms via an optical imaging model. Second, we present a simple and effective way to detect and remove the offset. The resulting images, named ORGB, have almost the same appearance as the original RGB images while being more illumination-robust for color space conversion. Moreover, image processing using ORGB instead of RGB is free from the interference of shadows. Finally, the proposed offset correction method is applied to a road detection task, improving performance in both quantitative and qualitative evaluations.
    Comment: Project website: https://baidut.github.io/ORGB
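    The offset detection can be illustrated as a multi-line intersection problem: fit one line per material in RGB space, then find the least-squares point closest to all lines, which estimates the common offset to subtract. This is a generic formulation of ours, not necessarily the paper's exact detection procedure.

    ```python
    import numpy as np

    def fit_line(points):
        """PCA line fit in RGB space: returns (a point on the line, unit direction)."""
        pts = np.asarray(points, dtype=float)
        mean = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - mean)
        return mean, vt[0]

    def common_offset(lines):
        """Least-squares point closest to all lines: for each line (p, d),
        the projector P = I - d d^T gives the normal-equation contribution
        P x = P p; summing and solving yields the estimated RGB offset."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, d in lines:
            P = np.eye(3) - np.outer(d, d)
            A += P
            b += P @ p
        return np.linalg.solve(A, b)

    # Synthetic check: two material lines through a known offset.
    offset_true = np.array([10.0, 20.0, 5.0])
    t = np.linspace(10, 50, 20)[:, None]
    mat1 = offset_true + t * np.array([0.3, 0.5, 0.8])
    mat2 = offset_true + t * np.array([0.7, 0.4, 0.2])
    offset_est = common_offset([fit_line(mat1), fit_line(mat2)])
    # ORGB would then be rgb - offset_est, clipped to valid range.
    ```
    
    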