180 research outputs found

    Near-Infrared Guided Color Image Dehazing

    Near-infrared (NIR) light penetrates haze better than visible light because of its longer wavelength, and is therefore less scattered by particles in the air. This makes it well suited to image dehazing, revealing details of distant objects in landscape photographs. In this paper, we propose an improved image dehazing scheme that uses a pair of color and NIR images to effectively estimate the airlight color and transfer details from the NIR image. The proposed two-stage method first exploits the dissimilarity between RGB and NIR to estimate the airlight color, then dehazes the image through an optimization framework. Experiments on captured hazy images show that our method achieves substantial improvements in detail recovery and color distribution over existing image dehazing algorithms.
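The dehazing step above rests on the standard atmospheric scattering model I = J·t + A·(1 − t), where J is the scene radiance, A the airlight color, and t the transmission. The paper estimates A from RGB/NIR dissimilarity and solves for J via optimization; as a minimal sketch, assuming A and t are already known, the model can simply be inverted:

```python
import numpy as np

def dehaze(hazy, airlight, transmission, t_min=0.1):
    """Invert the haze model I = J*t + A*(1 - t) for the scene radiance J.

    hazy:         H x W x 3 image in [0, 1]
    airlight:     length-3 airlight color A (here assumed already estimated,
                  e.g. from RGB/NIR dissimilarity as in the paper)
    transmission: H x W per-pixel transmission map t
    """
    # Clamp t away from zero so division stays stable in dense haze.
    t = np.clip(transmission, t_min, 1.0)[..., None]
    return np.clip((hazy - airlight * (1.0 - t)) / t, 0.0, 1.0)
```

The `t_min` floor is a common practical safeguard, not something specific to this paper: where t approaches zero, almost no scene radiance survives and the inversion would amplify noise unboundedly.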

    Filmy Cloud Removal on Satellite Imagery with Multispectral Conditional Generative Adversarial Nets

    In this paper, we propose a method for cloud removal from visible-light RGB satellite images by extending conditional Generative Adversarial Networks (cGANs) from RGB images to multispectral images. Satellite images are widely used for purposes such as natural-environment monitoring (pollution, forests, rivers), transportation improvement, and prompt emergency response to disasters. However, the obscuration caused by clouds makes it unreliable to monitor the situation on the ground with a visible-light camera. Images captured at longer wavelengths are introduced to reduce the effects of clouds; Synthetic Aperture Radar (SAR) is one example that preserves visibility even when clouds are present. On the other hand, spatial resolution decreases as wavelength increases, and images captured at long wavelengths differ considerably in appearance from those captured in visible light. We therefore propose a network that removes clouds and generates visible-light images from multispectral inputs. This is achieved by extending the input channels of the cGAN to accommodate multispectral images. The network is trained to output images close to the ground truth, using images with clouds synthesized over the ground truth as inputs. In the available dataset the proportion of forest and sea images is very high, which would bias the training set if it were sampled uniformly from the original data. We therefore use t-Distributed Stochastic Neighbor Embedding (t-SNE) to mitigate this bias. Finally, we confirm the feasibility of the proposed network on a dataset of four-band images comprising three visible-light bands and one near-infrared (NIR) band.
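The rebalancing idea above (embed the images in a low-dimensional space, then sample so that no region dominates) can be sketched without the paper's actual pipeline. The paper uses t-SNE; this dependency-free stand-in uses a 2-component PCA via SVD instead, then draws at most `per_bin` samples from each cell of a grid over the embedding. The function name and parameters are illustrative, not from the paper:

```python
import numpy as np

def rebalance_indices(features, n_bins=4, per_bin=2, seed=0):
    """Pick a training subset spread evenly over a 2-D embedding.

    features: N x D array of per-image feature vectors.
    Returns sorted indices of the selected subset.
    """
    rng = np.random.default_rng(seed)
    X = features - features.mean(axis=0)
    # 2-D linear embedding from the top two right-singular vectors
    # (a simple stand-in for the t-SNE embedding used in the paper).
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    emb = X @ vt[:2].T
    # Assign each point to a cell of an n_bins x n_bins grid.
    lo, hi = emb.min(axis=0), emb.max(axis=0)
    cells = np.floor((emb - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
    cells = np.clip(cells, 0, n_bins - 1)
    chosen = []
    for cell in {tuple(c) for c in cells}:
        idx = np.where((cells == cell).all(axis=1))[0]
        chosen.extend(rng.choice(idx, size=min(per_bin, len(idx)),
                                 replace=False))
    return np.sort(np.array(chosen))
```

Over-represented regions (forest, sea) collapse into a few dense cells and contribute only `per_bin` samples each, while rare scene types keep their representation.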

    Does Dehazing Model Preserve Color Information?

    Image dehazing aims at recovering the image information lost to fog, haze, or smoke in the scene during acquisition. This degradation causes a loss of contrast and color information, so enhancement becomes an inevitable step in imaging applications and consumer photography. Color information has mostly been evaluated perceptually alongside overall quality, but no work addresses this aspect specifically. We demonstrate how the dehazing model affects color information on simulated and real images, using a convergence model from the perception of transparency to simulate haze. We evaluate color loss in terms of hue angle in the IPT color space, saturation in the CIE LUV color space, and perceived color difference in the CIE LAB color space. Results indicate that saturation changes critically, and that hue shifts for achromatic colors and blue/yellow colors, where the usual image-processing spaces do not exhibit constant hue lines. We suggest that a correction model based on color-transparency perception could help retrieve color information as an additive layer on top of dehazing algorithms.
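Two of the measurements named above are straightforward once colors are expressed in the right space. As a simplified sketch that works entirely in CIELAB (the paper also uses IPT for hue and CIE LUV for saturation; those conversions are omitted here), perceived color difference is the CIE76 Euclidean distance, and hue/chroma follow from the a*, b* coordinates:

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    return np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float),
                          axis=-1)

def hue_chroma(lab):
    """Hue angle h_ab (degrees, in [0, 360)) and chroma C*ab from CIELAB."""
    lab = np.asarray(lab, float)
    a, b = lab[..., 1], lab[..., 2]
    return np.degrees(np.arctan2(b, a)) % 360.0, np.hypot(a, b)
```

Comparing `hue_chroma` of corresponding pixels before and after dehazing exposes exactly the kind of hue rotation and saturation (chroma) collapse the abstract reports.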

    An RGB-NIR Image Fusion Method for Improving Feature Matching

    The quality of RGB images can be degraded by poor weather or lighting conditions, so images often need to be enhanced before computer vision techniques can work correctly. This paper proposes an RGB image enhancement method for improving feature matching, a core step in most computer vision pipelines. The proposed method decomposes a near-infrared (NIR) image into fine-detail, medium-detail, and base images using weighted least squares filters (WLSF), and boosts the medium-detail image. The fine and boosted medium-detail images are then combined, and the combined NIR detail image replaces the luminance detail image of the RGB image. Experiments demonstrate that the proposed method effectively enhances RGB images, so that more stable image features are extracted. In addition, the method minimizes the loss of useful visual (optical) information from the original RGB image, which can then be used for other vision tasks.
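The decomposition-and-replacement scheme above can be sketched with a simple box filter standing in for the paper's weighted least squares filter (the WLS filter is edge-preserving; the box blur here is only an illustrative substitute). Two blur scales split the NIR image into fine, medium, and base layers; the medium layer is boosted, and the RGB luminance keeps only its base layer:

```python
import numpy as np

def box_blur(img, k):
    """Box filter with odd kernel size k (stand-in for the WLS filter)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_luminance(lum, nir, k_fine=3, k_med=9, boost=1.5):
    """Replace the luminance detail of an RGB image with boosted NIR detail.

    Decomposition: base   = blur(nir, k_med)
                   medium = blur(nir, k_fine) - base
                   fine   = nir - blur(nir, k_fine)
    The fused luminance keeps the RGB base layer but takes its detail
    (with the medium scale amplified by `boost`) from the NIR image.
    """
    nir_base = box_blur(nir, k_med)
    nir_med = box_blur(nir, k_fine) - nir_base
    nir_fine = nir - box_blur(nir, k_fine)
    return box_blur(lum, k_med) + nir_fine + boost * nir_med
```

With `boost=1.0` and `nir` equal to the luminance itself, the three layers sum back to the input exactly, which is a useful sanity check on any such decomposition.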

    Scattering Removal for Finger-Vein Image Restoration

    Finger-vein recognition has received increased attention recently. However, finger-vein images are usually captured in poor quality, which makes finger-vein feature representation unreliable and further impairs the accuracy of finger-vein recognition. In this paper, we first analyse the intrinsic factors causing finger-vein image degradation, and then propose a simple but effective image restoration method based on scattering removal. To properly describe finger-vein image degradation, a biological optical model (BOM) specific to finger-vein imaging is proposed according to the principles of light propagation in biological tissue. Based on the BOM, the light-scattering component is sensibly estimated and removed to restore the finger-vein image. Experimental results demonstrate that the proposed method substantially enhances finger-vein image contrast and improves finger-vein matching accuracy.
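The restoration step above subtracts an estimated scattering component and then restores contrast. The paper derives that component from its biological optical model; as a much cruder stand-in, this sketch treats scattering as a uniform veil proportional to the mean intensity (`strength` is a hypothetical knob, not a parameter from the paper):

```python
import numpy as np

def remove_scattering(img, strength=0.6):
    """Subtract an estimated scattering veil, then restretch contrast.

    img: 2-D grayscale image in [0, 1].
    The uniform-veil estimate is an illustrative simplification of the
    paper's BOM-based scattering estimate.
    """
    img = np.asarray(img, float)
    veil = strength * img.mean()          # crude global scattering estimate
    restored = np.clip(img - veil, 0.0, None)
    lo, hi = restored.min(), restored.max()
    return (restored - lo) / (hi - lo + 1e-12)  # min-max contrast stretch
```

Even this crude version shows the mechanism: removing the additive scattering term frees dynamic range, and the final stretch converts that into the contrast gain the abstract reports.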

    Survey of the Image Dehazing Using Colour Attenuation and Dark Channel Techniques

    Fog is a primary cause of degradation in outdoor images. Haze is produced by suspended particles, such as minerals, sand, and plankton, that exist in rivers, seas, and lakes; light reflected by objects travels toward the camera, and some of it meets these suspended particles. We analysed distinctive haze-related features in a learning framework to identify the best feature combination for image dehazing. We also reviewed previous research done by experts and their proposed theories. Furthermore, we studied the process of haze removal and several techniques used for dehazing.
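Of the two techniques named in the title, the dark channel prior is the easiest to state concretely: in haze-free outdoor images, most local patches contain some pixel that is dark in at least one color channel, so the patch-wise minimum over channels is near zero, and haze lifts it toward the airlight. A minimal sketch:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel prior: per-pixel minimum over color channels,
    followed by a minimum filter over a (patch x patch) neighbourhood.

    img: H x W x 3 image in [0, 1]; patch must be odd.
    """
    mins = img.min(axis=2)
    pad = patch // 2
    p = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.full((h, w), np.inf)
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, p[dy:dy + h, dx:dx + w])
    return out
```

In He et al.'s formulation, the transmission estimate then follows as t = 1 − ω·dark_channel(I / A) for an airlight A and a small retention factor ω (typically 0.95), which is how the dark channel feeds the dehazing methods this survey reviews.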

    Automated Cloud Removal on High-Altitude UAV Imagery Through Deep Learning on Synthetic Data

    New theories and applications of deep learning have recently been discovered and implemented within the field of machine learning. The effectiveness of deep learning models spans many domains, including image processing and enhancement. Specifically, the automated removal of clouds, smoke, and haze from images has become a prominent and pertinent field of research. In this paper, I propose an analysis of, and a synthetic-training-data variant for, the All-in-One Dehazing Network (AOD-Net) architecture that performs better at removing clouds and haze, most specifically in images from high-altitude unmanned aerial vehicles (UAVs).
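The synthetic-training-data side of the above can be sketched independently of the network. Pairs of (hazy, clean) images are generated from clean images via the atmospheric scattering model I = J·t + A·(1 − t); the coarse-noise transmission map and the parameter ranges below are illustrative assumptions, not the paper's recipe (AOD-Net itself learns a single joint K(x) map rather than t and A separately):

```python
import numpy as np

def synthesize_haze(clean, seed=0):
    """Generate a (hazy, clean) training pair from a clean image.

    clean: H x W x 3 image in [0, 1].
    Returns (hazy, t, A) so the pair can also be checked by inverting
    the model. Parameter ranges are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = clean.shape
    A = rng.uniform(0.7, 1.0)             # random global atmospheric light
    # Smoothly varying random transmission: upsample a coarse noise grid.
    coarse = rng.uniform(0.3, 0.9, size=(4, 4))
    t = np.kron(coarse, np.ones((h // 4 + 1, w // 4 + 1)))[:h, :w]
    hazy = clean * t[..., None] + A * (1.0 - t[..., None])
    return hazy, t, A
```

Because the clean image is the ground truth by construction, such pairs can supervise a dehazing network directly, which is the appeal of synthetic training data when real cloud-free/cloudy UAV pairs are scarce.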