6 research outputs found

    Filmy Cloud Removal on Satellite Imagery with Multispectral Conditional Generative Adversarial Nets

    In this paper, we propose a method for removing clouds from visible-light RGB satellite images by extending conditional Generative Adversarial Networks (cGANs) from RGB images to multispectral images. Satellite images are widely used for purposes such as natural environment monitoring (pollution, forests, rivers), transportation planning, and prompt emergency response to disasters. However, the obscurity caused by clouds makes monitoring the situation on the ground with a visible-light camera unreliable. Images captured at longer wavelengths are introduced to reduce the effects of clouds; Synthetic Aperture Radar (SAR) is one example that remains usable even when clouds are present. On the other hand, spatial resolution decreases as wavelength increases, and images captured at long wavelengths differ considerably in appearance from those captured in visible light. We therefore propose a network that takes multispectral images as input, removes clouds, and generates visible-light images. This is achieved by extending the input channels of cGANs to accommodate multispectral images. The networks are trained to output images close to the ground truth, using images synthesized by compositing clouds over the ground truth as inputs. In the available dataset, the proportion of forest and sea images is very high, which would bias the training set if it were sampled uniformly from the original data. We therefore use t-Distributed Stochastic Neighbor Embedding (t-SNE) to mitigate this bias. Finally, we confirm the feasibility of the proposed network on a dataset of four-band images comprising three visible-light bands and one near-infrared (NIR) band.
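    As a rough illustration of the channel extension described above, the sketch below widens a pix2pix-style generator's input from 3 RGB channels to 4 (RGB + NIR) while keeping a 3-channel RGB output. The layer sizes, class name, and image size are illustrative assumptions, not the paper's actual architecture.

    ```python
    # Minimal sketch (assumptions: PyTorch; a pix2pix-style encoder-decoder;
    # layer sizes are illustrative, not the paper's exact architecture).
    import torch
    import torch.nn as nn

    class MultispectralGenerator(nn.Module):
        """cGAN generator whose input is widened from 3 RGB channels to
        4 multispectral channels (RGB + NIR); output is a 3-channel RGB image."""
        def __init__(self, in_channels=4, out_channels=3):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 64, 4, stride=2, padding=1),   # H -> H/2
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(64, 128, 4, stride=2, padding=1),           # H/2 -> H/4
                nn.BatchNorm2d(128),
                nn.LeakyReLU(0.2, inplace=True),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # H/4 -> H/2
                nn.BatchNorm2d(64),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1),  # H/2 -> H
                nn.Tanh(),  # outputs in [-1, 1], matching normalized RGB targets
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Cloudy 4-band input (batch, RGB+NIR, height, width) -> cloud-free RGB.
    cloudy = torch.randn(1, 4, 256, 256)
    rgb = MultispectralGenerator()(cloudy)   # shape: (1, 3, 256, 256)
    ```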

    Improving the Accuracy of Beauty Product Recommendations by Assessing Face Illumination Quality

    We focus on addressing the challenges in responsible beauty product recommendation, particularly when it involves comparing a product's color with a person's skin tone, as for foundation and concealer products. To make accurate recommendations, it is crucial to infer both the product attributes and product-specific facial features such as skin conditions or tone. However, while many product photos are taken under good lighting, face photos are taken under a wide range of conditions. Features extracted from photos taken in ill-illuminated environments can be highly misleading, or even incompatible with the product attributes, so bad illumination can severely degrade the quality of the recommendation. We introduce a machine learning framework for illumination assessment that classifies images as having either good or bad illumination. We then build an automatic user guidance tool that informs a user holding their camera whether their illumination is good or bad, giving them rapid feedback so they can interactively control how the photo is taken for their recommendation. Only a few studies are dedicated to this problem, mostly due to the lack of a dataset that is large, labeled, and diverse in both skin tones and light patterns; the absence of such a dataset leads to neglecting skin tone diversity. We therefore begin by constructing a diverse synthetic dataset that simulates various skin tones and light patterns, complementing an existing facial image dataset. Next, we train a Convolutional Neural Network (CNN) for illumination assessment on the synthetic dataset that outperforms existing solutions. Finally, we analyze how our work improves shade recommendation for various foundation products.
    Comment: 7 pages, 5 figures. Presented in FAccTRec202
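    To make the illumination-assessment step concrete, here is a minimal sketch of a binary CNN classifier of the kind the abstract describes. The architecture, names, and input size are assumptions for illustration, not the authors' published model.

    ```python
    # Minimal sketch (assumptions: PyTorch; architecture and names are
    # illustrative, not the paper's published model).
    import torch
    import torch.nn as nn

    class IlluminationClassifier(nn.Module):
        """Binary CNN: good vs. bad illumination for a face photo."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),                 # 128 -> 64
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),                 # 64 -> 32
                nn.AdaptiveAvgPool2d(1),         # 32 -> 1
            )
            self.head = nn.Linear(64, 2)         # logits: [bad, good]

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = IlluminationClassifier()
    frame = torch.randn(1, 3, 128, 128)          # one camera frame
    is_good = model(frame).argmax(dim=1).item()  # 1 = good illumination
    ```

    In a live guidance tool of the kind described, a prediction like this would run per camera frame and drive the on-screen good/bad feedback.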

    Color Mapping for Camera-Based Color Calibration and Color Transfer

    Ph.D., Doctor of Philosophy

    Improving the Pipeline for Stereo Post-Production


    A Survey of Color Mapping and its Applications

    Color mapping or color transfer methods aim to recolor a given image or video by deriving a mapping between that image and another image serving as a reference. This class of methods has received considerable attention in recent years, both in the academic literature and in industrial applications. Methods for recoloring images have often appeared under labels such as color correction, color transfer, or color balancing, but their goal is always the same: mapping the colors of one image to another. In this report, we present a comprehensive overview of these methods and offer a classification of current solutions based not only on their algorithmic formulation but also on their range of applications. We discuss the relative merits of each class of techniques through examples and show how color mapping solutions can be, and have been, applied to a diverse range of problems.
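    As a concrete instance of the color-mapping family this survey covers, the sketch below implements per-channel mean and standard-deviation matching, a simplified Reinhard-style transfer applied directly in RGB rather than the lαβ space of the original method. The function name and interface are illustrative.

    ```python
    # A simplified statistics-matching color transfer (assumption: a
    # Reinhard-style transfer done per channel in RGB for brevity).
    import numpy as np

    def transfer_color(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
        """Recolor `source` so its per-channel statistics match `reference`.

        Both images are float arrays of shape (H, W, 3) with values in [0, 1].
        """
        out = source.astype(np.float64).copy()
        ref = reference.astype(np.float64)
        for c in range(3):
            s_mean, s_std = out[..., c].mean(), out[..., c].std()
            r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
            # Shift and scale the source statistics onto the reference's.
            out[..., c] = (out[..., c] - s_mean) * (r_std / (s_std + 1e-8)) + r_mean
        return np.clip(out, 0.0, 1.0)

    # Usage: recolored = transfer_color(night_shot, golden_hour_reference)
    ```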