
    Unsupervised Diverse Colorization via Generative Adversarial Networks

    Colorization of grayscale images has been a hot topic in computer vision. Previous research mainly focuses on producing a colored image that matches the original one. However, since many colors share the same gray value, an input grayscale image could be diversely colored while maintaining realism. In this paper, we design a novel solution for unsupervised diverse colorization. Specifically, we leverage conditional generative adversarial networks to model the distribution of real-world item colors, developing a fully convolutional generator with multi-layer noise to enhance diversity, multi-layer condition concatenation to maintain realism, and stride 1 to preserve spatial information. With this novel network architecture, the model yields highly competitive performance on the open LSUN bedroom dataset. A Turing test with 80 human participants further indicates that our generated color schemes are highly convincing.
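    The generator described above can be outlined in a few lines of PyTorch. The following is a minimal sketch, not the paper's exact configuration: layer widths, kernel sizes, and the number of noise channels are assumptions, but it shows the three ingredients named in the abstract, i.e. stride-1 convolutions throughout, with the grayscale condition and fresh noise re-concatenated at every layer.

```python
import torch
import torch.nn as nn

class DiverseColorGenerator(nn.Module):
    """Fully convolutional generator: stride 1 everywhere, so spatial
    size is preserved; condition and noise are re-injected per layer."""

    def __init__(self, feat=64, noise_ch=4, n_blocks=4):
        super().__init__()
        self.noise_ch = noise_ch
        blocks, in_ch = [], 1 + noise_ch   # first block: condition + noise only
        for _ in range(n_blocks):
            blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, feat, kernel_size=3, stride=1, padding=1),
                nn.BatchNorm2d(feat),
                nn.ReLU(inplace=True),
            ))
            in_ch = feat + 1 + noise_ch    # features + condition + fresh noise
        self.blocks = nn.ModuleList(blocks)
        self.to_rgb = nn.Conv2d(feat, 3, kernel_size=1)

    def forward(self, gray):
        b, _, h, w = gray.shape            # gray: (B, 1, H, W) condition
        x = None
        for block in self.blocks:
            z = torch.randn(b, self.noise_ch, h, w, device=gray.device)
            x = block(torch.cat([gray, z] if x is None else [x, gray, z], dim=1))
        return torch.tanh(self.to_rgb(x))

# Same grayscale input, different noise draws -> two plausible colorizations.
g = DiverseColorGenerator()
gray = torch.rand(2, 1, 64, 64)
color_a, color_b = g(gray), g(gray)
```

    Because noise is injected at every layer rather than only at the input, two forward passes on the same grayscale image give different yet plausible colorizations, which is the source of diversity the abstract describes.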

    Generative Sensing: Transforming Unreliable Sensor Data for Reliable Recognition

    This paper introduces a deep-learning-enabled generative sensing framework which integrates low-end sensors with computational intelligence to attain recognition accuracy on par with that attained with high-end sensors. The proposed generative sensing framework aims at transforming low-end, low-quality sensor data into higher-quality sensor data in terms of achieved classification accuracy. The low-end data can be transformed into higher-quality data of the same modality or into data of another modality. Unlike existing methods for image generation, the proposed framework is based on discriminative models and aims to maximize recognition accuracy rather than a similarity measure. This is achieved through the introduction of selective feature regeneration in a deep neural network (DNN). The proposed generative sensing essentially transforms low-quality sensor data into high-quality information for robust perception. Results are presented to illustrate the performance of the proposed framework. Comment: 5 pages, submitted to IEEE MIPR 201
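    A hedged sketch of what "selective feature regeneration" could look like in code: a small learned module regenerates a subset of intermediate feature channels of a frozen classifier, and is trained with plain cross-entropy so that it maximizes recognition accuracy rather than any similarity measure, as the abstract emphasizes. The stand-in backbone, the choice of channels to regenerate, and the module shape are all assumptions.

```python
import torch
import torch.nn as nn

class FeatureRegenerator(nn.Module):
    """Regenerates a chosen subset of feature channels, leaving the rest intact."""

    def __init__(self, regen_idx):
        super().__init__()
        self.regen_idx = regen_idx
        n = len(regen_idx)
        self.unit = nn.Sequential(
            nn.Conv2d(n, n, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(n, n, 3, padding=1),
        )

    def forward(self, feats):
        out = feats.clone()
        out[:, self.regen_idx] = self.unit(feats[:, self.regen_idx])
        return out

# Stand-in frozen backbone and classifier head (assumptions, not the paper's nets).
extractor = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))
regen = FeatureRegenerator(regen_idx=list(range(8)))  # assume first 8 channels are distortion-sensitive
for p in list(extractor.parameters()) + list(head.parameters()):
    p.requires_grad = False                           # only the regenerator learns

opt = torch.optim.Adam(regen.parameters(), lr=1e-4)
x, y = torch.rand(4, 3, 64, 64), torch.randint(0, 10, (4,))       # low-quality inputs, labels
loss = nn.functional.cross_entropy(head(regen(extractor(x))), y)  # accuracy-driven, not similarity-driven
loss.backward()
opt.step()
```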

    Past to Present (P2P): Road Thermal Image Colorization

    Thermal image colorization into a realistic RGB image is a challenging task. Thermal cameras can detect objects in particular situations (e.g., darkness and fog) where the human eye cannot; however, thermal images are difficult for humans to interpret. Improving thermal image colorization is therefore an important task in these areas. The results of existing colorization methods still suffer from color ambiguity, distortion, and blurriness. This paper focuses on thermal image colorization using the pix2pix network architecture based on the Generative Adversarial Network (GAN). Pix2pix is a model that transforms a thermal image into an RGB image, but our proposed model uses three types of input images: the present-frame thermal image, the present-frame RGB image, and the previous-frame RGB image. By extracting the color information (i.e., luminance and chrominance) of the previous-frame RGB image, the model obtains a more realistic RGB image. Experiments use two kinds of evaluation: a quantitative measure, the calculation of specific numerical scores with PSNR and SSIM, and a qualitative measure, human subjective evaluation. Both measures are used to compare pix2pix with our proposed method.
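    Under the pix2pix framing described above, the data flow can be sketched as follows: the generator sees the present-frame thermal image concatenated with the previous-frame RGB image (the source of color information), and the present-frame RGB image serves as the training target for the usual adversarial-plus-L1 objective. The tiny generator and discriminator below are placeholders, not the paper's networks.

```python
import torch
import torch.nn as nn

gen = nn.Sequential(                       # input: 1 thermal + 3 previous-RGB channels
    nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
disc = nn.Sequential(                      # PatchGAN-style critic, conditioned on the thermal input
    nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)

thermal_t = torch.rand(1, 1, 64, 64)       # present-frame thermal image
rgb_prev  = torch.rand(1, 3, 64, 64)       # previous-frame RGB image (color cue)
rgb_t     = torch.rand(1, 3, 64, 64)       # present-frame RGB image (ground truth)

fake = gen(torch.cat([thermal_t, rgb_prev], dim=1))
adv  = disc(torch.cat([thermal_t, fake], dim=1))
# Generator objective: fool the discriminator and stay close to ground truth (L1),
# with the standard pix2pix weighting of 100 on the L1 term assumed here.
g_loss = (nn.functional.binary_cross_entropy_with_logits(adv, torch.ones_like(adv))
          + 100.0 * nn.functional.l1_loss(fake, rgb_t))
```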

    Colorization of Multispectral Image Fusion using Convolutional Neural Network approach

    The proposed technique offers a significant advantage in enhancing multiband nighttime imagery for surveillance and navigation purposes. The multi-band image dataset comprises visual and infrared motion sequences covering various military and civilian surveillance scenarios, including people who are stationary, walking, or running, as well as vehicles, buildings, and other man-made structures. The colorization method provides superior discrimination, identification of objects, faster reaction times, and increased scene understanding compared with a monochrome fused image. A guided-filtering approach is used to decompose the source images into two parts: an approximation part and a detail-content part. A weighted-averaging method is then used to fuse the approximation part, while multi-layer features are extracted from the detail-content part using the VGG-19 network. Finally, the approximation part and the detail-content part are combined to reconstruct the fused image. The proposed approach offers better outcomes compared with prevailing state-of-the-art techniques in terms of quantitative and qualitative parameters. In future, the proposed technique will help with battlefield monitoring, defence situation awareness, surveillance, target tracking, and person authentication.
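    The decompose-fuse-reconstruct pipeline can be illustrated with a simplified sketch. Two simplifying assumptions keep it self-contained: a box filter stands in for the guided filter, and VGG-19 is loaded without pretrained weights. The structure, i.e. weighted averaging of the approximation parts and VGG-feature-weighted fusion of the detail parts, follows the abstract.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

def decompose(img, k=15):
    """Split an image into an approximation part and a detail-content part."""
    base = F.avg_pool2d(img, k, stride=1, padding=k // 2)  # box filter stands in for guided filter
    return base, img - base

features = vgg19(weights=None).features[:4].eval()  # early VGG-19 block as detail-feature extractor

def fuse(vis, ir):
    base_v, det_v = decompose(vis)
    base_i, det_i = decompose(ir)
    fused_base = 0.5 * base_v + 0.5 * base_i        # weighted averaging of approximation parts
    with torch.no_grad():                           # VGG-19 activity as per-pixel detail weights
        w_v = features(det_v.repeat(1, 3, 1, 1)).abs().mean(1, keepdim=True)
        w_i = features(det_i.repeat(1, 3, 1, 1)).abs().mean(1, keepdim=True)
    w = torch.softmax(torch.cat([w_v, w_i], dim=1), dim=1)
    fused_det = w[:, :1] * det_v + w[:, 1:] * det_i
    return fused_base + fused_det                   # reconstruct the fused image

vis, ir = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
fused = fuse(vis, ir)
```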

    An Integrated Enhancement Solution for 24-hour Colorful Imaging

    The current industry practice for 24-hour outdoor imaging is to use a silicon camera supplemented with near-infrared (NIR) illumination. This results in color images with poor contrast at daytime and an absence of chrominance at nighttime. To resolve this dilemma, all existing solutions try to capture RGB and NIR images separately. However, they need additional hardware support and suffer from various drawbacks, including short service life, high price, and restriction to specific usage scenarios. In this paper, we propose a novel, integrated enhancement solution that produces clear color images, whether in abundant daytime sunlight or extremely low-light nighttime conditions. Our key idea is to separate the VIS and NIR information from the mixed signals, and to enhance the VIS signal adaptively with the NIR signal as assistance. To this end, we build an optical system to collect a new VIS-NIR-MIX dataset and present a physically meaningful CNN-based image processing algorithm. Extensive experiments show outstanding results, which demonstrate the effectiveness of our solution. Comment: AAAI 2020 (Oral)
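    The separate-then-enhance idea can be sketched as two sub-networks: one splits the mixed sensor signal into VIS and NIR estimates, and a second enhances the VIS image with the NIR estimate as guidance. Both sub-networks below are placeholder CNNs, assumed for illustration rather than taken from the paper.

```python
import torch
import torch.nn as nn

separator = nn.Sequential(                   # mixed RGB signal -> (VIS estimate, NIR estimate)
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 4, 3, padding=1),          # 3 VIS channels + 1 NIR channel
)
enhancer = nn.Sequential(                    # VIS + NIR guidance -> enhanced color image
    nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)

mixed = torch.rand(1, 3, 128, 128)           # raw VIS-NIR-mixed capture
sep = separator(mixed)
vis, nir = sep[:, :3], sep[:, 3:]            # separated VIS and NIR information
enhanced = enhancer(torch.cat([vis, nir], dim=1))
```

    Keeping the NIR estimate as an explicit guidance channel, rather than discarding it, is what lets the second stage adapt the enhancement to lighting conditions, matching the adaptive VIS-with-NIR-assistance idea in the abstract.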