4 research outputs found

    Deep Video Color Propagation

    Full text link
    Traditional approaches for color propagation in videos rely on some form of matching between consecutive video frames. Using appearance descriptors, colors are then propagated both spatially and temporally. These methods, however, are computationally expensive and do not take advantage of semantic information of the scene. In this work we propose a deep learning framework for color propagation that combines a local strategy, to propagate colors frame-by-frame ensuring temporal stability, and a global strategy, using semantics for color propagation within a longer range. Our evaluation shows the superiority of our strategy over existing video and image color propagation methods as well as neural photo-realistic style transfer approaches. Comment: BMVC 201
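    The local/global split described above can be pictured with a short sketch. The code below is a minimal illustration, not the authors' implementation: it assumes Lab color space, per-frame feature maps from some pretrained encoder computed at a reduced resolution (dense pixel-to-pixel matching is otherwise too large), and a fixed blending weight in place of the learned fusion the paper describes. Names such as propagate_ab and colorize_video are made up for the example.

```python
# Hedged sketch of two-branch color propagation (not the paper's code).
# Colors (ab channels) of a reference keyframe are copied to later frames by
# feature matching: a local branch matches frame t against frame t-1 for
# temporal stability, a global branch matches against the keyframe for
# long-range, semantics-driven propagation.
import torch
import torch.nn.functional as F

def propagate_ab(feat_tgt, feat_ref, ab_ref, temperature=0.01):
    """Copy ab colors from a reference frame to a target frame by
    softmax-weighted matching of per-pixel features.
    feat_*: (C, H, W) feature maps; ab_ref: (2, H, W) reference colors."""
    C, H, W = feat_tgt.shape
    q = F.normalize(feat_tgt.reshape(C, -1), dim=0)       # (C, HW) target queries
    k = F.normalize(feat_ref.reshape(C, -1), dim=0)       # (C, HW) reference keys
    attn = torch.softmax(q.t() @ k / temperature, dim=1)  # (HW, HW) affinities
    ab = attn @ ab_ref.reshape(2, -1).t()                 # (HW, 2) propagated colors
    return ab.t().reshape(2, H, W)

def colorize_video(gray_feats, ref_feat, ref_ab, blend=0.5):
    """gray_feats: one feature map per gray frame; ref_feat/ref_ab describe
    the colored reference keyframe."""
    outputs, prev_feat, prev_ab = [], ref_feat, ref_ab
    for feat in gray_feats:
        ab_local = propagate_ab(feat, prev_feat, prev_ab)   # frame-by-frame branch
        ab_global = propagate_ab(feat, ref_feat, ref_ab)    # long-range branch
        ab = blend * ab_local + (1.0 - blend) * ab_global   # fixed blend here; the
        outputs.append(ab)                                  # paper learns the fusion
        prev_feat, prev_ab = feat, ab
    return outputs
```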

    A Fast Near-Infrared Image Colorization Deep Learning Model

    Get PDF
    Near-infrared (NIR) image colorization is a central research topic in current NIR image applications and has a wide range of practical value. Existing colorization methods suffer from problems such as color diffusion and color errors, and cannot be automated. To address this, a fast near-infrared image colorization model is proposed, consisting of a lightweight image recognition network module and an image colorization CNN module with a fusion layer. The lightweight image recognition network first classifies the near-infrared image; images of the same class as the recognized scene are then selected from the ImageNet library to serve as the training set of the colorization network. After the colorization CNN module with the fusion layer is trained, the near-infrared image is fed in as the test input for colorization. Experimental results show that images colorized by the proposed algorithm have clear details and good color transfer, and that the method runs fast.
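    The fusion-layer idea described above, a global descriptor from the recognition branch injected at every spatial location of the colorization CNN, can be sketched as follows. This is an illustrative PyTorch sketch, not the paper's architecture; the layer sizes, the FusionColorizer name and the global_dim parameter are assumptions.

```python
# Hedged sketch of a colorization CNN with a fusion layer: a global feature
# vector is tiled over the spatial grid and concatenated with local features
# before decoding the two chrominance (ab) channels.
import torch
import torch.nn as nn

class FusionColorizer(nn.Module):
    def __init__(self, global_dim=128):
        super().__init__()
        # Local encoder over the single-channel NIR / luminance input.
        self.local = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        # Fusion layer: mix tiled global features with local features.
        self.fuse = nn.Conv2d(128 + global_dim, 128, 1)
        # Decoder predicts the two ab channels.
        self.decode = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(64, 2, 3, padding=1), nn.Tanh(),
        )

    def forward(self, nir, global_feat):
        """nir: (B, 1, H, W); global_feat: (B, global_dim), e.g. taken from the
        penultimate layer of the lightweight recognition network."""
        local = self.local(nir)                              # (B, 128, H/2, W/2)
        b, _, h, w = local.shape
        tiled = global_feat[:, :, None, None].expand(b, -1, h, w)
        fused = self.fuse(torch.cat([local, tiled], dim=1))  # fusion layer
        return self.decode(fused)                            # predicted ab channels
```

    In the pipeline the abstract describes, global_feat would come from the lightweight recognition network, whose predicted class is also what selects the same-class ImageNet images used as the training set.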

    Automatic image colorization

    Get PDF
    Bachelor's thesis (Treball Final de Grau) in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, Year: 2016, Advisor: Santi Seguí Mesquida. Colorizing is the act of giving color to grayscale images. This project presents a convolutional-neural-network-based method to colorize images without human interaction. Various frameworks, architectures, color spaces and approximations are explored to obtain the final model, which is capable of correctly restoring the original color of photographs without any information beyond the image itself. The principal aim of this project is to propose an idempotent architecture that can be trained with all kinds of images and still produce good results. To demonstrate how the process works and to show the obtained results, three categories of images are used throughout the project: synthetic images representing numbers, landscape images and human faces.
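    As a rough illustration of the fully automatic setup the thesis describes, a CNN that maps a grayscale input to color with no extra information, the sketch below regresses the ab channels of the Lab representation from the L channel. The tiny three-layer model, the normalization constants and the plain MSE loss are placeholder assumptions, not the specific architectures or losses compared in the thesis.

```python
# Hedged sketch of automatic colorization training: convert an RGB image to
# Lab, feed the L channel to a CNN, and regress the ab channels.
import torch
import torch.nn as nn
from skimage import color

model = nn.Sequential(               # placeholder colorization CNN: L -> ab
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(rgb_batch):
    """rgb_batch: (B, H, W, 3) float tensor of images in [0, 1]."""
    lab = torch.stack([torch.from_numpy(color.rgb2lab(img)).float()
                       for img in rgb_batch.numpy()])
    L = lab[..., :1].permute(0, 3, 1, 2) / 100.0    # input: lightness channel
    ab = lab[..., 1:].permute(0, 3, 1, 2) / 110.0   # target: chrominance channels
    loss = nn.functional.mse_loss(model(L), ab)     # simple regression objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```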

    Colorization Using the Rotation-Invariant Feature Space

    No full text
    Current colorization based on image segmentation makes it difficult to add or update color reliably and requires considerable user intervention. A new approach gives similar colors to pixels with similar texture features. To do this, it uses rotation-invariant Gabor filter banks and applies optimization in the feature space. Funding: Hong Kong Research Grants Council (416007, 415806); National Grand Fundamental Research 973 Program (2009CB320802); University of Macau.
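    One way to read "rotation-invariant Gabor filter banks" is to pool filter responses over orientation, so that rotated versions of the same texture map to (nearly) the same descriptor; colors are then spread by an optimization whose weights favor pixels with similar descriptors. The sketch below, using NumPy and scikit-image, illustrates that idea only; the chosen frequencies, the orientation count and the Gaussian affinity are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a rotation-invariant Gabor feature space for colorization:
# Gabor magnitude responses at several orientations are averaged per frequency,
# and an affinity on these features decides how strongly two pixels should
# share color in the propagation optimization.
import numpy as np
from skimage.filters import gabor

def rotation_invariant_features(gray, frequencies=(0.1, 0.2, 0.4), n_orient=8):
    """gray: (H, W) float image. Returns an (H, W, F) feature map with one
    channel per frequency, each pooled (averaged) over orientations."""
    feats = []
    for f in frequencies:
        responses = []
        for k in range(n_orient):
            real, imag = gabor(gray, frequency=f, theta=k * np.pi / n_orient)
            responses.append(np.hypot(real, imag))   # magnitude response
        feats.append(np.mean(responses, axis=0))     # orientation pooling
    return np.stack(feats, axis=-1)

def affinity(features, p, q, sigma=0.1):
    """Weight used when propagating color between pixels p and q:
    pixels with similar texture features receive similar colors."""
    d = features[p] - features[q]
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))
```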