Sparse graph regularized mesh color edit propagation
Mesh color edit propagation aims to propagate color from a few user-drawn strokes to the whole mesh, which is useful for mesh colorization, color enhancement, color editing, etc. Compared with image edit propagation, luminance information is not available for 3D mesh data, so color edit propagation is more difficult on 3D meshes than on images, and far less research has been carried out. This paper proposes a novel solution based on sparse graph regularization. First, a few color strokes are interactively drawn by the user; the color is then propagated to the whole mesh by minimizing a sparse graph regularized nonlinear energy function. The proposed method effectively measures geometric similarity over shapes using a set of complementary multiscale feature descriptors, and effectively controls color bleeding via a sparse ℓ1 optimization rather than the quadratic minimization used in existing work. The proposed framework can be applied to interactive mesh colorization, mesh color enhancement and mesh color editing. Extensive qualitative and quantitative experiments show that the proposed method outperforms state-of-the-art methods.
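As a toy illustration of this formulation (a sketch only: the paper's actual multiscale descriptors, energy and solver are not reproduced, and all weights and names below are assumptions), the following propagates scalar stroke colors over a small vertex graph by minimizing a stroke-fidelity term plus an ℓ1 graph term with subgradient descent:

```python
import numpy as np

# Hedged sketch: minimize  E(c) = sum_{s in strokes} (c_s - t_s)^2
#                               + lam * sum_{i,j} w_ij |c_i - c_j|
# over per-vertex scalar colors c. The l1 graph term (instead of a quadratic
# |c_i - c_j|^2 term) favors piecewise-constant colors, which is what limits
# color bleeding across weakly connected (dissimilar) vertices.

def propagate_colors(W, strokes, lam=0.5, lr=0.05, iters=500):
    """W: (n, n) symmetric similarity weights; strokes: {vertex: target color}."""
    n = W.shape[0]
    c = np.zeros(n)
    mask = np.zeros(n)
    target = np.zeros(n)
    for v, t in strokes.items():
        mask[v] = 1.0
        target[v] = t
    for _ in range(iters):
        diff = c[:, None] - c[None, :]
        grad = 2.0 * mask * (c - target) + lam * (W * np.sign(diff)).sum(axis=1)
        c -= lr * grad
    return c

# Two tightly coupled vertex pairs joined by weak edges; one stroke per pair.
W = np.array([[0.00, 1.00, 0.05, 0.00],
              [1.00, 0.00, 0.05, 0.00],
              [0.05, 0.05, 0.00, 1.00],
              [0.00, 0.00, 1.00, 0.00]])
c = propagate_colors(W, {0: 1.0, 3: 0.0})
```

The stroke on vertex 0 propagates to its strongly linked neighbor 1, while the ℓ1 term keeps the color from bleeding across the weak edges into the pair covered by the other stroke.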
Fully Automatic Video Colorization with Self-Regularization and Diversity
We present a fully automatic approach to video colorization with
self-regularization and diversity. Our model contains a colorization network
for video frame colorization and a refinement network for spatiotemporal color
refinement. Without any labeled data, both networks can be trained with
self-regularized losses defined in bilateral and temporal space. The bilateral
loss enforces color consistency between neighboring pixels in a bilateral space
and the temporal loss imposes constraints between corresponding pixels in two
nearby frames. While video colorization is a multi-modal problem, our method
uses a perceptual loss with diversity to differentiate various modes in the
solution space. Perceptual experiments demonstrate that our approach
outperforms state-of-the-art approaches on fully automatic video colorization.
The results are shown in the supplementary video at
https://youtu.be/Y15uv2jnK-4
Comment: Published at the Conference on Computer Vision and Pattern Recognition (CVPR), 2019
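The two self-regularized losses can be sketched in a few lines of numpy (a hedged illustration only: shapes, names and sigmas below are assumptions, and the paper's networks and correspondence estimation are not reproduced):

```python
import numpy as np

# Hedged sketch of the two self-regularized losses (shapes and names assumed):
# the bilateral loss pulls together predicted colors of pixels that are close
# in a bilateral (position + intensity) space; the temporal loss pulls together
# colors of corresponding pixels in nearby frames. Neither needs color labels.

def bilateral_affinity(gray, pos, sigma_r=0.1, sigma_s=2.0):
    """Pairwise weights from distances in bilateral (range + spatial) space."""
    d_r = (gray[:, None] - gray[None, :]) ** 2 / sigma_r ** 2
    d_s = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1) / sigma_s ** 2
    return np.exp(-(d_r + d_s))

def bilateral_loss(colors, gray, pos):
    """Penalize color differences between bilateral-space neighbors."""
    W = bilateral_affinity(gray, pos)
    return (W * (colors[:, None] - colors[None, :]) ** 2).sum() / W.sum()

def temporal_loss(colors_t, colors_t1, matches):
    """matches: (i, j) index pairs of corresponding pixels across two frames."""
    i, j = np.array(matches).T
    return np.mean((colors_t[i] - colors_t1[j]) ** 2)

gray = np.array([0.0, 0.0, 1.0, 1.0])              # two flat intensity regions
pos = np.array([[0, 0], [0, 1], [5, 0], [5, 1]], float)
consistent = np.array([0.2, 0.2, 0.8, 0.8])        # uniform within each region
flipped = np.array([0.2, 0.8, 0.2, 0.8])           # colors fight the structure
```

A colorization that keeps bilateral neighbors consistent scores a lower bilateral loss than one that mixes colors within a flat region, which is the training signal the paper exploits in place of labels.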
Graph Spectral Image Processing
Recent advent of graph signal processing (GSP) has spurred intensive studies
of signals that live naturally on irregular data kernels described by graphs
(e.g., social networks, wireless sensor networks). Though a digital image
contains pixels that reside on a regularly sampled 2D grid, if one can design
an appropriate underlying graph connecting pixels with weights that reflect the
image structure, then one can interpret the image (or image patch) as a signal
on a graph, and apply GSP tools for processing and analysis of the signal in
graph spectral domain. In this article, we overview recent graph spectral
techniques in GSP specifically for image / video processing. The topics covered
include image compression, image restoration, image filtering and image
segmentation
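The basic pipeline the article surveys — interpret pixels as a signal on a graph and filter in the graph spectral domain — can be sketched as follows (a minimal toy, assuming a combinatorial Laplacian and an ideal low-pass filter; real methods design structure-aware edge weights and more refined spectral filters):

```python
import numpy as np

# Minimal GSP sketch (assumed toy setup): a tiny image row as a signal on a
# pixel path graph, filtered in the graph spectral domain.

def graph_lowpass(signal, W, keep=2):
    """Keep only the `keep` lowest graph frequencies of `signal` on graph W."""
    L = np.diag(W.sum(axis=1)) - W     # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)         # eigenvalues ascending: graph frequencies
    ghat = U.T @ signal                # graph Fourier transform (GFT)
    ghat[keep:] = 0.0                  # ideal low-pass: drop high frequencies
    return U @ ghat                    # inverse GFT

# 4-pixel path graph; a smooth ramp corrupted at one pixel.
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
x = np.array([0.0, 1.0, 2.0, 3.0])
x_noisy = x + np.array([0.0, 0.0, 0.8, 0.0])
x_filt = graph_lowpass(x_noisy, W)
```

The smooth ramp lives mostly in the low graph frequencies while the single-pixel corruption spreads into the high ones, so the low-pass reconstruction lands closer to the clean signal.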
Automatic example-based image colorization using location-aware cross-scale matching
Given a reference colour image and a destination grayscale image, this paper presents a novel automatic colourisation algorithm that transfers colour information from the reference image to the destination image. Since the reference and destination images may contain content at different or even varying scales (due to changes of distance between objects and the camera), existing texture matching based methods can often perform poorly. We propose a novel cross-scale texture matching method to improve the robustness and quality of the colourisation results. Suitable matching scales are considered locally and then fused using a global optimisation that minimises both the matching errors and the spatial change of scales. The minimisation is efficiently solved using a multi-label graph-cut algorithm. Since only low-level texture features are used, texture matching based colourisation can still produce semantically incorrect results, such as a meadow appearing above the sky. We consider a class of semantic violations where the statistics of up-down relationships learnt from the reference image are violated, and propose an effective method to identify and correct unreasonable colourisation. Finally, a novel nonlocal ℓ1 optimisation framework is developed to propagate high-confidence micro-scribbles to regions of lower confidence, producing a fully colourised image. Qualitative and quantitative evaluations show that our method outperforms several state-of-the-art methods.
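A hedged, much-simplified sketch of the cross-scale matching idea (raw-patch features and brute-force search over two scales; the paper's local scale fusion, graph-cut optimisation, semantic correction and ℓ1 propagation are not reproduced, and all names below are assumptions):

```python
import numpy as np

# Hedged toy of cross-scale matching: compare a destination patch against
# reference patches extracted at several scales and keep the best (patch,
# scale) pair. "Rescaling" here is plain subsampling.

def best_cross_scale_match(dst_patch, ref, scales=(1, 2), size=2):
    """Return (error, scale, position) of the best reference patch over scales."""
    best = (np.inf, None, None)
    for s in scales:
        ref_s = ref[::s, ::s]                          # crude downscale by s
        h, w = ref_s.shape
        for i in range(h - size + 1):
            for j in range(w - size + 1):
                err = ((ref_s[i:i + size, j:j + size] - dst_patch) ** 2).sum()
                if err < best[0]:
                    best = (err, s, (i, j))
    return best

ref = np.tile(np.arange(4.0), (4, 1))        # horizontal ramp texture
dst = np.array([[0.0, 2.0], [0.0, 2.0]])     # exact match only at 2x downscale
err, scale, pos = best_cross_scale_match(dst, ref)
```

A single-scale matcher would settle for an imperfect fit here; searching across scales recovers an exact match at the coarser scale, which is the failure mode the paper's cross-scale formulation addresses.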
Multimodal Image Denoising based on Coupled Dictionary Learning
In this paper, we propose a new multimodal image denoising approach to
attenuate white Gaussian additive noise in a given image modality under the aid
of a guidance image modality. The proposed coupled image denoising approach
consists of two stages: coupled sparse coding and reconstruction. The first
stage performs joint sparse transform for multimodal images with respect to a
group of learned coupled dictionaries, followed by a shrinkage operation on the
sparse representations. Then, in the second stage, the shrunken
representations, together with coupled dictionaries, contribute to the
reconstruction of the denoised image via an inverse transform. The proposed
denoising scheme demonstrates the capability to capture both the common and
distinct features of different data modalities. This capability makes our
approach more robust to inconsistencies between the guidance and the target
images, thereby overcoming drawbacks such as the texture copying artifacts.
Experiments on real multimodal images demonstrate that the proposed approach is
able to better employ guidance information to bring notable benefits in the
image denoising task with respect to the state-of-the-art.
Comment: 2018 IEEE International Conference on Image Processing (ICIP). arXiv admin note: text overlap with arXiv:1806.0988
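The two-stage scheme (coupled sparse coding with shrinkage, then reconstruction) can be sketched as follows, assuming the coupled dictionaries are already learned; least squares stands in for the paper's sparse coding step, and all names and thresholds are assumptions:

```python
import numpy as np

# Hedged two-stage sketch: stage 1 jointly codes the stacked target + guidance
# signal over stacked coupled dictionaries and shrinks the code; stage 2
# reconstructs the denoised target through its own dictionary only.

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def coupled_denoise(y_target, y_guide, D_t, D_g, thresh=0.1):
    D = np.vstack([D_t, D_g])          # coupled dictionaries share one code
    y = np.concatenate([y_target, y_guide])
    code, *_ = np.linalg.lstsq(D, y, rcond=None)   # joint coding (stand-in)
    code = soft_threshold(code, thresh)            # shrinkage suppresses noise
    return D_t @ code                              # reconstruct target only

# Toy: identity dictionaries, a 1-sparse clean signal, noise on the target only.
clean = np.array([1.0, 0.0, 0.0])
y_t = clean + np.array([0.0, 0.1, -0.1])
denoised = coupled_denoise(y_t, clean, np.eye(3), np.eye(3))
```

Because the code is shared, the clean guidance pulls the joint coefficients toward the true support before shrinkage zeroes out the small noise-driven coefficients, leaving the reconstruction closer to the clean target than the noisy input was.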