150 research outputs found
Example-based image colorization using locality consistent sparse representation
Image colorization aims to produce a natural-looking color image from a given grayscale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target grayscale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with guidance from the target grayscale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms state-of-the-art methods, both visually and quantitatively using a user study.
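The dictionary-based sparse pursuit described above can be sketched as follows. This is a minimal illustration, not the paper's solver: the locality consistent term is approximated by a distance-dependent weight on a plain ℓ1 penalty solved with ISTA, and every size and parameter below is a toy assumption.

```python
import numpy as np

def locality_weighted_lasso(y, D, lam=0.1, gamma=0.5, n_iter=200):
    """Sparse-code the target descriptor y over a dictionary D whose
    columns are reference-superpixel descriptors. Locality is promoted
    here by penalizing atoms that are far from y in feature space more
    heavily; solved by ISTA (proximal gradient). Illustrative only."""
    dists = np.linalg.norm(D - y[:, None], axis=0)           # feature distance to each atom
    w = lam * (1.0 + gamma * dists / (dists.max() + 1e-12))  # per-atom penalty weight
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2                            # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # soft-thresholding step
    return a

# toy setup: 8-dim descriptors, a dictionary of 20 reference superpixels
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 20))
y = 0.9 * D[:, 3] + 0.05 * rng.standard_normal(8)  # target resembles atom 3
a = locality_weighted_lasso(y, D)
# the code concentrates on the reference superpixels most similar to y
```

In the paper's pipeline the dominant atoms of such a code would then supply the chrominance for the target superpixel; here the code is only shown up to the pursuit step.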
Sparse graph regularized mesh color edit propagation
Mesh color edit propagation aims to propagate the color from a few color strokes to the whole mesh, which is useful for mesh colorization, color enhancement, color editing, etc. Compared with image edit propagation, luminance information is not available for 3D mesh data, so color edit propagation is more difficult on 3D meshes than on images, with far less research carried out. This paper proposes a novel solution based on sparse graph regularization. Firstly, a few color strokes are interactively drawn by the user, and then the color is propagated to the whole mesh by minimizing a sparse graph regularized nonlinear energy function. The proposed method effectively measures geometric similarity over shapes by using a set of complementary multiscale feature descriptors, and effectively controls color bleeding via a sparse ℓ1 optimization rather than the quadratic minimization used in existing work. The proposed framework can be applied to the tasks of interactive mesh colorization, mesh color enhancement and mesh color editing. Extensive qualitative and quantitative experiments show that the proposed method outperforms the state-of-the-art methods.
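The contrast between ℓ1 and quadratic graph regularization can be made concrete with a small sketch. This is an illustrative stand-in, not the paper's solver: it propagates a single scalar color channel on a toy graph and solves the ℓ1 energy by iteratively reweighted least squares (IRLS); a real mesh would use geometry-derived edge weights and one such problem per chrominance channel.

```python
import numpy as np

def propagate_l1(n, edges, strokes, lam=0.1, n_iter=50):
    """Propagate a scalar color value from a few stroked vertices to all
    n vertices of a graph by minimizing
        sum_{i in strokes} (c_i - s_i)^2 + lam * sum_{(i,j) in edges} |c_i - c_j|,
    i.e. a sparsity-promoting l1 graph regularizer instead of the usual
    quadratic one, which limits color bleeding across boundaries."""
    c = np.zeros(n)
    for i, s in strokes.items():
        c[i] = s
    for _ in range(n_iter):
        # IRLS: |c_i - c_j| ~ w_ij (c_i - c_j)^2 with w_ij = 1 / |c_i - c_j|
        w = [1.0 / (abs(c[i] - c[j]) + 1e-6) for i, j in edges]
        A = np.zeros((n, n))
        b = np.zeros(n)
        for i, s in strokes.items():
            A[i, i] += 1.0
            b[i] += s
        for (i, j), wij in zip(edges, w):
            A[i, i] += lam * wij
            A[j, j] += lam * wij
            A[i, j] -= lam * wij
            A[j, i] -= lam * wij
        c = np.linalg.solve(A + 1e-9 * np.eye(n), b)
    return c

# toy graph: a chain of 6 vertices with the two endpoints stroked
edges = [(i, i + 1) for i in range(5)]
c = propagate_l1(6, edges, strokes={0: 0.0, 5: 1.0})
# the l1 regularizer prefers one sharp jump over a smooth quadratic ramp
```

A quadratic regularizer on the same chain would produce a linear ramp between the two stroke colors, i.e. color bleeding; the ℓ1 energy instead keeps two near-constant regions separated by a sharp transition.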
Coupled Depth Learning
In this paper we propose a method for estimating depth from a single image using a coarse-to-fine approach. We argue that modeling the fine depth details is easier after a coarse depth map has been computed. We express a global (coarse) depth map of an image as a linear combination of a depth basis learned from training examples. The depth basis captures spatial and statistical regularities and reduces the problem of global depth estimation to the task of predicting the input-specific coefficients in the linear combination. This is formulated as a regression problem from a holistic representation of the image. Crucially, the depth basis and the regression function are coupled and jointly optimized by our learning scheme. We demonstrate that this results in a significant improvement in accuracy compared to direct regression of depth pixel values or approaches learning the depth basis disjointly from the regression function. The global depth estimate is then used as guidance by a local refinement method that introduces depth details that were not captured at the global level. Experiments on the NYUv2 and KITTI datasets show that our method outperforms the existing state-of-the-art at a considerably lower computational cost for both training and testing.
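The basis-plus-coefficients decomposition at the heart of this approach can be sketched on synthetic data. Note the hedge: the paper jointly optimizes the basis and the regressor, whereas this sketch fits them separately (SVD, then least squares) purely to make the decomposition concrete; all data and dimensions below are made up.

```python
import numpy as np

# Toy illustration: a global depth map is a linear combination of learned
# basis depth maps, and regression predicts the input-specific coefficients
# from a holistic image feature. All data below is synthetic.
rng = np.random.default_rng(1)
n_train, n_pix, k, n_feat = 50, 256, 4, 10
Z = rng.standard_normal((n_train, k))              # latent scene factors
B_true = rng.standard_normal((k, n_pix))
depths = Z @ B_true                                # training "depth maps" (16x16, flattened)
M = rng.standard_normal((k, n_feat))
feats = Z @ M + 1e-3 * rng.standard_normal((n_train, n_feat))  # holistic features

# depth basis from an SVD of the training depth maps
U, S, Vt = np.linalg.svd(depths, full_matrices=False)
B = Vt[:k]                                         # k basis depth maps
coeffs = depths @ B.T                              # per-image basis coefficients

# linear regression from image features to basis coefficients
W, *_ = np.linalg.lstsq(feats, coeffs, rcond=None)

# coarse depth for a new image = predicted coefficients times the basis
z_new = rng.standard_normal(k)
f_new = z_new @ M
d_new = (f_new @ W) @ B                            # global (coarse) depth estimate
err = np.linalg.norm(d_new - z_new @ B_true) / np.linalg.norm(z_new @ B_true)
```

The paper's contribution is precisely that coupling the two fits (rather than the sequential SVD-then-regression above) improves accuracy, and that a local refinement stage then restores the fine details the global estimate misses.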
Example-based image colorization via automatic feature selection and fusion
Image colorization is an important and difficult problem in image processing with various applications including image stylization and heritage restoration. Most existing image colorization methods utilize feature matching between the reference color image and the target grayscale image. The effectiveness of features is often significantly affected by the characteristics of the local image region. Traditional methods usually combine multiple features to improve the matching performance. However, the same set of features is still applied to the whole image. In this paper, based on the observation that local regions have different characteristics and hence different features may work more effectively, we propose a novel image colorization method using automatic feature selection, with the results fused via a Markov Random Field (MRF) model for improved consistency. More specifically, the proposed algorithm automatically classifies image regions as either uniform or non-uniform, and selects a suitable feature vector for each local patch of the target image to determine the colorization results. For this purpose, a descriptor based on luminance deviation is used to estimate the probability of each patch being uniform or non-uniform, and the same descriptor is also used for calculating the label cost of the MRF model to determine which feature vector should be selected for each patch. In addition, the similarity between the luminance of neighboring patches is used as the smoothness cost for the MRF model, which enhances the local consistency of the colorization results. Experimental results on a variety of images show that our method outperforms several state-of-the-art algorithms, both visually and quantitatively using standard measures and a user study.
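The uniform/non-uniform labeling with an MRF smoothness term can be sketched as follows. This is an illustrative simplification, not the paper's exact model: the label cost, the threshold tau and the ICM optimizer are all stand-in choices for demonstrating the idea of a luminance-deviation descriptor driving per-patch label selection.

```python
import numpy as np

def classify_patches(lum, patch=8, tau=0.05, beta=0.5, n_iter=5):
    """Label each patch of a luminance image as uniform (0) or non-uniform
    (1) from its luminance deviation, then smooth the labeling with an
    MRF-style pairwise cost minimized by ICM. Illustrative only."""
    h, w = lum.shape[0] // patch, lum.shape[1] // patch
    dev = np.array([[lum[i*patch:(i+1)*patch, j*patch:(j+1)*patch].std()
                     for j in range(w)] for i in range(h)])
    # label cost: low deviation favors "uniform", high favors "non-uniform"
    cost = np.stack([dev / tau, np.exp(-dev / tau)], axis=-1)
    labels = cost.argmin(axis=-1)
    for _ in range(n_iter):                        # ICM: greedy per-patch update
        for i in range(h):
            for j in range(w):
                nbr = [labels[a, b]
                       for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                       if 0 <= a < h and 0 <= b < w]
                tot = [cost[i, j, l] + beta * sum(l != m for m in nbr)
                       for l in (0, 1)]
                labels[i, j] = int(np.argmin(tot))
    return labels

# toy image: flat left half, noisy (textured) right half
rng = np.random.default_rng(2)
lum = np.zeros((16, 16))
lum[:, 8:] = rng.random((16, 8))
labels = classify_patches(lum)
```

In the full method the resulting label per patch selects which feature vector is used for reference matching, so the MRF smoothing directly improves the spatial consistency of the colorization.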
- …