Deep Video Color Propagation
Traditional approaches for color propagation in videos rely on some form of
matching between consecutive video frames. Using appearance descriptors, colors
are then propagated both spatially and temporally. These methods, however, are
computationally expensive and do not take advantage of semantic information of
the scene. In this work we propose a deep learning framework for color
propagation that combines a local strategy, to propagate colors frame-by-frame
ensuring temporal stability, and a global strategy, using semantics for color
propagation within a longer range. Our evaluation shows the superiority of our
strategy over existing video and image color propagation methods as well as
neural photo-realistic style transfer approaches.
Comment: BMVC 201
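To make the local, frame-by-frame strategy concrete, here is a minimal sketch of color propagation by window-based matching. This is a crude stand-in for the paper's learned approach, not the authors' network: real systems match appearance descriptors rather than single gray levels, and the function name is hypothetical.

```python
import numpy as np

def propagate_colors(prev_gray, prev_color, curr_gray, radius=2):
    """Copy each pixel's color from the best-matching pixel (by gray-level
    difference) inside a small window of the previous frame.

    prev_gray:  (H, W)    grayscale of the already-colorized frame
    prev_color: (H, W, 3) its colors
    curr_gray:  (H, W)    grayscale of the frame to colorize
    """
    h, w = curr_gray.shape
    out = np.zeros_like(prev_color)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            window = prev_gray[y0:y1, x0:x1]
            # position of the gray value closest to the current pixel
            dy, dx = np.unravel_index(
                np.abs(window - curr_gray[y, x]).argmin(), window.shape)
            out[y, x] = prev_color[y0 + dy, x0 + dx]
    return out
```

Iterating this frame by frame gives temporal stability but drifts over long ranges, which is exactly the gap the abstract's global, semantics-based strategy is meant to close.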
Directive local color transfer based on dynamic look-up table
Self-supervised Spatio-temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics
We address the problem of video representation learning without
human-annotated labels. While previous efforts address the problem by
designing novel self-supervised tasks on video data, the learned features are
merely frame-based and are therefore not applicable to the many video
analysis tasks where spatio-temporal features prevail. In this paper we propose a
novel self-supervised approach to learn spatio-temporal features for video
representation. Inspired by the success of two-stream approaches in video
classification, we propose to learn visual features by regressing both motion
and appearance statistics along spatial and temporal dimensions, given only the
input video data. Specifically, we extract statistical concepts (fast-motion
region and the corresponding dominant direction, spatio-temporal color
diversity, dominant color, etc.) from simple patterns in both spatial and
temporal domains. Unlike prior puzzle tasks that are hard even for humans to
solve, the proposed approach is consistent with inherent human visual habits
and is therefore easy to answer. We conduct extensive experiments with C3D to validate
the effectiveness of our proposed approach. The experiments show that our
approach can significantly improve the performance of C3D when applied to video
classification tasks. Code is available at
https://github.com/laura-wang/video_repres_mas.
Comment: CVPR 201
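The statistical regression targets can be illustrated with toy versions of two of the abstract's concepts: the dominant color (most populated bin of a coarse RGB histogram) and the fastest-motion region (the spatial block with the largest mean temporal difference). This is an assumption-laden sketch, not the paper's actual label computation; the function name and the 2x2 block grid are illustrative choices.

```python
import numpy as np

def appearance_and_motion_stats(frames, n_bins=8):
    """Toy self-supervision targets for a clip.

    frames: (T, H, W, 3) float array with values in [0, 1], H and W even.
    Returns (dominant_color_bin, fastest_motion_block).
    """
    frames = np.asarray(frames, dtype=np.float32)
    # --- dominant color: most populated bin of a quantized RGB histogram ---
    q = np.clip((frames[0] * n_bins).astype(int), 0, n_bins - 1)
    flat = q[..., 0] * n_bins * n_bins + q[..., 1] * n_bins + q[..., 2]
    dominant_bin = int(np.bincount(flat.ravel()).argmax())
    # --- fast-motion region: 2x2 grid, block with max temporal change ---
    diff = np.abs(np.diff(frames, axis=0)).mean(axis=(0, 3))  # (H, W)
    h, w = diff.shape
    blocks = diff.reshape(2, h // 2, 2, w // 2).mean(axis=(1, 3))  # (2, 2)
    fast_block = int(blocks.argmax())  # 0..3, row-major
    return dominant_bin, fast_block
```

A network trained to regress such quantities from raw frames must attend to both appearance (color distribution) and motion (temporal change), which is the intuition behind using them as self-supervised targets.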
The relation between color spaces and compositional data analysis demonstrated with magnetic resonance image processing applications
This paper presents a novel application of compositional data analysis
methods in the context of color image processing. A vector decomposition method
is proposed to reveal the compositional components of any vector with positive
components, followed by compositional data analysis demonstrating the relation
between color-space concepts such as hue and saturation and their compositional
counterparts. The proposed methods are applied to a magnetic resonance imaging
dataset acquired from a living human brain and a digital color photograph to
perform image fusion. Potential future applications in magnetic resonance
imaging are mentioned, and the benefits and disadvantages of the proposed
methods are discussed in terms of color image processing.
Comment: 13 pages, 3 figures, short paper, submitted to the Austrian Journal
of Statistics compositional data analysis special issue; first revision,
fixes rendering error in fig
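The core idea can be illustrated by decomposing a positive RGB vector into a scale (intensity-like) part and a closed composition (chromaticity-like) part, with a saturation-like quantity measured as the composition's distance from neutral gray. A minimal sketch under those assumptions, using the standard compositional closure and centered log-ratio transform rather than the paper's exact decomposition:

```python
import numpy as np

def closure(v):
    """Compositional 'closure': rescale a positive vector to sum to 1."""
    v = np.asarray(v, dtype=float)
    return v / v.sum()

def decompose_color(rgb):
    """Split a strictly positive RGB vector into:
    - total:      overall intensity (the discarded scale),
    - comp:       a composition on the simplex (chromaticity analogue),
    - saturation: norm of the centered log-ratio transform, i.e. the
                  Aitchison distance from neutral gray (1/3, 1/3, 1/3).
    """
    comp = closure(rgb)
    total = float(np.sum(rgb))
    clr = np.log(comp) - np.log(comp).mean()  # centered log-ratio
    saturation = float(np.linalg.norm(clr))
    return total, comp, saturation
```

Under this view a gray pixel sits at the simplex barycentre (zero saturation), while hue corresponds to direction on the simplex, which is the kind of correspondence between color-space and compositional concepts the abstract describes.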