Pose Embeddings: A Deep Architecture for Learning to Match Human Poses
We present a method for learning an embedding that places images of humans in
similar poses nearby. This embedding can be used as a direct method of
comparing images based on human pose, avoiding potential challenges of
estimating body joint positions. Pose embedding learning is formulated under a
triplet-based distance criterion. A deep architecture is used to allow learning
of a representation capable of making distinctions between different poses.
Experiments on human pose matching and retrieval from video data demonstrate
the potential of the method.
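The abstract does not spell out the training objective; the sketch below (in PyTorch, with illustrative names and margin value) shows one common form of a triplet-based distance criterion, in which an anchor image must lie closer to a similarly posed image than to a differently posed one by at least a margin.

    import torch
    import torch.nn.functional as F

    def triplet_pose_loss(anchor, positive, negative, margin=0.2):
        # anchor/positive/negative: embedding batches of shape (B, D);
        # positive shares the anchor's pose, negative does not.
        d_pos = F.pairwise_distance(anchor, positive)  # distance to the similar pose
        d_neg = F.pairwise_distance(anchor, negative)  # distance to the dissimilar pose
        # hinge: only penalize triplets that violate the margin
        return torch.clamp(d_pos - d_neg + margin, min=0).mean()
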
Beyond Short Snippets: Deep Networks for Video Classification
Convolutional neural networks (CNNs) have been extensively applied for image
recognition problems giving state-of-the-art results on recognition, detection,
segmentation and retrieval. In this work we propose and evaluate several deep
neural network architectures to combine image information across a video over
longer time periods than previously attempted. We propose two methods capable
of handling full-length videos. The first method explores various convolutional
temporal feature pooling architectures, examining the various design choices
which need to be made when adapting a CNN for this task. The second proposed
method explicitly models the video as an ordered sequence of frames. For this
purpose we employ a recurrent neural network that uses Long Short-Term Memory
(LSTM) cells which are connected to the output of the underlying CNN. Our best
networks exhibit significant performance improvements over previously published
results on the Sports-1M dataset (73.1% vs. 60.9%) and on the UCF-101 dataset,
both with (88.6% vs. 88.0%) and without (82.6% vs. 72.8%) additional optical
flow information.
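As a rough illustration of the second method (an LSTM over per-frame CNN outputs), the sketch below assumes precomputed frame features and classifies the video from the final LSTM state; the layer sizes and class count are placeholders, not the paper's configuration.

    import torch
    import torch.nn as nn

    class FrameLSTMClassifier(nn.Module):
        # Illustrative only: an LSTM over a sequence of frame-level CNN
        # features, with the last hidden state used for video-level prediction.
        def __init__(self, feat_dim=2048, hidden_dim=512, num_classes=101):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, num_classes)

        def forward(self, frame_feats):           # (B, T, feat_dim)
            _, (h_n, _) = self.lstm(frame_feats)  # h_n: (1, B, hidden_dim)
            return self.head(h_n[-1])             # video-level class logits
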
Full Resolution Image Compression with Recurrent Neural Networks
This paper presents a set of full-resolution lossy image compression methods
based on neural networks. Each of the architectures we describe can provide
variable compression rates during deployment without requiring retraining of
the network: each network need only be trained once. All of our architectures
consist of a recurrent neural network (RNN)-based encoder and decoder, a
binarizer, and a neural network for entropy coding. We compare RNN types (LSTM,
associative LSTM) and introduce a new hybrid of GRU and ResNet. We also study
"one-shot" versus additive reconstruction architectures and introduce a new
scaled-additive framework. We compare to previous work, showing improvements of
4.3%-8.8% AUC (area under the rate-distortion curve), depending on the
perceptual metric used. As far as we know, this is the first neural network
architecture that is able to outperform JPEG at image compression across most
bitrates on the rate-distortion curve on the Kodak dataset images, with and
without the aid of entropy coding.
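The abstract contrasts "one-shot" with additive reconstruction; a minimal sketch of the additive idea, assuming encoder, binarizer, and decoder are provided as callables (the paper uses recurrent modules with persistent state), is:

    import torch

    def additive_reconstruction(image, encoder, binarizer, decoder, num_iters=8):
        # Each iteration codes the current residual and adds the decoded output
        # to the running reconstruction; more iterations means a higher bit rate.
        recon = torch.zeros_like(image)
        residual = image
        codes = []
        for _ in range(num_iters):
            bits = binarizer(encoder(residual))  # quantized code for this pass
            codes.append(bits)
            recon = recon + decoder(bits)        # additive update
            residual = image - recon             # what is left to encode
        return codes, recon
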
Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks
We propose a method for lossy image compression based on recurrent,
convolutional neural networks that outperforms BPG (4:2:0), WebP, JPEG2000,
and JPEG as measured by MS-SSIM. We introduce three improvements over previous
research that lead to this state-of-the-art result. First, we show that
training with a pixel-wise loss weighted by SSIM increases reconstruction
quality according to several metrics. Second, we modify the recurrent
architecture to improve spatial diffusion, which allows the network to more
effectively capture and propagate image information through the network's
hidden state. Finally, in addition to lossless entropy coding, we use a
spatially adaptive bit allocation algorithm to more efficiently use the limited
number of bits to encode visually complex image regions. We evaluate our method
on the Kodak and Tecnick image sets and compare against standard codecs as well
as recently published methods based on deep neural networks.
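The first improvement is a pixel-wise loss weighted by SSIM; the exact weighting is not given in the abstract, so the sketch below assumes a precomputed local SSIM map and simply up-weights pixels in structurally dissimilar regions.

    import torch

    def ssim_weighted_l1(pred, target, ssim_map):
        # ssim_map: local structural similarity in [0, 1], same spatial shape
        # as the images (hypothetical input; this weighting scheme is an
        # assumption, not the paper's formulation).
        weights = 1.0 - ssim_map            # emphasize structurally poor regions
        per_pixel = (pred - target).abs()   # plain pixel-wise L1 error
        return (weights * per_pixel).mean()
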
Towards a Semantic Perceptual Image Metric
We present a full reference, perceptual image metric based on VGG-16, an
artificial neural network trained on object classification. We fit the metric
to a new database based on 140k unique images annotated with ground truth by
human raters who received minimal instruction. The resulting metric shows
competitive performance on TID2013, a database widely used to assess image
quality assessment methods. More interestingly, it shows strong responses to
objects potentially carrying semantic relevance such as faces and text, which
we demonstrate using a visualization technique and ablation experiments. In
effect, the metric appears to model a higher influence of semantic context on
judgments, which we observe particularly in untrained raters. As the vast
majority of users of image processing systems are unfamiliar with Image Quality
Assessment (IQA) tasks, these findings may have significant impact on
real-world applications of perceptual metrics.
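The fitted metric itself is not reproduced here; as a loose sketch of the general idea (a distance computed on VGG-16 activations, using torchvision's pretrained weights and an arbitrarily chosen truncation point), one might write:

    import torch
    import torchvision.models as models

    vgg_features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

    def vgg_feature_distance(img_a, img_b, layer_index=16):
        # img_a, img_b: normalized image batches of shape (B, 3, H, W);
        # layer_index truncates the feature stack at a mid-level conv block.
        trunk = vgg_features[:layer_index]
        with torch.no_grad():
            fa, fb = trunk(img_a), trunk(img_b)
        return torch.mean((fa - fb) ** 2, dim=(1, 2, 3))  # per-image distance
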
Bidirectional relighting for 3D-aided 2D face recognition
In this paper, we present a new bidirectional relighting method for 3D-aided 2D face recognition under large pose and illumination changes. During subject enrollment, we build subject-specific 3D annotated models by using the subjects' raw 3D data and 2D texture. During authentication, the probe 2D images are projected onto a normalized image space using the subject-specific 3D model in the gallery. Then, a bidirectional relighting algorithm and two similarity metrics (a view-dependent complex wavelet structural similarity and a global similarity) are employed to compare the gallery and probe. We tested our algorithms on the UHDB11 and UHDB12 databases, which contain 3D data with probe images under large lighting and pose variations. The experimental results show the robustness of our approach in recognizing faces in difficult situations.
AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions
This paper introduces a video dataset of spatio-temporally localized Atomic
Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual
actions in 430 15-minute video clips, where actions are localized in space and
time, resulting in 1.58M action labels with multiple labels per person
occurring frequently. The key characteristics of our dataset are: (1) the
definition of atomic visual actions, rather than composite actions; (2) precise
spatio-temporal annotations with possibly multiple annotations for each person;
(3) exhaustive annotation of these atomic actions over 15-minute video clips;
(4) people temporally linked across consecutive segments; and (5) using movies
to gather a varied set of action representations. This departs from existing
datasets for spatio-temporal action recognition, which typically provide sparse
annotations for composite actions in short video clips. We will release the
dataset publicly.
AVA, with its realistic scene and action complexity, exposes the intrinsic
difficulty of action recognition. To benchmark this, we present a novel
approach for action localization that builds upon the current state-of-the-art
methods, and demonstrates better performance on JHMDB and UCF101-24 categories.
While setting a new state of the art on existing datasets, the overall results
on AVA are low at 15.6% mAP, underscoring the need for developing new
approaches for video understanding. The dataset page is available at
https://research.google.com/ava/.
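To make the annotation structure concrete, the illustrative record below captures the properties listed above (multiple atomic-action labels per person box, and person identities linked across consecutive segments); the field names are assumptions, not the released file format.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class AvaStyleAnnotation:
        # One person box in one keyframe of a 15-minute clip.
        video_id: str
        timestamp_s: float                             # keyframe time within the clip
        person_box: Tuple[float, float, float, float]  # normalized (x1, y1, x2, y2)
        action_ids: List[int]                          # one or more of the 80 atomic actions
        person_id: int                                 # links the same person across segments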
