Deep Directional Statistics: Pose Estimation with Uncertainty Quantification
Modern deep learning systems successfully solve many perception tasks such as
object pose estimation when the input image is of high quality. However, in
challenging imaging conditions such as on low-resolution images or when the
image is corrupted by imaging artifacts, current systems degrade considerably
in accuracy. While a loss in performance is unavoidable, we would like our
models to quantify their uncertainty in order to achieve robustness against
images of varying quality. Probabilistic deep learning models combine the
expressive power of deep learning with uncertainty quantification. In this
paper, we propose a novel probabilistic deep learning model for the task of
angular regression. Our model uses von Mises distributions to predict a
distribution over the object pose angle. Because a single von Mises distribution
makes strong assumptions about the shape of the distribution, we extend the
basic model to predict a mixture of von Mises distributions. We show how to
learn a mixture model using a finite and infinite number of mixture components.
Our model allows for likelihood-based training and efficient inference at test
time. We demonstrate on a number of challenging pose estimation datasets that
our model produces calibrated probability predictions and competitive or
superior point estimates compared to the current state of the art.
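The mixture-of-von-Mises likelihood described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the network would emit the per-component means, concentrations, and weights, and training would minimize this negative log-likelihood.

```python
import numpy as np

def von_mises_pdf(theta, mu, kappa):
    # Density of the von Mises distribution on the circle:
    # exp(kappa * cos(theta - mu)) / (2 * pi * I0(kappa)),
    # where I0 is the modified Bessel function of order zero.
    return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * np.i0(kappa))

def mixture_nll(theta, mus, kappas, weights):
    # Negative log-likelihood of an angle under a finite von Mises
    # mixture; `weights` are mixing proportions summing to one.
    density = sum(w * von_mises_pdf(theta, m, k)
                  for w, m, k in zip(weights, mus, kappas))
    return -np.log(density)
```

As the concentration kappa tends to zero a component approaches the uniform distribution on the circle, so the model can express anything from a sharp unimodal pose estimate to near-total uncertainty.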
People tracking and re-identification by face recognition for RGB-D camera networks
This paper describes a face recognition-based people tracking and re-identification system for RGB-D camera networks. The system tracks people and learns their faces online to keep track of their identities even after they temporarily leave the camera's field of view. For robust people re-identification, the system exploits the combination of a deep neural network-based face representation and a Bayesian inference-based face classification method. The system also provides a predefined people identification capability: it associates the online-learned faces with predefined face images and names to know people's whereabouts, thus allowing rich human-system interaction. Through experiments, we validate the re-identification and predefined people identification capabilities of the system and show an example of integrating the system with a mobile robot. The overall system is built as a Robot Operating System (ROS) module. As a result, it simplifies integration with the many existing robotic systems and algorithms that use this middleware. The code of this work has been released as open source in order to provide a baseline for future publications in this field.
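The combination of a learned face embedding with Bayesian classification can be sketched as below. This is an illustrative assumption rather than the paper's exact model: it posits an isotropic Gaussian likelihood around each identity's mean embedding, with a hypothetical bandwidth `sigma`.

```python
import numpy as np

def identity_posterior(embedding, centroids, prior=None, sigma=0.5):
    # Posterior over known identities for one face embedding, assuming
    # an isotropic Gaussian likelihood centred on each identity's mean
    # embedding (centroid). `sigma` is a hypothetical bandwidth.
    centroids = np.asarray(centroids, dtype=float)
    if prior is None:
        prior = np.full(len(centroids), 1.0 / len(centroids))
    sq_dists = np.sum((centroids - embedding) ** 2, axis=1)
    log_post = np.log(prior) - sq_dists / (2 * sigma ** 2)
    log_post -= log_post.max()          # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()            # normalize to a distribution
```

Online learning then amounts to updating the centroid (and prior) of the matched identity each time a face is observed, which is what lets the system re-identify a person who re-enters the scene.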
Learning Discriminative Features with Class Encoder
Deep neural networks usually benefit from unsupervised pre-training, e.g.
auto-encoders. However, the classifier further needs supervised fine-tuning
methods to achieve good discrimination. Moreover, due to the limitations of
fully-connected layers, the application of auto-encoders is usually limited to small, well-aligned
images. In this paper, we incorporate the supervised information to propose a
novel formulation, namely class-encoder, whose training objective is to
reconstruct a sample from another sample of the same class.
Class-encoder aims to minimize the intra-class variations in the feature space,
and to learn discriminative manifolds at the class scale. We impose the
class-encoder as a constraint into the softmax for better supervised training,
and extend the reconstruction on feature-level to tackle the parameter size
issue and translation issue. The experiments show that the class-encoder helps
to improve the performance on benchmarks of classification and face
recognition. This could also be a promising direction for fast training of face
recognition models. Comment: Accepted by the CVPR 2016 Workshop on Robust Features for Computer Vision.
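The class-encoder objective described above can be sketched by building reconstruction targets: for each input, the target is a randomly chosen sample with the same label, so minimizing reconstruction error pulls same-class features together. The helper below is a hypothetical illustration of the pairing step only, not the paper's training code.

```python
import numpy as np

def class_encoder_targets(X, y, rng):
    # For each sample X[i], pick a reconstruction target drawn from the
    # samples sharing its label y[i]. The encoder-decoder would then be
    # trained to reconstruct targets[i] from X[i], shrinking intra-class
    # variation in the learned feature space.
    targets = np.empty_like(X)
    for i, label in enumerate(y):
        same_class = np.flatnonzero(y == label)
        targets[i] = X[rng.choice(same_class)]
    return targets
```

In contrast to a plain auto-encoder, whose target is the input itself, this pairing makes the reconstruction objective class-aware without requiring any extra supervision beyond the labels.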
Emotion Recognition in the Wild using Deep Neural Networks and Bayesian Classifiers
Group emotion recognition in the wild is a challenging problem, due to the
unstructured environments in which everyday life pictures are taken. Some of
the obstacles for an effective classification are occlusions, variable lighting
conditions, and image quality. In this work we present a solution based on a
novel combination of deep neural networks and Bayesian classifiers. The neural
network follows a bottom-up approach, analyzing emotions expressed by isolated
faces. The Bayesian classifier estimates a global emotion integrating top-down
features obtained through a scene descriptor. In order to validate the system
we tested the framework on the dataset released for the Emotion Recognition in
the Wild Challenge 2017. Our method achieved an accuracy of 64.68% on the test
set, significantly outperforming the 53.62% competition baseline. Comment: Accepted by the Fifth Emotion Recognition in the Wild (EmotiW) Challenge 2017.
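The fusion of bottom-up face-level predictions with a top-down scene prior can be sketched as a naive-Bayes combination. This is an assumption for illustration, not the paper's exact classifier: it treats each face's class probabilities as independent evidence multiplied into a scene-derived prior.

```python
import numpy as np

def group_emotion_posterior(face_probs, scene_prior):
    # Naive-Bayes fusion: start from the scene-level prior over emotion
    # classes, then multiply in each detected face's class probabilities
    # as if they were independent observations. Done in log space.
    log_post = np.log(scene_prior)
    for p in face_probs:
        log_post += np.log(p)
    log_post -= log_post.max()          # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()            # normalize to a distribution
```

With a uniform scene prior this reduces to pooling the face-level evidence; a non-uniform prior from the scene descriptor lets context override ambiguous faces.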