Stochastic Prediction of Multi-Agent Interactions from Partial Observations
We present a method that learns to integrate temporal information, from a
learned dynamics model, with ambiguous visual information, from a learned
vision model, in the context of interacting agents. Our method is based on a
graph-structured variational recurrent neural network (Graph-VRNN), which is
trained end-to-end to infer the current state of the (partially observed)
world, as well as to forecast future states. We show that our method
outperforms various baselines on two sports datasets, one based on real
basketball trajectories, and one generated by a soccer game engine.
Comment: ICLR 2019 camera-ready
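The abstract does not spell out the architecture, but a single step of a graph-structured VRNN can be pictured roughly as follows. This is a minimal PyTorch sketch, not the authors' implementation; the fully connected agent graph, the mean-pooled message passing, the GRU recurrence, and all dimensions are assumptions made for illustration.

```python
# Hypothetical sketch of one graph-structured VRNN step (not the authors' code).
# Each agent keeps its own recurrent state; a message-passing layer couples the
# agents before the usual VRNN prior/posterior/decoder computations.
import torch
import torch.nn as nn


class GraphVRNNCell(nn.Module):
    def __init__(self, x_dim, z_dim, h_dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * h_dim, h_dim), nn.ReLU())
        self.prior = nn.Linear(h_dim, 2 * z_dim)         # p(z_t | h_{t-1})
        self.post = nn.Linear(h_dim + x_dim, 2 * z_dim)  # q(z_t | h_{t-1}, x_t)
        self.dec = nn.Linear(h_dim + z_dim, x_dim)       # p(x_t | z_t, h_{t-1})
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)

    def forward(self, x, h):
        # x: (num_agents, x_dim) observations; h: (num_agents, h_dim) states.
        n = h.size(0)
        # Message passing over a fully connected agent graph (an assumption).
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        h_ctx = h + self.msg(pairs).mean(dim=1)
        # VRNN: prior and observation-conditioned posterior over z_t.
        prior_mu, prior_logvar = self.prior(h_ctx).chunk(2, dim=-1)
        post_mu, post_logvar = self.post(torch.cat([h_ctx, x], -1)).chunk(2, -1)
        z = post_mu + torch.randn_like(post_mu) * (0.5 * post_logvar).exp()
        x_rec = self.dec(torch.cat([h_ctx, z], -1))      # state estimate
        h_new = self.rnn(torch.cat([x, z], -1), h_ctx)   # recurrence
        return x_rec, (prior_mu, prior_logvar), (post_mu, post_logvar), h_new


# Usage: 5 agents, 4-dim observations, trained with reconstruction + KL terms.
cell = GraphVRNNCell(x_dim=4, z_dim=8, h_dim=32)
x_rec, prior, post, h = cell(torch.randn(5, 4), torch.zeros(5, 32))
```

Forecasting would iterate the same cell with samples from the prior in place of the posterior when observations are missing.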
Tracking the affective state of unseen persons.
Emotion recognition is an essential human ability, critical for social functioning. It is widely assumed that identifying facial expressions is the key to this, and models of emotion recognition have mainly focused on facial and bodily features in static, unnatural conditions. We developed a method called affective tracking to reveal and quantify the enormous contribution of visual context to affect (valence and arousal) perception. When characters' faces and bodies were masked in silent videos, viewers successfully inferred the affect of the invisible characters, with high agreement, based solely on visual context. We further show that context is not only sufficient but also necessary to accurately perceive human affect over time, as it provides a substantial and unique contribution beyond the information available from the face and body. Our method (which we have made publicly available) reveals that emotion recognition is, at its heart, an issue of context as much as it is about faces.
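As a rough illustration of the kind of quantification the abstract describes (viewers continuously rating affect with and without visible characters), the analysis could be sketched as below. The array names, shapes, and random placeholder data are assumptions, not the authors' published pipeline.

```python
# Hypothetical sketch: compare continuous valence ratings collected with the
# characters masked (context only) against ratings of the unmasked clips.
import numpy as np

rng = np.random.default_rng(0)
masked = rng.standard_normal((20, 300))    # 20 raters x 300 time points
unmasked = rng.standard_normal((20, 300))  # same clips, characters visible

# Inter-rater agreement within the masked condition: mean pairwise correlation.
r = np.corrcoef(masked)
agreement = r[np.triu_indices_from(r, k=1)].mean()

# Accuracy of context-only ratings: correlation of the two condition means.
accuracy = np.corrcoef(masked.mean(0), unmasked.mean(0))[0, 1]
print(f"agreement={agreement:.2f}, accuracy={accuracy:.2f}")
```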
First impressions: A survey on vision-based apparent personality trait analysis
Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. In the past few years, it has also become an attractive research area in visual computing. From a computational point of view, speech and text have by far been the most studied cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures, and behaviors, and to use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field forward.
Indirect Match Highlights Detection with Deep Convolutional Neural Networks
Highlights in a sports video usually refer to actions that stimulate
excitement or attract the attention of the audience. Considerable effort is
spent on designing techniques that find highlights automatically, in order to
automate the otherwise manual editing process. Most state-of-the-art
approaches try to solve the problem by training a classifier on information
extracted from the TV-like framing of players on the game pitch, learning to
detect game actions that human observers have labeled according to their
perception of highlights; this is obviously long and expensive work. In this
paper, we reverse the paradigm: instead of looking at the gameplay and
inferring what could be exciting for the audience, we directly analyze the
audience's behavior, which we assume is triggered by events happening during
the game. We apply a deep 3D Convolutional Neural Network (3D-CNN) to extract
visual features from cropped video recordings of the supporters attending the
event. Outputs of the crops belonging to the same frame are then accumulated
to produce a value indicating the Highlight Likelihood (HL), which is used to
discriminate between positive samples (i.e., when a highlight occurs) and
negative samples (i.e., standard play or time-outs). Experimental results on a
public dataset of ice-hockey matches demonstrate the effectiveness of our
method and promote further research in this new, exciting direction.
Comment: "Social Signal Processing and Beyond" workshop, in conjunction with ICIAP 201
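As a rough sketch of the pipeline outlined above (score audience crops with a 3D-CNN, then accumulate per-frame crop scores into a Highlight Likelihood), one might write something like the following. The tiny network, the clip dimensions, and the mean-pooling accumulation are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch: a small 3D CNN scores short clips cropped around
# audience regions; per-crop scores for the same frame window are accumulated
# into a single Highlight Likelihood (HL) that is then thresholded.
import torch
import torch.nn as nn

crop_cnn = nn.Sequential(                      # tiny stand-in for a deep 3D-CNN
    nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(16, 1),                          # per-crop highlight logit
)

# 8 crops of the same frame window: (crops, channels, time, height, width).
crops = torch.randn(8, 3, 16, 64, 64)
logits = crop_cnn(crops)                       # (8, 1)

# Accumulate the crop scores into one HL value and discriminate.
hl = torch.sigmoid(logits).mean()
is_highlight = hl > 0.5
print(float(hl), bool(is_highlight))
```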
Boosting Image-based Mutual Gaze Detection using Pseudo 3D Gaze
Mutual gaze detection, i.e., predicting whether or not two people are looking
at each other, plays an important role in understanding human interactions. In
this work, we focus on the task of image-based mutual gaze detection, and
propose a simple and effective approach to boost the performance by using an
auxiliary 3D gaze estimation task during the training phase. We achieve the
performance boost without additional labeling cost by training the 3D gaze
estimation branch using pseudo 3D gaze labels deduced from mutual gaze labels.
By sharing the head image encoder between the 3D gaze estimation and the
mutual gaze detection branches, we obtain better head features than those
learned by training the mutual gaze detection branch alone. Experimental
results on three image datasets show that the proposed approach improves
detection performance significantly without additional annotations. This work
also introduces a new image dataset that consists of 33.1K pairs of humans
annotated with mutual gaze labels in 29.2K images.
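The multi-task setup described here (a shared head-image encoder feeding a mutual gaze classifier plus an auxiliary 3D gaze regressor trained on pseudo labels) might look roughly like the PyTorch sketch below; the encoder, module names, and dimensions are all assumptions rather than the paper's architecture.

```python
# Hypothetical sketch: two head images share one encoder; one branch
# classifies mutual gaze, and an auxiliary branch regresses 3D gaze vectors
# supervised by pseudo labels derived from the mutual gaze annotations.
import torch
import torch.nn as nn


class MutualGazeNet(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(          # shared head-image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.mutual = nn.Linear(2 * feat_dim, 1)   # mutual-gaze logit
        self.gaze3d = nn.Linear(feat_dim, 3)       # auxiliary 3D gaze vector

    def forward(self, head_a, head_b):
        fa, fb = self.encoder(head_a), self.encoder(head_b)
        logit = self.mutual(torch.cat([fa, fb], dim=-1))
        return logit, self.gaze3d(fa), self.gaze3d(fb)


# Usage: BCE on mutual-gaze labels plus, e.g., a cosine loss between the
# predicted gaze vectors and the pseudo 3D gaze labels.
net = MutualGazeNet()
a, b = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
logit, gaze_a, gaze_b = net(a, b)
```

At test time only the mutual-gaze branch would be needed; the auxiliary branch exists purely to shape the shared encoder's features during training.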