Is the Pedestrian going to Cross? Answering by 2D Pose Estimation
Our recent work suggests that, thanks to today's powerful CNNs, image-based
2D pose estimation is a promising cue for determining pedestrian intentions
such as crossing the road in the path of the ego-vehicle, stopping before
entering the road, and starting to walk or bending towards the road. This
statement is based on the results obtained on non-naturalistic sequences
(Daimler dataset), i.e. in sequences choreographed specifically for performing
the study. Fortunately, a new publicly available dataset (JAAD) has appeared
recently to allow developing methods for detecting pedestrian intentions in
naturalistic driving conditions; more specifically, for addressing the relevant
question: is the pedestrian going to cross? Accordingly, in this paper we use
JAAD to assess the usefulness of 2D pose estimation for answering such a
question. We combine CNN-based pedestrian detection, tracking and pose
estimation to predict the crossing action from monocular images. Overall, the
proposed pipeline provides new state-of-the-art results.
Comment: This paper was presented at the IEEE Intelligent Vehicles Symposium (IEEE IV 2018).
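The final step of the pipeline above, predicting the crossing action from a tracked pedestrian's 2D poses, can be illustrated with a minimal sketch. The shapes, the flattened-pose features, and the untrained linear model below are illustrative assumptions, not the authors' actual classifier:

```python
import numpy as np

def predict_crossing(pose_seq, w, b):
    """Score a tracked pedestrian's recent 2D poses and return a
    crossing probability. pose_seq: (T, K, 2) array of K image
    keypoints over T frames (a hypothetical layout)."""
    x = pose_seq.reshape(-1)          # flatten the pose track
    z = x @ w + b                     # linear score
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid -> probability

T, K = 14, 18                         # e.g. 14 frames, 18 joints
rng = np.random.default_rng(1)
poses = rng.normal(size=(T, K, 2))    # poses from detection+tracking+pose estimation
w = rng.normal(size=T * K * 2) * 0.01
p = predict_crossing(poses, w, 0.0)
```

In practice the linear scorer would be replaced by a learned temporal classifier trained on labeled crossing/non-crossing tracks.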
Multichannel Attention Network for Analyzing Visual Behavior in Public Speaking
Public speaking is an important aspect of human communication and
interaction. The majority of computational work on public speaking concentrates
on analyzing the spoken content, and the verbal behavior of the speakers. While
the success of public speaking largely depends on the content of the talk, and
the verbal behavior, non-verbal (visual) cues, such as gestures and physical
appearance also play a significant role. This paper investigates the importance
of visual cues by estimating their contribution towards predicting the
popularity of a public lecture. For this purpose, we constructed a large
database of TED talk videos. As a measure of the popularity of the
TED talks, we leverage the corresponding (online) viewers' ratings from
YouTube. Visual cues related to facial and physical appearance, facial
expressions, and pose variations are extracted from the video frames using
convolutional neural network (CNN) models. Thereafter, an attention-based long
short-term memory (LSTM) network is proposed to predict the video popularity
from the sequence of visual features. The proposed network achieves
state-of-the-art prediction accuracy indicating that visual cues alone contain
highly predictive information about the popularity of a talk. Furthermore, our
network learns a human-like attention mechanism, which is particularly useful
for interpretability: it shows how attention varies over time and across the
different visual cues, indicating their relative importance.
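The temporal-attention idea above can be sketched as attention pooling over a sequence of per-frame visual features: each time step is scored, the scores are softmax-normalized, and the sequence is summarized by the attention-weighted average. This is a minimal sketch of the mechanism, not the authors' full attention-LSTM network, and the weight vector is an illustrative assumption:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_pool(features, w_att):
    """Score each time step's visual feature vector, normalize the
    scores with softmax, and return the attention-weighted average.
    features: (T, D) sequence of per-frame CNN features."""
    scores = features @ w_att          # (T,) one score per frame
    weights = softmax(scores)          # attention over time, sums to 1
    pooled = weights @ features        # (D,) weighted summary
    return pooled, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 16))      # 10 frames, 16-dim CNN features
w = rng.normal(size=16)                # hypothetical attention weights
pooled, att = attention_pool(feats, w)
```

The attention weights `att` are what make the model interpretable: they show which frames (and, with one pooler per cue, which visual cues) drive the popularity prediction.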
Cross-stitch Networks for Multi-task Learning
Multi-task learning in Convolutional Networks has displayed remarkable
success in the field of recognition. This success can be largely attributed to
learning shared representations from multiple supervisory tasks. However,
existing multi-task approaches rely on enumerating multiple network
architectures specific to the tasks at hand, which do not generalize. In this
paper, we propose a principled approach to learn shared representations in
ConvNets using multi-task learning. Specifically, we propose a new sharing
unit: the "cross-stitch" unit. These units combine the activations from multiple
networks and can be trained end-to-end. A network with cross-stitch units can
learn an optimal combination of shared and task-specific representations. Our
proposed method generalizes across multiple tasks and shows dramatically
improved performance over baseline methods for categories with few training
examples.
Comment: To appear in CVPR 2016 (Spotlight).
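The cross-stitch unit described above linearly combines the activations of two task networks at a given layer, with learnable mixing weights. A minimal sketch, with fixed example weights standing in for the learned ones:

```python
import numpy as np

def cross_stitch(x_a, x_b, alpha):
    """Cross-stitch unit: linearly combine same-shaped activations
    from two task networks. alpha is a 2x2 mixing matrix (learnable
    end-to-end in the paper; fixed here for illustration)."""
    out_a = alpha[0, 0] * x_a + alpha[0, 1] * x_b
    out_b = alpha[1, 0] * x_a + alpha[1, 1] * x_b
    return out_a, out_b

# Near-identity initialization: mostly task-specific, slight sharing.
alpha = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
x_a = np.ones((4, 8))    # activations from the task-A network
x_b = np.zeros((4, 8))   # activations from the task-B network
out_a, out_b = cross_stitch(x_a, x_b, alpha)
```

Training can then drive `alpha` toward fully shared representations (all entries similar) or fully task-specific ones (near-identity), per layer.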