Utilising visual attention cues for vehicle detection and tracking
Advanced Driver-Assistance Systems (ADAS) have been attracting attention from many researchers. Vision-based sensors are the closest way to emulate human driver visual behaviour while driving. In this paper, we explore possible ways to use visual attention (saliency) for object detection and tracking. We investigate: 1) how a visual attention map, such as a subjectness (saliency) map or an objectness attention map, can facilitate region proposal generation in a 2-stage object detector; 2) how a visual attention map can be used for tracking multiple objects. We propose a neural network that can simultaneously detect objects and generate objectness and subjectness maps to save computational power. We further exploit the visual attention map during tracking using a sequential Monte Carlo probability hypothesis density (PHD) filter. The experiments are conducted on the KITTI and DETRAC datasets. The use of visual attention and hierarchical features has shown a considerable improvement of ≈8% in object detection, which effectively increased tracking performance by ≈4% on the KITTI dataset.
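The core idea of exploiting an attention map inside a sequential Monte Carlo PHD filter can be sketched as reweighting each particle by the saliency value at its location. This is a minimal illustration, not the paper's implementation; the map, particle layout, and function names are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical saliency/objectness map (H x W), standing in for the map
# the paper's detector emits alongside its detections.
H, W = 48, 64
saliency = rng.random((H, W))

# Particles of a sequential Monte Carlo PHD filter: (x, y) positions + weights.
n = 500
particles = np.column_stack([rng.uniform(0, W, n), rng.uniform(0, H, n)])
weights = np.full(n, 1.0 / n)

def reweight_by_saliency(particles, weights, saliency):
    """Scale each particle's weight by the saliency at its location, then
    renormalize -- a sketch of using a visual attention map as an extra
    likelihood term in the PHD update."""
    xs = np.clip(particles[:, 0].astype(int), 0, saliency.shape[1] - 1)
    ys = np.clip(particles[:, 1].astype(int), 0, saliency.shape[0] - 1)
    w = weights * saliency[ys, xs]
    return w / w.sum()

weights = reweight_by_saliency(particles, weights, saliency)
```

Particles falling on salient (likely-object) regions gain weight, so the filter's intensity mass concentrates where the attention map agrees an object is present.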
Tracking Gaze and Visual Focus of Attention of People Involved in Social Interaction
The visual focus of attention (VFOA) has been recognized as a prominent
conversational cue. We are interested in estimating and tracking the VFOAs
associated with multi-party social interactions. We note that in this type of situation the participants either look at each other or at an object of interest; therefore their eyes are not always visible. Consequently, both gaze and VFOA estimation cannot be based on eye detection and tracking. We propose a
method that exploits the correlation between eye gaze and head movements. Both
VFOA and gaze are modeled as latent variables in a Bayesian switching
state-space model. The proposed formulation leads to a tractable learning
procedure and to an efficient algorithm that simultaneously tracks gaze and
visual focus. The method is tested and benchmarked using two publicly available
datasets that contain typical multi-party human-robot and human-human
interactions.
Comment: 15 pages, 8 figures, 6 tables
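The head-gaze correlation that drives the inference can be illustrated with a toy model in which gaze is a mixture of the observed head direction and the direction of the attended target, and the VFOA is the target that best explains this. All names, the mixing ratio, and the Gaussian scoring are our simplifying assumptions, not the paper's switching state-space model.

```python
import numpy as np

# Hypothetical setup: candidate target directions and an observed head
# direction in a shared 2D coordinate frame.
targets = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])  # candidate VFOAs
head = np.array([0.8, 0.1])          # observed head direction
alpha = 0.6                          # assumed head-contributes-to-gaze ratio

def infer_vfoa(head, targets, alpha, sigma=0.3):
    """Score each candidate by how well gaze = alpha*head + (1-alpha)*target
    explains looking at that target (Gaussian likelihood), and return a
    posterior over VFOAs -- a toy stand-in for the paper's Bayesian
    switching state-space inference."""
    gaze = alpha * head + (1 - alpha) * targets       # predicted gaze per VFOA
    d2 = ((gaze - targets) ** 2).sum(axis=1)          # mismatch to each target
    logp = -d2 / (2 * sigma ** 2)
    p = np.exp(logp - logp.max())
    return p / p.sum()

posterior = infer_vfoa(head, targets, alpha)
vfoa = int(posterior.argmax())  # index of the most likely visual focus
```

Even when the eyes are not visible, the head direction alone constrains the posterior, which is the intuition the paper formalizes with latent gaze and VFOA variables.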
3DMODT: Attention-Guided Affinities for Joint Detection & Tracking in 3D Point Clouds
We propose a method for joint detection and tracking of multiple objects in
3D point clouds, a task conventionally treated as a two-step process comprising
object detection followed by data association. Our method embeds both steps
into a single end-to-end trainable network eliminating the dependency on
external object detectors. Our model exploits temporal information employing
multiple frames to detect objects and track them in a single network, thereby
making it a practical formulation for real-world scenarios. Computing an affinity matrix from feature similarity across consecutive point-cloud scans is an integral part of visual tracking. We propose an attention-based refinement module to refine the affinity matrix by suppressing erroneous correspondences. The module is designed to capture the global context in the affinity matrix by employing self-attention within each affinity matrix and
cross-attention across a pair of affinity matrices. Unlike competing
approaches, our network does not require complex post-processing algorithms,
and processes raw LiDAR frames to directly output tracking results. We
demonstrate the effectiveness of our method on three tracking benchmarks: JRDB, Waymo, and KITTI. Experimental evaluations indicate the ability of our model to generalize well across datasets.
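The affinity-plus-refinement pipeline can be sketched with plain NumPy: a cosine-similarity affinity matrix between the object features of two scans, followed by a self-attention pass over the matrix itself. The paper's module is learned end-to-end; this unlearned sketch (all shapes and names assumed) only shows where self-attention sits in the pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-object feature embeddings from two consecutive scans.
feats_t  = rng.standard_normal((4, 16))   # 4 objects at time t
feats_t1 = rng.standard_normal((5, 16))   # 5 objects at time t+1

def affinity(a, b):
    """Cosine-similarity affinity matrix between two sets of features."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def self_attention_refine(A):
    """A toy self-attention pass over the affinity matrix: each row is
    re-expressed as an attention-weighted mix of the rows, injecting
    global context that can suppress inconsistent correspondences."""
    q = k = v = A
    scores = q @ k.T / np.sqrt(A.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)
    return attn @ v

A = affinity(feats_t, feats_t1)           # shape (4, 5)
A_refined = self_attention_refine(A)      # same shape, context-smoothed
```

Cross-attention across a pair of affinity matrices (as in the paper) would follow the same pattern with queries taken from one matrix and keys/values from the other.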
Visual Dialogue State Tracking for Question Generation
GuessWhat?! is a visual dialogue task between a guesser and an oracle. The guesser aims to locate an object chosen by the oracle in an image by asking a sequence of Yes/No questions. Asking proper questions as the dialogue progresses is vital for achieving a successful final guess. As a result, the progress of the dialogue should be properly represented and tracked.
Previous models for question generation pay little attention to the representation and tracking of dialogue states, and are therefore prone to asking low-quality questions such as repeated questions. This paper proposes a visual dialogue state tracking (VDST) based method for question generation. A visual dialogue state is defined as the distribution over objects in the image together with representations of those objects. The representations of objects are updated as the distribution over objects changes. An object-difference based attention is used to decode a new question. The distribution over objects is updated by comparing the question-answer pair with the objects. Experimental results on the
GuessWhat?! dataset show that our model significantly outperforms existing methods and achieves new state-of-the-art performance. It is also noticeable that our model reduces the rate of repeated questions from more than 50% to 21.9% compared with previous state-of-the-art methods.
Comment: 8 pages, 4 figures, Accept-Oral by AAAI-202
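The dialogue-state update at the heart of VDST can be illustrated as a Bayesian-style reweighting of the distribution over candidate objects after each Yes/No answer. The compatibility scores and names below are illustrative assumptions; the actual method learns these comparisons rather than using fixed probabilities.

```python
import numpy as np

# Hypothetical belief over candidate objects and a per-object score for how
# compatible each object is with the question just asked.
belief = np.full(4, 0.25)                   # uniform prior over 4 objects
compat = np.array([0.9, 0.6, 0.2, 0.1])     # assumed P(answer "Yes" | object)

def update_belief(belief, compat, answer_yes):
    """Reweight the distribution over objects by the likelihood of the
    observed Yes/No answer, then renormalize -- a minimal sketch of the
    dialogue-state update that VDST performs with learned comparisons."""
    like = compat if answer_yes else 1.0 - compat
    post = belief * like
    return post / post.sum()

belief = update_belief(belief, compat, answer_yes=True)  # oracle said "Yes"
```

Tracking this distribution across turns is what lets the guesser avoid re-asking questions whose answers no longer discriminate between the remaining candidates.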