Finding Action Tubes with a Sparse-to-Dense Framework
The task of spatiotemporal action detection has attracted increasing
attention among researchers. Existing dominant methods solve this problem by
relying on short-term information and dense, serial detection on each
individual frame or clip. Despite their effectiveness, these methods make
inadequate use of long-term information and are prone to inefficiency. In this
paper, we propose, for the first time, an efficient framework that generates
action tube proposals from video streams with a single forward pass in a
sparse-to-dense manner. There are two key characteristics in this framework:
(1) Both long-term and short-term sampled information are explicitly utilized
in our spatiotemporal network, (2) A new dynamic feature sampling module (DTS)
is designed to effectively approximate the tube output while keeping the system
tractable. We evaluate the efficacy of our model on the UCF101-24, JHMDB-21 and
UCFSports benchmark datasets, achieving promising results that are competitive
to state-of-the-art methods. The proposed sparse-to-dense strategy rendered our
framework about 7.6 times more efficient than the nearest competitor.
Comment: 5 figures; AAAI 202
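
To make the sparse-to-dense idea concrete, here is a minimal sketch of one
plausible reading: detect boxes only on sparsely sampled keyframes, then
densify the resulting tube by interpolation. The paper's network and DTS
module are not reproduced here; densify_tube and the toy keyframes are
illustrative assumptions, not the authors' implementation.

import numpy as np

def densify_tube(keyframe_ids, keyframe_boxes, num_frames):
    """Interpolate sparse keyframe boxes [x1, y1, x2, y2] into a dense tube."""
    boxes = np.asarray(keyframe_boxes, dtype=float)  # (K, 4)
    dense = np.empty((num_frames, 4))
    for c in range(4):  # interpolate each box coordinate independently
        dense[:, c] = np.interp(np.arange(num_frames), keyframe_ids, boxes[:, c])
    return dense

# Detections on frames 0, 8, 16 of a 17-frame clip
tube = densify_tube([0, 8, 16],
                    [[10, 10, 50, 60], [14, 12, 54, 62], [20, 15, 60, 66]], 17)
print(tube.shape)  # (17, 4): one box per frame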
Detect to Track and Track to Detect
Recent approaches for high accuracy detection and tracking of object
categories in video consist of complex multistage solutions that become more
cumbersome each year. In this paper we propose a ConvNet architecture that
jointly performs detection and tracking, solving the task in a simple and
effective way. Our contributions are threefold: (i) we set up a ConvNet
architecture for simultaneous detection and tracking, using a multi-task
objective for frame-based object detection and across-frame track regression;
(ii) we introduce correlation features that represent object co-occurrences
across time to aid the ConvNet during tracking; and (iii) we link the frame
level detections based on our across-frame tracklets to produce high accuracy
detections at the video level. Our ConvNet architecture for spatiotemporal
object detection is evaluated on the large-scale ImageNet VID dataset where it
achieves state-of-the-art results. Our approach provides better single model
performance than the winning method of the last ImageNet challenge while being
conceptually much simpler. Finally, we show that by increasing the temporal
stride we can dramatically increase the tracker speed.
Comment: ICCV 2017. Code and models:
https://github.com/feichtenhofer/Detect-Track Results:
https://www.robots.ox.ac.uk/~vgg/research/detect-track
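
As a rough illustration of the correlation features in (ii), the sketch below
computes local cross-correlations between the feature maps of two consecutive
frames: each position in frame t is compared, by dot product, against a small
neighbourhood in frame t+1. This is a generic formulation over assumed
(C, H, W) feature maps, not necessarily the paper's exact layer.

import numpy as np

def correlation_features(f_t, f_t1, d=2):
    """f_t, f_t1: (C, H, W) feature maps; returns ((2d+1)^2, H, W)."""
    C, H, W = f_t.shape
    padded = np.pad(f_t1, ((0, 0), (d, d), (d, d)))  # zero-pad spatially
    out = np.zeros(((2 * d + 1) ** 2, H, W))
    k = 0
    for dy in range(2 * d + 1):
        for dx in range(2 * d + 1):
            shifted = padded[:, dy:dy + H, dx:dx + W]
            out[k] = (f_t * shifted).sum(axis=0) / C  # per-position dot product
            k += 1
    return out

feats = correlation_features(np.random.rand(64, 14, 14), np.random.rand(64, 14, 14))
print(feats.shape)  # (25, 14, 14) for d=2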
Analyzing Human-Human Interactions: A Survey
Many videos depict people, and it is their interactions that inform us of
their activities, relation to one another and the cultural and social setting.
With advances in human action recognition, researchers have begun to address
the automated recognition of these human-human interactions from video. The
main challenges stem from dealing with the considerable variation in recording
setting, the appearance of the people depicted and the coordinated performance
of their interaction. This survey provides a summary of these challenges and
of the datasets designed to address them, followed by an in-depth discussion
of relevant
vision-based recognition and detection methods. We focus on recent, promising
work based on deep learning and convolutional neural networks (CNNs). Finally,
we outline directions to overcome the limitations of the current
state-of-the-art to analyze and, eventually, understand social human actions.
Optimizing Video Object Detection via a Scale-Time Lattice
High-performance object detection relies on expensive convolutional networks
to compute features, often leading to significant challenges in applications,
e.g. those that require detecting objects from video streams in real time. The
key to this problem is to trade accuracy for efficiency in an effective way,
i.e. reducing the computing cost while maintaining competitive performance. To
seek a good balance, previous efforts usually focus on optimizing the model
architectures. This paper explores an alternative approach, that is, to
reallocate the computation over a scale-time space. The basic idea is to
perform expensive detection sparsely and propagate the results across both
scales and time with substantially cheaper networks, by exploiting the strong
correlations among them. Specifically, we present a unified framework that
integrates detection, temporal propagation, and across-scale refinement on a
Scale-Time Lattice. On this framework, one can explore various strategies to
balance performance and cost. Taking advantage of this flexibility, we further
develop an adaptive scheme with the detector invoked on demand, and thus obtain
an improved tradeoff. On the ImageNet VID dataset, the proposed method achieves
a competitive mAP of 79.6% at 20 fps, or 79.0% at 62 fps as a performance/speed
tradeoff.
Comment: Accepted to CVPR 2018. Project page:
http://mmlab.ie.cuhk.edu.hk/projects/ST-Lattice
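
The adaptive, detector-on-demand scheme admits a compact sketch. All helper
functions below (heavy_detect, cheap_propagate, propagation_confidence) are
hypothetical stand-ins for the expensive detector, the cheap propagation
network, and a reliability estimate; the across-scale refinement of the
Scale-Time Lattice is omitted here.

import random

def heavy_detect(frame):            # stand-in: expensive per-frame detector
    return [(10, 10, 50, 60)]

def cheap_propagate(frame, boxes):  # stand-in: cheap temporal propagation
    return [(x1 + 1, y1, x2 + 1, y2) for (x1, y1, x2, y2) in boxes]

def propagation_confidence(frame, boxes):  # stand-in: reliability estimate
    return random.random()

def detect_video(frames, conf_thresh=0.5):
    results, boxes = [], None
    for frame in frames:
        if boxes is None or propagation_confidence(frame, boxes) < conf_thresh:
            boxes = heavy_detect(frame)            # expensive, invoked sparsely
        else:
            boxes = cheap_propagate(frame, boxes)  # cheap propagation in time
        results.append(boxes)
    return results

print(len(detect_video(range(30))))  # one box list per frame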
Grounding robot motion in natural language and visual perception
The current state of the art in military and first responder ground robots involves heavy physical and cognitive burdens on the human operator while taking little to no advantage of the potential autonomy of robotic technology. The robots currently in use are rugged remote-controlled vehicles. Their interaction modalities, usually utilizing a game controller connected to a computer, require a dedicated operator who has limited capacity for other tasks.
I present research which aims to ease these burdens by incorporating multiple modes of robotic sensing into a system which allows humans to interact with robots through a natural-language interface. I conduct this research on a custom-built six-wheeled mobile robot.
First I present a unified framework which supports grounding natural-language semantics in robotic driving. This framework supports learning the meanings of nouns and prepositions from sentential descriptions of paths driven by the robot, as well as using such meanings to both generate a sentential description of a path and perform automated driving of a path specified in natural language. One limitation of this framework is that it requires as input the locations of the (initially nameless) objects in the floor plan.
Next I present a method to automatically detect, localize, and label objects in the robot’s environment using only the robot’s video feed and corresponding odometry. This method produces a map of the robot’s environment in which objects are differentiated by abstract class labels.
Finally, I present work that unifies the previous two approaches. This method detects, localizes, and labels objects, as the previous method does. However, this new method integrates natural-language descriptions to learn actual object names, rather than abstract labels.
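
One simple way to picture that final step of attaching real names to abstract
labels is co-occurrence counting between the nouns in a session's description
and the abstract labels observed in that session. The sketch below is purely
illustrative and not the dissertation's actual learning method; name_labels
and the toy sessions are assumptions.

from collections import Counter, defaultdict

def name_labels(sessions):
    """sessions: list of (nouns_in_sentence, observed_abstract_labels)."""
    cooc = defaultdict(Counter)
    for nouns, labels in sessions:
        for label in labels:
            cooc[label].update(nouns)  # count noun/label co-occurrences
    # name each abstract label with its most frequently co-occurring noun
    return {label: counts.most_common(1)[0][0] for label, counts in cooc.items()}

sessions = [(["chair", "table"], ["obj_0", "obj_1"]),
            (["chair"], ["obj_0"]),
            (["table"], ["obj_1"]),
            (["door"], ["obj_2"])]
print(name_labels(sessions))  # {'obj_0': 'chair', 'obj_1': 'table', 'obj_2': 'door'}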