CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos
Temporal action localization is an important yet challenging problem. Given a
long, untrimmed video consisting of multiple action instances and complex
background contents, we need not only to recognize their action categories, but
also to localize the start time and end time of each instance. Many
state-of-the-art systems use segment-level classifiers to select and rank
proposal segments of pre-determined boundaries. However, a desirable model
should move beyond segment-level and make dense predictions at a fine
granularity in time to determine precise temporal boundaries. To this end, we
design a novel Convolutional-De-Convolutional (CDC) network that places CDC
filters on top of 3D ConvNets, which have been shown to be effective for
abstracting action semantics but reduce the temporal length of the input data.
The proposed CDC filter performs the required temporal upsampling and spatial
downsampling operations simultaneously to predict actions at the frame-level
granularity. It is unique in jointly modeling action semantics in space-time
and fine-grained temporal dynamics. We train the CDC network in an end-to-end
manner efficiently. Our model not only achieves superior performance in
detecting actions in every frame, but also significantly boosts the precision
of localizing temporal boundaries. Finally, the CDC network demonstrates a very
high efficiency with the ability to process 500 frames per second on a single
GPU server. We will update the camera-ready version and publish the source
codes online soon.
Comment: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017
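The core idea of the CDC filter, simultaneous temporal upsampling and spatial downsampling of a 3D ConvNet feature volume, can be illustrated with a toy sketch. This is not the paper's learned, end-to-end-trained filter; here average pooling and nearest-neighbour repetition stand in for the learned convolution/deconvolution, and the function name and shapes are hypothetical:

```python
import numpy as np

def cdc_like_filter(feats, temporal_factor=2, spatial_factor=2):
    """Toy CDC-style operation on a feature volume of shape (C, T, H, W):
    downsample in space while upsampling in time."""
    C, T, H, W = feats.shape
    # Spatial downsampling: non-overlapping average pooling over each
    # spatial_factor x spatial_factor patch.
    pooled = feats.reshape(C, T,
                           H // spatial_factor, spatial_factor,
                           W // spatial_factor, spatial_factor).mean(axis=(3, 5))
    # Temporal upsampling: nearest-neighbour repetition along the time axis,
    # moving toward frame-level granularity.
    return np.repeat(pooled, temporal_factor, axis=1)

feats = np.ones((1, 4, 8, 8))          # (C=1, T=4, H=8, W=8)
out = cdc_like_filter(feats)
print(out.shape)                       # (1, 8, 4, 4): time doubled, space halved
```

In the actual network both operations are realized by a single learned CDC filter rather than the fixed pooling/repetition used here.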
Learning Deep Representations of Appearance and Motion for Anomalous Event Detection
We present a novel unsupervised deep learning framework for anomalous event
detection in complex video scenes. While most existing works merely use
hand-crafted appearance and motion features, we propose Appearance and Motion
DeepNet (AMDN) which utilizes deep neural networks to automatically learn
feature representations. To exploit the complementary information of both
appearance and motion patterns, we introduce a novel double fusion framework,
combining both the benefits of traditional early fusion and late fusion
strategies. Specifically, stacked denoising autoencoders are proposed to
separately learn both appearance and motion features as well as a joint
representation (early fusion). Based on the learned representations, multiple
one-class SVM models are used to predict the anomaly scores of each input,
which are then integrated with a late fusion strategy for final anomaly
detection. We evaluate the proposed method on two publicly available video
surveillance datasets, showing competitive performance with respect to state of
the art approaches.
Comment: Oral paper in BMVC 2015
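The late-fusion step described above, combining per-input anomaly scores from the appearance, motion, and joint one-class models, can be sketched as a weighted score combination. This is a minimal illustration assuming the three score arrays are already produced upstream (e.g. by one-class SVMs); the function name and the simple weighted-sum rule are assumptions, not the paper's exact fusion:

```python
import numpy as np

def late_fusion_anomaly(score_app, score_motion, score_joint,
                        weights=(1.0, 1.0, 1.0)):
    """Fuse three per-input anomaly score arrays into one final score
    (higher = more anomalous)."""
    scores = np.stack([np.asarray(score_app, dtype=float),
                       np.asarray(score_motion, dtype=float),
                       np.asarray(score_joint, dtype=float)])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalize so the fused score keeps scale
    return (w[:, None] * scores).sum(axis=0)

# Two inputs, scored by all three pipelines; the first looks anomalous overall.
fused = late_fusion_anomaly([0.9, 0.1], [0.6, 0.2], [0.9, 0.3])
print(fused)                           # [0.8 0.2]
```

A threshold on the fused score then yields the final anomaly decision per input.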
Road User Detection in Videos
Successive frames of a video are highly redundant, and the most popular
object detection methods do not take advantage of this fact. Using multiple
consecutive frames can improve detection of small objects or difficult examples
and can improve speed and detection consistency in a video sequence, for
instance by interpolating features between frames. In this work, a novel
approach is introduced to perform online video object detection using two
consecutive frames of video sequences involving road users. Two new models,
RetinaNet-Double and RetinaNet-Flow, are proposed, based respectively on the
concatenation of a target frame with a preceding frame, and the concatenation
of the optical flow with the target frame. The models are trained and evaluated
on three public datasets. Experiments show that using a preceding frame
improves performance over single frame detectors, but using explicit optical
flow usually does not.
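The RetinaNet-Double input described above, a target frame concatenated with its preceding frame, amounts to channel-wise stacking before the backbone. A minimal sketch, assuming HWC frame arrays (the function name is hypothetical; the real model also adapts the network's first layer to the doubled channel count):

```python
import numpy as np

def stack_frame_pair(prev_frame, target_frame):
    """Concatenate a preceding frame with the target frame along the
    channel axis: (H, W, 3) + (H, W, 3) -> (H, W, 6)."""
    assert prev_frame.shape == target_frame.shape
    return np.concatenate([prev_frame, target_frame], axis=-1)

prev = np.zeros((4, 4, 3))
target = np.ones((4, 4, 3))
pair = stack_frame_pair(prev, target)
print(pair.shape)                      # (4, 4, 6)
```

RetinaNet-Flow follows the same pattern, but stacks the optical flow field with the target frame instead of the raw preceding frame.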
Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations
Automated methods of real-time, unobtrusive monitoring of human ambulation, activity, and wellness, together with data analysis using various algorithmic techniques, have been the subject of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These research studies have produced a large body of literature. This paper presents a holistic articulation of that work and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device frameworks and sensor types; data collection, processing and analysis; and applications, limitations and challenges. The aim is to present a systematic and comprehensive study of the literature in the area in order to identify research gaps and prioritize future research directions.