Abnormal Event Detection in Videos using Spatiotemporal Autoencoder
We present an efficient method for detecting anomalies in videos. Recent
applications of convolutional neural networks have shown the promise of
convolutional layers for object detection and recognition, especially in
images. However, convolutional neural networks are supervised and require
labels as learning signals. We propose a spatiotemporal architecture for
anomaly detection in videos including crowded scenes. Our architecture includes
two main components, one for spatial feature representation, and one for
learning the temporal evolution of the spatial features. Experimental results
on Avenue, Subway and UCSD benchmarks confirm that the detection accuracy of
our method is comparable to state-of-the-art methods at a considerable speed of
up to 140 fps.
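Reconstruction-based detectors of this kind typically flag a frame as anomalous when the autoencoder reconstructs it poorly. The abstract does not give the exact scoring rule, so the following is a minimal sketch of the commonly used min-max-normalised regularity score, assuming per-frame reconstruction errors are already available.

```python
import numpy as np

def regularity_score(errors):
    """Map per-frame reconstruction errors to a [0, 1] regularity score.

    Frames the autoencoder reconstructs poorly (high error) receive a low
    score and can be flagged as anomalous. The min-max normalisation here
    follows common practice for reconstruction-based video anomaly
    detection; the paper's exact scoring rule may differ.
    """
    errors = np.asarray(errors, dtype=float)
    e_min, e_max = errors.min(), errors.max()
    abnormality = (errors - e_min) / (e_max - e_min + 1e-12)
    return 1.0 - abnormality

# Toy example: frame 2 has a reconstruction-error spike.
scores = regularity_score([0.10, 0.12, 0.95, 0.11])
print(scores.argmin())  # index of the least-regular (most anomalous) frame
```

Thresholding this score (rather than the raw error) makes the decision boundary comparable across videos with different overall error magnitudes.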
Action Recognition in Videos: from Motion Capture Labs to the Web
This paper presents a survey of human action recognition approaches based on
visual data recorded from a single video camera. We propose an organizing
framework that highlights the evolution of the area, with techniques
moving from heavily constrained motion capture scenarios towards more
challenging, realistic, "in the wild" videos. The proposed organization is
based on the representation used as input for the recognition task, emphasizing
the hypotheses assumed and, thus, the constraints imposed on the type of video
that each technique is able to address. Making these hypotheses and
constraints explicit renders the framework particularly useful for selecting a method, given
an application. Another advantage of the proposed organization is that it
allows categorizing the newest approaches seamlessly alongside traditional ones, while
providing an insightful perspective of the evolution of the action recognition
task up to now. That perspective is the basis for the discussion at the end of
the paper, where we also present the main open issues in the area.
Comment: Preprint submitted to CVIU, survey paper, 46 pages, 2 figures, 4 tables
CoMaL Tracking: Tracking Points at the Object Boundaries
Traditional point tracking algorithms such as the KLT use local 2D
information aggregation for feature detection and tracking, due to which their
performance degrades at the object boundaries that separate multiple objects.
Recently, CoMaL Features have been proposed to handle such cases. However, the
original work used a simple tracking framework in which the points are re-detected in
each frame and matched. This is inefficient and may also lose many points that
are not re-detected in the next frame. We propose a novel tracking algorithm to
accurately and efficiently track CoMaL points. For this, the level line segment
associated with the CoMaL points is matched to MSER segments in the next frame
using shape-based matching and the matches are further filtered using
texture-based matching. Experiments show improvements over a simple
re-detect-and-match framework as well as KLT in terms of speed/accuracy on
different real-world applications, especially at the object boundaries.Comment: 10 pages, 10 figures, to appear in 1st Joint BMTT-PETS Workshop on
Tracking and Surveillance, CVPR 201
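The core matching step compares the shape of a level line segment from one frame against candidate MSER boundary segments in the next. The paper's exact matching criterion is not given in the abstract, so the sketch below uses a symmetric Chamfer distance between 2D point sets as a hedged stand-in for shape-based matching.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two 2D point sets (N,2) and (M,2).

    Used here as a stand-in for the shape-based matching step: a level
    line segment from one frame is compared against candidate boundary
    segments in the next frame, and the lowest-distance candidate is
    taken as the match. The actual CoMaL matching criterion may differ.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

segment = [(0, 0), (1, 0), (2, 0)]
candidates = [
    [(0, 5), (1, 5), (2, 5)],       # far away
    [(0, 0.1), (1, 0.1), (2, 0.1)]  # nearly identical shape
]
best = min(range(len(candidates)),
           key=lambda i: chamfer_distance(segment, candidates[i]))
print(best)  # index of the best-matching candidate
```

In the full pipeline, the surviving shape matches would then be filtered by the texture-based verification step mentioned above.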
STV-based Video Feature Processing for Action Recognition
In comparison to still-image-based processing, video features can provide rich and intuitive information about dynamic events occurring over a period of time, such as human actions, crowd behaviours, and other changes in subject patterns. Although substantial progress has been made in image processing over the last decade, with successful applications in face matching and object recognition, video-based event detection remains one of the most difficult challenges in computer vision research due to its complex continuous or discrete input signals, arbitrary dynamic feature definitions, and often ambiguous analytical methods. In this paper, a Spatio-Temporal Volume (STV) and region intersection (RI) based 3D shape-matching method is proposed to facilitate the definition and recognition of human actions recorded in videos. The distinctive characteristics and performance gains of the devised approach stem from a coefficient factor-boosted 3D region intersection and matching mechanism developed in this research. This paper also reports an investigation into techniques for efficient STV data filtering to reduce the number of voxels (volumetric pixels) that need to be processed in each operational cycle of the implemented system. The encouraging features and improvements in operational performance registered in the experiments are discussed at the end.
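The region-intersection idea can be illustrated with a simple voxel-overlap score between two boolean spatio-temporal volumes. This is a minimal sketch: the coefficient factor-boosted weighting described in the abstract is omitted, and plain intersection-over-union stands in for the full matching mechanism.

```python
import numpy as np

def stv_overlap(a, b):
    """Jaccard-style overlap of two boolean spatio-temporal volumes.

    Each volume is a (T, H, W) occupancy grid of voxels swept out by a
    moving silhouette over time; the intersection-over-union is a simple
    proxy for region-intersection matching (the paper's coefficient-
    boosted weighting is omitted here).
    """
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# Two 4x4x4 volumes sharing half of their occupied voxels.
a = np.zeros((4, 4, 4), bool); a[:, :2, :] = True
b = np.zeros((4, 4, 4), bool); b[:, 1:3, :] = True
print(stv_overlap(a, b))
```

Voxel filtering, as investigated in the paper, would shrink these grids before the overlap computation to cut per-cycle cost.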
Social Scene Understanding: End-to-End Multi-Person Action Localization and Collective Activity Recognition
We present a unified framework for understanding human social behaviors in
raw image sequences. Our model jointly detects multiple individuals, infers
their social actions, and estimates the collective actions with a single
feed-forward pass through a neural network. We propose a single architecture
that does not rely on external detection algorithms but rather is trained
end-to-end to generate dense proposal maps that are refined via a novel
inference scheme. The temporal consistency is handled via a person-level
matching Recurrent Neural Network. The complete model takes as input a sequence
of frames and outputs detections along with the estimates of individual actions
and collective activities. We demonstrate state-of-the-art performance of our
algorithm on multiple publicly available benchmarks.
Going Deeper into Action Recognition: A Survey
Understanding human actions in visual data is tied to advances in
complementary research areas including object recognition, human dynamics,
domain adaptation and semantic segmentation. Over the last decade, human action
analysis evolved from earlier schemes that are often limited to controlled
environments to nowadays advanced solutions that can learn from millions of
videos and apply to almost all daily activities. Given the broad range of
applications from video surveillance to human-computer interaction, scientific
milestones in action recognition are being achieved ever more rapidly, quickly
rendering obsolete what was once considered state of the art. This motivated us to
provide a comprehensive review of the notable steps taken towards recognizing
human actions. To this end, we start our discussion with the pioneering methods
that use handcrafted representations, and then, navigate into the realm of deep
learning based approaches. We aim to remain objective throughout this survey,
touching upon encouraging improvements as well as inevitable setbacks, in the
hope of raising fresh questions and motivating new research directions for the
reader.
Crowd Analysis Using Local Neighborhood Coherence
Many crowd analysis methods based on computer vision have been developed in recent years. This dissertation presents an approach that explores characteristics inherent to human crowds, namely proxemics and neighborhood relationships, to extract crowd features and use them for crowd flow estimation and for anomaly detection and localization. Given the optical flow produced by any method, the proposed approach compares the similarity of each flow vector and its neighborhood using the Mahalanobis distance, which can be obtained efficiently using integral images. This similarity value is then used either to filter the original optical flow or to extract features that describe the crowd behavior at different resolutions, depending on the radius of the personal space selected in the analysis. To show that the extracted features are indeed relevant, we tested several classifiers in the context of abnormality detection; more precisely, we used recurrent neural networks, dense neural networks, support vector machines, random forests, and extremely randomized trees. The two developed approaches (crowd flow estimation and abnormality detection) were tested on publicly available datasets involving crowded human scenarios and compared with state-of-the-art methods.
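The neighborhood-coherence idea can be sketched directly: compare each optical-flow vector against the statistics of its local window via the Mahalanobis distance. This is a minimal, unaccelerated version assuming an (H, W, 2) flow field; the integral-image speed-up described in the dissertation is omitted for clarity.

```python
import numpy as np

def neighborhood_mahalanobis(flow, y, x, radius=2):
    """Mahalanobis distance of one flow vector to its local neighborhood.

    flow is an (H, W, 2) optical-flow field. The vector at (y, x) is
    compared to the mean and covariance of vectors within `radius`
    (a square window standing in for the personal-space region); small
    distances indicate locally coherent motion. The integral-image
    acceleration from the dissertation is omitted for clarity.
    """
    h, w, _ = flow.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    patch = flow[y0:y1, x0:x1].reshape(-1, 2)
    mu = patch.mean(axis=0)
    cov = np.cov(patch.T) + 1e-6 * np.eye(2)  # regularised covariance
    d = flow[y, x] - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Coherent rightward flow with one outlier vector moving upward.
flow = np.tile([1.0, 0.0], (8, 8, 1))
flow[4, 4] = [0.0, 1.0]
print(neighborhood_mahalanobis(flow, 4, 4) >
      neighborhood_mahalanobis(flow, 1, 1))  # True
```

Thresholding this distance yields the filtering step, while pooling it over regions at several window radii yields the multi-resolution crowd features fed to the classifiers.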