
    Online real-time crowd behavior detection in video sequences

    Automatically detecting events in crowded scenes is a challenging task in computer vision. A number of offline approaches have been proposed for crowd behavior detection; however, the offline assumption limits their application in real-world video surveillance systems. In this paper, we propose an online, real-time method for detecting events in crowded video sequences. The proposed approach combines visual feature extraction with image segmentation and works without the need for a training phase. A quantitative experimental evaluation has been carried out on multiple publicly available video sequences, containing data from various crowd scenarios and different types of events, to demonstrate the effectiveness of the approach.
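    The abstract above describes a training-free, online pipeline built on visual features and segmentation. As an illustration of the general idea, not the authors' exact method, the sketch below flags frames whose global optical-flow energy spikes relative to a sliding window; the detector, threshold factor, and window size are all assumptions.

```python
# Minimal sketch of a training-free, online crowd event detector: dense
# optical flow is computed per frame, and a sudden jump in global motion
# energy relative to recent history is flagged. All parameter values are
# illustrative assumptions, not values from the paper.
import cv2
import numpy as np

def detect_events(video_path, window=30, k=3.0):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    history, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        energy = np.linalg.norm(flow, axis=2).mean()  # mean flow magnitude
        if len(history) >= window:
            mu, sigma = np.mean(history), np.std(history) + 1e-6
            if energy > mu + k * sigma:  # anomalous spike in crowd motion
                print(f"event at frame {frame_idx}")
        history = (history + [energy])[-window:]  # keep a sliding window
        prev_gray = gray
        frame_idx += 1
    cap.release()
```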

    Learning Deep Representations of Appearance and Motion for Anomalous Event Detection

    We present a novel unsupervised deep learning framework for anomalous event detection in complex video scenes. While most existing works merely use hand-crafted appearance and motion features, we propose Appearance and Motion DeepNet (AMDN), which utilizes deep neural networks to automatically learn feature representations. To exploit the complementary information of both appearance and motion patterns, we introduce a novel double fusion framework that combines the benefits of traditional early fusion and late fusion strategies. Specifically, stacked denoising autoencoders are proposed to separately learn appearance and motion features as well as a joint representation (early fusion). Based on the learned representations, multiple one-class SVM models are used to predict the anomaly scores of each input, which are then integrated with a late fusion strategy for final anomaly detection. We evaluate the proposed method on two publicly available video surveillance datasets, showing competitive performance with respect to state-of-the-art approaches.
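    As a hedged sketch of the late-fusion stage AMDN describes, the snippet below fits one one-class SVM per modality (appearance, motion, joint) on feature matrices assumed to come from pre-trained stacked denoising autoencoders, then averages the negated decision scores. Equal fusion weights and the SVM hyperparameters are illustrative assumptions, not values from the paper.

```python
# Late fusion of one-class SVM anomaly scores across modalities. The
# feature matrices are assumed to be produced elsewhere by denoising
# autoencoders; this only illustrates the scoring and fusion step.
import numpy as np
from sklearn.svm import OneClassSVM

def fuse_anomaly_scores(train_feats, test_feats, nu=0.1):
    """train_feats/test_feats: lists of (n_samples, dim) arrays,
    one per modality (appearance, motion, joint representation)."""
    scores = []
    for Xtr, Xte in zip(train_feats, test_feats):
        svm = OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(Xtr)
        # decision_function is higher for normal samples, so negate it
        scores.append(-svm.decision_function(Xte))
    return np.mean(scores, axis=0)  # late fusion by simple averaging
```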

    Abnormal Event Detection in Videos using Spatiotemporal Autoencoder

    We present an efficient method for detecting anomalies in videos. Recent applications of convolutional neural networks have shown the promise of convolutional layers for object detection and recognition, especially in images. However, convolutional neural networks are supervised and require labels as learning signals. We propose a spatiotemporal architecture for anomaly detection in videos, including crowded scenes. Our architecture comprises two main components: one for spatial feature representation and one for learning the temporal evolution of the spatial features. Experimental results on the Avenue, Subway, and UCSD benchmarks confirm that the detection accuracy of our method is comparable to state-of-the-art methods at a considerable speed of up to 140 fps.
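    The following is a minimal PyTorch sketch of a reconstruction-based spatiotemporal autoencoder. The paper's temporal component learns the evolution of spatial features (typically with recurrent layers); here plain 3D convolutions stand in as a deliberate simplification, so this shows the scoring idea, where high reconstruction error on a model trained only on normal clips signals an anomaly, rather than the exact architecture.

```python
# Simplified spatiotemporal autoencoder: 3D convolutions substitute for
# the recurrent temporal component as an assumption for illustration.
import torch
import torch.nn as nn

class SpatioTemporalAE(nn.Module):
    def __init__(self):
        super().__init__()
        # input: (batch, 1, T, H, W) clips of grayscale frames in [0, 1]
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, clip):
    # Mean squared reconstruction error; large values suggest an anomaly.
    with torch.no_grad():
        recon = model(clip)
    return torch.mean((clip - recon) ** 2).item()
```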

    Video anomaly detection and localization by local motion based joint video representation and OCELM

    Human-based video analysis is becoming increasingly burdensome due to the ubiquitous use of surveillance cameras and the explosive growth of video data. This paper proposes a novel approach to detect and localize video anomalies automatically. For video feature extraction, video volumes are jointly represented by two novel local-motion-based video descriptors, SL-HOF and ULGP-OF. The SL-HOF descriptor captures the spatial distribution of 3D local regions' motion in the spatio-temporal cuboid extracted from video, which implicitly reflects the structural information of the foreground and depicts foreground motion more precisely than the standard HOF descriptor. To locate the video foreground more accurately, we propose a new Robust PCA-based foreground localization scheme. The ULGP-OF descriptor, which seamlessly combines the classic 2D texture descriptor LGP with optical flow, is proposed to describe the motion statistics of local region texture in the areas located by the foreground localization scheme. Both SL-HOF and ULGP-OF are shown to be more discriminative than existing video descriptors in anomaly detection. To model the features of normal video events, we introduce the newly emerged one-class Extreme Learning Machine (OCELM) as the data description algorithm. With a tremendous reduction in training time, OCELM yields comparable or better performance than existing algorithms such as the classic OCSVM, which makes our approach easier to update and more applicable to fast learning from rapidly generated surveillance data. The proposed approach is tested on the UCSD Ped1, Ped2, and UMN datasets, and experimental results show that it achieves state-of-the-art results in both video anomaly detection and localization. This work was supported by the National Natural Science Foundation of China (Project nos. 60970034, 61170287, 61232016).
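    To make the OCELM idea concrete, here is a minimal one-class Extreme Learning Machine: a random hidden layer maps descriptors to features, output weights are solved in closed form to push all normal training samples toward a target output of 1, and the deviation from that target serves as the anomaly score. The hidden-layer size and regularization constant are assumptions for illustration.

```python
# Minimal one-class Extreme Learning Machine (OCELM) sketch. Training is a
# single regularized least-squares solve, which is why ELM variants train
# far faster than kernel methods like OCSVM.
import numpy as np

class OneClassELM:
    def __init__(self, n_hidden=200, C=1.0, seed=0):
        self.n_hidden, self.C = n_hidden, C
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)  # fixed random feature map

    def fit(self, X):
        d = X.shape[1]
        self.W = self.rng.standard_normal((d, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        y = np.ones(len(X))  # map all normal samples to target output 1
        # Closed-form regularized least squares for the output weights.
        A = H.T @ H + np.eye(self.n_hidden) / self.C
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def score(self, X):
        # Deviation from the target output 1 is the anomaly score.
        return np.abs(self._hidden(X) @ self.beta - 1.0)
```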

    Anomaly Detection of Events in Crowded Environment and Study of Various Background Subtraction Methods

    We address the detection and localization of anomalous behavior in videos of crowded areas, where anomalies deviate from a dominant pattern. Appearance and motion information are taken into account to robustly identify different kinds of anomaly across a wide range of scenes. Our approach, based on a histogram of oriented gradients (HOG) and a Markov random field, captures the varying dynamics of the crowded environment; together, these components effectively recognize and characterize each frame of each scene. Anomaly detection using an artificial neural network incorporates both appearance and motion features, extracted within the spatio-temporal domain of moving pixels, which ensures robustness to local noise and thus increases accuracy in detecting local anomalies at low computational cost. To extract a region of interest, the background must be subtracted; this is done with various methods such as the weighted moving mean, the Gaussian mixture model, and kernel density estimation.
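    A hedged sketch of the background-subtraction step compared above: OpenCV's built-in Gaussian-mixture (MOG2) subtractor covers the GMM case, its KNN subtractor, a related nonparametric density-estimation method, stands in for kernel density estimation, and the weighted moving mean is approximated with cv2.accumulateWeighted. The threshold and learning rate are assumptions.

```python
# Three foreground masks per frame, one per background-subtraction method,
# for side-by-side comparison. Parameter values are illustrative.
import cv2
import numpy as np

def foreground_masks(video_path, alpha=0.05, diff_thresh=25):
    mog2 = cv2.createBackgroundSubtractorMOG2()  # Gaussian mixture model
    knn = cv2.createBackgroundSubtractorKNN()    # nonparametric, KDE-like
    cap = cv2.VideoCapture(video_path)
    running_mean = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if running_mean is None:
            running_mean = gray.copy()
        # Weighted moving mean of the background, then threshold the diff.
        cv2.accumulateWeighted(gray, running_mean, alpha)
        mean_mask = (np.abs(gray - running_mean) > diff_thresh)
        mean_mask = mean_mask.astype(np.uint8) * 255
        yield mog2.apply(frame), knn.apply(frame), mean_mask
    cap.release()
```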

    Video anomaly detection with compact feature sets for online performance

    Over the past decade, video anomaly detection has been explored with remarkable results. However, research on methodologies suitable for online performance is still very limited. In this paper, we present an online framework for video anomaly detection. The key aspect of our framework is a compact set of highly descriptive features, extracted from a novel cell structure that helps to define support regions in a coarse-to-fine fashion. Based on the scene's activity, only a limited number of support regions are processed, thus limiting the size of the feature set. Specifically, we use foreground occupancy and optical flow features. The framework uses an inference mechanism that evaluates the compact feature set via Gaussian Mixture Models, Markov Chains, and Bag-of-Words in order to detect abnormal events. Our framework also considers the joint response of the models in the local spatio-temporal neighborhood to increase detection accuracy. We test our framework on popular existing data sets and on a new data set comprising a wide variety of realistic videos captured by surveillance cameras. This particular data set includes surveillance videos depicting criminal activities, car accidents, and other dangerous situations. Evaluation results show that our framework outperforms other online methods and attains a very competitive detection performance compared with state-of-the-art non-online methods.
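    As an illustration of the compact, cell-based feature set described above, not the authors' exact cell structure or inference chain, the sketch below divides each frame into a coarse grid, records foreground occupancy and mean optical-flow magnitude per cell, and scores frames with a Gaussian Mixture Model fit on normal data; the Markov-chain and Bag-of-Words components are omitted. Grid size and GMM settings are assumptions.

```python
# Per-cell foreground occupancy and flow magnitude as a compact frame
# descriptor, scored by a GMM trained on normal frames.
import numpy as np
from sklearn.mixture import GaussianMixture

def cell_features(fg_mask, flow_mag, grid=(8, 8)):
    """fg_mask: binary foreground mask; flow_mag: per-pixel flow magnitude."""
    h, w = fg_mask.shape
    gh, gw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = (slice(i * gh, (i + 1) * gh), slice(j * gw, (j + 1) * gw))
            feats.append(fg_mask[cell].mean())   # foreground occupancy
            feats.append(flow_mag[cell].mean())  # optical-flow energy
    return np.array(feats)

def fit_normal_model(normal_feats, n_components=5):
    # normal_feats: (n_frames, n_cells * 2) matrix from normal video only.
    return GaussianMixture(n_components=n_components).fit(normal_feats)

def is_abnormal(gmm, feat, threshold):
    # Low log-likelihood under the normal model suggests an abnormal frame.
    return gmm.score_samples(feat[None])[0] < threshold
```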

    Online video-based abnormal detection using highly motion techniques and statistical measures

    At the core of video surveillance are abnormality detection approaches, which have proven substantially effective in detecting abnormal incidents without prior knowledge of those incidents. Based on state-of-the-art research, it is evident that there is a trade-off between frame processing time and detection accuracy in such approaches. Therefore, the primary challenge is to balance this trade-off suitably by utilizing few but highly descriptive features to fulfill online performance requirements while maintaining a high accuracy rate. In this study, we propose a new framework that balances detection accuracy against video processing time by employing two efficient motion techniques, specifically foreground energy and optical flow energy. Moreover, we use several statistical measures of the motion features to obtain a robust inference method that distinguishes abnormal incidents from normal ones. The performance of this framework has been extensively evaluated in terms of detection accuracy, the area under the curve (AUC), and frame processing time. Simulation results and comparisons with ten relevant online and non-online frameworks demonstrate that our framework achieves superior performance, attaining high accuracy while simultaneously keeping processing time low.
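    Below is a minimal sketch of the two motion measurements this framework relies on, foreground energy and optical-flow energy, summarized with simple sliding-window statistics. The paper's own statistical measures and inference rule are not specified here; the subtractor choice, window length, and which statistics are computed are all illustrative assumptions.

```python
# Per-frame foreground energy (fraction of moving pixels) and optical-flow
# energy, plus sliding-window summary statistics over both signals.
import cv2
import numpy as np

def motion_statistics(video_path, window=25):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2()
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    fg_energy, flow_energy = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        fg = subtractor.apply(frame)
        fg_energy.append((fg > 0).mean())  # foreground energy
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flow_energy.append(np.linalg.norm(flow, axis=2).mean())
        prev_gray = gray
    cap.release()
    fg, fl = np.array(fg_energy), np.array(flow_energy)
    # Example sliding-window statistics fed to the inference stage.
    return {
        "fg_mean": np.convolve(fg, np.ones(window) / window, mode="valid"),
        "flow_std": np.array([fl[i:i + window].std()
                              for i in range(len(fl) - window + 1)]),
    }
```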