6 research outputs found

    Multi-scale Spatial-temporal Interaction Network for Video Anomaly Detection

    Video anomaly detection (VAD) is an essential yet challenging task in signal processing. Since certain anomalies cannot be detected by analyzing temporal or spatial information alone, the interaction between the two types of information is considered crucial for VAD. However, current dual-stream architectures either limit this interaction to the bottleneck of an autoencoder or incorporate background pixels irrelevant to anomalies into the interaction. To this end, we propose a multi-scale spatial-temporal interaction network (MSTI-Net) for VAD. First, to pay particular attention to objects and reconcile the significant semantic differences between the two types of information, we propose an attention-based spatial-temporal fusion module (ASTM) as a substitute for conventional direct fusion. Furthermore, we inject multiple ASTM-based connections between the appearance and motion pathways of a dual-stream network to facilitate spatial-temporal interaction at all possible scales. Finally, the regular information learned at multiple scales is recorded in memory to enhance the differentiation between anomalies and normal events during the testing phase. Solid experimental results on three standard datasets validate the effectiveness of our approach, which achieves AUCs of 96.8% on UCSD Ped2, 87.6% on CUHK Avenue, and 73.9% on the ShanghaiTech dataset.
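    To make the fusion idea concrete, below is a minimal PyTorch sketch of attention-based fusion of appearance and motion features. The module name ASTMSketch, the channel-attention design, and all hyperparameters are illustrative assumptions, not the authors' ASTM implementation.

```python
# A minimal sketch of attention-based spatial-temporal fusion, assuming the
# ASTM reweights and combines appearance and motion feature maps; the actual
# MSTI-Net module may differ.
import torch
import torch.nn as nn

class ASTMSketch(nn.Module):
    """Hypothetical fusion block: attention weights derived from both streams."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, appearance: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        # Pool the concatenated streams, derive per-channel weights, then fuse.
        w = self.attn(self.pool(torch.cat([appearance, motion], dim=1)))
        return appearance * w + motion * (1 - w)

fused = ASTMSketch(64)(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(fused.shape)  # torch.Size([1, 64, 32, 32])
```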

    Cloze Test Helps: Effective Video Anomaly Detection via Learning to Complete Video Events

    As a vital topic in media content interpretation, video anomaly detection (VAD) has made fruitful progress via deep neural networks (DNNs). However, existing methods usually follow a reconstruction or frame-prediction routine. They suffer from two gaps: (1) they cannot localize video activities in a manner that is both precise and comprehensive, and (2) they lack sufficient ability to utilize high-level semantics and temporal context information. Inspired by the cloze test frequently used in language study, we propose a brand-new VAD solution named Video Event Completion (VEC) to bridge the gaps above. First, we propose a novel pipeline to achieve both precise and comprehensive enclosure of video activities: appearance and motion are exploited as mutually complementary cues to localize regions of interest (RoIs), and a normalized spatio-temporal cube (STC) is built from each RoI as a video event, which lays the foundation of VEC and serves as its basic processing unit. Second, we encourage the DNN to capture high-level semantics by solving a visual cloze test: a certain patch of an STC is erased to yield an incomplete event (IE), and the DNN learns to restore the original video event from the IE by inferring the missing patch. Third, to incorporate richer motion dynamics, another DNN is trained to infer the erased patches' optical flow. Finally, two ensemble strategies using different types of IEs and modalities are proposed to boost VAD performance, so as to fully exploit the temporal context and modality information for VAD. VEC consistently outperforms state-of-the-art methods by a notable margin (typically 1.5%-5% AUROC) on commonly used VAD benchmarks. Our code and results can be verified at github.com/yuguangnudt/VEC_VAD.
    Comment: To be published as an oral paper in Proceedings of the 28th ACM International Conference on Multimedia (ACM MM '20). 9 pages, 7 figures.
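    The visual cloze mechanics can be illustrated with a small sketch: erase one temporal patch of an STC to form an IE, then train a network to restore it, using the restoration error as an anomaly score at test time. The helper make_incomplete_event and the toy completion network below are hypothetical stand-ins, assuming an STC of shape [T, C, H, W]; they are not the authors' code.

```python
# Sketch of the "visual cloze" idea: blank a patch, learn to complete it.
import torch
import torch.nn as nn

def make_incomplete_event(stc: torch.Tensor, erase_t: int):
    """Erase one temporal patch of an STC ([T, C, H, W]) to form an IE."""
    target = stc[erase_t].clone()
    ie = stc.clone()
    ie[erase_t] = 0.0  # blank out the patch the network must infer
    return ie, target

# Tiny stand-in completion network: maps the flattened IE to the missing patch.
T, C, H, W = 5, 3, 32, 32
net = nn.Sequential(nn.Flatten(), nn.Linear(T * C * H * W, C * H * W))

stc = torch.rand(T, C, H, W)
ie, target = make_incomplete_event(stc, erase_t=T // 2)
pred = net(ie.unsqueeze(0)).view(1, C, H, W)
loss = nn.functional.mse_loss(pred, target.unsqueeze(0))
loss.backward()  # at test time, a large completion error flags an anomaly
```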

    SVD-GAN for real-time unsupervised video anomaly detection

    Real-time unsupervised anomaly detection from videos is challenging due to the uncertainty in the occurrence and definition of abnormal events. To overcome this ambiguity, an unsupervised adversarial learning model is proposed to detect such unusual events. The proposed end-to-end system is based on a Generative Adversarial Network (GAN) architecture with spatiotemporal feature learning and a new Singular Value Decomposition (SVD) loss function for robust reconstruction and video anomaly detection. The loss employs efficient low-rank approximations of the matrices involved to drive the convergence of the model. During training, the model strives to learn the distribution of normal data. Anomalies are then detected as frames whose reconstruction error under that distribution shows a significant deviation. The model is efficient and lightweight due to the adoption of depth-wise separable convolutions. The complete system is validated on several benchmark datasets and proven to be robust for complex video anomaly detection, in terms of both AUC and Equal Error Rate (EER).
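    One plausible reading of the SVD loss is that reconstruction quality is compared on low-rank approximations of the frame matrices. The sketch below assumes that reading and uses torch.svd_lowrank for the truncated decomposition; the function svd_lowrank_loss and its rank parameter are illustrative, not the paper's exact formulation.

```python
# Hedged sketch of an SVD-based reconstruction loss via low-rank approximation.
import torch

def svd_lowrank_loss(real: torch.Tensor, fake: torch.Tensor, rank: int = 8) -> torch.Tensor:
    """Compare rank-k approximations of real vs. reconstructed frame matrices."""
    def lowrank(x: torch.Tensor) -> torch.Tensor:
        # torch.svd_lowrank gives an efficient truncated decomposition.
        u, s, v = torch.svd_lowrank(x, q=rank)
        return u @ torch.diag_embed(s) @ v.transpose(-2, -1)
    return torch.mean((lowrank(real) - lowrank(fake)) ** 2)

real = torch.rand(4, 64, 64)   # batch of grayscale frames
fake = torch.rand(4, 64, 64, requires_grad=True)
loss = svd_lowrank_loss(real, fake)
loss.backward()
```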

    A Novel Unsupervised Video Anomaly Detection Framework Based on Optical Flow Reconstruction and Erased Frame Prediction

    Reconstruction-based and prediction-based approaches are widely used for video anomaly detection (VAD) in smart-city surveillance applications. However, neither approach can effectively utilize the rich contextual information that exists in videos, which makes it difficult to accurately perceive anomalous activities. In this paper, we exploit the idea of a training model based on the “cloze test” strategy from natural language processing (NLP) and introduce a novel unsupervised learning framework that encodes both motion and appearance information at the object level. Specifically, to store the normal modes of video activity reconstructions, we first design an optical flow memory network with skip connections. Second, we build a space–time cube (STC) as the basic processing unit of the model and erase a patch in the STC to form the frame to be reconstructed, yielding a so-called “incomplete event” (IE) to be completed. On this basis, a conditional autoencoder is utilized to capture the close correspondence between optical flow and the STC, and the model predicts the erased patches in IEs based on the context of the preceding and following frames. Finally, we employ a generative adversarial network (GAN)-based training method to improve VAD performance: by distinguishing the predicted erased optical flow and erased video frame from the real ones, the proposed method yields more reliable anomaly detection results and helps reconstruct the original video in the IE. Comparative experiments conducted on the benchmark UCSD Ped2, CUHK Avenue, and ShanghaiTech datasets demonstrate AUROC scores of 97.7%, 89.7%, and 75.8%, respectively.
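    The conditional autoencoder step can be pictured as a network that receives optical flow as a condition alongside the masked frame and decodes the erased content. The CondAE below is a minimal sketch assuming 2-channel flow and 3-channel frames; the architecture and names are illustrative, not the paper's model.

```python
# Minimal conditional autoencoder: predict the erased frame given its flow.
import torch
import torch.nn as nn

class CondAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder takes the flow condition concatenated with the masked frame.
        self.enc = nn.Sequential(
            nn.Conv2d(2 + 3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, flow: torch.Tensor, masked_frame: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(torch.cat([flow, masked_frame], dim=1)))

model = CondAE()
flow, masked = torch.rand(1, 2, 64, 64), torch.rand(1, 3, 64, 64)
pred = model(flow, masked)  # predicted erased frame, shape [1, 3, 64, 64]
```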

    Novel statistical modeling methods for traffic video analysis

    Video analysis is an active and rapidly expanding research area in computer vision and artificial intelligence due to its broad applications in modern society. Many methods have been proposed to analyze videos, but many challenging factors remain untackled. In this dissertation, four statistical modeling methods are proposed to address challenging traffic video analysis problems under adverse illumination and weather conditions.

    First, a new foreground detection method is presented to detect the foreground objects in videos. A novel Global Foreground Modeling (GFM) method, which estimates a global probability density function for the foreground and applies the Bayes decision rule for model selection, is proposed to model the foreground globally. A Local Background Modeling (LBM) method is applied by choosing the most significant Gaussian density in the Gaussian mixture model to model the background locally for each pixel. In addition, to mitigate the correlation effects of the Red, Green, and Blue (RGB) color space on the independence assumption among the color component images, other color spaces are investigated for feature extraction. To further enhance the discriminatory power of the input feature vector, the horizontal and vertical Haar wavelet features and the temporal information are integrated into the color features to define a new 12-dimensional feature vector space. Finally, the Bayes classifier is applied to classify pixels as foreground or background.

    Second, a novel moving cast shadow detection method is presented to detect and remove cast shadows from the foreground. Specifically, a set of new chromatic criteria is presented to detect candidate shadow pixels in the Hue, Saturation, and Value (HSV) color space. A new shadow region detection method is then proposed to cluster the candidate shadow pixels into shadow regions. A statistical shadow model, which uses a single Gaussian distribution to model the shadow class, is presented to classify shadow pixels. Additionally, an aggregated shadow detection strategy is presented to integrate the shadow detection results and remove the shadows from the foreground.

    Third, a novel statistical modeling method is presented to solve the automated road recognition problem for Region of Interest (RoI) detection in traffic video analysis. A temporal-feature-guided statistical modeling method is proposed for road modeling, and a model pruning strategy is applied to estimate the road model. A new road region detection method is then presented to detect the road regions in the video: it applies discriminant functions to classify each pixel in the estimated background image into a road class or a non-road class. The proposed method provides an intra-cognitive communication mode between the RoI selection and video analysis systems.

    Fourth, a novel method is introduced to detect unsafe, anomalous driving behaviors in videos. A new Multiple Object Tracking (MOT) method is proposed to extract the velocities and trajectories of moving foreground objects. The new MOT method is a motion-based tracking method that integrates temporal and spatial features. A novel Gaussian Local Velocity (GLV) modeling method is then presented to model normal moving behavior in traffic videos; the GLV model is built for every location in the video frame and updated online. Finally, a discriminant function is proposed to detect anomalous driving behaviors.

    To assess the feasibility of the proposed statistical modeling methods, several popular public video datasets, as well as real traffic videos from the New Jersey Department of Transportation (NJDOT), are used. The experimental results show the effectiveness and feasibility of the proposed methods.
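    The GLV idea, a per-location Gaussian over normal velocities that is updated online and queried with a discriminant function, might be sketched as below. GLVSketch, its learning rate, and the log-likelihood convention are illustrative assumptions in the spirit of the abstract, not the dissertation's formulation.

```python
# Per-grid-cell running Gaussian over observed speeds; a low log-likelihood
# at test time flags an anomalous driving behavior at that location.
import numpy as np

class GLVSketch:
    def __init__(self, h: int, w: int, lr: float = 0.05):
        self.mean = np.zeros((h, w))
        self.var = np.ones((h, w))
        self.lr = lr

    def update(self, y: int, x: int, speed: float) -> None:
        # Online exponential update of the cell's Gaussian parameters.
        d = speed - self.mean[y, x]
        self.mean[y, x] += self.lr * d
        self.var[y, x] = (1 - self.lr) * self.var[y, x] + self.lr * d * d

    def log_likelihood(self, y: int, x: int, speed: float) -> float:
        v = max(self.var[y, x], 1e-6)
        return -0.5 * (np.log(2 * np.pi * v) + (speed - self.mean[y, x]) ** 2 / v)

glv = GLVSketch(60, 80)
for _ in range(100):                       # fit on "normal" traffic speeds
    glv.update(10, 20, speed=float(np.random.normal(5.0, 0.5)))
print(glv.log_likelihood(10, 20, speed=25.0))  # very negative => anomalous
```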