
    Intelligent video anomaly detection and classification using faster RCNN with deep reinforcement learning model

    Recently, intelligent video surveillance applications have become essential to public security through the use of computer vision technologies to investigate and understand long video streams. Anomaly detection and classification are considered major elements of intelligent video surveillance. The aim of anomaly detection is to automatically determine the existence of abnormalities within a short time period. Deep reinforcement learning (DRL) techniques, which integrate reinforcement learning and deep learning so that artificial agents learn knowledge and experience directly from actual data, can be employed for anomaly detection. With this motivation, this paper presents an Intelligent Video Anomaly Detection and Classification model using Faster RCNN with Deep Reinforcement Learning, called the IVADC-FDRL model. The presented IVADC-FDRL model operates in two major stages, namely anomaly detection and classification. First, the Faster RCNN model, with a Residual Network as its backbone, is applied as an object detector that detects anomalies as objects. Then, a deep Q-learning (DQL)-based DRL model is employed to classify the detected anomalies. To validate the anomaly detection and classification performance of the IVADC-FDRL model, an extensive set of experiments was carried out on the benchmark UCSD anomaly dataset. The experimental results showed that the IVADC-FDRL model outperforms the other compared methods, with maximum accuracies of 98.50% and 94.80% on the Test004 and Test007 datasets, respectively.
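    The classification stage described above treats labelling a detected anomaly as a reinforcement-learning problem. A minimal, tabular sketch of that DQL idea, assuming each detected object is reduced to a discrete feature id and using invented class names (the paper trains a deep Q-network on real detector features instead):

```python
import random

# Tabular sketch of a Q-learning classifier: the "state" is a discrete
# feature id for a detected anomaly, the "action" is a class label, and
# the reward is +1 for a correct label and -1 otherwise.  Episodes are
# one step long, so the update has no bootstrapped next-state term.
CLASSES = ["pedestrian", "cyclist", "vehicle"]   # hypothetical classes

def train_q_classifier(samples, episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    """samples: list of (state_id, true_class_index) pairs."""
    rng = random.Random(seed)
    q = {}                                   # (state, action) -> value
    for _ in range(episodes):
        state, label = rng.choice(samples)
        if rng.random() < epsilon:           # explore a random label
            action = rng.randrange(len(CLASSES))
        else:                                # exploit current Q-values
            action = max(range(len(CLASSES)),
                         key=lambda a: q.get((state, a), 0.0))
        reward = 1.0 if action == label else -1.0
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward - old)
    return q

def classify(q, state):
    return CLASSES[max(range(len(CLASSES)),
                       key=lambda a: q.get((state, a), 0.0))]

# Toy data: feature id 0 is always a cyclist, feature id 1 a vehicle.
data = [(0, 1), (1, 2)] * 10
q_table = train_q_classifier(data)
print(classify(q_table, 0))                  # learns "cyclist"
```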

    CHAD: Charlotte Anomaly Dataset

    In recent years, we have seen a significant interest in data-driven deep learning approaches for video anomaly detection, where an algorithm must determine if specific frames of a video contain abnormal behaviors. However, video anomaly detection is particularly context-specific, and the availability of representative datasets heavily limits real-world accuracy. Additionally, the metrics currently reported by most state-of-the-art methods often do not reflect how well the model will perform in real-world scenarios. In this article, we present the Charlotte Anomaly Dataset (CHAD). CHAD is a high-resolution, multi-camera anomaly dataset in a commercial parking lot setting. In addition to frame-level anomaly labels, CHAD is the first anomaly dataset to include bounding box, identity, and pose annotations for each actor. This is especially beneficial for skeleton-based anomaly detection, which is useful for its lower computational demand in real-world settings. CHAD is also the first anomaly dataset to contain multiple views of the same scene. With four camera views and over 1.15 million frames, CHAD is the largest fully annotated anomaly detection dataset including person annotations, collected from continuous video streams from stationary cameras for smart video surveillance applications. To demonstrate the efficacy of CHAD for training and evaluation, we benchmark two state-of-the-art skeleton-based anomaly detection algorithms on CHAD and provide comprehensive analysis, including both quantitative results and qualitative examination. The dataset is available at https://github.com/TeCSAR-UNCC/CHAD
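    CHAD's benchmarks report frame-level metrics, so a detector is judged by how well its per-frame anomaly scores rank abnormal frames above normal ones. A dependency-free sketch of the usual frame-level ROC-AUC via the rank-sum formulation, with made-up scores and labels:

```python
def frame_level_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic.

    scores: per-frame anomaly scores (higher = more anomalous);
    labels: 1 for an abnormal frame, 0 for a normal one.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both abnormal and normal frames")
    wins = 0.0
    for p in pos:                 # count correctly ordered (pos, neg) pairs
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5       # ties count half
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.2, 0.7]
labels = [1,   1,   0,   0,   0]
print(frame_level_auc(scores, labels))   # 1.0: perfect ranking
```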

    An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos

    Videos represent the primary source of information for surveillance applications and are available in large amounts, but in most cases they contain little or no annotation for supervised learning. This article reviews the state-of-the-art deep learning based methods for video anomaly detection and categorizes them based on the type of model and the criteria of detection. We also perform simple studies to understand the different approaches and provide criteria of evaluation for spatio-temporal anomaly detection.
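    Many of the surveyed unsupervised methods share one recipe: model normal frames only, then flag test frames the model fits poorly. An extreme simplification of that reconstruction-error idea, with a mean "template" of invented feature vectors standing in for a deep autoencoder and distance standing in for reconstruction error:

```python
import math

def fit_normal_model(normal_features):
    """Learn a mean template from features of normal frames only."""
    dim = len(normal_features[0])
    return [sum(f[i] for f in normal_features) / len(normal_features)
            for i in range(dim)]

def anomaly_score(mean, feature):
    """Distance to the template stands in for reconstruction error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(feature, mean)))

normal = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9]]   # invented normal features
model = fit_normal_model(normal)
# Frames far from everything seen during training score higher.
print(anomaly_score(model, [1.0, 1.0]) < anomaly_score(model, [5.0, 5.0]))
```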

    Spatio-temporal Texture Modelling for Real-time Crowd Anomaly Detection

    With rapidly increasing demands from the surveillance and security industries, crowd behaviour analysis has become one of the most actively pursued video event detection frontiers in the computer vision arena in recent years. This research has investigated innovative crowd behaviour detection approaches based on statistical crowd features extracted from video footage. In this paper, a new crowd video anomaly detection algorithm has been developed based on analysing the extracted spatio-temporal textures. The algorithm has been designed for real-time applications by deploying low-level statistical features and avoiding complicated machine learning and recognition processes. In the experiments, the system has been proven to be a valid solution for detecting anomalous behaviours without strong assumptions about the nature of the crowd, such as its subjects and density. The developed prototype shows improved adaptability and efficiency compared with the chosen benchmark systems.
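    A toy sketch of the low-level-statistics flavour described above: treat consecutive frames as flat grey-level lists, use mean absolute frame difference as a crude "temporal texture energy", and flag frames whose energy far exceeds the median. The thresholding rule and frame values are illustrative, not the paper's:

```python
def temporal_energy(prev_frame, frame):
    """Mean absolute grey-level change between two flattened frames."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def detect_anomalies(frames, k=3.0):
    """Return indices of frames whose energy exceeds k times the median."""
    energies = [temporal_energy(a, b) for a, b in zip(frames, frames[1:])]
    baseline = sorted(energies)[len(energies) // 2]  # median resists bursts
    return [i + 1 for i, e in enumerate(energies) if e > k * baseline]

calm = [[10, 10, 10], [11, 10, 10], [10, 11, 10]]
burst = [[90, 5, 80]]                    # sudden large change at frame 3
frames = calm + burst
print(detect_anomalies(frames))          # [3]
```

The per-frame cost is a single pass over the pixels with no training loop, which mirrors why such statistical features suit real-time use.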

    Crowd anomaly detection for automated video surveillance

    Video-based crowd behaviour detection aims to tackle challenging problems such as automatically identifying changing crowd behaviours in complex real-life situations. In this paper, real-time crowd anomaly detection algorithms have been investigated. Based on the spatio-temporal video volume concept, an innovative spatio-temporal texture model has been proposed in this research for its rich crowd pattern characteristics. By extracting and integrating these crowd textures from surveillance recordings, a redundant wavelet transform-based feature space can be deployed for behavioural template matching. Experiments show that abnormalities appearing in crowd scenes can be identified in real time by the devised method. This new approach is envisaged to facilitate a wide spectrum of crowd analysis applications by automating current Closed-Circuit Television (CCTV)-based surveillance systems.
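    The template-matching step above works in a wavelet-transformed feature space. A one-dimensional toy stand-in, assuming a single Haar level as the feature map and Euclidean distance as the matcher; the paper uses a redundant wavelet transform over 2-D crowd textures, and all texture values here are invented:

```python
import math

def haar_level(signal):
    """One Haar level: pairwise averages followed by pairwise differences."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs + dets

def template_distance(template, signal):
    """Match a texture against a behavioural template in wavelet space."""
    t, s = haar_level(template), haar_level(signal)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t, s)))

normal_texture  = [4, 6, 10, 12]         # stored "normal" template
similar_texture = [5, 5, 11, 11]
odd_texture     = [0, 20, 30, 1]
print(template_distance(normal_texture, similar_texture)
      < template_distance(normal_texture, odd_texture))   # True
```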

    Open-Vocabulary Video Anomaly Detection

    Video anomaly detection (VAD) with weak supervision has achieved remarkable performance in utilizing video-level labels to discriminate whether a video frame is normal or abnormal. However, current approaches are inherently limited to a closed-set setting and may struggle in open-world applications where the test data can contain anomaly categories unseen during training. A few recent studies attempt to tackle a more realistic setting, open-set VAD, which aims to detect unseen anomalies given seen anomalies and normal videos. However, such a setting focuses on predicting frame anomaly scores and cannot recognize the specific categories of anomalies, despite the fact that this ability is essential for building more informed video surveillance systems. This paper takes a step further and explores open-vocabulary video anomaly detection (OVVAD), in which we aim to leverage pre-trained large models to detect and categorize seen and unseen anomalies. To this end, we propose a model that decouples OVVAD into two mutually complementary tasks -- class-agnostic detection and class-specific classification -- and jointly optimizes both tasks. In particular, we devise a semantic knowledge injection module to introduce semantic knowledge from large language models for the detection task, and design a novel anomaly synthesis module to generate pseudo unseen anomaly videos with the help of large vision generation models for the classification task. This injected semantic knowledge and these synthesized anomalies substantially extend our model's capability in detecting and categorizing a variety of seen and unseen anomalies. Extensive experiments on three widely-used benchmarks demonstrate that our model achieves state-of-the-art performance on the OVVAD task.
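    The decoupling described above can be sketched in two stages: a class-agnostic score decides whether a frame is anomalous at all, and only then a class-specific stage picks the nearest category embedding. The embeddings, category names, and threshold below are purely illustrative; in the paper they come from large pre-trained language and vision models:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

CATEGORY_EMBEDDINGS = {              # hypothetical text embeddings
    "fighting": [1.0, 0.1, 0.0],
    "accident": [0.0, 1.0, 0.1],
}

def ovvad(frame_embedding, anomaly_score, threshold=0.5):
    if anomaly_score < threshold:    # stage 1: class-agnostic detection
        return "normal"
    return max(CATEGORY_EMBEDDINGS,  # stage 2: class-specific labelling
               key=lambda c: cosine(frame_embedding, CATEGORY_EMBEDDINGS[c]))

print(ovvad([0.9, 0.2, 0.0], anomaly_score=0.8))   # "fighting"
print(ovvad([0.9, 0.2, 0.0], anomaly_score=0.1))   # "normal"
```

Because the category list is just a dictionary of embeddings, unseen categories can be added at test time without retraining the detection stage, which is the appeal of the open-vocabulary formulation.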