
    Video anomaly detection and localization by local motion based joint video representation and OCELM

    Nowadays, human-based video analysis is becoming increasingly exhausting due to the ubiquitous use of surveillance cameras and the explosive growth of video data. This paper proposes a novel approach to detect and localize video anomalies automatically. For video feature extraction, video volumes are jointly represented by two novel local-motion-based video descriptors, SL-HOF and ULGP-OF. The SL-HOF descriptor captures the spatial distribution of 3D local regions’ motion in the spatio-temporal cuboid extracted from video, which implicitly reflects the structural information of the foreground and depicts foreground motion more precisely than the standard HOF descriptor. To locate the video foreground more accurately, we propose a new Robust PCA based foreground localization scheme. The ULGP-OF descriptor, which seamlessly combines the classic 2D texture descriptor LGP with optical flow, is proposed to describe the motion statistics of local region texture in the areas located by the foreground localization scheme. Both SL-HOF and ULGP-OF are shown to be more discriminative than existing video descriptors in anomaly detection. To model the features of normal video events, we introduce the newly-emergent one-class Extreme Learning Machine (OCELM) as the data description algorithm. With a tremendous reduction in training time, OCELM yields comparable or better performance than existing algorithms such as the classic OCSVM, which makes our approach easier to update and more applicable to fast learning from rapidly generated surveillance data. The proposed approach is tested on the UCSD ped1, ped2 and UMN datasets, and experimental results show that it achieves state-of-the-art results in both video anomaly detection and localization tasks. This work was supported by the National Natural Science Foundation of China (Project nos. 60970034, 61170287, 61232016).
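A magnitude-weighted histogram of optical flow is the common core that descriptors like SL-HOF build on. The sketch below is a simplified, illustrative version; the bin count and the single-region layout are assumptions, not the paper's actual configuration:

```python
import math

def hof_descriptor(flow, n_bins=8):
    """Build a magnitude-weighted histogram of optical-flow orientations.

    flow: list of (dx, dy) motion vectors for one local region.
    Returns an L1-normalized histogram with n_bins orientation bins.
    """
    hist = [0.0] * n_bins
    for dx, dy in flow:
        mag = math.hypot(dx, dy)
        if mag == 0.0:
            continue  # static pixels contribute nothing
        angle = math.atan2(dy, dx) % (2 * math.pi)        # 0 .. 2*pi
        b = min(int(angle / (2 * math.pi) * n_bins), n_bins - 1)
        hist[b] += mag                                    # weight vote by magnitude
    total = sum(hist)
    return [h / total for h in hist] if total > 0 else hist

# A region moving uniformly to the right puts all its mass in bin 0.
print(hof_descriptor([(1.0, 0.0), (2.0, 0.0)]))
```

SL-HOF additionally encodes how these per-cell histograms are distributed across a spatio-temporal cuboid, which this single-region sketch omits.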

    Video anomaly detection with compact feature sets for online performance

    Over the past decade, video anomaly detection has been explored with remarkable results. However, research on methodologies suitable for online performance is still very limited. In this paper, we present an online framework for video anomaly detection. The key aspect of our framework is a compact set of highly descriptive features, which is extracted from a novel cell structure that helps to define support regions in a coarse-to-fine fashion. Based on the scene's activity, only a limited number of support regions are processed, thus limiting the size of the feature set. Specifically, we use foreground occupancy and optical flow features. The framework uses an inference mechanism that evaluates the compact feature set via Gaussian Mixture Models, Markov Chains, and Bag-of-Words in order to detect abnormal events. Our framework also considers the joint response of the models in the local spatio-temporal neighborhood to increase detection accuracy. We test our framework on popular existing data sets and on a new data set comprising a wide variety of realistic videos captured by surveillance cameras. This particular data set includes surveillance videos depicting criminal activities, car accidents, and other dangerous situations. Evaluation results show that our framework outperforms other online methods and attains a very competitive detection performance compared with state-of-the-art non-online methods.
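The Gaussian side of such an inference mechanism can be illustrated with a much-simplified single-Gaussian scorer. The paper's framework uses full Gaussian Mixture Models, Markov Chains, and Bag-of-Words, so everything below (the diagonal covariance, the toy feature vectors) is an assumption made for illustration:

```python
import math

def fit_gaussian(samples):
    """Fit an independent (diagonal-covariance) Gaussian to normal-behaviour features."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    # Small floor on the variance avoids division by zero for constant features.
    var = [sum((s[j] - mean[j]) ** 2 for s in samples) / n + 1e-6 for j in range(d)]
    return mean, var

def log_likelihood(x, mean, var):
    """Log-density of x under the fitted Gaussian; low values flag anomalies."""
    return -0.5 * sum(
        math.log(2 * math.pi * v) + (xi - m) ** 2 / v
        for xi, m, v in zip(x, mean, var)
    )

# Fit on (toy) features of normal activity, then score new observations.
normal = [[0.1, 0.2], [0.15, 0.18], [0.12, 0.22], [0.11, 0.19]]
mean, var = fit_gaussian(normal)
# A feature vector far from the training distribution scores much lower.
assert log_likelihood([0.12, 0.2], mean, var) > log_likelihood([5.0, 5.0], mean, var)
```

In an online setting the appeal of this family of models is that scoring a region is a constant-time operation per feature dimension.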

    CENTRIST3D: a spatio-temporal descriptor for anomaly detection in crowd videos

    Advisor: Hélio Pedrini. Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Crowd abnormality detection is a field of study with a wide range of applications, where surveillance of areas of interest, such as airports, banks, parks, stadiums and subways, is one of the most important purposes. In general, surveillance systems require well-trained personnel to watch video footage in order to search for abnormal events. Moreover, they usually depend on human operators, who are susceptible to failure under stressful and repetitive conditions. This tends to be an ineffective approach, since humans have their own natural limits of observation and their performance is tightly related to their physical and mental state, which might affect the quality of surveillance. Crowds tend to be complex, subject to subtle changes in motion and to partial or total occlusion. Consequently, approaches based on individual pedestrian tracking and background segmentation may suffer in quality due to the aforementioned problems. Anomaly itself is a subjective concept, since it depends on the context of the application. Two main contributions are presented in this work. We first evaluate the effectiveness of the CENsus TRansform hISTogram (CENTRIST) descriptor, initially designed for scene categorization, in crowd abnormality detection. Then, we propose the CENTRIST3D descriptor, a spatio-temporal variation of CENTRIST. Our method creates a histogram of spatio-temporal features from successive frames by extracting histograms of the Volumetric Census Transform from a spatial representation built with a modified Spatial Pyramid Matching algorithm. Additionally, we test both descriptors on three public data collections: the UCSD Anomaly Detection Dataset, the Violent Flows Dataset, and the UMN Dataset. Compared to other works in the literature, CENTRIST3D achieved satisfactory accuracy rates on both the Violent Flows and UMN Datasets, but poor performance on the UCSD Dataset, indicating that our method is more suitable for scenes with fast changes in motion and texture. Finally, we provide evidence that CENTRIST3D is an efficient descriptor to compute, since it requires little computational time, is easily parallelizable, and achieves frame rates suitable for real-time applications.
    Master's degree in Computer Science. Funding: 1406874159166/2015-2, CAPES, CNPq.
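The census transform at the heart of CENTRIST replaces each pixel with an 8-bit code recording comparisons against its eight neighbours, and CENTRIST is a 256-bin histogram of those codes; CENTRIST3D extends the comparisons into the temporal dimension. A minimal 2D sketch, where the border handling and bit order are illustrative choices rather than the thesis's exact ones:

```python
def census_transform(img):
    """8-bit census transform: bit k is set when neighbour k is <= the centre pixel.

    img: 2D list of grey values. Border pixels are skipped for simplicity.
    Returns one code (0..255) per interior pixel, in row-major order.
    """
    h, w = len(img), len(img[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] <= img[y][x]:
                    code |= 1 << bit
            codes.append(code)
    return codes

def centrist_histogram(img):
    """CENTRIST: a 256-bin histogram over census-transform codes."""
    hist = [0] * 256
    for c in census_transform(img):
        hist[c] += 1
    return hist
```

Because the transform depends only on the sign of local comparisons, it is robust to illumination changes and cheap to compute, which is what makes the per-pixel work easy to parallelize.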

    Real-time Anomaly Detection and Localization in Crowded Scenes

    In this paper, we propose a method for real-time anomaly detection and localization in crowded scenes. Each video is defined as a set of non-overlapping cubic patches and is described using two local and global descriptors, which capture the video properties from different aspects. By incorporating simple and cost-effective Gaussian classifiers, we can distinguish normal activities from anomalies in videos. The local and global features are based on the structure similarity between adjacent patches and on features learned in an unsupervised way using a sparse autoencoder. Experimental results show that our algorithm is comparable to state-of-the-art methods on the UCSD ped2 and UMN benchmarks while being more time-efficient. The experiments confirm that our system can reliably detect and localize anomalies as soon as they happen in a video.
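The structure-similarity component of such a local descriptor can be sketched with a bare-bones SSIM between two flattened patches; the stabilizing constants c1 and c2 below are conventional choices, not values from the paper:

```python
def ssim(a, b, c1=1e-4, c2=9e-4):
    """Structural similarity between two equal-size patches (flattened grey values).

    Returns a value near 1.0 for structurally identical patches and lower
    (possibly negative) values as structure diverges.
    """
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((x - mu_b) ** 2 for x in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    # c1 and c2 stabilize the division when means/variances are near zero.
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    )
```

Comparing each cubic patch with its spatial and temporal neighbours this way gives a cheap local feature, which fits the paper's emphasis on real-time operation.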

    Deep Learning for Crowd Anomaly Detection

    Today, public areas across the globe are monitored by an increasing number of surveillance cameras. This widespread usage has produced an ever-growing volume of data that cannot realistically be examined in real time. Therefore, efforts to understand crowd dynamics have brought attention to automatic systems for the detection of anomalies in crowds. This thesis explores the methods used across the literature for this purpose, with a focus on those fusing dense optical flow into a feature extraction stage for the crowd anomaly detection problem. To this extent, five different deep learning architectures are trained using optical flow maps estimated by three deep learning-based techniques. More specifically, a 2D convolutional network, a 3D convolutional network, an LSTM-based convolutional recurrent network, a pre-trained variant of the latter, and a ConvLSTM-based autoencoder are trained using both regular frames and optical flow maps estimated by LiteFlowNet3, RAFT, and GMA on the UCSD Pedestrian 1 dataset. The experimental results show that, while prone to overfitting, the use of optical flow maps may improve the performance of supervised spatio-temporal architectures.
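Results in this area are usually reported as frame-level ROC-AUC on benchmarks such as UCSD Pedestrian 1. A rank-based (Mann-Whitney) AUC over per-frame anomaly scores can be sketched as follows; the scores and labels are illustrative, not taken from any of the works above:

```python
def frame_level_auc(scores, labels):
    """ROC-AUC over per-frame anomaly scores via the rank-sum (Mann-Whitney) formula.

    scores: one anomaly score per frame (higher = more anomalous).
    labels: 1 for anomalous frames, 0 for normal frames.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # Count anomalous/normal pairs ranked correctly; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Scores that perfectly separate anomalous from normal frames give AUC 1.0.
print(frame_level_auc([0.9, 0.8, 0.1, 0.2], [1, 1, 0, 0]))  # -> 1.0
```

AUC is threshold-free, which matters here because each architecture and optical-flow estimator produces scores on a different scale.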