Video anomaly detection and localization by local motion based joint video representation and OCELM
Human-based video analysis has become increasingly impractical due to the ubiquitous use of surveillance cameras and the explosive growth of video data. This paper proposes a novel approach to detecting and localizing video anomalies automatically. For video feature extraction, video volumes are jointly represented by two novel local-motion-based video descriptors, SL-HOF and ULGP-OF. The SL-HOF descriptor captures the spatial distribution of 3D local regions' motion in the spatio-temporal cuboid extracted from video, which implicitly reflects the structural information of the foreground and depicts foreground motion more precisely than the standard HOF descriptor. To locate the video foreground more accurately, we propose a new Robust-PCA-based foreground localization scheme. The ULGP-OF descriptor, which seamlessly combines the classic 2D texture descriptor LGP with optical flow, is proposed to describe the motion statistics of local region texture in the areas located by the foreground localization scheme. Both SL-HOF and ULGP-OF are shown to be more discriminative than existing video descriptors for anomaly detection. To model the features of normal video events, we introduce the recently proposed one-class Extreme Learning Machine (OCELM) as the data description algorithm. With a tremendous reduction in training time, OCELM yields comparable or better performance than existing algorithms such as the classic OCSVM, which makes our approach easier to update and more applicable to fast learning from rapidly generated surveillance data. The proposed approach is tested on the UCSD ped1, ped2 and UMN datasets, and experimental results show that it achieves state-of-the-art results in both video anomaly detection and localization tasks. This work was supported by the National Natural Science Foundation of China (Project nos. 60970034, 61170287, 61232016).
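The one-class learning idea behind OCELM can be illustrated with a minimal sketch: a random, fixed hidden layer maps inputs to features, and the output weights are solved in closed form so that every normal training sample maps to a fixed target; a sample's anomaly score is its deviation from that target. The function names, `tanh` activation, hidden-layer size, and regularization constant below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_ocelm(X, n_hidden=50, C=1.0):
    """One-class ELM sketch: random fixed hidden layer, then closed-form
    ridge-regularized output weights mapping normal samples to target 1.
    (Hyperparameters here are illustrative, not the paper's settings.)"""
    d = X.shape[1]
    W = rng.standard_normal((d, n_hidden))   # random input weights (never trained)
    b = rng.standard_normal(n_hidden)        # random biases (never trained)
    H = np.tanh(X @ W + b)                   # hidden-layer activations
    t = np.ones(len(X))                      # one-class target: map normals to 1
    # ridge solution: beta = (H^T H + I/C)^{-1} H^T t
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ t)
    return W, b, beta

def score(X, model):
    """Anomaly score: absolute deviation of the network output from 1."""
    W, b, beta = model
    return np.abs(np.tanh(X @ W + b) @ beta - 1.0)

# toy usage: normal data clustered near the origin, one distant test point
normal = rng.standard_normal((200, 5)) * 0.1
model = train_ocelm(normal)
s_in = score(normal, model).mean()           # low for data like the training set
s_out = score(np.full((1, 5), 5.0), model)[0]
```

Because the hidden layer is random and fixed, training reduces to a single linear solve, which is the source of the training-time advantage the abstract claims over iteratively trained models such as OCSVM.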
Leveraging digital forensics and data exploration to understand the creative work of a filmmaker: a case study of Stephen Dwoskin’s digital archive
This paper aims to establish digital forensics and data exploration as a methodology for supporting archival practice and research into a filmmaker's creative processes. We approach this by exploring the digital-legacy hard drives of the late artist Stephen Dwoskin (1939-2012), who is recognised as an influential filmmaker at the forefront of the shift from analogue to digital film production. The findings of this case study show that digital forensics is effective in extracting a timeline of hard-drive activity, data that can be explored to reveal clues about the artist's personal and professional history, the stages of his creative processes, and his technical environment. The paper further demonstrates how this relates to current thinking on user-centred archival workflows and the understanding of creative processes. The broader impact of the work for advancing digital archiving and research into creative processes is highlighted, concluding with a discussion of how, going forward, the approach can be coupled with deeper content analysis to reveal what influenced editing choices over time.
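The core of the timeline-extraction step can be sketched in a few lines: walk a filesystem, collect per-file modification timestamps, and sort them chronologically. This is a minimal sketch assuming a mounted copy of the drive; the paper's forensic workflow would instead read metadata from a write-protected disk image, and the function name below is hypothetical.

```python
import datetime
import os
import tempfile

def drive_timeline(root):
    """Build a chronological timeline of file-modification events under `root`.
    Minimal sketch: real forensic practice reads filesystem metadata from a
    disk image rather than stat-ing a live, mounted copy."""
    events = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip unreadable entries rather than abort the scan
            events.append((datetime.datetime.fromtimestamp(st.st_mtime), path))
    return sorted(events)  # oldest activity first

# toy usage: two files with controlled modification times
with tempfile.TemporaryDirectory() as d:
    for name, ts in [("b.txt", 2_000_000_000), ("a.txt", 1_000_000_000)]:
        p = os.path.join(d, name)
        open(p, "w").close()
        os.utime(p, (ts, ts))          # force a known modification time
    timeline = drive_timeline(d)
    ordered = [os.path.basename(p) for _, p in timeline]
```

Sorting by timestamp is what turns raw filesystem metadata into the kind of activity narrative the case study explores, e.g. clusters of edits around a particular project.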