
    BORIS: a free, versatile open-source event-logging software for video/audio coding and live observations

    Summary: Quantitative aspects of the study of animal and human behaviour are increasingly relevant for testing hypotheses and finding empirical support for them. At the same time, photo and video cameras can store large numbers of recordings and are often used to monitor subjects remotely. Researchers frequently need to code considerable quantities of video with relatively flexible software, yet are often constrained by species‐specific options or fixed settings. BORIS is a free, open‐source and multiplatform standalone program that provides a user‐specific coding environment for computer‐based review of previously recorded videos or live observations. Being open to user‐specific settings, the program allows a project‐based ethogram to be defined, which can then be shared with collaborators, imported or modified. Projects created in BORIS can include a list of observations, and each observation may include one or two videos (e.g. simultaneous screening of visual stimuli and of the subject being tested; recordings from different sides of an aquarium). Once the user has set an ethogram, including state events, point events or both, coding can be performed using previously assigned keys on the computer keyboard. BORIS allows an unlimited number of events (state/point events) and subjects to be defined. Once the coding process is completed, the program can automatically extract a time budget for single or grouped observations and present an at‐a‐glance summary of the main behavioural features. The observation data and time‐budget analysis can be exported in many common formats (TSV, CSV, ODF, XLS, SQL and JSON), and the observed events can be plotted and exported in various graphic formats (SVG, PNG, JPG, TIFF, EPS and PDF).
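The time-budget idea described above — summing the duration of each coded state event per subject and behaviour — can be sketched in a few lines. This is an illustrative example only: the event tuple layout and field names below are assumptions for the sketch, not BORIS's actual export schema.

```python
# Hypothetical sketch of a time-budget computation over coded state events.
# Each event is (subject, behavior, start_s, stop_s); data is illustrative.
from collections import defaultdict

events = [
    ("subj1", "grooming", 0.0, 12.5),
    ("subj1", "feeding", 12.5, 30.0),
    ("subj1", "grooming", 30.0, 41.0),
]

def time_budget(events):
    """Return total duration (s) and bout count per (subject, behavior)."""
    totals = defaultdict(lambda: {"duration": 0.0, "count": 0})
    for subject, behavior, start, stop in events:
        key = (subject, behavior)
        totals[key]["duration"] += stop - start
        totals[key]["count"] += 1
    return dict(totals)

budget = time_budget(events)
print(budget[("subj1", "grooming")])  # 23.5 s of grooming over 2 bouts
```

A real workflow would read the exported TSV/CSV instead of an inline list; the aggregation step is the same.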

    Video anomaly detection and localization by local motion based joint video representation and OCELM

    Human-based video analysis is becoming increasingly burdensome owing to the ubiquitous use of surveillance cameras and the explosive growth of video data. This paper proposes a novel approach to detect and localize video anomalies automatically. For video feature extraction, video volumes are jointly represented by two novel local-motion-based video descriptors, SL-HOF and ULGP-OF. The SL-HOF descriptor captures the spatial distribution of 3D local regions' motion in the spatio-temporal cuboid extracted from video, which implicitly reflects the structural information of the foreground and depicts foreground motion more precisely than the standard HOF descriptor. To locate the video foreground more accurately, we propose a new Robust-PCA-based foreground localization scheme. The ULGP-OF descriptor, which seamlessly combines the classic 2D texture descriptor LGP with optical flow, describes the motion statistics of local region texture in the areas located by the foreground localization scheme. Both SL-HOF and ULGP-OF are shown to be more discriminative than existing video descriptors for anomaly detection. To model the features of normal video events, we introduce the newly emergent one-class Extreme Learning Machine (OCELM) as the data description algorithm. With a tremendous reduction in training time, OCELM yields comparable or better performance than existing algorithms such as the classic OCSVM, which makes our approach easier to update and more applicable to fast learning from rapidly generated surveillance data. The proposed approach is tested on the UCSD ped1, ped2 and UMN datasets, and experimental results show that it achieves state-of-the-art results in both the video anomaly detection and localization tasks. This work was supported by the National Natural Science Foundation of China (Project nos. 60970034, 61170287, 61232016).
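The one-class modelling step described above can be illustrated with a minimal sketch of a one-class Extreme Learning Machine: a random, untrained hidden layer maps features nonlinearly, output weights are solved in closed form so that normal samples map to a constant target, and the deviation from that target serves as an anomaly score. This is a generic sketch under stated assumptions (toy 2-D data, tanh activations, a 95th-percentile threshold), not the paper's exact formulation or its video descriptors.

```python
# Minimal one-class ELM sketch: random hidden layer, closed-form output
# weights fitted on normal data only, anomaly score = deviation from target.
import numpy as np

rng = np.random.default_rng(0)

# Features of normal events only (illustrative 2-D data, not video features)
X_train = rng.normal(0.0, 1.0, size=(200, 2))

# The "extreme learning" part: hidden weights are random and never trained
n_hidden = 50
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)

def hidden(X):
    """Random nonlinear feature map."""
    return np.tanh(X @ W + b)

# Solve output weights in closed form (least squares via pseudo-inverse)
# so that normal training samples map close to the target value 1.0.
H = hidden(X_train)
beta = np.linalg.pinv(H) @ np.ones(len(X_train))

def anomaly_score(X):
    """Larger deviation from the target => more anomalous."""
    return np.abs(hidden(X) @ beta - 1.0)

# Threshold chosen from the training-score distribution
threshold = np.quantile(anomaly_score(X_train), 0.95)
```

The closed-form solve is what gives ELM-family methods their short training time relative to iterative optimization, which is the property the abstract highlights for model updating on streaming surveillance data.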