
    Using inactivity to detect unusual behavior

    We present a novel method for detecting unusual modes of behavior in video surveillance data, suitable for supporting home-based care of elderly patients. Our approach is based on detecting unusual patterns of inactivity. We first learn a spatial map of normal inactivity for an observed scene, expressed as a two-dimensional mixture of Gaussians. The map components are used to construct a Hidden Markov Model representing normal patterns of behavior. A threshold model is also inferred, and unusual behavior detected by comparing the model likelihoods. Our learning procedures are unsupervised, and yield a highly transparent model of scene activity. We present an evaluation of our approach, and show that it is effective in detecting unusual behavior across a range of parameter settings.
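    The core idea above can be sketched in simplified form: model where "normal inactivity" occurs in the scene, then flag observations whose likelihood falls below a threshold. This toy version uses a single 2-D Gaussian rather than the paper's mixture-plus-HMM pipeline; all names and the threshold value are illustrative assumptions.

```python
import numpy as np

def fit_gaussian(points):
    """Estimate mean and covariance of normal inactivity locations."""
    mu = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    return mu, cov

def log_likelihood(x, mu, cov):
    """Log-density of a 2-D Gaussian at location x."""
    d = x - mu
    inv = np.linalg.inv(cov)
    logdet = np.log(np.linalg.det(cov))
    return -0.5 * (d @ inv @ d + logdet + 2 * np.log(2 * np.pi))

def is_unusual(x, mu, cov, threshold):
    """Flag a location whose likelihood under the learned map is too low."""
    return log_likelihood(x, mu, cov) < threshold

rng = np.random.default_rng(0)
# Training data: inactivity concentrated around one spot (e.g. an armchair).
normal = rng.normal([5.0, 5.0], 0.5, size=(200, 2))
mu, cov = fit_gaussian(normal)
print(is_unusual(np.array([5.0, 5.0]), mu, cov, -5.0))    # near the centre -> False
print(is_unusual(np.array([20.0, 20.0]), mu, cov, -5.0))  # far from normal -> True
```

    In the paper's full method the mixture components additionally become HMM states, so temporal patterns of inactivity, not just locations, are compared against a threshold model.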

    A Statistical Video Content Recognition Method Using Invariant Features on Object Trajectories


    Semi-supervised Adapted HMMs for Unusual Event Detection

    We address the problem of temporal unusual event detection. Unusual events are characterized by a number of features (rarity, unexpectedness, and relevance) that limit the application of traditional supervised model-based approaches. We propose a semi-supervised adapted Hidden Markov Model (HMM) framework, in which usual event models are first learned from a large amount of (commonly available) training data, while unusual event models are learned by Bayesian adaptation in an unsupervised manner. The proposed framework has an iterative structure, which adapts a new unusual event model at each iteration. We show that such a framework can address problems due to the scarcity of training data and the difficulty in pre-defining unusual events. Experiments on audio, visual, and audio-visual data streams illustrate its effectiveness, compared with both supervised and unsupervised baseline methods.
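    The Bayesian-adaptation step can be illustrated with a minimal MAP-style update: start from a "usual" model's parameters and shift them toward the few observations attributed to an unusual event, with a relevance factor controlling how much the scarce data is trusted. This is a hedged sketch of the general adaptation idea, not the paper's exact algorithm; the function name and values are invented.

```python
import numpy as np

def map_adapt_mean(prior_mean, samples, relevance=4.0):
    """MAP-style update of a Gaussian mean from a handful of samples.

    With few samples the adapted mean stays close to the prior (usual)
    model; with many samples it moves toward the sample mean.
    """
    n = len(samples)
    sample_mean = np.mean(samples, axis=0)
    alpha = n / (n + relevance)  # more data -> trust the samples more
    return alpha * sample_mean + (1 - alpha) * prior_mean

usual_mean = np.array([0.0, 0.0])                    # from the usual-event model
unusual_obs = np.array([[4.0, 4.0], [4.2, 3.8]])     # scarce unusual observations
adapted = map_adapt_mean(usual_mean, unusual_obs)
print(adapted)  # lies between the prior mean and the sample mean
```

    In the full framework this kind of update would be applied to the emission parameters of an HMM state, and the adaptation repeated iteratively as new unusual events are isolated.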

    Domain anomaly detection in machine perception: a system architecture and taxonomy

    We address the problem of anomaly detection in machine perception. The concept of domain anomaly is introduced as distinct from the conventional notion of anomaly used in the literature. We propose a unified framework for anomaly detection which exposes the multifaceted nature of anomalies, and suggest effective mechanisms for identifying and distinguishing each facet as instruments for domain anomaly detection. The framework draws on the Bayesian probabilistic reasoning apparatus, which clearly defines concepts such as outlier, noise, distribution drift, novelty detection (object, object primitive), rare events, and unexpected events. Based on these concepts we provide a taxonomy of domain anomaly events. One of the mechanisms helping to pinpoint the nature of an anomaly is based on detecting incongruence between contextual and non-contextual sensor(y) data interpretation. The proposed methodology has wide applicability. It underpins, in a unified way, the anomaly detection applications found in the literature.

    State Space Approaches for Modeling Activities in Video Streams

    The objective is to discern events and behavior in activities using video sequences, in a way that conforms to common human experience. It has several applications such as recognition, temporal segmentation, video indexing and anomaly detection. Activity modeling offers compelling challenges to computational vision systems at several levels, ranging from low-level vision tasks for detection and segmentation to high-level models for extracting perceptually salient information. With a focus on the latter, the following approaches are presented: event detection in discrete state space, epitomic representation in continuous state space, temporal segmentation using mixed state models, key frame detection using antieigenvalues, and spatio-temporal activity volumes. Significant changes in motion properties are said to be events. We present an event probability sequence representation in which the probability of event occurrence is computed using stable changes at the state level of the discrete-state hidden Markov model that generates the observed trajectories. Reliance on a trained model, however, can be a limitation. A data-driven antieigenvalue-based approach is proposed for detecting changes. Antieigenvalues are sensitive to turnings, whereas eigenvalues capture directions of maximum variance in the data. In both these approaches, events are assumed to be instantaneous quantities. This is relaxed using an epitomic representation in continuous state space. Video sequences are segmented using a sliding window within which the dynamics of each object is assumed to be linear. The system matrix, initial state value and the input signal statistics are said to form an epitome. The system matrices are decomposed using the Iwasawa matrix decomposition to isolate the effect of rotation, scaling and projection of the state vector. This is used to compute physically meaningful distances between epitomes.
    Epitomes reveal dominant primitives of activities that have an abstracted interpretation. A mixed-state approach for activities is presented in which higher-level primitives of behavior are encoded in the discrete state component and observed dynamics in the continuous state component. The effectiveness of mixed-state models is demonstrated using temporal segmentation. In addition to motion trajectories, the volume carved out in an xyt cube by a moving object is characterized using Morse functions.
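    The antieigenvalue notion mentioned above has a simple closed form for a symmetric positive definite matrix: if lmin and lmax are its extreme eigenvalues, the first antieigenvalue is 2*sqrt(lmin*lmax)/(lmin + lmax), which equals 1 when the matrix turns no vector (isotropic case) and shrinks as anisotropy, and hence maximal turning, grows. A minimal sketch of computing it (the link to change detection in trajectories is the thesis's contribution and is not reproduced here):

```python
import numpy as np

def first_antieigenvalue(A):
    """First antieigenvalue of a symmetric positive definite matrix A.

    Equals 2*sqrt(lmin*lmax)/(lmin+lmax); 1.0 means A turns no vector,
    smaller values mean some vector is turned more strongly.
    """
    eigvals = np.linalg.eigvalsh(A)
    lmin, lmax = eigvals[0], eigvals[-1]
    return 2.0 * np.sqrt(lmin * lmax) / (lmin + lmax)

print(first_antieigenvalue(np.eye(2)))            # isotropic   -> 1.0
print(first_antieigenvalue(np.diag([1.0, 9.0])))  # anisotropic -> 0.6
```

    A change detector built on this quantity would, for instance, compute it over covariance matrices of sliding trajectory windows and look for sharp drops, which is what makes it sensitive to turnings rather than to variance alone.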

    Recognising high-level agent behaviour through observations in data scarce domains

    This thesis presents a novel method for performing multi-agent behaviour recognition without requiring large training corpora. The reduced need for data means that robust probabilistic recognition can be performed within domains where annotated datasets are traditionally unavailable (e.g. surveillance, defence). Human behaviours are composed from sequences of underlying activities that can be used as salient features. We do not assume that the exact temporal ordering of such features is necessary, so can represent behaviours using an unordered “bag-of-features”. A weak temporal ordering is imposed during inference to match behaviours to observations and replaces the learnt model parameters used by competing methods. Our three-tier architecture comprises low-level video tracking, event analysis and high-level inference. High-level inference is performed using a new, cascading extension of the Rao-Blackwellised Particle Filter. Behaviours are recognised at multiple levels of abstraction and can contain a mixture of solo and multi-agent behaviour. We validate our framework using the PETS 2006 video surveillance dataset and our own video sequences, in addition to a large corpus of simulated data. We achieve a mean recognition precision of 96.4% on the simulated data and 89.3% on the combined video data. Our “bag-of-features” framework is able to detect when behaviours terminate and accurately explains agent behaviour despite significant quantities of low-level classification errors in the input, and can even detect agents who change their behaviour.
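    The unordered "bag-of-features" representation described above can be illustrated with a toy matcher: each behaviour is a multiset of low-level activities, and an observation sequence is scored by how much of the bag it covers, ignoring exact temporal order. Behaviour and activity names here are invented for illustration; the thesis additionally imposes a weak temporal ordering during inference, which this sketch omits.

```python
from collections import Counter

# Hypothetical behaviour library: each behaviour is an unordered bag
# (multiset) of the low-level activities that compose it.
BEHAVIOURS = {
    "leave_luggage": Counter(["enter", "put_down_bag", "walk_away"]),
    "meet_and_greet": Counter(["enter", "approach", "handshake"]),
}

def score(observed, bag):
    """Fraction of the behaviour's bag covered by the observed activities."""
    overlap = sum((Counter(observed) & bag).values())  # multiset intersection
    return overlap / sum(bag.values())

# Observed order differs from the bag's listing order, but still matches.
obs = ["enter", "walk_away", "put_down_bag"]
best = max(BEHAVIOURS, key=lambda b: score(obs, BEHAVIOURS[b]))
print(best)  # -> leave_luggage
```

    Because matching tolerates reordering and partial coverage, a scheme like this degrades gracefully under low-level classification errors, which is one of the robustness properties the thesis reports.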