    State Space Approaches for Modeling Activities in Video Streams

    The objective is to discern events and behavior in video sequences of activities, in a way that conforms to common human experience. This has several applications, such as recognition, temporal segmentation, video indexing and anomaly detection. Activity modeling offers compelling challenges to computational vision systems at several levels, ranging from low-level vision tasks for detection and segmentation to high-level models for extracting perceptually salient information. With a focus on the latter, the following approaches are presented: event detection in discrete state space, epitomic representation in continuous state space, temporal segmentation using mixed state models, key frame detection using antieigenvalues, and spatio-temporal activity volumes.

    Events are defined as significant changes in motion properties. We present an event probability sequence representation in which the probability of event occurrence is computed from stable changes at the state level of the discrete state hidden Markov model that generates the observed trajectories. Reliance on a trained model, however, can be a limitation, so a data-driven antieigenvalue-based approach is proposed for detecting changes: antieigenvalues are sensitive to turnings in the data, whereas eigenvalues capture directions of maximum variance.

    In both of these approaches, events are assumed to be instantaneous. This assumption is relaxed using an epitomic representation in continuous state space: video sequences are segmented using a sliding window within which the dynamics of each object is assumed to be linear, and the system matrix, initial state value and input signal statistics together form an epitome. The system matrices are decomposed using the Iwasawa matrix decomposition to isolate the effects of rotation, scaling and projection on the state vector; this decomposition is used to compute physically meaningful distances between epitomes. Epitomes reveal dominant primitives of activities that admit an abstracted interpretation.

    Finally, a mixed state approach to activities is presented, in which higher-level primitives of behavior are encoded in the discrete state component and the observed dynamics in the continuous state component. The effectiveness of mixed state models is demonstrated using temporal segmentation. In addition to motion trajectories, the volume carved out in an xyt cube by a moving object is characterized using Morse functions.
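    Of these, the antieigenvalue-based change detector is the most self-contained to prototype. The sketch below is a minimal illustration under stated assumptions rather than the thesis implementation: it scores sliding windows of a trajectory by the first antieigenvalue of their covariance, which for a symmetric positive-definite matrix with extreme eigenvalues lmax and lmin equals 2*sqrt(lmax*lmin)/(lmax + lmin), and flags sharp jumps in that score. The window length, the covariance regularizer and the jump threshold are hypothetical choices.

```python
import numpy as np

def first_antieigenvalue(A):
    """First antieigenvalue of a symmetric positive-definite matrix A.

    mu1(A) = 2*sqrt(lmax*lmin) / (lmax + lmin); it measures the maximal
    'turning' A applies to a vector (1.0 means no turning), which is
    what makes it sensitive to changes in motion properties.
    """
    eigvals = np.linalg.eigvalsh(A)          # ascending order
    lmin, lmax = eigvals[0], eigvals[-1]
    return 2.0 * np.sqrt(lmax * lmin) / (lmax + lmin)

def detect_events(trajectory, win=25, thresh=0.2):
    """Flag frames where the windowed antieigenvalue changes sharply.

    trajectory: (T, d) array of object positions over time.
    win, thresh: assumed window length and jump threshold.
    Returns indices of candidate events (sharp turnings in the data).
    """
    scores = []
    for t in range(len(trajectory) - win):
        window = trajectory[t:t + win]
        # Regularize so the covariance is strictly positive definite.
        cov = np.cov(window, rowvar=False) + 1e-6 * np.eye(window.shape[1])
        scores.append(first_antieigenvalue(cov))
    jumps = np.abs(np.diff(np.asarray(scores)))
    return np.where(jumps > thresh)[0] + win // 2
```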

    Attentive monitoring of multiple video streams driven by a Bayesian foraging strategy

    In this paper we consider the problem of deploying attention over subsets of video streams to collate the most relevant data and information of interest for a given task. We formalize this monitoring problem as a foraging problem and propose a probabilistic framework that models the observer's attentive behavior as that of a forager which, moment to moment, focuses its attention on the most informative stream/camera, detects interesting objects or activities, or switches to a more profitable stream. The proposed approach is suitable for multi-stream video summarization and can also serve as a preliminary step for more sophisticated video surveillance, e.g. activity and behavior analysis. Experimental results on the UCR Videoweb Activities Dataset, a publicly available dataset, illustrate the utility of the proposed technique.
    Comment: Accepted to IEEE Transactions on Image Processing.
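    As a rough intuition for the foraging analogy, the toy sketch below implements a patch-leaving rule: stay on the current stream while its instantaneous gain beats the field average, otherwise switch to the currently most promising stream. The gain proxy, the giving-up rule and all names here are illustrative assumptions; the paper's actual model is a Bayesian probabilistic framework, not this heuristic.

```python
import numpy as np

def forage(streams, horizon):
    """Toy patch-leaving forager over multiple video streams.

    streams: (N, T) array of per-frame informativeness proxies, one row
    per camera (e.g. detector confidences in [0, 1]); horizon <= T.
    The forager stays on the current stream while its instantaneous
    gain exceeds the average over all streams (a marginal-value style
    giving-up rule), otherwise it switches to the best stream.
    Returns the visited stream index per frame and the harvested gain.
    """
    current = int(np.argmax(streams[:, 0]))
    visits, total_gain = [current], 0.0
    for t in range(1, horizon):
        gain = streams[current, t]
        total_gain += gain
        if gain < streams[:, t].mean():   # patch depleted: give up
            current = int(np.argmax(streams[:, t]))
        visits.append(current)
    return visits, total_gain

# Hypothetical usage on four synthetic streams of 100 frames.
streams = np.random.default_rng(0).random((4, 100))
visits, gain = forage(streams, horizon=100)
print(visits[:10], round(gain, 2))
```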

    Video Stream Retrieval of Unseen Queries using Semantic Memory

    Retrieval of live, user-broadcast video streams is an under-addressed and increasingly relevant challenge. The on-line nature of the problem requires temporal evaluation, and the unforeseeable scope of potential queries motivates an approach that can accommodate arbitrary search queries. To account for the breadth of possible queries, we adopt a no-example approach to query retrieval, which uses a query's semantic relatedness to pre-trained concept classifiers. To adapt to shifting video content, we propose memory pooling and memory welling methods that favor recent information over long-past content. We identify two stream retrieval tasks, instantaneous retrieval at any particular time and continuous retrieval over a prolonged duration, and propose means for evaluating them. Three large-scale video datasets are adapted to the challenge of stream retrieval. We report results for our search methods on the new stream retrieval tasks, and also demonstrate their efficacy in a traditional, non-streaming video task.
    Comment: Presented at BMVC 2016, British Machine Vision Conference, 2016.
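    The sketch below is one plausible reading of recency-favoring pooling, not the paper's exact definition of memory pooling or welling: per-frame concept scores feed a leaky, max-based memory so that recent evidence dominates long-past content. The decay constant and the max-based update rule are assumptions.

```python
import numpy as np

def memory_well(scores, decay=0.9):
    """Recency-weighted pooling of per-frame concept scores.

    scores: (T,) frame-level relatedness scores for one query/concept.
    Returns a (T,) series where w[t] = max(decay * w[t-1], scores[t]),
    so old evidence leaks away unless refreshed by new frames.
    """
    w = np.zeros(len(scores))
    w[0] = scores[0]
    for t in range(1, len(scores)):
        w[t] = max(decay * w[t - 1], scores[t])
    return w

# Instantaneous retrieval at time t would rank streams by w[t];
# continuous retrieval would aggregate w over a prolonged interval.
print(memory_well(np.array([0.1, 0.8, 0.3, 0.2, 0.9, 0.1])))
```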

    Second-order Temporal Pooling for Action Recognition

    Deep learning models for video-based action recognition usually generate features for short clips (consisting of a few frames); such clip-level features are aggregated into video-level representations by computing statistics on them. Typically, zeroth-order (max) or first-order (average) statistics are used. In this paper, we explore the benefits of using second-order statistics. Specifically, we propose a novel end-to-end learnable feature aggregation scheme, dubbed temporal correlation pooling, that generates an action descriptor for a video sequence by capturing the similarities between the temporal evolution of clip-level CNN features computed across the video. Such a descriptor, while being computationally cheap, also naturally encodes the co-activations of multiple CNN features, thereby providing a richer characterization of actions than first-order counterparts. We also propose higher-order extensions of this scheme by computing correlations after embedding the CNN features in a reproducing kernel Hilbert space. We provide experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained datasets such as MPII Cooking Activities and JHMDB, as well as the recent Kinetics-600. Our results demonstrate the advantages of higher-order pooling schemes, which, when combined with hand-crafted features (as is standard practice), achieve state-of-the-art accuracy.
    Comment: Accepted in the International Journal of Computer Vision (IJCV).
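    The core of correlation pooling is straightforward to sketch. Assuming the clip-level features of a video are stacked as a T x D matrix, the descriptor below is the D x D correlation matrix of their temporal evolution, flattened to its upper triangle; the paper's exact normalization and its kernelized higher-order variants may differ from this minimal version.

```python
import numpy as np

def temporal_correlation_pooling(clip_features, eps=1e-8):
    """Second-order pooling of clip-level CNN features.

    clip_features: (T, D) array, one D-dimensional feature per clip.
    Standardizes each feature dimension over time, then returns the
    upper triangle of the D x D correlation matrix as a compact
    video-level descriptor encoding feature co-activations.
    """
    X = clip_features - clip_features.mean(axis=0, keepdims=True)
    X = X / (X.std(axis=0, keepdims=True) + eps)
    corr = (X.T @ X) / X.shape[0]            # D x D co-activation matrix
    iu = np.triu_indices(corr.shape[0])      # symmetric: keep upper half
    return corr[iu]

descriptor = temporal_correlation_pooling(np.random.rand(32, 512))
print(descriptor.shape)                      # (512 * 513 / 2,) = (131328,)
```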

    Deep Affordance-grounded Sensorimotor Object Recognition

    It is well established in cognitive neuroscience that human perception of objects constitutes a complex process in which object appearance information is combined with evidence about the so-called object "affordances", namely the types of actions that humans typically perform when interacting with them. This fact has recently motivated the "sensorimotor" approach to the challenging task of automatic object recognition, in which both information sources are fused to improve robustness. In this work, the aforementioned paradigm is adopted while surpassing current limitations of sensorimotor object recognition research. Specifically, the deep learning paradigm is introduced to the problem for the first time, developing a number of novel neuro-biologically and neuro-physiologically inspired architectures that utilize state-of-the-art neural networks for fusing the available information sources in multiple ways. The proposed methods are evaluated using a large RGB-D corpus, which was specifically collected for the task of sensorimotor object recognition and is made publicly available. Experimental results demonstrate the utility of affordance information for object recognition, its inclusion yielding up to a 29% relative error reduction.
    Comment: 9 pages, 7 figures, dataset link included, accepted to CVPR 2017.
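    Score-level averaging of the two streams is the simplest point in the fusion design space; the sketch below shows only that baseline, with the weight alpha and the softmax-averaging rule as illustrative assumptions rather than any of the paper's proposed architectures.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def late_fusion(appearance_logits, affordance_logits, alpha=0.5):
    """Score-level fusion of an appearance and an affordance stream.

    Each argument is a (num_classes,) logit vector produced by its own
    network; alpha weights the appearance stream's posterior.
    """
    return (alpha * softmax(appearance_logits)
            + (1.0 - alpha) * softmax(affordance_logits))

rng = np.random.default_rng(0)
pred = late_fusion(rng.standard_normal(51), rng.standard_normal(51))
print(int(np.argmax(pred)))          # fused class decision
```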