7 research outputs found

    Cognitive visual tracking and camera control

    Cognitive visual tracking is the process of observing and understanding the behaviour of a moving person. This paper presents an efficient solution to extract, in real time, high-level information from an observed scene and to generate the most appropriate commands for a set of pan-tilt-zoom (PTZ) cameras in a surveillance scenario. This high-level feedback control loop, which is the main novelty of our work, serves to reduce uncertainty in the observed scene and to maximize the amount of information extracted from it. It is implemented with a distributed camera system, using SQL tables as virtual communication channels and Situation Graph Trees for knowledge representation, inference and high-level camera control. A set of experiments in a surveillance scenario shows the effectiveness of our approach and its potential for real applications of cognitive vision.
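The "SQL tables as virtual communication channels" idea from this abstract can be illustrated with a minimal sketch: a high-level reasoner inserts PTZ commands into a shared table, and each camera process polls for rows addressed to it. The table schema, function names, and command values below are hypothetical, not taken from the paper.

```python
import sqlite3

# Hypothetical sketch of SQL tables as virtual communication channels:
# the reasoner writes PTZ commands; each camera polls its own rows.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ptz_commands (
                  camera_id TEXT, pan REAL, tilt REAL, zoom REAL,
                  consumed INTEGER DEFAULT 0)""")

def send_command(camera_id, pan, tilt, zoom):
    """Reasoner side: enqueue one PTZ command for a camera."""
    conn.execute("INSERT INTO ptz_commands (camera_id, pan, tilt, zoom) "
                 "VALUES (?, ?, ?, ?)", (camera_id, pan, tilt, zoom))

def poll_commands(camera_id):
    """Camera side: fetch and mark consumed all pending commands."""
    rows = conn.execute("SELECT rowid, pan, tilt, zoom FROM ptz_commands "
                        "WHERE camera_id = ? AND consumed = 0",
                        (camera_id,)).fetchall()
    for rowid, *_ in rows:
        conn.execute("UPDATE ptz_commands SET consumed = 1 WHERE rowid = ?",
                     (rowid,))
    return [(pan, tilt, zoom) for _, pan, tilt, zoom in rows]

send_command("cam1", 30.0, -10.0, 2.0)
cmds = poll_commands("cam1")   # cam1 receives its pending command once
```

A real deployment would add transactions and timestamps, but the polling pattern above is the essence of using a database as a loosely coupled message channel between distributed camera processes.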

    Dual sticky hierarchical Dirichlet process hidden Markov model and its application to natural language description of motions

    In this paper, a new nonparametric Bayesian model, the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM), is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics, in an analogy with topic models for document processing) but have unique transition distributions. The number of HMMs and the number of topics are both determined automatically. The sticky prior avoids redundant states and makes our HDP-HMM more effective at modeling multimodal observations. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities, which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. The sources and sinks in the scene are learnt by clustering the endpoints (origins and destinations) of trajectories. The semantic motion regions are learnt using the points in trajectories. By combining the learnt sources and sinks, semantic motion regions, and the learnt sequences of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on vehicle trajectories extracted from a traffic scene.
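The "sticky" prior the abstract mentions biases each state toward self-transition, which is what discourages redundant states. A minimal finite-truncation sketch of that prior (not the paper's full dual sticky sampler; the values of alpha, kappa, and beta below are illustrative assumptions):

```python
import numpy as np

def sticky_transition_rows(beta, alpha, kappa, rng):
    """Sample one transition row per state from a finite-truncation
    sticky HDP prior: row j ~ Dirichlet(alpha * beta + kappa * e_j).
    The kappa term inflates the self-transition mass of state j,
    so sampled chains tend to stay in a state rather than split it."""
    K = len(beta)
    rows = np.empty((K, K))
    for j in range(K):
        concentration = alpha * beta + kappa * np.eye(K)[j]
        rows[j] = rng.dirichlet(concentration)
    return rows

rng = np.random.default_rng(0)
beta = np.full(5, 1.0 / 5)   # shared top-level state weights (uniform here)
pi = sticky_transition_rows(beta, alpha=1.0, kappa=10.0, rng=rng)
```

With kappa = 10 and alpha = 1, the expected self-transition probability of each row is (0.2 + 10) / 11, roughly 0.93, so the diagonal of `pi` dominates; setting kappa = 0 recovers the ordinary (non-sticky) HDP-HMM prior.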

    Fusion techniques for activity recognition using multi-camera networks

    Real-time automatic activity recognition is an important area of research in the field of Computer Vision, with plenty of applications in surveillance, gaming, entertainment and automobile safety. Because of advances in wireless networks and camera technologies, distributed camera networks are becoming more prominent. Distributed camera networks offer complementary views of scenes and hence are better suited for real-time surveillance applications. They are robust to camera failures and incomplete fields of view. In a camera network, fusing information from multiple cameras is an important problem, especially when one doesn't know the subjects' orientation with respect to the camera and when the arrangement of cameras is not symmetric. The objective of this dissertation is to design an information fusion technique for camera networks and to apply it in the context of surveillance and safety applications (in coal mines). (Abstract shortened by ProQuest.)

    A Survey on Visual Surveillance of Object Motion and Behaviors


    Agent orientated annotation in model based visual surveillance


    Visual recognition of multi-agent action

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1999. Includes bibliographical references (p. 167-184). Developing computer vision sensing systems that work robustly in everyday environments will require that the systems can recognize structured interaction between people and objects in the world. This document presents a new theory for the representation and recognition of coordinated multi-agent action from noisy perceptual data. The thesis of this work is as follows: highly structured, multi-agent action can be recognized from noisy perceptual data using visually grounded goal-based primitives and low-order temporal relationships that are integrated in a probabilistic framework. The theory is developed and evaluated by examining general characteristics of multi-agent action, analyzing tradeoffs involved when selecting a representation for multi-agent action recognition, and constructing a system to recognize multi-agent action for a real task from noisy data. The representation, which is motivated by work in model-based object recognition and probabilistic plan recognition, makes four principal assumptions: (1) the goals of individual agents are natural atomic representational units for specifying the temporal relationships between agents engaged in group activities, (2) a high-level description of temporal structure of the action using a small set of low-order temporal and logical constraints is adequate for representing the relationships between the agent goals for highly structured, multi-agent action recognition, (3) Bayesian networks provide a suitable mechanism for integrating multiple sources of uncertain visual perceptual feature evidence, and (4) an automatically generated Bayesian network can be used to combine uncertain temporal information and compute the likelihood that a set of object trajectory data is a particular multi-agent action.
    The recognition algorithm is tested using a database of American football play descriptions. A system is described that can recognize single-agent and multi-agent actions in this domain given noisy trajectories of object movements. The strengths and limitations of the recognition system are discussed and compared with other multi-agent recognition algorithms. by Stephen Sean Intille. Ph.D.
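The core scoring idea in assumption (4), combining uncertain goal evidence into a play likelihood, can be sketched in miniature. This is a deliberately simplified stand-in for the thesis's automatically generated Bayesian networks: it treats goal detections as conditionally independent given the play and ignores the temporal constraints of assumption (2). The goal names and probabilities are invented for illustration.

```python
def play_score(detected_goals, goal_model, prior):
    """Toy likelihood of a candidate play: prior times, for each goal
    in the play's model, P(goal detected | play) if the detector fired,
    else P(goal not detected | play). Assumes detections are
    conditionally independent given the play (a naive-Bayes reduction
    of the full network)."""
    score = prior
    for goal, p_given_play in goal_model.items():
        score *= p_given_play if goal in detected_goals else (1 - p_given_play)
    return score

# Hypothetical football example: two candidate plays, same detections.
detections = {"receiver_cut", "qb_drop"}
pass_model = {"receiver_cut": 0.9, "qb_drop": 0.8, "handoff": 0.1}
run_model = {"receiver_cut": 0.1, "qb_drop": 0.2, "handoff": 0.9}
score_pass = play_score(detections, pass_model, prior=0.5)  # 0.5*0.9*0.8*0.9
score_run = play_score(detections, run_model, prior=0.5)    # 0.5*0.1*0.2*0.1
```

Here the pass hypothesis wins because its model assigns high probability to the goals that were actually detected; the full system additionally checks low-order temporal relationships (e.g. "qb_drop before receiver_cut") before committing to a play label.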