
    Robust multi-camera tracking from schematic descriptions

    Although monocular 2D tracking has been widely studied in the literature, it suffers from some inherent problems, mainly when handling persistent occlusions, that limit its performance in practical situations. Tracking methods that combine observations from multiple cameras seem to solve these problems. However, most multi-camera systems require detailed information from each view, making their use impossible in real networks with low transmission rates. In this paper, we present a robust multi-camera 3D tracking method that works on schematic descriptions of the observations made by each camera of the system, thus allowing its operation in real surveillance networks. It is based on unspecific 2D detection systems working independently in each camera, whose results are combined by means of a Bayesian association method based on geometry and color, allowing the 3D tracking of the objects in the scene with a particle filter. The tests performed show the excellent performance of the system, which even corrects possible failures of the 2D processing modules.
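
    As a rough sketch of the kind of machinery this abstract describes, the example below implements a minimal particle-filter update that fuses schematic 2D detections from several cameras through per-camera geometric likelihoods. The camera matrices, noise levels, and random-walk motion model are assumptions made for illustration, and the paper's colour-based association term is omitted; this is not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a particle filter tracks a 3D
# position, weighting each particle by how well its projection agrees with
# the schematic 2D detection reported by every camera.
import numpy as np

def project(P, x):
    """Project 3D point x with a 3x4 camera matrix P (pinhole model)."""
    u = P @ np.append(x, 1.0)
    return u[:2] / u[2]

def pf_step(particles, weights, detections, cameras, sigma_px=5.0):
    """particles: (N, 3) floats; weights: (N,) floats; detections: one 2D
    point (or None) per camera; sigma_px: assumed pixel noise."""
    # Predict: random-walk motion model (assumed; the paper's may differ).
    particles = particles + np.random.normal(0.0, 0.05, particles.shape)
    # Update: multiply per-camera geometric likelihoods (Bayesian association).
    for P, det in zip(cameras, detections):
        if det is None:          # this camera reported nothing this frame
            continue
        err = np.array([np.linalg.norm(project(P, p) - det) for p in particles])
        weights = weights * np.exp(-0.5 * (err / sigma_px) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = np.random.choice(len(weights), len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```

    A colour-similarity factor would multiply into the same weight update, which is what would let the filter reject geometrically plausible but wrongly associated detections.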

    Cognitive visual tracking and camera control

    Cognitive visual tracking is the process of observing and understanding the behaviour of a moving person. This paper presents an efficient solution to extract, in real time, high-level information from an observed scene and to generate the most appropriate commands for a set of pan-tilt-zoom (PTZ) cameras in a surveillance scenario. Such a high-level feedback control loop, which is the main novelty of our work, will serve to reduce uncertainties in the observed scene and to maximize the amount of information extracted from it. It is implemented with a distributed camera system using SQL tables as virtual communication channels, and Situation Graph Trees for knowledge representation, inference, and high-level camera control. A set of experiments in a surveillance scenario shows the effectiveness of our approach and its potential for real applications of cognitive vision.
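
    The "SQL tables as virtual communication channels" idea lends itself to a small sketch: tracker processes publish observations into a shared table, and the high-level controller polls it when deciding PTZ commands. The schema, column names, and sqlite backend below are invented for the example; the paper does not specify them.

```python
# Hedged sketch of SQL tables as a message channel between distributed
# camera processes and a high-level controller. All names are illustrative.
import sqlite3

db = sqlite3.connect("surveillance.db")
db.execute("""CREATE TABLE IF NOT EXISTS observations (
    ts REAL, camera_id INTEGER, target_id INTEGER, x REAL, y REAL)""")

def publish(ts, cam, target, x, y):
    """Called by each camera's tracker to post an observation."""
    db.execute("INSERT INTO observations VALUES (?, ?, ?, ?, ?)",
               (ts, cam, target, x, y))
    db.commit()

def latest(target):
    """Called by the controller; it would map the result to a PTZ command."""
    return db.execute("SELECT camera_id, x, y FROM observations "
                      "WHERE target_id = ? ORDER BY ts DESC LIMIT 1",
                      (target,)).fetchone()
```

    Using a database as the message bus gives persistence and language-independent access for free, at the cost of polling latency.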

    Behavior interpretation from traffic video streams

    Copyright © 2003 IEEE. This paper considers video surveillance research applied to traffic video streams. We present a framework for analyzing and recognizing different possible behaviors from image sequences acquired from a fixed camera. Two types of interactions have mainly been considered. In one, there is interaction between two or more mobile objects in the field of view (FOV) of the camera. The other is interaction between a mobile object and static objects in the environment. The framework is based on two types of a priori knowledge: (1) the contextual knowledge of the camera's FOV, in terms of the description of the different static objects of the scene, and (2) sets of predefined behaviors which need to be analyzed in different contexts. At present, the system is designed to recognize behavior from stored videos and retrieve the frames in which the specific behaviors took place. We demonstrate successful behavior recognition results for pedestrian-vehicle interactions and vehicle-checkpost interactions.
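
    One such predefined behaviour can be written as a simple rule over tracked positions and a static context zone, in the spirit of the two kinds of a priori knowledge listed above. The sketch below checks a hypothetical "vehicle stops at checkpost" behaviour; the zone geometry, speed threshold, and frame count are invented for the example.

```python
# Illustrative rule only, not the paper's rule set: detect a vehicle
# remaining nearly stationary inside a static checkpost zone.
def inside(zone, pos):
    (x0, y0, x1, y1), (x, y) = zone, pos
    return x0 <= x <= x1 and y0 <= y <= y1

def stops_at_checkpost(track, zone, speed_eps=0.5, min_frames=10):
    """track: list of (x, y, speed) per frame for one tracked vehicle."""
    run = 0
    for x, y, s in track:
        # Count consecutive frames that are both inside the zone and slow.
        run = run + 1 if inside(zone, (x, y)) and s < speed_eps else 0
        if run >= min_frames:
            return True
    return False
```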

    Stanford Aerospace Research Laboratory research overview

    Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be, addressed. This paper reviews two of the current ARL research areas: navigation and control of free-flying space robots, and modelling and control of extremely flexible space structures. The ARL has designed and built several semi-autonomous free-flying robots that perform numerous tasks in a zero-gravity, drag-free, two-dimensional environment. It is envisioned that future generations of these robots will be part of a human-robot team, in which the robots will operate under the task-level commands of astronauts. To make this possible, the ARL has developed a graphical user interface (GUI) with an intuitive object-level motion-direction capability. Using this interface, the ARL has demonstrated autonomous navigation, intercept and capture of moving and spinning objects, object transport, multiple-robot cooperative manipulation, and simple assemblies from both free-flying and fixed bases. The ARL has also built a number of experimental test beds on which the modelling and control of flexible manipulators has been studied. Early ARL experiments in this arena demonstrated for the first time the capability to control the end-point position of both single-link and multi-link flexible manipulators using end-point sensing. Building on these accomplishments, the ARL has been able to control payloads with unknown dynamics at the end of a flexible manipulator, and to achieve high-performance control of a multi-link flexible manipulator.

    Framework for real time behavior interpretation from traffic video

    © 2005 IEEE. Video-based surveillance systems have a wide range of applications for traffic monitoring, as they provide more information as compared to other sensors. In this paper, we present a rule-based framework for behavior and activity detection in traffic videos obtained from stationary video cameras. Moving targets are segmented from the images and tracked in real time. These are classified into different categories using a novel Bayesian network approach, which makes use of image features and image-sequence-based tracking results for robust classification. Tracking and classification results are used in a programmed context to analyze behavior. For behavior recognition, two types of interactions have mainly been considered. One is interaction between two or more mobile targets in the field of view (FoV) of the camera. The other is interaction between targets and stationary objects in the environment. The framework is based on two types of a priori information: 1) the contextual information of the camera's FoV, in terms of the different stationary objects in the scene, and 2) sets of predefined behavior scenarios, which need to be analyzed in different contexts. The system can recognize behavior from videos and give a lexical output of the detected behavior. It is also capable of handling uncertainties that arise due to errors in visual signal processing. We demonstrate successful behavior recognition results for pedestrian-vehicle interactions and vehicle-checkpost interactions.
    Kumar, P.; Ranganath, S.; Huang Weimin; Sengupta, K.
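
    As a stand-in for the paper's Bayesian network (whose exact structure the abstract does not give), the sketch below classifies a target with a naive Bayes model over per-frame image features and one sequence-level tracking feature. The feature set, Gaussian likelihoods, and class statistics are assumptions for illustration, not the authors' model.

```python
# Naive Bayes stand-in for the paper's Bayesian-network classifier.
# Class statistics below are invented numbers, not learned from the paper.
import numpy as np

CLASSES = {
    # class: (prior, feature means, feature std devs)
    "pedestrian": (0.5, np.array([800.0, 2.5, 1.2]), np.array([300.0, 0.6, 0.5])),
    "vehicle":    (0.5, np.array([5000.0, 0.6, 8.0]), np.array([2000.0, 0.2, 3.0])),
}

def classify(features):
    """features = [blob area px^2, height/width ratio, mean speed px/frame]."""
    best, best_lp = None, -np.inf
    for c, (prior, mu, sd) in CLASSES.items():
        # Log prior plus sum of independent Gaussian log-likelihoods.
        lp = np.log(prior) - 0.5 * np.sum(((features - mu) / sd) ** 2
                                          + np.log(2 * np.pi * sd ** 2))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

print(classify(np.array([900.0, 2.4, 1.0])))  # -> "pedestrian"
```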

    MIMO PID Controller Tuning Method for Quadrotor Based on LQR/LQG Theory

    In this work, a new method for pre-tuning multivariable PID (Proportional Integral Derivative) controllers for quadrotors is put forward. A procedure based on LQR/LQG (Linear Quadratic Regulator/Gaussian) theory is proposed for attitude and altitude control, which considerably simplifies the design problem, since only one pre-tuning parameter is used. To analyze the performance and robustness of the proposed method, a non-linear mathematical model of the DJI-F450 quadrotor is employed, in which the rotor dynamics, together with the drift/bias properties and noise characteristics of the low-cost commercial sensors typically used in this type of application, are considered. In order to estimate the state vector and compensate for bias/drift effects in the measurements, a combination of filtering and data-fusion algorithms (a Kalman filter and the Madgwick algorithm for attitude estimation) is proposed and implemented. The performance and robustness of the control system are analyzed through numerical simulations that take into account the presence of uncertainty in the plant model and external disturbances. The results show that the proposed design method for multivariable PID controllers is robust with respect to: (a) parametric uncertainty in the plant model, (b) disturbances acting at the plant input, and (c) sensor measurement and estimation errors.
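
    The single-parameter LQR-to-PID idea can be illustrated on the altitude axis. Assuming a mass-normalised double-integrator model (an assumption made for this sketch, not the authors' exact derivation), solving the continuous-time Riccati equation with one scalar control weight rho yields state-feedback gains that can be read off directly as the PD gains of the altitude loop.

```python
# Sketch of LQR-derived PD gains for altitude, under an assumed
# double-integrator model; rho is the single pre-tuning parameter.
import numpy as np
from scipy.linalg import solve_continuous_are

def pd_gains_from_lqr(rho):
    A = np.array([[0.0, 1.0], [0.0, 0.0]])   # state: [altitude error, rate]
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)                             # unit state weighting (assumed)
    R = np.array([[rho]])                     # single pre-tuning parameter
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)           # optimal gain K = R^-1 B^T P
    kp, kd = K[0]
    return kp, kd                             # control law: u = -(kp*e + kd*edot)

print(pd_gains_from_lqr(rho=0.1))  # smaller rho -> cheaper control -> higher gains
```

    Integral action, needed for a full PID, could be recovered the same way by augmenting the state with the integral of the altitude error.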