
    Information theoretic sensor management for multi-target tracking with a single pan-tilt-zoom camera

    Automatic multiple target tracking with pan-tilt-zoom (PTZ) cameras is a hard task, with few approaches in the literature, most of them proposing simplistic scenarios. In this paper, we present a PTZ camera management framework which rests on information theoretic principles: at each time step, the next camera pose (pan, tilt, focal length) is chosen according to a policy which ensures maximum information gain. The formulation takes into account occlusions, the physical extension of targets, realistic pedestrian detectors and the mechanical constraints of the camera. Convincing comparative results on synthetic data, realistic simulations and the implementation on a real video surveillance camera validate the effectiveness of the proposed method.
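    The pose-selection policy described in this abstract can be sketched as a greedy maximization of expected information gain over a discrete set of candidate poses. The Gaussian entropy model, the fixed variance-reduction factor, and all names below are illustrative assumptions, not the paper's actual formulation:

```python
import math

# Illustrative sketch of a myopic information-gain policy: evaluate a
# discrete set of candidate (pan, tilt, zoom) poses and pick the one with
# the greatest expected entropy reduction over the tracked targets.

def gaussian_entropy(variance):
    """Differential entropy of a 1-D Gaussian with the given variance."""
    return 0.5 * math.log(2 * math.pi * math.e * variance)

def expected_info_gain(pose, targets):
    """Expected entropy reduction if the camera moves to `pose`.

    A target inside the candidate field of view is assumed to be
    re-observed, shrinking its position variance by a factor of 4.
    """
    pan, tilt, zoom = pose
    half_fov = 30.0 / zoom              # assumed: FOV narrows as zoom grows
    gain = 0.0
    for t in targets:
        if abs(t["bearing"] - pan) <= half_fov:        # target in view
            gain += gaussian_entropy(t["var"]) - gaussian_entropy(t["var"] / 4)
    return gain

def best_pose(candidates, targets):
    return max(candidates, key=lambda p: expected_info_gain(p, targets))

targets = [{"bearing": -26.0, "var": 4.0}, {"bearing": -15.0, "var": 9.0}]
poses = [(-18.0, 0.0, 1.0), (40.0, 0.0, 1.0), (-18.0, 0.0, 4.0)]
print(best_pose(poses, targets))  # -> (-18.0, 0.0, 1.0): sees both targets
```

    The wide pose wins here because zooming in would exclude one target from the field of view, losing its entropy reduction, which mirrors the zoom-versus-coverage trade-off the framework optimizes.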

    Object tracking with a pan-tilt-zoom camera: application to car driving assistance

    In this paper, visual perception for car driving assistance is considered. The work deals with the development of a system combining a pan-tilt-zoom (PTZ) camera and a standard camera in order to track the vehicles in front. The standard camera has a small focal length and is devoted to the analysis of the whole frontal scene, while the PTZ camera is used to track the closest vehicle. Camera rotations and zoom are controlled by visual servoing and by an efficient real-time target tracking algorithm. The aim of this work is to keep the rear-view image of the target vehicle stable in scale and position. The methods presented were tested on real road sequences with the VELAC demonstration vehicle. Experimental results show the effectiveness of such an approach.

    Non-myopic information theoretic sensor management of a single pan-tilt-zoom camera for multiple object detection and tracking

    Highlights: detailed derivation of an information theoretic framework for real PTZ management; introduction and implementation of a non-myopic strategy; large experimental validation with synthetic and realistic datasets; working demonstration of the myopic strategy on an off-the-shelf PTZ camera.

    Automatic multiple object tracking with a single pan-tilt-zoom (PTZ) camera is a hard task, with few approaches in the literature, most of them proposing simplistic scenarios. In this paper, we present a novel PTZ camera management framework in which, at each time step, the next camera pose (pan, tilt, focal length) is chosen to support multiple object tracking. The policy can be myopic or non-myopic: the former analyzes only the current frame to decide the next camera pose, while the latter takes into account plausible future target displacements and camera poses through a multiple look-ahead optimization. In both cases, occlusions, a variable number of subjects and genuine pedestrian detectors are taken into account, for the first time in the literature. Convincing comparative results on synthetic data, realistic simulations and real trials validate our proposal, showing that non-myopic strategies are particularly suited to PTZ camera management.
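    The myopic/non-myopic distinction can be illustrated with a toy 1-D look-ahead planner. Everything below (angles, drift model, slew limit, names) is an assumption for illustration, not the paper's formulation; the point is only that a camera with limited slew can profit from anticipating target motion:

```python
# Toy 1-D illustration of why a non-myopic policy can beat a myopic one:
# poses and targets are pan angles, a pose sees targets within FOV degrees,
# targets drift by DRIFT per step, and the camera can slew at most SLEW
# degrees between consecutive steps (mechanical constraint).
FOV, DRIFT, SLEW = 2.0, 3.0, 4.0
POSES = [4.0, 10.0, 13.0]

def gain(pose, targets):
    """Number of targets inside the field of view at `pose`."""
    return sum(1 for t in targets if abs(t - pose) <= FOV)

def predict(targets):
    """Plausible next-step target positions (constant-drift model)."""
    return [t + DRIFT for t in targets]

def plan(current_pose, targets, depth):
    """Return (best_next_pose, cumulative_gain) over `depth` look-ahead steps."""
    best = (None, float("-inf"))
    for p in POSES:
        if abs(p - current_pose) > SLEW:   # unreachable this step
            continue
        value = gain(p, targets)
        if depth > 1:
            value += plan(p, predict(targets), depth - 1)[1]
        if value > best[1]:
            best = (p, value)
    return best

targets = [4.0, 10.0]
print(plan(8.0, targets, depth=1)[0])  # myopic: 4.0 (ties broken by order)
print(plan(8.0, targets, depth=2)[0])  # non-myopic: 10.0, to reach 13.0 next
```

    With one step of look-ahead the planner picks 10.0 even though it scores no better immediately, because only from there can it slew to 13.0 and catch the drifting target on the following step.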

    Zoom techniques for achieving scale invariant object tracking in real-time active vision systems

    In a surveillance system, a camera operator follows an object of interest by moving the camera, then gains additional information about the object by zooming. As the active vision field advances, the ability to automate such a system is nearing fruition. One hurdle limiting the use of object recognition algorithms in real-time systems is the quality of captured imagery; recognition algorithms often have strict scale and position requirements, and if those parameters are not met, performance rapidly degrades to failure. The ability of an automatic fixation system to capture quality video of an accelerating target is directly related to the response time of the mechanical pan, tilt, and zoom platform; however, the price of such a platform rises with its performance. The goal of this work is to create a system that provides scale-invariant tracking using inexpensive off-the-shelf components. Since optical zoom acts as a measurement gain, amplifying both resolution and tracking error, a second camera with fixed focal length assists the zooming camera if it loses fixation, effectively clipping the error. Furthermore, digital zoom adjusts the captured image to ensure position and scale invariance for the higher-level application. The implemented system uses two Sony EVI-D100 cameras on a 2.8GHz dual Pentium Xeon PC. This work presents experiments that exhibit the effectiveness of the system.
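    The digital-zoom adjustment described here amounts to computing a crop window that re-centers the target and fixes its apparent size before resampling. The sizing convention (target spanning a fixed fraction of the output width) and all names are assumptions for this sketch:

```python
def digital_zoom(frame_w, frame_h, target, out_w, out_h, desired_frac=0.25):
    """Crop window that re-centers the target at a fixed apparent size.

    `target` is (cx, cy, w, h) in captured-frame pixels; the crop is sized
    so the target spans `desired_frac` of the output width (an assumed
    convention), preserves the output aspect ratio, and is clamped to the
    frame. The application then resamples the crop to (out_w, out_h).
    """
    cx, cy, tw, th = target
    crop_w = min(frame_w, tw / desired_frac)
    crop_h = crop_w * out_h / out_w            # preserve output aspect ratio
    x0 = min(max(cx - crop_w / 2, 0), frame_w - crop_w)
    y0 = min(max(cy - crop_h / 2, 0), frame_h - crop_h)
    return x0, y0, crop_w, crop_h

# 48-px-wide target centered in a 640x480 frame, 320x240 output window:
print(digital_zoom(640, 480, (320, 240, 48, 96), 320, 240))
# -> (224.0, 168.0, 192.0, 144.0)
```

    Because the crop follows the target between mechanical camera moves, the higher-level recognition stage always receives the target at roughly the same position and scale, which is exactly the invariance the abstract motivates.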

    A distributed camera system for multi-resolution surveillance

    We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communication and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines if active zoom cameras should be dispatched to observe a particular target, and this message is effected by writing demands into another database table. We show results from a real implementation of the system comprising one static camera overviewing the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its applicability to multi-camera systems for intelligent surveillance.
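    The database-mediated messaging described above can be sketched with SQLite standing in for the SQL server. The table and column names are assumptions; the point is that one process writes tracking rows, the supervisor reads them and writes a demand row, and the PTZ camera process polls for demands:

```python
import sqlite3

# Minimal sketch of database-mediated messaging between camera processes
# and a supervisor; schema and values are illustrative assumptions.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tracks (target_id INTEGER, x REAL, y REAL, ts REAL)")
db.execute("CREATE TABLE demands (camera_id INTEGER, target_id INTEGER, "
           "pan REAL, tilt REAL, zoom REAL)")

# A static-camera tracking process inserts its observations...
db.execute("INSERT INTO tracks VALUES (7, 12.5, 3.0, 0.04)")

# ...and the supervisor dispatches the PTZ camera to the latest target
# by writing a demand row, which the camera process polls for.
tid, x, y, _ = db.execute("SELECT * FROM tracks ORDER BY ts DESC").fetchone()
db.execute("INSERT INTO demands VALUES (1, ?, ?, ?, 2.0)", (tid, x, y))

demand = db.execute("SELECT * FROM demands WHERE camera_id = 1").fetchone()
print(demand)  # (1, 7, 12.5, 3.0, 2.0)
```

    Using the database as the only communication channel keeps the processes fully decoupled: producers and consumers need only agree on the table schema, and the repository doubles as an archive of all traffic.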

    Reproducible Evaluation of Pan-Tilt-Zoom Tracking

    Tracking with a Pan-Tilt-Zoom (PTZ) camera has been a research topic in computer vision for many years. However, it is very difficult to assess the progress that has been made on this topic because there is no standard evaluation methodology. The difficulty in evaluating PTZ tracking algorithms arises from their dynamic nature. In contrast to other forms of tracking, PTZ tracking involves both locating the target in the image and controlling the motors of the camera to aim it so that the target stays in its field of view. This type of tracking can only be performed online. In this paper, we propose a new evaluation framework based on a virtual PTZ camera. With this framework, tracking scenarios do not change between experiments, and we are able to replicate online PTZ camera control and behavior, including camera positioning delays, tracker processing delays, and numerical zoom. We tested our evaluation framework with the Camshift tracker to show its viability and to establish baseline results.

    Comment: this is an extended version of the 2015 ICIP paper "Reproducible Evaluation of Pan-Tilt-Zoom Tracking"
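    The core of such a reproducible evaluation can be sketched as a deterministic replay loop in which tracker commands take effect only after the processing and positioning delays have elapsed. The 1-D scene, the delay units (frames), and all names are assumptions for illustration:

```python
# Sketch of a repeatable virtual-PTZ evaluation loop: the scenario is a
# fixed sequence of target pan angles, and tracker commands take effect
# only after processing and camera-positioning delays.

def evaluate(tracker, scenario, cam_delay=2, proc_delay=1, fov=5.0):
    """Fraction of frames with the target inside the field of view.

    A command issued at frame t is applied at frame t + proc_delay +
    cam_delay; the tracker only observes the target while it is in view.
    """
    total_delay = cam_delay + proc_delay
    pose, pending, hits = 0.0, [], 0
    for t, target in enumerate(scenario):
        while pending and pending[0][0] <= t:   # command delay elapsed
            pose = pending.pop(0)[1]
        if abs(target - pose) <= fov:
            hits += 1
            pending.append((t + total_delay, tracker(target)))
    return hits / len(scenario)

# The same naive tracker (aim at the last sighting) scores very differently
# depending on how fast the target moves relative to the total delay:
aim = lambda seen: seen
print(evaluate(aim, [i * 1.0 for i in range(30)]))  # 1.0  (lag stays in FOV)
print(evaluate(aim, [i * 2.0 for i in range(30)]))  # 0.1  (loses the target)
```

    Because the scenario and delays are fixed, every run of the loop is identical, which is what makes head-to-head tracker comparisons repeatable in a way live PTZ experiments are not.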

    Evaluation of trackers for Pan-Tilt-Zoom Scenarios

    Tracking with a Pan-Tilt-Zoom (PTZ) camera has been a research topic in computer vision for many years. Compared to tracking with a still camera, the images captured with a PTZ camera are highly dynamic in nature because the camera can perform large motions, resulting in quickly changing capture conditions. Furthermore, tracking with a PTZ camera involves camera control to position the camera on the target. For successful tracking and camera control, the tracker must be fast enough, or must be able to accurately predict the next position of the target. Therefore, standard benchmarks do not allow a proper assessment of the quality of a tracker for the PTZ scenario. In this work, we use a virtual PTZ framework to evaluate different tracking algorithms and compare their performances. We also extend the framework to add target position prediction for the next frame, accounting for camera motion and processing delays. By doing this, we can assess whether prediction can make long-term tracking more robust, as it may help slower algorithms keep the target in the field of view of the camera. Results confirm that both speed and robustness are required for tracking under the PTZ scenario.

    Comment: 6 pages, 2 figures, International Conference on Pattern Recognition and Artificial Intelligence 201
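    Target position prediction that accounts for camera motion can be sketched as follows: image-space observations are first converted to absolute bearings by adding the camera pose at capture time, and a motion model then extrapolates ahead by the known delay. The constant-velocity model and all names are assumptions for this sketch:

```python
def predict_pan(observations, camera_pans, lead):
    """Predict the target's absolute pan angle `lead` frames ahead.

    `observations` are image-space offsets (degrees from frame center) and
    `camera_pans` the camera pose when each frame was captured; adding them
    gives absolute target bearings, from which a constant-velocity model
    extrapolates. Without the camera poses, a target the camera is smoothly
    following would appear stationary and its motion would be lost.
    """
    absolute = [o + p for o, p in zip(observations, camera_pans)]
    velocity = (absolute[-1] - absolute[0]) / (len(absolute) - 1)
    return absolute[-1] + velocity * lead

# Target stays 1 degree right of center while the camera pans 2 deg/frame:
# all motion is in the camera poses, yet the predictor recovers it.
print(predict_pan([1.0, 1.0, 1.0], [10.0, 12.0, 14.0], lead=3))  # -> 21.0
```

    Aiming the camera at the predicted bearing rather than the last observed one is what compensates for the positioning and processing delays, which is the robustness benefit the abstract evaluates.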

    Cognitive visual tracking and camera control

    Cognitive visual tracking is the process of observing and understanding the behaviour of a moving person. This paper presents an efficient solution to extract, in real time, high-level information from an observed scene and generate the most appropriate commands for a set of pan-tilt-zoom (PTZ) cameras in a surveillance scenario. Such a high-level feedback control loop, which is the main novelty of our work, serves to reduce uncertainties in the observed scene and to maximize the amount of information extracted from it. It is implemented with a distributed camera system, using SQL tables as virtual communication channels and Situation Graph Trees for knowledge representation, inference and high-level camera control. A set of experiments in a surveillance scenario shows the effectiveness of our approach and its potential for real applications of cognitive vision.

    An electronic pan/tilt/zoom camera system

    A camera system for omnidirectional image viewing applications was developed that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts. The imaging device is based on the principle that the distorted image produced by a fisheye lens, which captures an entire hemispherical FOV as a circular image, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video signal for viewing, recording, or analysis. As a result, the device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnification and pan, tilt, and rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
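    The per-pixel correction can be sketched as mapping each output pixel through a virtual perspective camera onto the fisheye image. The equidistant fisheye model (radius proportional to the angle from the optical axis) and all parameter names are assumptions for illustration; the actual device implements its transformation in dedicated hardware:

```python
import math

# Map an output (dewarped) pixel back to fisheye-image coordinates for a
# given viewing direction and zoom, assuming an equidistant fisheye model.

def dewarp_pixel(u, v, out_w, out_h, pan, tilt, zoom, fish_cx, fish_cy, fish_r):
    """Return the fisheye-image coordinates sampled for output pixel (u, v)."""
    # Ray through the virtual perspective camera's pixel.
    focal = zoom * out_w / 2
    x = (u - out_w / 2) / focal
    y = (v - out_h / 2) / focal
    z = 1.0
    # Rotate the ray by tilt (about the x-axis), then pan (about the y-axis).
    y, z = (y * math.cos(tilt) - z * math.sin(tilt),
            y * math.sin(tilt) + z * math.cos(tilt))
    x, z = (x * math.cos(pan) + z * math.sin(pan),
            -x * math.sin(pan) + z * math.cos(pan))
    # Equidistant fisheye: image radius proportional to angle off-axis.
    theta = math.acos(z / math.sqrt(x * x + y * y + z * z))
    phi = math.atan2(y, x)
    r = fish_r * theta / (math.pi / 2)   # 90 deg maps to the circle's edge
    return fish_cx + r * math.cos(phi), fish_cy + r * math.sin(phi)

# The output-center pixel with no pan/tilt looks straight down the optical
# axis, so it samples the center of the fisheye circle.
print(dewarp_pixel(160, 120, 320, 240, 0.0, 0.0, 1.0, 256, 256, 256))
```

    Iterating this mapping over every output pixel (with interpolation at the fractional fisheye coordinates) yields the corrected view, and changing pan, tilt, or zoom only changes the transformation parameters, never any mechanism.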