
    A distributed camera system for multi-resolution surveillance

    We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communication and archiving of data are achieved simply and effectively via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines whether active zoom cameras should be dispatched to observe a particular target, and this message is effected by writing demands into another database table. We show results from a real implementation of the system, comprising one static camera overlooking the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its applicability to multi-camera systems for intelligent surveillance.
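The repository-based coordination described above can be sketched in a few lines. The table and column names (`tracks`, `ptz_demands`) and the pan/tilt mapping are invented for illustration; only the pattern of clients writing tracking data and a supervisor writing demands into another table comes from the abstract.

```python
# Minimal sketch of the SQL-repository dispatch pattern; schema names and the
# pan/tilt mapping are hypothetical, not taken from the described system.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tracks (target_id INTEGER, x REAL, y REAL, priority REAL)")
cur.execute("CREATE TABLE ptz_demands (camera_id INTEGER, target_id INTEGER, pan REAL, tilt REAL)")

# A static-camera client process inserts its tracking data.
cur.execute("INSERT INTO tracks VALUES (?, ?, ?, ?)", (7, 120.0, 45.0, 0.9))

# The supervisor polls for high-priority targets and writes a demand into
# another table, which a PTZ-camera process would later pick up.
for target_id, x, y, _ in cur.execute(
        "SELECT * FROM tracks WHERE priority > 0.5").fetchall():
    cur.execute("INSERT INTO ptz_demands VALUES (?, ?, ?, ?)",
                (1, target_id, x / 10, y / 10))  # toy image-to-pan/tilt mapping
conn.commit()

demands = cur.execute("SELECT * FROM ptz_demands").fetchall()
print(demands)  # → [(1, 7, 12.0, 4.5)]
```

Because all communication goes through the database, the camera processes never talk to each other directly, which is what makes the interprocess communication asynchronous.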

    A graphical model based solution to the facial feature point tracking problem

    In this paper, a facial feature point tracker motivated by applications such as human-computer interfaces and facial expression analysis systems is proposed. The proposed tracker is based on a graphical model framework. The facial features are tracked through video streams by incorporating statistical relations in time as well as spatial relations between feature points. By exploiting the spatial relationships between feature points, the proposed method provides robustness in real-world conditions such as arbitrary head movements and occlusions. A Gabor feature-based occlusion detector is developed and used to handle occlusions. The performance of the proposed tracker has been evaluated on real video data under various conditions, including occluded facial gestures and head movements. It is also compared to two popular methods: one based on Kalman filtering exploiting temporal relations, and the other based on active appearance models (AAM). Improvements provided by the proposed approach are demonstrated through both visual displays and quantitative analysis.
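A Gabor feature-based occlusion check of the kind mentioned above can be sketched as follows. The kernel parameters and the drop threshold are illustrative assumptions; the abstract only states that Gabor features are used to detect occlusion.

```python
# Hedged sketch of a Gabor-feature occlusion detector: when the filter response
# at a feature point deviates strongly from the stored template response, the
# point is flagged as occluded. All parameters here are invented.
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a Gabor filter: Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_response(patch, kernel):
    """Scalar filter response of an image patch."""
    return float(np.sum(patch * kernel))

def is_occluded(template_resp, current_resp, thresh=0.5):
    # Flag occlusion when the response drops far from the template's value.
    return abs(current_resp - template_resp) / (abs(template_resp) + 1e-9) > thresh

k = gabor_kernel()
patch = k.copy()                      # synthetic "feature patch" matching the filter
template = gabor_response(patch, k)
assert not is_occluded(template, gabor_response(patch, k))          # unoccluded
assert is_occluded(template, gabor_response(np.zeros((9, 9)), k))   # blanked out
```

In a full tracker this check would gate the measurement update, so occluded points rely on the spatial relations to neighbouring features instead.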

    People tracking by cooperative fusion of RADAR and camera sensors

    Accurate 3D tracking of objects from a monocular camera poses challenges due to the loss of depth during projection. Although ranging by RADAR has proven effective in highway environments, people tracking remains beyond the capability of single-sensor systems. In this paper, we propose a cooperative RADAR-camera fusion method for people tracking on the ground plane. Using average person height, a joint detection likelihood is calculated by back-projecting detections from the camera onto the RADAR range-azimuth data. Peaks in the joint likelihood, representing candidate targets, are fed into a particle filter tracker. Depending on the association outcome, particles are updated using the associated detections (tracking by detection) or by sampling the raw likelihood itself (tracking before detection). Utilizing the raw likelihood data has the advantage that lost targets are continuously tracked even if the camera or RADAR signal is below the detection threshold. We show that in single-target, uncluttered environments, the proposed method clearly outperforms camera-only tracking. Experiments in a real-world urban environment also confirm that the cooperative fusion tracker produces significantly better estimates, even in difficult and ambiguous situations.
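The joint-likelihood construction can be illustrated with a toy pinhole model: the bounding-box height of a camera detection, combined with an assumed person height, yields a range estimate, which is multiplied with the RADAR likelihood over a range-azimuth grid. Focal length, grid resolutions, and noise widths below are all assumptions.

```python
# Illustrative sketch of back-projecting a camera detection onto range-azimuth
# data and forming a joint likelihood; geometry and grids are toy assumptions.
import numpy as np

PERSON_HEIGHT = 1.7   # m, the "average person height" prior
FOCAL = 500.0         # px, assumed camera focal length

ranges = np.linspace(1.0, 20.0, 40)                # m
azimuths = np.radians(np.linspace(-30, 30, 61))    # rad

def camera_likelihood(bbox_height_px, bearing_rad, sigma_r=1.0, sigma_a=np.radians(3)):
    # Pinhole model: apparent height in pixels -> range estimate.
    est_range = FOCAL * PERSON_HEIGHT / bbox_height_px
    r, a = np.meshgrid(ranges, azimuths, indexing="ij")
    return np.exp(-0.5 * ((r - est_range) / sigma_r) ** 2
                  - 0.5 * ((a - bearing_rad) / sigma_a) ** 2)

def radar_likelihood(peak_range, sigma_r=0.5):
    # RADAR gives a sharp range response; flat in azimuth for this toy example.
    r, _ = np.meshgrid(ranges, azimuths, indexing="ij")
    return np.exp(-0.5 * ((r - peak_range) / sigma_r) ** 2)

# A 100 px tall person at 5 degrees, with a RADAR return at 8.5 m.
joint = camera_likelihood(100.0, np.radians(5)) * radar_likelihood(8.5)
i, j = np.unravel_index(np.argmax(joint), joint.shape)
print(ranges[i], np.degrees(azimuths[j]))  # peak near 8.5 m, 5 degrees
```

Peaks in this joint surface would then seed the particle filter; when association fails, particles can instead be weighted by sampling `joint` directly, which is the tracking-before-detection branch.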

    A Flexible Image Processing Framework for Vision-based Navigation Using Monocular Image Sensors

    On-Orbit Servicing (OOS) encompasses all operations related to servicing satellites and performing other work on orbit, such as the reduction of space debris. Servicing satellites includes repairs, refueling, attitude control, and other tasks that may be needed to put a failed satellite back into working condition. A servicing satellite requires accurate position and orientation (pose) information about the target spacecraft. A wide range of sensor families is available to meet this need. However, when it comes to minimizing the mass, space, and power required for a sensor system, monocular imaging sensors generally perform very well. A disadvantage, compared to LIDAR sensors, is that costly computations are needed to process the sensor data. The method presented in this paper addresses these problems through three design principles. First: keep the computational burden as low as possible. Second: utilize different algorithms and choose among them, depending on the situation, to retrieve the most stable results. Third: stay modular and flexible. The software is designed primarily for use in On-Orbit Servicing tasks where, for example, a servicer spacecraft approaches an uncooperative client spacecraft, which cannot aid in the process in any way, as it is assumed to be completely passive. Image processing is used for navigating to the client spacecraft. In this specific scenario, it is vital to obtain accurate distance and bearing information until, in the last few meters, all six degrees of freedom must be known. The smaller the distance between the spacecraft, the more accurate the pose estimates required. The algorithms used here are tested and optimized on a sophisticated rendezvous and docking simulation facility, the second-generation European Proximity Operations Simulator (EPOS 2.0), located at the German Space Operations Center (GSOC) in Weßling, Germany. This simulation environment is real-time capable and provides an interface for testing sensor system hardware in a closed-loop configuration. The results from these tests are summarized in the paper as well. Finally, an outlook on future work is given, with the intention of providing some long-term goals, as the paper presents a snapshot of ongoing, not yet completed work. Moreover, it serves as an overview of additions that can further improve the presented method.
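The second design principle, switching algorithms depending on the situation, can be sketched as a simple range-gated dispatcher. The thresholds and estimator names below are invented for illustration; the abstract only states that accuracy requirements tighten as the spacecraft close in, culminating in full six-degree-of-freedom pose in the last few meters.

```python
# Sketch of the "choose among algorithms depending on the situation" principle;
# distance thresholds and estimator names are hypothetical.
def select_estimator(distance_m):
    """Pick a pose-estimation stage based on range to the client spacecraft."""
    if distance_m > 50.0:
        return "bearing_only"       # far range: distance and bearing suffice
    if distance_m > 5.0:
        return "coarse_pose"        # mid range: approximate attitude
    return "full_6dof_pose"         # last few meters: all six degrees of freedom

assert select_estimator(120.0) == "bearing_only"
assert select_estimator(20.0) == "coarse_pose"
assert select_estimator(2.0) == "full_6dof_pose"
```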

    Thermo-visual feature fusion for object tracking using multiple spatiogram trackers

    In this paper, we propose a framework that can efficiently combine features for robust tracking based on fusing the outputs of multiple spatiogram trackers. This is achieved without the exponential increase in storage and processing that other multimodal tracking approaches suffer from. The framework allows the features to be split arbitrarily between the trackers, as well as providing the flexibility to add, remove, or dynamically weight features. We derive a mean-shift-type algorithm for the framework that allows efficient object tracking with very low computational overhead. We especially target the fusion of thermal infrared and visible spectrum features as the most useful features for automated surveillance applications. Results are shown on multimodal video sequences, clearly illustrating the benefits of combining multiple features using our framework.
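The fusion step can be illustrated as a weighted combination of the displacement each single-feature tracker proposes, which is why the cost stays linear in the number of features rather than exponential. The weights and displacements below are illustrative, not the paper's derivation.

```python
# Toy sketch of fusing several single-feature trackers: each proposes a
# mean-shift displacement, and the fused update is their weighted average.
import numpy as np

def fuse_displacements(displacements, weights):
    """Weighted mean of per-tracker displacement vectors (one row per tracker)."""
    d = np.asarray(displacements, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalise the dynamic feature weights
    return (w[:, None] * d).sum(axis=0)

# e.g. a thermal tracker (weight 3) and a visible-spectrum tracker (weight 1)
step = fuse_displacements([[2.0, 0.0], [0.0, 4.0]], [3.0, 1.0])
print(step)  # → [1.5 1.0], pulled toward the thermal tracker's suggestion
```

Dynamically re-weighting a feature (or setting its weight to zero) changes only `w`, which is what makes adding and removing features cheap in such a scheme.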

    The sensing and perception subsystem of the NASA research telerobot

    A useful space telerobot for on-orbit assembly, maintenance, and repair tasks must have a sensing and perception subsystem that can provide the locations, orientations, and velocities of all relevant objects in the work environment. This function must be accomplished with sufficient speed and accuracy to permit effective grappling and manipulation. Appropriate symbolic names must be attached to each object for use by higher-level planning algorithms. Sensor data and inferences must be presented to the remote human operator in a way that is both comprehensible enough to ensure safe autonomous operation and useful for direct teleoperation. Research at JPL toward these objectives is described.