
    People tracking by cooperative fusion of RADAR and camera sensors

    Accurate 3D tracking of objects from a monocular camera poses challenges due to the loss of depth during projection. Although ranging by RADAR has proven effective in highway environments, people tracking remains beyond the capability of single-sensor systems. In this paper, we propose a cooperative RADAR-camera fusion method for people tracking on the ground plane. Using average person height, a joint detection likelihood is calculated by back-projecting detections from the camera onto the RADAR range-azimuth data. Peaks in the joint likelihood, representing candidate targets, are fed into a particle filter tracker. Depending on the association outcome, particles are updated using the associated detections (Tracking by Detection) or by sampling the raw likelihood itself (Tracking Before Detection). Utilizing the raw likelihood data has the advantage that lost targets are continuously tracked even if the camera or RADAR signal is below the detection threshold. We show that in single-target, uncluttered environments, the proposed method clearly outperforms camera-only tracking. Experiments in a real-world urban environment also confirm that the cooperative fusion tracker produces significantly better estimates, even in difficult and ambiguous situations.
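    A minimal Python sketch of the joint-likelihood step described above, assuming a pinhole camera, a known average person height, and a Gaussian spread around each back-projected detection; the function names, grid variables, and spread parameters are illustrative assumptions rather than the authors' implementation:

    ```python
    import numpy as np

    def camera_to_range_azimuth(u_px, bbox_height_px, focal_px, avg_height_m=1.7):
        """Back-project a monocular detection to range and azimuth using average person height."""
        rng = focal_px * avg_height_m / bbox_height_px   # pinhole model: apparent height -> range
        azimuth = np.arctan2(u_px, focal_px)             # horizontal offset from image centre -> bearing
        return rng, azimuth

    def joint_likelihood(radar_ra_map, rng_bins, az_bins, cam_detections, focal_px,
                         sigma_r=0.5, sigma_a=0.05):
        """Combine the RADAR range-azimuth map with back-projected camera detections."""
        R, A = np.meshgrid(rng_bins, az_bins, indexing="ij")
        cam_map = np.zeros_like(radar_ra_map)
        for u_px, h_px in cam_detections:                # (horizontal offset, bbox height) in pixels
            r0, a0 = camera_to_range_azimuth(u_px, h_px, focal_px)
            cam_map += np.exp(-0.5 * (((R - r0) / sigma_r) ** 2 + ((A - a0) / sigma_a) ** 2))
        return radar_ra_map * (1e-3 + cam_map)           # peaks = candidate targets for the particle filter
    ```

    The small additive constant keeps the RADAR-only term alive where the camera produces no detection, which is what allows the tracker to fall back on the raw likelihood when one modality drops below threshold.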

    Data association and occlusion handling for vision-based people tracking by mobile robots

    This paper presents an approach for tracking multiple persons on a mobile robot with a combination of colour and thermal vision sensors, using several new techniques. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is incorporated into the tracker. The paper presents a comprehensive, quantitative evaluation of the whole system and its different components using several real-world data sets.
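    The pairwise occlusion classifier could look roughly like the sketch below; the feature set, the use of scikit-learn's LogisticRegression, and the tiny synthetic training step are assumptions made only so the example runs, not details from the paper:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def pair_features(det_a, det_b):
        """Features comparing two overlapping detections (which person is in front?)."""
        return np.array([
            det_a["bbox_height"] / det_b["bbox_height"],    # the closer person usually appears taller
            det_a["bbox_bottom"] - det_b["bbox_bottom"],    # a lower foot point suggests a closer person
            det_a["thermal_mean"] - det_b["thermal_mean"],  # thermal contrast between the two regions
        ])

    # Tiny synthetic training set (1 = "a occludes b", 0 = "b occludes a") so the sketch runs;
    # in practice the classifier would be trained offline on labelled pairs.
    X_demo = np.array([[1.3, 15.0, 4.0], [0.8, -12.0, -3.0]])
    y_demo = np.array([1, 0])
    occlusion_clf = LogisticRegression().fit(X_demo, y_demo)

    def who_is_in_front(det_a, det_b):
        """Return which detection the classifier believes is occluding the other."""
        p_a_front = occlusion_clf.predict_proba(pair_features(det_a, det_b).reshape(1, -1))[0, 1]
        return ("a", p_a_front) if p_a_front >= 0.5 else ("b", 1.0 - p_a_front)
    ```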

    Distributed Object Tracking Using a Cluster-Based Kalman Filter in Wireless Camera Networks

    Local data aggregation is an effective means to save sensor node energy and prolong the lifespan of wireless sensor networks. However, when a sensor network is used to track moving objects, the task of local data aggregation in the network presents a new set of challenges, such as the necessity to estimate, usually in real time, the constantly changing state of the target based on information acquired by the nodes at different time instants. To address these issues, we propose a distributed object tracking system which employs a cluster-based Kalman filter in a network of wireless cameras. When a target is detected, cameras that can observe the same target interact with one another to form a cluster and elect a cluster head. Local measurements of the target acquired by members of the cluster are sent to the cluster head, which then estimates the target position via Kalman filtering and periodically transmits this information to a base station. The underlying clustering protocol allows the current state and uncertainty of the target position to be easily handed off among clusters as the object is being tracked. This allows Kalman filter-based object tracking to be carried out in a distributed manner. An extended Kalman filter is necessary since measurements acquired by the cameras are related to the actual position of the target by nonlinear transformations. In addition, in order to take into consideration the time uncertainty in the measurements acquired by the different cameras, it is necessary to introduce nonlinearity in the system dynamics. Our object tracking protocol requires the transmission of significantly fewer messages than a centralized tracker that naively transmits all of the local measurements to the base station. It is also more accurate than a decentralized tracker that employs linear interpolation for local data aggregation. Besides, the protocol is able to perform real-time estimation because our implementation takes into consideration the sparsity of the matrices involved in the problem. The experimental results show that our distributed object tracking protocol is able to achieve tracking accuracy comparable to the centralized tracking method, while requiring a significantly smaller number of message transmissions in the network.
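    A minimal sketch of the extended Kalman filter update a cluster head might run, assuming a planar position-velocity state and bearing-only camera measurements; the state parameterization and measurement model are assumptions for illustration, not the paper's exact formulation:

    ```python
    import numpy as np

    def ekf_update(x, P, z_bearing, cam_pos, R_meas):
        """One EKF measurement update with a single camera's bearing to the target.

        x: state [px, py, vx, vy]; P: 4x4 covariance; cam_pos: (cx, cy); R_meas: 1x1 noise covariance.
        """
        dx, dy = x[0] - cam_pos[0], x[1] - cam_pos[1]
        z_pred = np.arctan2(dy, dx)                       # nonlinear measurement model h(x)
        d2 = dx ** 2 + dy ** 2
        H = np.array([[-dy / d2, dx / d2, 0.0, 0.0]])     # Jacobian of h at the current estimate
        innov = np.array([[(z_bearing - z_pred + np.pi) % (2 * np.pi) - np.pi]])  # angle-wrapped residual
        S = H @ P @ H.T + R_meas
        K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
        x_new = x + (K @ innov).ravel()
        P_new = (np.eye(4) - K @ H) @ P
        return x_new, P_new
    ```

    In the distributed setting, the cluster head would apply one such update per member camera and hand the resulting (x, P) pair to the next cluster head when the target leaves the current cluster's field of view.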

    Hybrid Poisson and multi-Bernoulli filters

    The probability hypothesis density (PHD) and multi-target multi-Bernoulli (MeMBer) filters are two leading algorithms that have emerged from the random finite set (RFS) framework. In this paper, we study a method that combines these two approaches. Our work is motivated by a sister paper, which proves that the full Bayes RFS filter naturally incorporates a Poisson component representing targets that have never been detected, and a linear combination of multi-Bernoulli components representing targets under track. Here we demonstrate the benefit (in speed of track initiation) that maintenance of a Poisson component of undetected targets provides. Subsequently, we propose a method of recycling, which projects Bernoulli components with a low probability of existence onto the Poisson component (as opposed to deleting them). We show that this allows us to achieve similar tracking performance using a fraction of the number of Bernoulli components (i.e., tracks). (Comment: Submitted to the 15th International Conference on Information Fusion, 2012.)
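    A rough sketch of the recycling step, assuming a Gaussian-mixture representation of both the Bernoulli components and the Poisson intensity; the data structures and the existence-probability threshold are illustrative assumptions:

    ```python
    def recycle(bernoulli_components, poisson_mixture, r_threshold=0.1):
        """Fold low-existence Bernoulli components into the Poisson (undetected-target) intensity.

        bernoulli_components: list of {'r': existence prob, 'mean': m, 'cov': P}
        poisson_mixture:      list of {'w': weight, 'mean': m, 'cov': P}
        """
        kept = []
        for comp in bernoulli_components:
            if comp["r"] < r_threshold:
                # Projection onto the Poisson intensity: the component's existence
                # probability becomes the mixture weight, instead of the track being deleted.
                poisson_mixture.append({"w": comp["r"], "mean": comp["mean"], "cov": comp["cov"]})
            else:
                kept.append(comp)
        return kept, poisson_mixture
    ```

    The design intent, as described above, is that the probability mass of weak tracks is not thrown away: it re-enters the filter as undetected-target intensity, from which tracks can be re-initiated quickly if detections reappear.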

    Improved data association and occlusion handling for vision-based people tracking by mobile robots

    This paper presents an approach for tracking multiple persons using a combination of colour and thermal vision sensors on a mobile robot. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is then incorporated into the tracker.
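    One plausible form of the adaptive colour model mentioned above is sketched below, assuming histogram-based appearance with exponential forgetting and a Bhattacharyya similarity inside the measurement model; these choices are assumptions for illustration, not the paper's implementation:

    ```python
    import numpy as np

    def update_colour_model(ref_hist, new_region_hist, alpha=0.1):
        """Blend the stored colour histogram of a person with the newly observed one."""
        updated = (1.0 - alpha) * ref_hist + alpha * new_region_hist
        return updated / updated.sum()                    # keep it a valid distribution

    def colour_likelihood(ref_hist, candidate_hist):
        """Colour term of the measurement model: high when the histograms match."""
        bc = np.sum(np.sqrt(ref_hist * candidate_hist))   # Bhattacharyya coefficient
        return np.exp(-(1.0 - bc))
    ```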

    Tracking Target Signal Strengths on a Grid using Sparsity

    Multi-target tracking is mainly challenged by the nonlinearity present in the measurement equation, and the difficulty in fast and accurate data association. To overcome these challenges, the present paper introduces a grid-based model in which the state captures target signal strengths on a known spatial grid (TSSG). This model leads to linear state and measurement equations, which bypass data association and can afford state estimation via sparsity-aware Kalman filtering (KF). Leveraging the grid-induced sparsity of the novel model, two types of sparsity-cognizant TSSG-KF trackers are developed: one effects sparsity through ℓ1-norm regularization, and the other invokes sparsity as an extra measurement. Iterative extended KF and Gauss-Newton algorithms are developed for reduced-complexity tracking, along with accurate error covariance updates for assessing performance of the resultant sparsity-aware state estimators. Based on TSSG state estimates, more informative target position and track estimates can be obtained in a follow-up step, ensuring that track association and position estimation errors do not propagate back into TSSG state estimates. The novel TSSG trackers do not require knowing the number of targets or their signal strengths, and exhibit considerably lower complexity than the benchmark hidden Markov model filter, especially for a large number of targets. Numerical simulations demonstrate that sparsity-cognizant trackers enjoy improved root mean-square error performance at reduced complexity when compared to their sparsity-agnostic counterparts. (Comment: Submitted to IEEE Trans. on Signal Processing.)
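    The ℓ1-regularized flavour of the tracker can be illustrated with a plain iterative soft-thresholding (ISTA) update, assuming an identity prior covariance and illustrative step-size and penalty values; this is a sketch of the general idea, not the authors' iterative EKF/Gauss-Newton algorithm:

    ```python
    import numpy as np

    def sparse_tssg_update(s_pred, H, z, lam=0.1, n_iter=200):
        """Sparsity-aware estimate of target signal strengths on the grid.

        Minimizes 0.5*||H s - z||^2 + 0.5*||s - s_pred||^2 + lam*||s||_1 by ISTA,
        so grid cells without targets are driven exactly to zero.
        """
        s = s_pred.copy()
        L = np.linalg.norm(H, 2) ** 2 + 1.0               # Lipschitz constant of the smooth part
        for _ in range(n_iter):
            grad = H.T @ (H @ s - z) + (s - s_pred)       # data term + (whitened) prediction prior
            s = s - grad / L                              # gradient step
            s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)  # soft-threshold: enforce grid sparsity
        return s
    ```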

    An efficient message passing algorithm for multi-target tracking

    We propose a new approach for multi-sensor multi-target tracking by constructing statistical models on graphs with continuous-valued nodes for target states and discrete-valued nodes for data association hypotheses. These graphical representations lead to message-passing algorithms for the fusion of data across time, sensor, and target that are radically different from algorithms such as those found in state-of-the-art multiple hypothesis tracking (MHT) algorithms. Important differences include: (a) our message-passing algorithms explicitly compute different probabilities and estimates than MHT algorithms; (b) our algorithms propagate information from future data about past hypotheses via messages backward in time (rather than doing this via extending track hypothesis trees forward in time); and (c) the combinatorial complexity of the problem is manifested in a different way, one in which particle-like, approximate messages are propagated forward and backward in time (rather than hypotheses being enumerated and truncated over time). A side benefit of this structure is that it automatically provides smoothed target trajectories using future data. A major advantage is the potential for low-order polynomial (and linear in some cases) dependency on the length of the tracking interval N, in contrast with the exponential complexity in N for so-called N-scan algorithms. We provide experimental results that support this potential. As a result, we can afford to use longer tracking intervals, allowing us to incorporate out-of-sequence data seamlessly and to conduct track-stitching when future data provide evidence that disambiguates tracks well into the past.
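    The benefit of propagating messages backward in time can be illustrated on a much simpler chain model with a standard forward-backward pass; this toy example is only an analogy for how future data refine past hypotheses, not the paper's multi-target algorithm:

    ```python
    import numpy as np

    def forward_backward(trans, likelihoods, prior):
        """Smoothed beliefs on a discrete-state chain.

        trans: KxK transition matrix; likelihoods: NxK per-frame evidence; prior: length-K vector.
        """
        N, K = likelihoods.shape
        fwd = np.zeros((N, K))
        bwd = np.ones((N, K))
        fwd[0] = prior * likelihoods[0]
        fwd[0] /= fwd[0].sum()
        for t in range(1, N):                              # forward: fuse all data up to time t
            fwd[t] = (fwd[t - 1] @ trans) * likelihoods[t]
            fwd[t] /= fwd[t].sum()
        for t in range(N - 2, -1, -1):                     # backward: future data flow into past beliefs
            bwd[t] = trans @ (likelihoods[t + 1] * bwd[t + 1])
            bwd[t] /= bwd[t].sum()
        smoothed = fwd * bwd
        return smoothed / smoothed.sum(axis=1, keepdims=True)
    ```

    In the same spirit, the paper's backward messages let evidence observed late in the tracking interval revise earlier association hypotheses, which is what enables track-stitching and seamless handling of out-of-sequence data.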