6,170 research outputs found

    People tracking by cooperative fusion of RADAR and camera sensors

    Get PDF
    Accurate 3D tracking of objects from a monocular camera is challenging due to the loss of depth information during projection. Although ranging by RADAR has proven effective in highway environments, people tracking remains beyond the capability of single-sensor systems. In this paper, we propose a cooperative RADAR-camera fusion method for people tracking on the ground plane. Using the average person height, a joint detection likelihood is calculated by back-projecting detections from the camera onto the RADAR range-azimuth data. Peaks in the joint likelihood, representing candidate targets, are fed into a particle filter tracker. Depending on the association outcome, particles are updated using the associated detections (Tracking by Detection) or by sampling the raw likelihood itself (Tracking Before Detection). Utilizing the raw likelihood data has the advantage that lost targets continue to be tracked even if the camera or RADAR signal falls below the detection threshold. We show that in single-target, uncluttered environments, the proposed method consistently outperforms camera-only tracking. Experiments in a real-world urban environment also confirm that the cooperative fusion tracker produces significantly better estimates, even in difficult and ambiguous situations.
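
    The fuse-then-track idea can be illustrated with a toy sketch. The Python snippet below (all grids, thresholds, and noise parameters are invented for illustration, not taken from the paper) multiplies a camera likelihood back-projected onto a range-azimuth grid with a RADAR likelihood, then updates a particle set either from the detected peak (Tracking by Detection) or from the raw joint likelihood when no detection fires (Tracking Before Detection):

```python
import numpy as np

# Hypothetical grids and parameters (not from the paper): a coarse
# range-azimuth map and Gaussian-shaped detection likelihoods.
ranges = np.linspace(1.0, 20.0, 80)          # metres
azimuths = np.linspace(-0.6, 0.6, 60)        # radians
R, A = np.meshgrid(ranges, azimuths, indexing="ij")

def gaussian_ra(r0, a0, sr, sa):
    """Unnormalised Gaussian likelihood on the range-azimuth grid."""
    return np.exp(-0.5 * (((R - r0) / sr) ** 2 + ((A - a0) / sa) ** 2))

# Camera detection back-projected onto the ground plane using an
# assumed average person height: good azimuth, poor range.
cam_lik = gaussian_ra(r0=8.0, a0=0.10, sr=3.0, sa=0.02)
# RADAR return: good range, coarser azimuth.
radar_lik = gaussian_ra(r0=8.4, a0=0.12, sr=0.3, sa=0.10)

joint_lik = cam_lik * radar_lik              # cooperative fusion

# Particle filter step over (range, azimuth) ground-plane states.
rng = np.random.default_rng(0)
particles = np.column_stack([rng.uniform(1, 20, 500),
                             rng.uniform(-0.6, 0.6, 500)])

def likelihood_at(states):
    """Look up the joint likelihood at each particle's grid cell."""
    ri = np.clip(np.searchsorted(ranges, states[:, 0]), 0, len(ranges) - 1)
    ai = np.clip(np.searchsorted(azimuths, states[:, 1]), 0, len(azimuths) - 1)
    return joint_lik[ri, ai]

peak = np.unravel_index(joint_lik.argmax(), joint_lik.shape)
detected = joint_lik[peak] > 0.5             # assumed detection threshold

if detected:   # Tracking by Detection: weight particles by the detection
    det = np.array([ranges[peak[0]], azimuths[peak[1]]])
    weights = np.exp(-0.5 * np.sum(((particles - det) / [0.5, 0.05]) ** 2, axis=1))
else:          # Tracking Before Detection: weight by the raw likelihood
    weights = likelihood_at(particles)

weights /= weights.sum()
estimate = weights @ particles               # weighted-mean state estimate
print("estimated (range, azimuth):", estimate)
```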

    Towards a Principled Integration of Multi-Camera Re-Identification and Tracking through Optimal Bayes Filters

    Full text link
    With the rise of end-to-end learning through deep learning, person detectors and re-identification (ReID) models have recently become very strong. Multi-camera multi-target (MCMT) tracking has not fully gone through this transformation yet. We intend to take another step in this direction by presenting a theoretically principled way of integrating ReID with tracking, formulated as an optimal Bayes filter. This conveniently side-steps the need for data association and opens up a direct path from full images to the core of the tracker. While the results are still sub-par, we believe that this new, tight integration opens many interesting research opportunities and leads the way towards full end-to-end tracking from raw pixels. Comment: First two authors have equal contribution. This is initial work into a new direction, not a benchmark-beating method. v2 only adds acknowledgements and fixes a typo in e-mail.
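
    As a rough illustration of the idea, the sketch below runs a Bayes filter over a 1-D grid of positions in which the ReID similarity itself serves as the measurement likelihood, so no explicit data-association step is needed. The embedding size, motion kernel, and temperature are assumptions for this toy example, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: the target's identity embedding is compared
# against per-cell appearance embeddings, and the ReID similarity is
# used directly as the measurement likelihood of a Bayes filter.
n_cells = 50
belief = np.full(n_cells, 1.0 / n_cells)     # uniform prior

target_emb = rng.normal(size=16)
target_emb /= np.linalg.norm(target_emb)

def transition(belief, sigma=1.5):
    """Predict step: diffuse the belief with a Gaussian motion kernel."""
    offsets = np.arange(-5, 6)
    kernel = np.exp(-0.5 * (offsets / sigma) ** 2)
    kernel /= kernel.sum()
    out = np.zeros_like(belief)
    for o, k in zip(offsets, kernel):
        out += k * np.roll(belief, o)
    return out

def reid_likelihood(frame_embs):
    """Cosine similarity between target and per-cell embeddings,
    mapped to a positive likelihood -- no hard data association."""
    sims = frame_embs @ target_emb
    return np.exp(4.0 * sims)                # temperature is an assumption

for t in range(10):
    belief = transition(belief)              # predict
    frame_embs = rng.normal(size=(n_cells, 16))
    frame_embs[30] += 3.0 * target_emb       # target truly near cell 30
    frame_embs /= np.linalg.norm(frame_embs, axis=1, keepdims=True)
    belief *= reid_likelihood(frame_embs)    # update with ReID evidence
    belief /= belief.sum()

print("MAP cell:", belief.argmax())
```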

    Multisensor Poisson Multi-Bernoulli Filter for Joint Target-Sensor State Tracking

    Full text link
    In a typical multitarget tracking (MTT) scenario, the sensor state is either assumed known, or tracking is performed in the sensor's (relative) coordinate frame. This assumption does not hold when the sensor, e.g., an automotive radar, is mounted on a vehicle and the target state should be represented in a global (absolute) coordinate frame. Then it is important to consider the uncertain location of the vehicle on which the sensor is mounted for MTT. In this paper, we present a low-complexity multisensor Poisson multi-Bernoulli MTT filter which jointly tracks the uncertain vehicle state and the target states. Measurements collected by different sensors mounted on multiple vehicles with varying location uncertainty are incorporated sequentially as new sensor measurements arrive. In doing so, targets observed from a sensor mounted on a well-localized vehicle reduce the state uncertainty of other, poorly localized vehicles, provided that a common non-empty subset of targets is observed. A low-complexity filter is obtained by approximations of the joint sensor-feature state density that minimize the Kullback-Leibler divergence (KLD). Results from synthetic as well as experimental measurement data, collected in a vehicle driving scenario, demonstrate the performance benefits of joint vehicle-target state tracking. Comment: 13 pages, 7 figures.
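
    Two ingredients of this approach can be sketched compactly: a joint Gaussian over vehicle and target states, where an accurate measurement of the target tightens the vehicle estimate through the cross-covariance, and a KLD-minimizing Gaussian approximation obtained by moment matching. All numbers below are invented for illustration; the paper's filter is far richer (Poisson multi-Bernoulli densities over sets of targets):

```python
import numpy as np

# Minimal sketch (not the paper's filter): a joint Gaussian over
# [vehicle position, target position]. A linear measurement of the
# target, taken by an accurate sensor, tightens the vehicle estimate
# through the cross-covariance.
x = np.array([0.0, 10.0])                    # [vehicle, target] means
P = np.array([[4.0, 1.5],                    # vehicle poorly localised,
              [1.5, 1.0]])                   # but correlated with target

H = np.array([[0.0, 1.0]])                   # sensor observes target only
r = 0.05                                     # accurate sensor (small noise)
z = np.array([10.2])

# Standard Kalman update on the joint state.
S = H @ P @ H.T + r
K = P @ H.T / S
x = x + (K * (z - H @ x)).ravel()
P = P - K @ H @ P
print("vehicle variance after target update:", P[0, 0])  # reduced from 4.0

# KLD-minimising Gaussian approximation of a two-component mixture
# (moment matching), as used to keep the joint density low complexity.
w = np.array([0.6, 0.4])
mu = np.array([9.8, 10.5])
var = np.array([0.2, 0.3])
m = w @ mu
v = w @ (var + mu ** 2) - m ** 2
print("matched mean/var:", m, v)
```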

    Bayesian fusion of hidden Markov models for understanding bimanual movements

    Get PDF
    Understanding hand and body gestures is part of a wide spectrum of current research in computer vision and human-computer interaction. One part of this is the recognition of movements in which the two hands move simultaneously to perform an action or convey a meaning. We present a Bayesian network for fusing hidden Markov models in order to recognize a bimanual movement. A bimanual movement is tracked and segmented by a tracking algorithm. Hidden Markov models are assigned to the segments in order to learn and recognize the partial movement within each segment. A Bayesian network then fuses the HMMs in order to perceive the movement of the two hands as a single entity.
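
    To make the fusion scheme concrete, here is a minimal sketch: one small discrete HMM per hand segment is scored with the scaled forward algorithm, and a naive Bayesian fusion node combines the per-hand log-likelihoods into a posterior over gesture classes. The HMM parameters and observation symbols are made up for illustration:

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM
    (scaled forward algorithm)."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two hypothetical bimanual gesture classes, each modelled by one
# 2-state HMM per hand over 3 quantised motion symbols.
def make_hmm(bias):
    start = np.array([0.6, 0.4])
    trans = np.array([[0.8, 0.2], [0.3, 0.7]])
    emit = np.array([[0.7 - bias, 0.2, 0.1 + bias],
                     [0.1, 0.3 + bias, 0.6 - bias]])
    return start, trans, emit

classes = {"wave": (make_hmm(0.0), make_hmm(0.0)),
           "clap": (make_hmm(0.3), make_hmm(0.3))}

left_obs = [0, 0, 1, 0, 2]                   # segmented left-hand symbols
right_obs = [0, 1, 0, 0, 1]                  # segmented right-hand symbols

# Bayesian fusion node: combine per-hand HMM evidence under a uniform
# class prior, so the two hands are perceived as a single entity.
log_post = {}
for name, (hmm_l, hmm_r) in classes.items():
    log_post[name] = (forward_loglik(left_obs, *hmm_l)
                      + forward_loglik(right_obs, *hmm_r))

z = max(log_post.values())
post = {k: np.exp(v - z) for k, v in log_post.items()}
total = sum(post.values())
post = {k: v / total for k, v in post.items()}
print(post)
```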

    Multi-view object tracking using sequential belief propagation

    Full text link
    Multiple cameras, and collaboration between them, make it possible to integrate the information available from multiple views and to reduce the uncertainty due to occlusions. This paper presents a novel method for integrating and tracking multi-view observations using bidirectional belief propagation. The method is based on a fully connected graphical model where target states at different views are represented as different but correlated random variables, and image observations at a given view are only associated with the target states at the same view. The tracking processes at different views collaborate with each other by exchanging information using a message passing scheme, which largely avoids propagating wrong information. An efficient sequential belief propagation algorithm is adopted to perform the collaboration and to infer the multi-view target states. We demonstrate the effectiveness of our method on video-surveillance sequences.
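
    A stripped-down version of the message passing can be sketched on a two-view graph: each view keeps a belief over its own (correlated) state, and evidence is exchanged through a pairwise compatibility kernel. The 1-D grid and kernel below are assumptions for illustration only, not the paper's model:

```python
import numpy as np

# Two views hold different but correlated target states on a 1-D grid.
# Each view only sees its own image likelihood; belief propagation
# exchanges messages through a compatibility kernel linking the states.
n = 40
grid = np.arange(n)

def gauss(mu, sigma):
    g = np.exp(-0.5 * ((grid - mu) / sigma) ** 2)
    return g / g.sum()

# Pairwise compatibility between the two per-view states (assumed).
psi = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 2.0) ** 2)

obs = {1: gauss(18, 6.0),        # view 1: target occluded, vague evidence
       2: gauss(20, 1.5)}        # view 2: clear observation

# On this two-node graph, one message in each direction is exact:
msg_to_1 = psi @ obs[2]          # view 2 passes its evidence to view 1
msg_to_2 = psi.T @ obs[1]        # view 1 passes its evidence to view 2
belief = {v: obs[v] * m for v, m in [(1, msg_to_1), (2, msg_to_2)]}
for v in belief:
    belief[v] /= belief[v].sum()

print("view-1 MAP estimate after collaboration:", belief[1].argmax())
```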