Extended Object Tracking: Introduction, Overview and Applications
This article provides a comprehensive overview of current research in extended
object tracking. We provide a clear definition of the extended object tracking
problem and discuss how it differs from other types of object tracking. Next,
different aspects of extended object modelling are extensively discussed.
Subsequently, we give a tutorial introduction to two basic and widely used
extended object tracking approaches - the random matrix approach and the Kalman
filter-based approach for star-convex shapes. The next part treats the tracking
of multiple extended objects and elaborates on how the large number of feasible
association hypotheses can be tackled using both Random Finite Set (RFS) and
Non-RFS multi-object trackers. The article concludes with a summary of current
applications, where four example applications involving camera, X-band radar,
light detection and ranging (lidar), and red-green-blue-depth (RGB-D) sensors
are highlighted.
Comment: 30 pages, 19 figures
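As a rough illustration of the random matrix idea introduced above, an extended object yields several detections per scan: their centroid informs the kinematic state and their spread informs the extent estimate. The sketch below is a deliberately simplified blend, with illustrative gain and forgetting parameters; it is not the filter's actual inverse-Wishart update equations.

```python
def centroid_and_scatter(detections):
    """Mean and 2x2 scatter matrix of a list of (x, y) detections."""
    n = len(detections)
    mx = sum(p[0] for p in detections) / n
    my = sum(p[1] for p in detections) / n
    sxx = sum((p[0] - mx) ** 2 for p in detections) / n
    syy = sum((p[1] - my) ** 2 for p in detections) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in detections) / n
    return (mx, my), ((sxx, sxy), (sxy, syy))

def update(state, extent, detections, gain=0.5, forget=0.9):
    """Blend the prior position/extent with the new centroid/scatter.
    gain and forget are illustrative tuning knobs, not the paper's terms."""
    (mx, my), scatter = centroid_and_scatter(detections)
    new_state = (state[0] + gain * (mx - state[0]),
                 state[1] + gain * (my - state[1]))
    new_extent = tuple(
        tuple(forget * e + (1 - forget) * s for e, s in zip(erow, srow))
        for erow, srow in zip(extent, scatter))
    return new_state, new_extent
```

A full random matrix filter would instead carry a Wishart-distributed extent matrix and couple it to the Kalman update of the kinematic state; the blend above only conveys the centroid-plus-spread intuition.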
Multi-Object Tracking with Interacting Vehicles and Road Map Information
In many applications, tracking of multiple objects is crucial for perceiving
the current environment. Most present multi-object tracking algorithms assume
that objects move independently of other dynamic objects and of the static
environment. Since objects in many traffic situations interact with each other
and are additionally restricted to drivable areas, the assumption of
independent object motion does not hold. This paper proposes an approach that
adapts a multi-object tracking system to model interaction between vehicles
and the current road geometry. To this end, the prediction step of a Labeled
Multi-Bernoulli filter is extended to model interaction between objects using
the Intelligent Driver Model. Furthermore, to incorporate road map
information, an approximation of a highly precise road map is used. The
results show that in scenarios where the assumption of a standard motion model
is violated, the tracking system adapted with the proposed method achieves
higher accuracy and robustness in its track estimates.
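The Intelligent Driver Model mentioned above is a standard car-following model that couples a vehicle's acceleration to its gap and approach rate relative to the leader. A minimal sketch follows; the parameter values are common textbook defaults, not the ones used in the paper.

```python
import math

def idm_acceleration(v, gap, dv,
                     v0=30.0,   # desired speed [m/s]
                     T=1.5,     # desired time headway [s]
                     a_max=1.0, # maximum acceleration [m/s^2]
                     b=2.0,     # comfortable deceleration [m/s^2]
                     s0=2.0,    # minimum standstill gap [m]
                     delta=4.0):
    """IDM acceleration of a follower with speed v, gap to the leader,
    and approach rate dv = v_follower - v_leader."""
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)
```

In the paper's setting such an interaction-aware acceleration would drive the prediction step of the Labeled Multi-Bernoulli filter instead of an independent constant-velocity motion model.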
Localization from semantic observations via the matrix permanent
Most approaches to robot localization rely on low-level geometric features such as points, lines, and planes. In this paper, we use object recognition to obtain semantic information from the robot’s sensors and consider the task of localizing the robot within a prior map of landmarks, which are annotated with semantic labels. As object recognition algorithms miss detections and produce false alarms, correct data association between the detections and the landmarks on the map is central to the semantic localization problem. Instead of the traditional vector-based representation, we propose a sensor model, which encodes the semantic observations via random finite sets and enables a unified treatment of missed detections, false alarms, and data association. Our second contribution is to reduce the problem of computing the likelihood of a set-valued observation to the problem of computing a matrix permanent. It is this crucial transformation that allows us to solve the semantic localization problem with a polynomial-time approximation to the set-based Bayes filter. Finally, we address the active semantic localization problem, in which the observer’s trajectory is planned in order to improve the accuracy and efficiency of the localization process. The performance of our approach is demonstrated in simulation and in real environments using deformable-part-model-based object detectors. Robust global localization from semantic observations is demonstrated for a mobile robot, for the Project Tango phone, and on the KITTI visual odometry dataset. Comparisons are made with the traditional lidar-based geometric Monte Carlo localization.
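The reduction above turns the set-valued observation likelihood into a matrix permanent. Computing the permanent exactly is #P-hard, which is why the paper resorts to a polynomial-time approximation; for small matrices, however, the exact value can be obtained with Ryser's inclusion-exclusion formula, sketched here (this is not the paper's approximation algorithm):

```python
from itertools import combinations

def permanent(A):
    """Matrix permanent of a square matrix A (list of rows) via
    Ryser's inclusion-exclusion formula, O(2^n * n^2)."""
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):            # subset size
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in A:                # product of row sums over the subset
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total
```

Unlike the determinant, every term enters with a positive sign, which is exactly what makes the permanent sum over all data associations between detections and landmarks.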
Multisensor Poisson Multi-Bernoulli Filter for Joint Target-Sensor State Tracking
In a typical multitarget tracking (MTT) scenario, the sensor state is either
assumed known, or tracking is performed in the sensor's (relative) coordinate
frame. This assumption does not hold when the sensor, e.g., an automotive
radar, is mounted on a vehicle, and the target state should be represented in a
global (absolute) coordinate frame. Then it is important to consider the
uncertain location of the vehicle on which the sensor is mounted for MTT. In
this paper, we present a low-complexity multisensor Poisson multi-Bernoulli
MTT filter that jointly tracks the uncertain vehicle state and the target
states.
Measurements collected by different sensors mounted on multiple vehicles with
varying location uncertainty are incorporated sequentially based on the arrival
of new sensor measurements. In doing so, targets observed from a sensor mounted
on a well-localized vehicle reduce the state uncertainty of other poorly
localized vehicles, provided that a common non-empty subset of targets is
observed. A low complexity filter is obtained by approximations of the joint
sensor-feature state density minimizing the Kullback-Leibler divergence (KLD).
Results from synthetic as well as experimental measurement data, collected in a
vehicle driving scenario, demonstrate the performance benefits of joint
vehicle-target state tracking.
Comment: 13 pages, 7 figures
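The low-complexity approximation above minimizes the Kullback-Leibler divergence to the joint density. As an illustrative aside (not the paper's actual multi-dimensional approximation), the KLD-optimal single-Gaussian fit to a Gaussian mixture is obtained by moment matching, sketched here in 1-D:

```python
def moment_match(weights, means, variances):
    """Collapse a 1-D Gaussian mixture into the single Gaussian that
    minimizes KLD(mixture || Gaussian): match mean and variance."""
    w = sum(weights)
    mean = sum(wi * mi for wi, mi in zip(weights, means)) / w
    # total variance = within-component variance + between-component spread
    var = sum(wi * (vi + (mi - mean) ** 2)
              for wi, mi, vi in zip(weights, means, variances)) / w
    return mean, var
```

The same moment-matching principle underlies many Gaussian approximations in RFS filtering, where mixtures over association hypotheses must be collapsed to keep the filter tractable.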
Multiple Target, Multiple Type Filtering in the RFS Framework
A Multiple Target, Multiple Type Filtering (MTMTF) algorithm is developed
using Random Finite Set (RFS) theory. First, we extend the standard Probability
Hypothesis Density (PHD) filter for multiple types of targets, each with
distinct detection properties, to develop a multiple target, multiple type
filter, the N-type PHD filter, where N ≥ 2, for handling confusions among
target types. In this approach, we assume that there will be confusions between
detections, i.e. clutter arises not just from background false positives, but
also from target confusions. Then, under the assumptions of Gaussianity and
linearity, we extend the Gaussian mixture (GM) implementation of the standard
PHD filter for the proposed N-type PHD filter termed the N-type GM-PHD filter.
Furthermore, we analyze the results from simulations to track sixteen targets
of four different types using a four-type (quad) GM-PHD filter as a typical
example and compare it with four independent GM-PHD filters using the Optimal
Subpattern Assignment (OSPA) metric. This demonstrates the improved
performance of our strategy, which accounts for target confusions by
efficiently discriminating them.
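The OSPA metric used for the comparison above penalizes both localization errors and cardinality mismatches between the estimated and true target sets. A brute-force sketch for small sets of 2-D points follows; the cutoff c and order p below are illustrative defaults, not necessarily those used in the paper.

```python
import math
from itertools import permutations

def ospa(X, Y, c=10.0, p=2):
    """OSPA distance between two finite sets of 2-D points.
    Brute-force over assignments; fine for the small sets used here."""
    if len(X) > len(Y):
        X, Y = Y, X                      # ensure |X| <= |Y|
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    def d_c(a, b):                       # cutoff distance
        return min(c, math.dist(a, b))
    # best assignment of the m smaller-set points to points of Y
    best = min(
        sum(d_c(x, Y[j]) ** p for x, j in zip(X, perm))
        for perm in permutations(range(n), m)
    )
    # unassigned points are charged the full cutoff c
    return ((best + c ** p * (n - m)) / n) ** (1.0 / p)
```

For larger sets the inner minimization is solved with an optimal assignment algorithm (e.g. Hungarian) rather than enumeration.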
Complexer-YOLO: Real-Time 3D Object Detection and Tracking on Semantic Point Clouds
Accurate detection of 3D objects is a fundamental problem in computer vision
and has an enormous impact on autonomous cars, augmented/virtual reality and
many applications in robotics. In this work, we present a novel fusion of a
state-of-the-art neural-network-based 3D detector and visual semantic
segmentation in the context of autonomous driving. Additionally, we introduce
the Scale-Rotation-Translation score (SRTs), a fast and highly parameterizable
evaluation metric for comparing object detections, which speeds up our
inference time by up to 20\% and halves training time. In addition, we apply
state-of-the-art online multi-target feature tracking to the object
measurements, exploiting temporal information to further increase accuracy and
robustness. Our experiments on KITTI show that we achieve results on par with
the state of the art in all related categories while maintaining the
performance-accuracy trade-off and still running in real time. Furthermore,
our model is the first to fuse visual semantics with 3D object detection.