Low resolution lidar-based multi object tracking for driving applications
Vehicle detection and tracking in real scenarios are key components of assisted and autonomous driving systems. Lidar sensors are especially suitable for this task, as they are robust to harsh weather conditions while providing accurate spatial information. However, the resolution of point cloud data is very low compared to camera images. In this work we explore the possibilities of Deep Learning (DL) methodologies applied to low-resolution 3D lidar sensors such as the Velodyne VLP-16 (PUCK), in the context of vehicle detection and tracking. For this purpose we developed a lidar-based system that uses a Convolutional Neural Network (CNN) to perform point-wise vehicle detection on PUCK data, and Multi-Hypothesis Extended Kalman Filters (MH-EKF) to estimate the positions and velocities of the detected vehicles. Comparative studies between the proposed lower-resolution (VLP-16) tracking system and a high-end system using the Velodyne HDL-64 were carried out on the KITTI Tracking Benchmark dataset. Moreover, to analyze the influence of the CNN-based vehicle detection approach, comparisons were also performed against a geometric-only detector. The results demonstrate that the proposed low-resolution Deep Learning architecture successfully accomplishes the vehicle detection task, outperforming the geometric baseline approach. It has also been observed that our system achieves tracking performance similar to the high-end HDL-64 sensor at close range; at long range, however, detection is limited to half the distance of the higher-end sensor.
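The estimation stage described above builds on the standard Kalman filter predict/update cycle. As a minimal sketch of that underlying cycle (not the paper's full multi-hypothesis MH-EKF), the following tracks a vehicle's ground-plane position and velocity from lidar centroid detections; the motion model, time step, and noise covariances are illustrative assumptions.

```python
import numpy as np

# Illustrative constant-velocity Kalman filter for ground-plane tracking.
# State: [x, y, vx, vy]; measurement: detected vehicle centroid [x, y].
# dt, Q, and R are assumed values for this sketch, not the paper's settings.

dt = 0.1  # sensor period in seconds (assumed)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # only position is observed
Q = np.eye(4) * 0.01                         # process noise (assumed)
R = np.eye(2) * 0.05                         # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle given a position measurement z = [x, y]."""
    # Predict with the constant-velocity model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the lidar detection
    innovation = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ innovation
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

A multi-hypothesis variant would maintain several such filters in parallel, one per data-association hypothesis, and prune unlikely ones.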
People tracking by cooperative fusion of RADAR and camera sensors
Accurate 3D tracking of objects from a monocular camera is challenging due to the loss of depth information during projection. Although ranging by RADAR has proven effective in highway environments, people tracking remains beyond the capability of single-sensor systems. In this paper, we propose a cooperative RADAR-camera fusion method for people tracking on the ground plane. Using the average person height, a joint detection likelihood is calculated by back-projecting detections from the camera onto the RADAR range-azimuth data. Peaks in the joint likelihood, representing candidate targets, are fed into a particle filter tracker. Depending on the association outcome, particles are updated using the associated detections (tracking by detection) or by sampling the raw likelihood itself (tracking before detection). Utilizing the raw likelihood data has the advantage that lost targets continue to be tracked even when the camera or RADAR signal falls below the detection threshold. We show that in single-target, uncluttered environments, the proposed method consistently outperforms camera-only tracking. Experiments in a real-world urban environment also confirm that the cooperative fusion tracker produces significantly better estimates, even in difficult and ambiguous situations.
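The joint-likelihood idea above can be sketched numerically: represent each modality as a likelihood over a range-azimuth grid and take their product, whose peak yields the candidate target. Grid sizes, Gaussian spreads, and detection positions below are all assumptions for illustration, not the paper's parameters.

```python
import numpy as np

# Toy range-azimuth grid (assumed resolution and field of view).
n_range, n_az = 64, 32
ranges = np.linspace(1.0, 20.0, n_range)                 # metres
azimuths = np.radians(np.linspace(-45.0, 45.0, n_az))    # radians

def gaussian_grid(centre_r, centre_az, sig_r, sig_az):
    """Unnormalised Gaussian likelihood over the range-azimuth grid."""
    r, a = np.meshgrid(ranges, azimuths, indexing="ij")
    return np.exp(-0.5 * (((r - centre_r) / sig_r) ** 2
                          + ((a - centre_az) / sig_az) ** 2))

# Camera detection back-projected to the ground plane: broad in range
# (depth is weakly constrained), narrow in azimuth (assumed values).
cam_lik = gaussian_grid(8.0, 0.10, sig_r=2.0, sig_az=0.15)
# RADAR return: sharp in range, broader in azimuth (assumed values).
radar_lik = gaussian_grid(8.5, 0.08, sig_r=0.5, sig_az=0.20)

# Cooperative fusion: element-wise product, then peak extraction.
joint = cam_lik * radar_lik
peak = np.unravel_index(np.argmax(joint), joint.shape)
print(f"candidate target: r={ranges[peak[0]]:.1f} m, "
      f"az={np.degrees(azimuths[peak[1]]):.1f} deg")
```

The product pulls the estimate toward the RADAR's precise range and the camera's precise bearing, which is the complementary behaviour the fusion exploits.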
Extended Object Tracking: Introduction, Overview and Applications
This article provides an elaborate overview of current research in extended
object tracking. We provide a clear definition of the extended object tracking
problem and discuss its delimitation to other types of object tracking. Next,
different aspects of extended object modelling are extensively discussed.
Subsequently, we give a tutorial introduction to two basic and widely used
extended object tracking approaches: the random matrix approach and the Kalman
filter-based approach for star-convex shapes. The next part treats the tracking
of multiple extended objects and elaborates how the large number of feasible
association hypotheses can be tackled using both Random Finite Set (RFS) and
Non-RFS multi-object trackers. The article concludes with a summary of current
applications, where four examples involving camera, X-band radar, light
detection and ranging (lidar), and red-green-blue-depth (RGB-D) sensors are
highlighted.
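The random matrix approach mentioned above models an extended object's spatial extent as a symmetric positive-definite matrix inferred from the scatter of the many measurements one object produces per scan. The following toy sketch shows only those per-scan sufficient statistics (centroid and scatter), not the full Bayesian recursion; the object position, extent matrix, and point count are assumed values.

```python
import numpy as np

# Toy illustration of the sufficient statistics behind random-matrix
# extended object tracking. One elliptical object at (5, 3) with an
# assumed extent matrix generates many returns in a single scan.

rng = np.random.default_rng(0)
true_extent = np.array([[2.0, 0.4],
                        [0.4, 0.5]])        # assumed SPD extent matrix
points = rng.multivariate_normal([5.0, 3.0], true_extent, size=40)

# Kinematic measurement: the centroid of this scan's returns.
centroid = points.mean(axis=0)
# Extent information: the scatter matrix of the returns about the centroid.
spread = (points - centroid).T @ (points - centroid) / len(points)

print("centroid:", centroid.round(2))
print("extent estimate:\n", spread.round(2))
```

In the full recursion these two statistics update a Gaussian state for the kinematics and an inverse-Wishart density for the extent matrix, respectively.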
Training a Fast Object Detector for LiDAR Range Images Using Labeled Data from Sensors with Higher Resolution
In this paper, we describe a strategy for training neural networks for object
detection in range images obtained from one type of LiDAR sensor using labeled
data from a different type of LiDAR sensor. Additionally, an efficient model
for object detection in range images for use in self-driving cars is presented.
Currently, the highest performing algorithms for object detection from LiDAR
measurements are based on neural networks. Training these networks using
supervised learning requires large annotated datasets. Therefore, most research
using neural networks for object detection from LiDAR point clouds is conducted
on a very small number of publicly available datasets. Consequently, only a
small number of sensor types are used. We use an existing annotated dataset to
train a neural network that can be used with a LiDAR sensor that has a lower
resolution than the one used for recording the annotated dataset. This is done
by simulating data from the lower resolution LiDAR sensor based on the higher
resolution dataset. Furthermore, improvements to models that use LiDAR range
images for object detection are presented. The results are validated using both
simulated sensor data and data from an actual lower resolution sensor mounted
to a research vehicle. It is shown that the model can detect objects from
360° range images in real time.
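The core data trick the abstract describes, simulating a lower-resolution sensor from a higher-resolution recording, can be sketched as subsampling the beam rows of a range image (e.g. 64 vertical channels down to 16). Real sensors also differ in beam angles and noise characteristics, so uniform row selection is a simplifying assumption here.

```python
import numpy as np

def downsample_beams(range_image: np.ndarray, factor: int) -> np.ndarray:
    """Keep every `factor`-th beam row of a (beams, azimuth) range image,
    simulating a sensor with fewer vertical channels."""
    return range_image[::factor, :]

# Stand-in for a 64-beam scan with 1024 azimuth bins (assumed shape).
hi_res = np.random.rand(64, 1024).astype(np.float32)
lo_res = downsample_beams(hi_res, factor=4)   # simulated 16-beam scan
print(lo_res.shape)  # (16, 1024)
```

Labels from the high-resolution dataset carry over directly, since the retained rows are unchanged measurements of the same scene.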