Real-time Dynamic Object Detection for Autonomous Driving using Prior 3D-Maps
Lidar has become an essential sensor for autonomous driving as it provides reliable depth estimation. Lidar is also the primary sensor used in building 3D maps, which can be used even in low-cost systems that do not carry a Lidar themselves. Computation on Lidar point clouds is intensive, as it requires processing millions of points per second. Additionally, there are many subsequent tasks such as clustering, detection, tracking and classification, which make real-time execution challenging. In this paper, we discuss real-time dynamic object detection algorithms that leverage previously mapped Lidar point clouds to reduce processing. The prior 3D maps provide a static background model, and we formulate dynamic object detection as a background subtraction problem. Computation and modeling challenges in the mapping and online execution pipeline are described. We propose a rejection cascade architecture to subtract road regions and other 3D regions separately. We implemented an initial version of our proposed algorithm and evaluated its accuracy on the CARLA simulator.
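The background-subtraction formulation above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the prior map is stored as a set of occupied voxels, and the voxel size, point format, and function names are all hypothetical.

```python
# Hypothetical sketch: dynamic-object detection as background subtraction
# against a prior 3D map, using a voxel-occupancy lookup.
# Voxel size (0.2 m) and the Nx3 point format are illustrative assumptions.
import numpy as np

def voxelize(points, voxel_size=0.2):
    """Map an Nx3 point array to a set of integer voxel coordinates."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

def dynamic_points(scan, prior_map_voxels, voxel_size=0.2):
    """Return scan points whose voxel is absent from the prior (static) map."""
    keys = np.floor(scan / voxel_size).astype(int)
    mask = np.array([tuple(k) not in prior_map_voxels for k in keys])
    return scan[mask]

# Points that fall on the mapped static background are rejected,
# leaving only candidate dynamic-object points for downstream
# clustering / detection / tracking.
prior = voxelize(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]))
scan = np.array([[0.05, 0.0, 0.0], [5.0, 5.0, 0.0]])
dyn = dynamic_points(scan, prior)  # only the unmapped point remains
```

In the paper's terms, a rejection cascade would apply cheap tests (e.g. the road-region check) first, so most background points are discarded before any expensive per-point work.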
Applications of the method of invariants in computer graphics
In order to evoke a discussion about possible future research themes for the Eindhoven Department of Mathematics and Computing Science, a brief overview of the scientific background of the group is presented. We sketch the general framework of the method of invariants for deriving algorithms, and we apply this method to two well-known basic algorithms in computer graphics.
An Object Oriented Approach Towards Simulating Physical Systems with Fluids and Rigid Bodies Based on the Physolator Simulation Framework
Generation of Laser-Quality 2D Navigation Maps from RGB-D Sensors
The use of RGB-D cameras has become an affordable alternative to expensive 2D laser range finders for robot mapping and navigation. Although these sensors provide richer information about the 3D environment, most successful mapping and navigation techniques for mobile robots have been developed assuming a 2D planar environment. In this paper, we present our system for 2D navigation using RGB-D sensors. The key feature of our system is the extraction of 2D laser scans from the 3D point cloud provided by the camera, which can then be used by common mapping or localization approaches. Alongside real-world experiments, we raise the question "how far can we go with the use of RGB-D sensors for 2D navigation?" and analyze the performance and limitations of the system compared to accurate, yet expensive, laser-based systems.
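The scan-extraction step described above can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' system: the height band, angular resolution, frame convention (x forward, z up), and function name are all hypothetical.

```python
# Hypothetical sketch: collapsing a 3D point cloud from an RGB-D camera
# into a 2D "laser scan" by keeping, per angular bin, the nearest point
# inside a height band. Band limits and bin count are illustrative.
import numpy as np

def cloud_to_scan(points, z_min=0.1, z_max=1.5, n_bins=360):
    """points: Nx3 array in the sensor frame (x forward, z up).
    Returns n_bins range values (inf where no return fell in the bin)."""
    # Keep only points inside the height band of interest.
    band = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    ranges = np.full(n_bins, np.inf)
    angles = np.arctan2(band[:, 1], band[:, 0])            # [-pi, pi]
    dists = np.hypot(band[:, 0], band[:, 1])               # planar range
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    for b, d in zip(bins, dists):
        if d < ranges[b]:
            ranges[b] = d                                  # keep closest hit
    return ranges

pts = np.array([[1.0, 0.0, 0.5],    # obstacle straight ahead at 1 m
                [2.0, 0.0, 0.5],    # farther point in the same bin
                [1.0, 0.0, 3.0]])   # above the band: ignored
scan = cloud_to_scan(pts)
```

The resulting range array mimics the output of a planar laser scanner, so standard 2D mapping and localization pipelines can consume it unchanged; the per-bin minimum mirrors how a real scanner reports the first obstacle along each beam.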