Deep Generative Modeling of LiDAR Data
Building models capable of generating structured output is a key challenge
for AI and robotics. While generative models have been explored on many types
of data, little work has been done on synthesizing lidar scans, which play a
key role in robot mapping and localization. In this work, we show that one can
adapt deep generative models for this task by unravelling lidar scans into a 2D
point map. Our approach can generate high quality samples, while simultaneously
learning a meaningful latent representation of the data. We demonstrate
significant improvements against state-of-the-art point cloud generation
methods. Furthermore, we propose a novel data representation that augments the
2D signal with absolute positional information. We show that this helps
robustness to noisy and imputed input; the learned model can recover the
underlying lidar scan from seemingly uninformative data.
Comment: Presented at IROS 201
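The core idea of "unravelling" a lidar scan into a 2D map is a spherical projection: each 3D point is mapped to a pixel by its azimuth and elevation, with range stored as the pixel value. A minimal sketch, assuming illustrative sensor parameters (the 64x1024 image size and vertical field of view are assumptions, not taken from the paper):

```python
import numpy as np

def scan_to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud onto a 2D range image
    via spherical coordinates. Image size and vertical field of
    view are illustrative (roughly a 64-beam sensor)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)          # range per point
    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))  # elevation angle

    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = 0.5 * (1.0 - yaw / np.pi) * w                      # column index
    v = (fov_up_r - pitch) / (fov_up_r - fov_down_r) * h   # row index
    u = np.clip(u, 0, w - 1).astype(int)
    v = np.clip(v, 0, h - 1).astype(int)

    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = r                             # last write wins per pixel
    return image
```

The resulting H x W image is a dense, grid-structured representation, which is what lets off-the-shelf 2D generative architectures be applied to lidar data.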
LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models
Generative modeling of 3D LiDAR data is an emerging task with promising
applications for autonomous mobile robots, such as scalable simulation, scene
manipulation, and sparse-to-dense completion of LiDAR point clouds. Existing
approaches have shown the feasibility of image-based LiDAR data generation
using deep generative models while still struggling with the fidelity of
generated data and training instability. In this work, we present R2DM, a novel
generative model for LiDAR data that can generate diverse and high-fidelity 3D
scene point clouds based on the image representation of range and reflectance
intensity. Our method is based on denoising diffusion probabilistic models
(DDPMs), which have demonstrated impressive results among generative modeling
frameworks and have progressed significantly in recent years. To
effectively train DDPMs on the LiDAR domain, we first conduct an in-depth
analysis regarding data representation, training objective, and spatial
inductive bias. Based on our designed model R2DM, we also introduce a flexible
LiDAR completion pipeline using the powerful properties of DDPMs. We
demonstrate that our method outperforms the baselines on the generation task on
the KITTI-360 and KITTI-Raw datasets and the upsampling task on the KITTI-360 dataset.
Our code and pre-trained weights will be available at
https://github.com/kazuto1011/r2dm
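The DDPM training objective referenced here has a standard closed form: a clean range image x0 is noised to timestep t via x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) eps, and a network is trained to predict eps. A minimal sketch of the forward (noising) process with a linear beta schedule; the schedule values and image shape are generic DDPM defaults, not the paper's exact configuration:

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; alpha_bar[t] = prod_{s<=t} (1 - beta_s)."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, alpha_bar, rng):
    """Closed-form forward diffusion:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(x0.shape)
    a = alpha_bar[t]
    xt = np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps
    return xt, eps
```

Training then minimizes the mean squared error between the predicted and true eps; the same learned denoiser can be conditioned on sparse observations, which is the property the completion pipeline exploits.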
Imitating Driver Behavior with Generative Adversarial Networks
The ability to accurately predict and simulate human driving behavior is
critical for the development of intelligent transportation systems. Traditional
modeling methods have employed simple parametric models and behavioral cloning.
This paper adopts a method for overcoming the problem of cascading errors
inherent in prior approaches, resulting in realistic behavior that is robust to
trajectory perturbations. We extend Generative Adversarial Imitation Learning
to the training of recurrent policies, and we demonstrate that our model
outperforms rule-based controllers and maximum likelihood models in realistic
highway simulations. Our model reproduces emergent behavior of human
drivers, such as lane change rate, while maintaining realistic control over
long time horizons.
Comment: 8 pages, 6 figures
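In GAIL, the policy avoids the cascading errors of behavioral cloning because it is trained against a discriminator rather than on fixed expert labels: the policy receives reward for state-action pairs the discriminator mistakes for expert data. A minimal sketch of that surrogate reward, using a tiny linear discriminator as a stand-in for the recurrent networks the paper trains (the features and parameters here are placeholders):

```python
import numpy as np

def discriminator_logit(sa, w, b):
    """Tiny linear discriminator on concatenated (state, action)
    features; a placeholder for a learned recurrent network."""
    return sa @ w + b

def surrogate_reward(sa, w, b):
    """GAIL-style reward r(s, a) = -log(1 - D(s, a)): higher when the
    discriminator believes the pair came from the expert."""
    d = 1.0 / (1.0 + np.exp(-discriminator_logit(sa, w, b)))  # sigmoid
    return -np.log(np.maximum(1.0 - d, 1e-8))
```

Because this reward is produced on states the policy itself visits during rollouts, errors do not compound the way they do when imitating expert actions state-by-state.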