48,348 research outputs found
AILiveSim : An Extensible Virtual Environment for Training Autonomous Vehicles
Virtualization technologies have become commonplace both in software development and in engineering more generally. Virtualization offers benefits beyond simulation and testing, as a virtual environment can often be configured more freely than the corresponding physical environment. This, in turn, introduces new possibilities for education and training, both for humans and for artificial intelligence (AI). To this end, we are developing the simulation platform AILiveSim. The platform is built on top of the Unreal Engine game development system, and it is dedicated to training and testing autonomous systems, their sensors, and their algorithms in a simulated environment. In this paper, we describe the elements that we have built on top of the engine to realize a Virtual Environment (VE) useful for the design, implementation, application, and analysis of autonomous systems. We present the architecture that we have put in place to transform our simulation platform from automotive-specific to domain-agnostic, supporting two new application domains: autonomous ships and autonomous mining machines. We describe the important simulation-related specifics of each domain. In addition, we report the challenges encountered when simulating those applications, and the decisions taken to overcome these challenges. Peer reviewed
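The domain-agnostic architecture described above can be sketched as an interface that each application domain implements, so that engine-side scenario code stays independent of whether it drives a car, a ship, or a mining machine. This is a minimal illustrative sketch; the class and method names (`Domain`, `spawn_agent`, `sensors`) are assumptions for illustration, not the AILiveSim API.

```python
from abc import ABC, abstractmethod

class Domain(ABC):
    """Common interface each application domain implements."""
    @abstractmethod
    def spawn_agent(self) -> str: ...
    @abstractmethod
    def sensors(self) -> list: ...

class AutomotiveDomain(Domain):
    def spawn_agent(self) -> str:
        return "car"
    def sensors(self) -> list:
        return ["camera", "lidar", "radar"]

class MaritimeDomain(Domain):
    def spawn_agent(self) -> str:
        return "ship"
    def sensors(self) -> list:
        return ["camera", "radar", "gnss"]

def run_scenario(domain: Domain):
    """Engine-side code depends only on the Domain interface,
    never on a concrete domain."""
    agent = domain.spawn_agent()
    return agent, domain.sensors()

agent, sensor_suite = run_scenario(MaritimeDomain())
```

Adding a new domain (e.g. mining machines) then only requires a new `Domain` subclass, leaving scenario logic untouched.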
A Fast Integrated Planning and Control Framework for Autonomous Driving via Imitation Learning
For safe and efficient planning and control in autonomous driving, we need a
driving policy that achieves desirable driving quality over a long horizon
with guaranteed safety and feasibility. Optimization-based approaches, such as
Model Predictive Control (MPC), can provide such optimal policies, but their
computational complexity is generally unacceptable for real-time
implementation. To address this problem, we propose a fast integrated planning
and control framework that combines learning- and optimization-based approaches
in a two-layer hierarchical structure. The first layer, defined as the "policy
layer", is established by a neural network which learns the long-term optimal
driving policy generated by MPC. The second layer, called the "execution
layer", is a short-term optimization-based controller that tracks the reference
trajectories given by the "policy layer" with guaranteed short-term safety and
feasibility. Moreover, with efficient and highly-representative features, a
small-size neural network is sufficient in the "policy layer" to handle many
complicated driving scenarios. This enables online imitation learning with
Dataset Aggregation (DAgger), so that the performance of the "policy layer" can
be improved rapidly and continuously online. Several example driving scenarios
demonstrate the effectiveness and efficiency of the proposed framework.
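The two-layer structure above can be sketched as a small network that emits a reference trajectory (imitating long-term MPC output) and a short-horizon tracker that follows it. This is a minimal sketch under stated assumptions: `PolicyLayer`, `ExecutionLayer`, the feature dimension, and the proportional tracker standing in for the optimization-based controller are all illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class PolicyLayer:
    """Small MLP standing in for the network that imitates the
    long-term optimal policy generated by MPC."""
    def __init__(self, n_features=8, n_hidden=16, horizon=5):
        self.W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, 2 * horizon))
        self.horizon = horizon

    def reference_trajectory(self, features):
        h = np.tanh(features @ self.W1)
        # Output interpreted as `horizon` (x, y) reference waypoints.
        return (h @ self.W2).reshape(self.horizon, 2)

class ExecutionLayer:
    """Short-term tracker; a proportional controller stands in here for
    the short-term optimization-based controller of the paper."""
    def __init__(self, gain=0.5):
        self.gain = gain

    def step(self, state, waypoint):
        return state + self.gain * (waypoint - state)

policy = PolicyLayer()
tracker = ExecutionLayer()
state = np.zeros(2)
reference = policy.reference_trajectory(rng.normal(size=8))
for waypoint in reference:       # execution layer tracks the reference
    state = tracker.step(state, waypoint)
```

In a DAgger-style loop, states visited while running `PolicyLayer` would be re-labeled by the MPC expert and appended to the training set.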
A LiDAR Point Cloud Generator: from a Virtual World to Autonomous Driving
3D LiDAR scanners are playing an increasingly important role in autonomous
driving as they can generate depth information of the environment. However,
creating large 3D LiDAR point cloud datasets with point-level labels requires a
significant amount of manual annotation. This jeopardizes the efficient
development of supervised deep learning algorithms which are often data-hungry.
We present a framework to rapidly create point clouds with accurate point-level
labels from a computer game. The framework supports data collection from both
auto-driving scenes and user-configured scenes. Point clouds from auto-driving
scenes can be used as training data for deep learning algorithms, while point
clouds from user-configured scenes can be used to systematically test the
vulnerability of a neural network, and use the falsifying examples to make the
neural network more robust through retraining. In addition, scene images can
be captured simultaneously for sensor fusion tasks, and we propose a method
for automatic calibration between the point clouds and the captured scene
images. We show a significant improvement in accuracy (+9%) in point
cloud segmentation by augmenting the training dataset with the generated
synthesized data. Our experiments also show that, by testing and retraining
the network using point clouds from user-configured scenes, the weaknesses and
blind spots of the neural network can be fixed.
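The key property of such a generator is that labels come for free at the point level, because every sampled point is created from an object whose class is known. A minimal sketch of that idea, assuming a toy scene of labeled primitives (the function name `sample_labeled_cloud` and the scene encoding are illustrative, not the paper's framework):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_labeled_cloud(points_per_object, objects):
    """Sample a synthetic point cloud with point-level labels.

    objects: list of (center, size, class_id); each object contributes
    `points_per_object` points carrying its class label."""
    pts, labels = [], []
    for center, size, class_id in objects:
        p = rng.uniform(-0.5, 0.5, size=(points_per_object, 3)) * size + center
        pts.append(p)
        labels.append(np.full(points_per_object, class_id))
    return np.vstack(pts), np.concatenate(labels)

# Toy scene: one "car"-like box and one "pedestrian"-like box (labels 1, 2).
scene = [
    (np.array([5.0, 0.0, 0.0]), 2.0, 1),
    (np.array([0.0, 3.0, 0.0]), 0.5, 2),
]
points, labels = sample_labeled_cloud(100, scene)
```

Augmenting a real training set would then amount to concatenating `(points, labels)` from many such generated scenes with the manually annotated data.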
Virtual to Real Reinforcement Learning for Autonomous Driving
Reinforcement learning is considered a promising direction for driving
policy learning. However, training an autonomous vehicle with reinforcement
learning in a real environment involves unaffordable trial-and-error. It is
more desirable to first train in a virtual environment
and then transfer to the real environment. In this paper, we propose a novel
realistic translation network that makes a model trained in a virtual
environment workable in the real world. The proposed network can convert
non-realistic virtual
image input into a realistic one with a similar scene structure. Given
realistic frames as input, a driving policy trained by reinforcement learning
can adapt well to real-world driving. Experiments show that the proposed
virtual-to-real (VR) reinforcement learning (RL) approach performs well. To
our knowledge, this is the first successful case of a driving policy trained
by reinforcement learning that can adapt to real-world driving data.
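The pipeline described above, translate each virtual frame into a realistic one, then feed it to the RL policy, can be sketched as two composed functions. This is a hedged sketch: the linear per-pixel transform standing in for the translation network and the brightness-based toy policy are assumptions for illustration, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(2)

def translate_virtual_to_real(frame, W):
    """Stand-in for the realistic translation network: a per-pixel channel
    transform that preserves the scene structure of the virtual frame."""
    h, w, c = frame.shape
    return np.clip(frame.reshape(-1, c) @ W, 0.0, 1.0).reshape(h, w, c)

def driving_policy(frame):
    """Toy policy trained on realistic frames: steer toward the brighter
    side of the image; output is a steering command in [-1, 1]."""
    h, w, _ = frame.shape
    left = frame[:, : w // 2].mean()
    right = frame[:, w // 2 :].mean()
    return float(np.tanh(right - left))

virtual_frame = rng.uniform(size=(4, 6, 3))   # small synthetic RGB frame
W = np.eye(3) * 0.9                            # placeholder translation weights
realistic_frame = translate_virtual_to_real(virtual_frame, W)
steering = driving_policy(realistic_frame)
```

The point of the composition is that the policy only ever sees translated, realistic-looking frames, both during virtual training and at real-world deployment.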