11,129 research outputs found

    A LiDAR Point Cloud Generator: from a Virtual World to Autonomous Driving

    3D LiDAR scanners play an increasingly important role in autonomous driving because they provide depth information about the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation, which hampers the efficient development of supervised deep learning algorithms that are often data-hungry. We present a framework for rapidly creating point clouds with accurate point-level labels from a computer game. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network, and the falsifying examples can be used to make the network more robust through retraining. In addition, scene images can be captured simultaneously for sensor fusion tasks, with a proposed method for automatic calibration between the point clouds and the captured scene images. We show a significant improvement in accuracy (+9%) in point cloud segmentation by augmenting the training dataset with the synthesized data. Our experiments also show that, by testing and retraining the network on point clouds from user-configured scenes, its weaknesses and blind spots can be fixed.
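    The segmentation gain reported above comes from simply pooling real and game-synthesized scans before training. The sketch below shows one way such a mixed training list could be assembled; the directory layout, file format, and function names are assumptions made for illustration, not part of the paper's released tooling.

```python
# Hypothetical sketch: pooling real and game-synthesized LiDAR scans into one
# training set before fitting a segmentation network. Paths and naming are
# illustrative assumptions only.
import glob
import random

def build_training_list(real_dir, synth_dir, seed=0):
    """Return a shuffled list of (scan_path, label_path, source) tuples."""
    samples = []
    for source, root in (("real", real_dir), ("synthetic", synth_dir)):
        for scan in sorted(glob.glob(f"{root}/scans/*.npy")):
            label = scan.replace("/scans/", "/labels/")
            samples.append((scan, label, source))
    random.Random(seed).shuffle(samples)  # mix both domains within every batch
    return samples

if __name__ == "__main__":
    train_list = build_training_list("data/kitti", "data/gta_lidar")
    n_synth = sum(s[2] == "synthetic" for s in train_list)
    print(f"{len(train_list)} training scans ({n_synth} synthesized)")
```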

    Sensormodelle zur Simulation der Umfelderfassung für Systeme des automatisierten Fahrens (Sensor Models for Simulating Environment Perception in Automated Driving Systems)

    The use of sensor models allows the simulation of environment perception in automated driving systems, aiding development and testing efforts. This work systematically discusses the different types of sensor models and introduces an architecture for statistics-based as well as physically motivated sensor models. Each approach is grounded in real-world sensor measurements and is designed for portability and ease of further extension to different application areas.
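    As a rough illustration of what a statistics-based sensor model can look like, the sketch below perturbs ideal simulated ranges with a distance-dependent Gaussian error and occasional missed returns. The noise law and all parameter values are placeholders chosen for the example, not the models fitted in this work.

```python
# Minimal sketch of a statistics-based range-sensor model: ideal ranges from
# the simulation are degraded with an empirically motivated noise law.
# sigma0_m, sigma_per_m, and dropout_p are placeholder values, not fitted ones.
import numpy as np

def apply_sensor_model(ideal_ranges_m, rng=None,
                       sigma0_m=0.02, sigma_per_m=0.001, dropout_p=0.05):
    """Return noisy ranges; dropped returns are marked as NaN."""
    rng = np.random.default_rng() if rng is None else rng
    r = np.asarray(ideal_ranges_m, dtype=float)
    sigma = sigma0_m + sigma_per_m * r               # noise grows with distance
    noisy = r + rng.normal(0.0, sigma)               # per-return Gaussian error
    noisy[rng.random(r.shape) < dropout_p] = np.nan  # occasional missed returns
    return noisy

print(apply_sensor_model([5.0, 20.0, 80.0], rng=np.random.default_rng(42)))
```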

    Playing for Data: Ground Truth from Computer Games

    Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just 1/3 of the CamVid training set outperform models trained on the complete CamVid training set. Comment: Accepted to the 14th European Conference on Computer Vision (ECCV 2016).
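    The headline comparison (game data plus one third of CamVid versus the full CamVid training set) amounts to a particular way of composing the training file list. The snippet below sketches that composition under assumed directory names; it is illustrative only and not the authors' released code.

```python
# Illustrative sketch of the abstract's training-set recipe: all game-rendered
# frames plus a reproducible one-third subset of the real CamVid training split.
# Directory names and the selection scheme are assumptions.
import glob
import random

def third_of(real_frames, seed=0):
    """Pick a reproducible one-third subset of the real training frames."""
    frames = sorted(real_frames)
    random.Random(seed).shuffle(frames)
    return frames[: max(1, len(frames) // 3)]

game_frames = sorted(glob.glob("data/game/images/*.png"))
camvid_frames = sorted(glob.glob("data/camvid/train/images/*.png"))
training_frames = game_frames + third_of(camvid_frames)
print(f"training on {len(game_frames)} game frames + "
      f"{len(training_frames) - len(game_frames)} CamVid frames")
```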