Modeling Camera Effects to Improve Visual Learning from Synthetic Data
Recent work has focused on generating synthetic imagery to increase the size
and variability of training data for learning visual tasks in urban scenes,
for example by increasing the occurrence of occlusions or varying
environmental and weather effects. However, few efforts have addressed
modeling variation in the sensor domain. Sensor effects can degrade real
images, limiting the generalizability of networks trained on synthetic data
and tested in real environments. This paper proposes an efficient, automatic,
physically based augmentation pipeline that varies sensor effects (chromatic
aberration, blur, exposure, noise, and color cast) in synthetic imagery. In
particular, the paper shows that augmenting synthetic training datasets with
the proposed pipeline reduces the domain gap between synthetic and real data
for the task of object detection in urban driving scenes.
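The paper's pipeline is physically based and calibrated to real cameras; as a rough illustration of the five perturbations it names, the sketch below applies simplified versions with NumPy and SciPy. The function name and parameter ranges are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def augment_sensor_effects(img, rng=None):
    """Apply randomized, simplified sensor effects to an HxWx3 float image in [0, 1].

    Illustrative only: ranges below are guesses, not the paper's calibrated values.
    """
    if rng is None:
        rng = np.random.default_rng()
    out = img.astype(np.float32)

    # Chromatic aberration: shift red and blue channels in opposite directions.
    shift = int(rng.integers(1, 4))
    out[..., 0] = np.roll(out[..., 0], shift, axis=1)
    out[..., 2] = np.roll(out[..., 2], -shift, axis=1)

    # Blur: Gaussian blur with a randomly drawn width (none across channels).
    sigma = float(rng.uniform(0.0, 1.5))
    out = gaussian_filter(out, sigma=(sigma, sigma, 0.0))

    # Exposure: gamma-style brightness perturbation.
    out = np.clip(out, 0.0, 1.0) ** rng.uniform(0.7, 1.4)

    # Noise: additive Gaussian read noise.
    out = out + rng.normal(0.0, 0.02, size=out.shape)

    # Color cast: small per-channel gain.
    out = out * rng.uniform(0.9, 1.1, size=(1, 1, 3))

    return np.clip(out, 0.0, 1.0).astype(np.float32)
```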
Procedural Modeling and Physically Based Rendering for Synthetic Data Generation in Automotive Applications
We present an overview and evaluation of a new, systematic approach for
generating highly realistic, annotated synthetic data for training deep
neural networks on computer vision tasks. The main contribution is a
procedural world-modeling approach that enables high variability coupled with
physically accurate image synthesis, a departure from the hand-modeled
virtual worlds and approximate image synthesis methods used in real-time
applications.
The benefits of our approach include flexible, physically accurate and scalable
image synthesis, implicit wide coverage of classes and features, and complete
data introspection for annotations, which all contribute to quality and cost
efficiency. To evaluate our approach and the efficacy of the resulting data, we
use semantic segmentation for autonomous vehicles and robotic navigation as the
main application, and we train multiple deep learning architectures using
synthetic data with and without fine tuning on organic (i.e. real-world) data.
The evaluation shows that our approach improves network performance and that
even modest implementation efforts produce state-of-the-art results.
Comment: The project web page at
http://vcl.itn.liu.se/publications/2017/TKWU17/ contains a version of the
paper with high-resolution images as well as additional material.
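The synthetic-pretrain then real-fine-tune protocol the evaluation follows is standard; a minimal PyTorch sketch of the two phases is below. The `model` and loader names, epoch counts, and learning rates are placeholders, not values from the paper.

```python
import torch
from torch import nn, optim

def run_phase(model, loader, epochs, lr, device="cuda"):
    """One training phase: pixel-wise cross-entropy for semantic segmentation."""
    model.to(device).train()
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss(ignore_index=255)  # skip unlabeled pixels
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images.to(device)), labels.to(device))
            loss.backward()
            opt.step()

# `model`, `synthetic_loader`, and `real_loader` are placeholders for a
# segmentation network and DataLoaders over synthetic and real frames.
run_phase(model, synthetic_loader, epochs=50, lr=1e-4)  # pretrain on synthetic
run_phase(model, real_loader, epochs=10, lr=1e-5)       # fine-tune on real data
```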
FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation
FlightGoggles is a photorealistic sensor simulator for perception-driven
robotic vehicles. The key contributions of FlightGoggles are twofold. First,
FlightGoggles provides photorealistic exteroceptive sensor simulation using
graphics assets generated with photogrammetry. Second, it provides the ability
to combine (i) synthetic exteroceptive measurements generated in silico in real
time and (ii) vehicle dynamics and proprioceptive measurements generated in
motion by vehicle(s) in a motion-capture facility. FlightGoggles is capable of
simulating a virtual-reality environment around autonomous vehicle(s). While a
vehicle is in flight in the FlightGoggles virtual reality environment,
exteroceptive sensors are rendered synthetically in real time while all complex
extrinsic dynamics are generated organically through the natural interactions
of the vehicle. The FlightGoggles framework allows for researchers to
accelerate development by circumventing the need to estimate complex and
hard-to-model interactions such as aerodynamics, motor mechanics, battery
electrochemistry, and behavior of other agents. The ability to perform
vehicle-in-the-loop experiments with photorealistic exteroceptive sensor
simulation facilitates novel research directions involving, e.g., fast and
agile autonomous flight in obstacle-rich environments, safe human interaction,
and flexible sensor selection. FlightGoggles has been utilized as the main
testbed for selecting nine teams that will advance in the AlphaPilot
autonomous drone racing challenge. We survey approaches and results from the
top AlphaPilot teams, which may be of independent interest.
Comment: Initial version appeared at IROS 2019. Supplementary material can be
found at https://flightgoggles.mit.edu. Revision includes description of new
FlightGoggles features, such as a photogrammetric model of the MIT Stata
Center, new rendering settings, and a Python API.
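The vehicle-in-the-loop idea, real dynamics from a motion-capture room driving a synthetic camera, can be sketched as a simple loop. The classes below are hypothetical placeholders, not FlightGoggles' actual Python API.

```python
import time

class MocapClient:
    """Placeholder: streams the real vehicle's pose from a motion-capture system."""
    def latest_pose(self):
        return {"position": (0.0, 0.0, 1.0), "orientation": (1.0, 0.0, 0.0, 0.0)}

class PhotorealisticRenderer:
    """Placeholder: renders a synthetic camera frame at a given camera pose."""
    def render(self, pose):
        return b""  # encoded image bytes in a real system

mocap, renderer = MocapClient(), PhotorealisticRenderer()
for _ in range(10_000):  # one simulated flight
    # The vehicle flies in an empty motion-capture room; its measured pose
    # drives the virtual camera, so dynamics remain organic while the
    # exteroceptive sensor stream is generated in silico.
    pose = mocap.latest_pose()
    frame = renderer.render(pose)
    # `frame` would be published to the onboard perception stack here.
    time.sleep(1 / 60)  # pace rendering at the camera frame rate
```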