FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation
FlightGoggles is a photorealistic sensor simulator for perception-driven
robotic vehicles. The key contributions of FlightGoggles are twofold. First,
FlightGoggles provides photorealistic exteroceptive sensor simulation using
graphics assets generated with photogrammetry. Second, it provides the ability
to combine (i) synthetic exteroceptive measurements generated in silico in real
time and (ii) vehicle dynamics and proprioceptive measurements generated in
motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of
simulating a virtual-reality environment around autonomous vehicle(s). While a
vehicle is in flight in the FlightGoggles virtual reality environment,
exteroceptive sensors are rendered synthetically in real time while all complex
extrinsic dynamics are generated organically through the natural interactions
of the vehicle. The FlightGoggles framework allows for researchers to
accelerate development by circumventing the need to estimate complex and
hard-to-model interactions such as aerodynamics, motor mechanics, battery
electrochemistry, and behavior of other agents. The ability to perform
vehicle-in-the-loop experiments with photorealistic exteroceptive sensor
simulation facilitates novel research directions involving, e.g., fast and
agile autonomous flight in obstacle-rich environments, safe human interaction,
and flexible sensor selection. FlightGoggles has been utilized as the main test
for selecting nine teams that will advance in the AlphaPilot autonomous drone
racing challenge. We survey approaches and results from the top AlphaPilot
teams, which may be of independent interest.

Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. Revision includes description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
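The vehicle-in-the-loop pattern the abstract describes can be sketched as a simple render/perceive/act cycle. This is a conceptual illustration only: all class and function names below are stubs standing in for the motion-capture system, renderer, and autopilot, and none of them come from the FlightGoggles API.

```python
# Conceptual sketch of the vehicle-in-the-loop cycle: real dynamics
# arrive from motion capture ("in motio"), exteroceptive sensing is
# rendered synthetically ("in silico"), and the autopilot closes the
# loop. All classes below are illustrative stubs, not FlightGoggles code.

class MocapStub:
    """Stands in for a motion-capture feed of the real vehicle."""
    def __init__(self):
        self.t = 0
    def get_pose(self):
        self.t += 1
        return (0.0, 0.0, 0.1 * self.t)   # hypothetical slow climb in z

class RendererStub:
    """Stands in for the photorealistic renderer."""
    def render(self, pose):
        return f"frame@z={pose[2]:.1f}"    # placeholder for an image

class AutopilotStub:
    """Stands in for a perception-driven controller."""
    def step(self, image):
        return "hover"

def run_vehicle_in_the_loop(mocap, renderer, autopilot, ticks):
    """One render/perceive/act iteration per control tick."""
    log = []
    for _ in range(ticks):
        pose = mocap.get_pose()            # real dynamics
        image = renderer.render(pose)      # synthetic camera
        command = autopilot.step(image)    # control decision
        log.append((pose, image, command))
    return log

log = run_vehicle_in_the_loop(MocapStub(), RendererStub(), AutopilotStub(), 3)
print(len(log), log[-1][2])  # → 3 hover
```

The point of the pattern is that only the sensing is simulated; the dynamics in `get_pose` would come from a physical vehicle, so nothing aerodynamic needs to be modeled.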
DeepPicar: A Low-cost Deep Neural Network-based Autonomous Car
We present DeepPicar, a low-cost deep neural network based autonomous car
platform. DeepPicar is a small scale replication of a real self-driving car
called DAVE-2 by NVIDIA. DAVE-2 uses a deep convolutional neural network (CNN),
which takes images from a front-facing camera as input and produces car
steering angles as output. DeepPicar uses the same network architecture---9
layers, 27 million connections and 250K parameters---and can drive itself in
real-time using a web camera and a Raspberry Pi 3 quad-core platform. Using
DeepPicar, we analyze the Pi 3's computing capabilities to support end-to-end
deep learning based real-time control of autonomous vehicles. We also
systematically compare other contemporary embedded computing platforms using
the DeepPicar's CNN-based real-time control workload. We find that all tested
platforms, including the Pi 3, are capable of supporting the CNN-based
real-time control at rates from 20 Hz up to 100 Hz, depending on the hardware platform.
However, we find that shared resource contention remains an important issue
that must be considered in applying CNN models on shared memory based embedded
computing platforms; we observe up to 11.6X execution time increase in the CNN
based control loop due to shared resource contention. To protect the CNN
workload, we also evaluate state-of-the-art cache partitioning and memory
bandwidth throttling techniques on the Pi 3. We find that cache partitioning is
ineffective, while memory bandwidth throttling is an effective solution.

Comment: To be published as a conference paper at RTCSA 2018.
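The quoted architecture figures can be sanity-checked with a few lines of arithmetic. The sketch below tallies weights and biases for a DAVE-2-style network; the layer dimensions are assumptions taken from NVIDIA's published DAVE-2 description rather than from the DeepPicar code, and the total lands within about 1% of the stated 250K parameters.

```python
# Sketch: tallying parameters of a DAVE-2-style CNN (9 layers:
# 5 conv + 4 fully connected) to sanity-check the ~250K figure.
# Layer dimensions are assumptions from NVIDIA's DAVE-2 description,
# not from the DeepPicar source itself.

def conv_out(size, kernel, stride):
    """Output length of a 'valid' convolution along one dimension."""
    return (size - kernel) // stride + 1

def dave2_param_count(h=66, w=200, c=3):
    convs = [  # (filters, kernel, stride)
        (24, 5, 2), (36, 5, 2), (48, 5, 2), (64, 3, 1), (64, 3, 1),
    ]
    params = 0
    for filters, k, s in convs:
        params += (k * k * c + 1) * filters   # weights + biases
        h, w, c = conv_out(h, k, s), conv_out(w, k, s), filters
    units = h * w * c                         # flattened feature map
    for fc in (100, 50, 10, 1):               # fully connected stack
        params += (units + 1) * fc
        units = fc
    return params

print(dave2_param_count())  # → 252219, i.e. roughly 250K
```

With these assumed dimensions the flattened feature map comes to 1152 units, and the first fully connected layer dominates the count, which is why the parameter total is small relative to the 27 million connections.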
Study of the implementation of an autonomous driving system
This work is intended as an introductory guide to the world of artificial intelligence, applied more
specifically to autonomous driving. The final application in which the knowledge developed
throughout this work is to be deployed is not generic, in the sense of an autonomous car that could
drive on public roads. Instead, it has a more specific purpose that is less demanding in terms of
safety: application to a Formula Student racing car.
The document covers both the theory needed to venture into this field and all the tools required to
train an artificial intelligence capable of learning to drive by itself. It also describes the entire
process followed to find the most suitable model with the tools used, along with commentary on how
to interpret the results.
It should be noted that this work is merely an introduction to the field and that, although the
results obtained are encouraging and may serve as a basis for future developments on the topic, they
cannot be applied directly to the final application due to the limited complexity and diversity of
cases in the model's training.
F1/10: An Open-Source Autonomous Cyber-Physical Platform
In 2005 DARPA labeled the realization of viable autonomous vehicles (AVs) a
grand challenge; a short time later the idea became a moonshot that could
change the automotive industry. Today, the question of safety stands between
that vision and reality. Given the right platform, the CPS community is poised to
offer unique insights. However, testing the limits of safety and performance on
real vehicles is costly and hazardous. The use of such vehicles is also outside
the reach of most researchers and students. In this paper, we present F1/10: an
open-source, affordable, and high-performance 1/10 scale autonomous vehicle
testbed. The F1/10 testbed carries a full suite of sensors, perception,
planning, control, and networking software stacks that are similar to full
scale solutions. We demonstrate key examples of the research enabled by the
F1/10 testbed, and how the platform can be used to augment research and
education in autonomous systems, making autonomy more accessible.
Vector extensions in COTS processors to increase guaranteed performance in real-time systems
The need for increased application performance in high-integrity systems, like those in avionics, is on the rise as software continues to implement more complex functionalities. The prevalent computing solution for future high-integrity embedded products is multi-processor systems-on-chip (MPSoCs). MPSoCs include CPU multicores that enable improving performance via thread-level parallelism, as well as generic accelerators (GPUs) and application-specific accelerators. However, the data processing approach (DPA) required to exploit each of these underlying parallel hardware blocks carries several open challenges for safe deployment in high-integrity domains. The main challenges include the qualification of the associated runtime system and the difficulty of analyzing programs deploying the DPA with out-of-the-box timing analysis and code coverage tools. In this work, we perform a thorough analysis of vector extensions (VExt) in current COTS processors for high-integrity systems. We show that VExt avoid many of the challenges arising with parallel programming models and GPUs. Unlike other DPAs, VExt require no runtime support, prevent by design the race conditions that might arise with parallel programming models, and have minimal impact on the software ecosystem, enabling the use of existing code coverage and timing analysis tools. We develop vectorized versions of neural network kernels and show that the NVIDIA Xavier VExt provide a reasonable increase in guaranteed application performance of up to 2.7x. Our analysis contends that VExt are the DPA with arguably the fastest path to adoption in high-integrity systems.

This work has received funding from the European Research Council (ERC) under grant agreement No. 772773 (SuPerCom) and from the Spanish Ministry of Science and Innovation (AEI/10.13039/501100011033) under grants PID2019-107255GB-C21 and IJC2020-045931-I.
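The contrast the abstract draws between scalar code and vectorized kernels can be modeled conceptually. The Python below simulates a fixed-width vector unit with per-lane accumulators; on real VExt hardware the inner lane loop would be a single vector instruction (e.g. a fused multiply-add across lanes). This is an illustrative model, not code from the paper.

```python
# Illustrative sketch (not from the paper): how a vector extension
# processes a reduction in fixed-width lanes. Each pass of the inner
# loop models one SIMD instruction operating on `lanes` elements.

def dot_scalar(a, b):
    """Plain scalar reduction: one multiply-add per iteration."""
    total = 0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_vectorized(a, b, lanes=4):
    """Lane-blocked reduction modeling a `lanes`-wide vector unit."""
    acc = [0] * lanes                      # one accumulator per lane
    main = len(a) - len(a) % lanes         # vectorizable prefix
    for i in range(0, main, lanes):
        for l in range(lanes):             # conceptually one instruction
            acc[l] += a[i + l] * b[i + l]
    total = sum(acc)                       # horizontal reduction
    for i in range(main, len(a)):          # scalar tail loop
        total += a[i] * b[i]
    return total

a = list(range(10))
b = list(range(10, 20))
print(dot_scalar(a, b), dot_vectorized(a, b))  # → 735 735
```

Because the lane-blocked version needs no threads or runtime, its control flow stays as analyzable as the scalar loop, which is the property the paper argues makes VExt attractive for timing analysis.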
GPU implementation of the Frenet Path Planner for embedded autonomous systems: A case study in the F1tenth scenario
Autonomous vehicles are increasingly utilized in safety-critical and time-sensitive settings like urban environments and competitive racing. Planning maneuvers ahead is pivotal in these scenarios, where the onboard compute platform determines the vehicle's future actions. This paper introduces an optimized implementation of the Frenet Path Planner, a renowned path planning algorithm, accelerated through GPU processing. Unlike existing methods, our approach expedites the entire algorithm, encompassing both path generation and collision avoidance. We gauge the execution time of our implementation, showcasing significant enhancements over the CPU baseline (up to 22x speedup). Furthermore, we assess the influence of different precision types (double, float, half) on trajectory accuracy, probing the balance between completion speed and computational precision. Moreover, we analyze the impact on execution time caused by the use of Nvidia Unified Memory and by interference from other processes running on the same system. We also evaluate our implementation in the F1tenth simulator and in a real race scenario. The results position our implementation as a strong candidate for the new state of the art for the Frenet Path Planner algorithm.
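At the heart of the Frenet Path Planner is fitting polynomial trajectories between boundary states; the GPU implementation's parallelism comes from evaluating many such candidates at once. The sketch below fits one quintic lateral-offset polynomial d(t) in pure Python. The function names and the small Cramer's-rule solver are illustrative assumptions, not the paper's code.

```python
# Minimal sketch (assumed, not the paper's code): a Frenet-frame
# planner fits quintic polynomials for the lateral offset d(t)
# between boundary states, then samples many candidates. This fits
# one quintic; a GPU version evaluates thousands in parallel.

def solve3(m, rhs):
    """Solve a 3x3 linear system via Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1]*a[2][2] - a[1][2]*a[2][1])
              - a[0][1] * (a[1][0]*a[2][2] - a[1][2]*a[2][0])
              + a[0][2] * (a[1][0]*a[2][1] - a[1][1]*a[2][0]))
    d = det(m)
    out = []
    for j in range(3):
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = rhs[i]          # replace column j with rhs
        out.append(det(mj) / d)
    return out

def quintic(d0, v0, a0, d1, v1, a1, T):
    """Coefficients c[0..5] of d(t) matching both boundary states."""
    c = [d0, v0, a0 / 2.0]             # fixed by the start state
    m = [[T**3,   T**4,    T**5],      # end-state conditions on
         [3*T**2, 4*T**3,  5*T**4],    # position, velocity, and
         [6*T,    12*T**2, 20*T**3]]   # acceleration at t = T
    rhs = [d1 - (c[0] + c[1]*T + c[2]*T**2),
           v1 - (c[1] + 2*c[2]*T),
           a1 - 2*c[2]]
    return c + solve3(m, rhs)

def evaluate(c, t):
    return sum(ci * t**i for i, ci in enumerate(c))

# Candidate lateral trajectory: move from a 1.0 m offset to the
# centerline (0.0 m) in 2 s, starting and ending at rest.
coeffs = quintic(1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0)
print(abs(round(evaluate(coeffs, 2.0), 6)))  # → 0.0, on the centerline
```

In a full planner this fit is repeated over a grid of end offsets and durations, each candidate is scored, and colliding ones are discarded; since every fit is independent, the workload maps naturally onto GPU threads.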
Teaching Autonomous Systems Hands-On: Leveraging Modular Small-Scale Hardware in the Robotics Classroom
Although robotics courses are well established in higher education, the
courses often focus on theory and sometimes lack the systematic coverage of the
techniques involved in developing, deploying, and applying software to real
hardware. Additionally, most hardware platforms for robotics teaching are
low-level toys aimed at younger students at middle-school levels. To address
this gap, an autonomous vehicle hardware platform, called F1TENTH, is developed
for teaching autonomous systems hands-on. This article describes the teaching
modules and software stack for teaching at various educational levels with the
theme of "racing" and competitions that replace exams. The F1TENTH vehicles
offer a modular hardware platform and its related software for teaching the
fundamentals of autonomous driving algorithms. From basic reactive methods to
advanced planning algorithms, the teaching modules enhance students'
computational thinking through autonomous driving with the F1TENTH vehicle. The
F1TENTH car fills the gap between research platforms and low-end toy cars and
offers hands-on experience in learning the topics in autonomous systems. Four
universities have adopted the teaching modules for their semester-long
undergraduate and graduate courses for multiple years. Student feedback is used
to analyze the effectiveness of the F1TENTH platform. More than 80% of the
students strongly agree that the hardware platform and modules greatly motivate
their learning, and more than 70% of the students strongly agree that the
hardware enhanced their understanding of the subjects. The survey results show
that more than 80% of the students strongly agree that the competitions
motivate them for the course.

Comment: 15 pages, 12 figures, 3 tables
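The "basic reactive methods" the modules begin with can be illustrated by a simplified follow-the-gap planner: scan the lidar sweep for the widest run of unobstructed beams and steer toward its center. The names and threshold below are illustrative assumptions, not taken from the F1TENTH course materials.

```python
# Sketch of a basic reactive method of the kind the F1TENTH modules
# teach: a simplified "follow the gap" planner. Names are illustrative.

def follow_the_gap(ranges, threshold=1.0):
    """Pick the beam index at the center of the widest clear gap.

    `ranges` is a list of lidar distances swept left to right; beams
    shorter than `threshold` are treated as blocked.
    """
    best_start, best_len = 0, 0
    start = None
    for i, r in enumerate(ranges + [0.0]):    # sentinel closes last gap
        if r >= threshold and start is None:
            start = i                          # a free gap opens
        elif r < threshold and start is not None:
            if i - start > best_len:           # widest gap so far
                best_start, best_len = start, i - start
            start = None
    if best_len == 0:
        return None                            # fully blocked: no gap
    return best_start + best_len // 2          # aim at the gap center

# Obstacle mid-scan; the widest clear gap is the 4-beam run at 4..7.
scan = [2.0, 2.5, 0.4, 0.3, 3.0, 3.0, 3.0, 2.8]
print(follow_the_gap(scan))  # → 6, the center of that gap
```

The returned index maps to a steering angle via the scan's angular resolution; more advanced course modules replace this reactive rule with map-based planners.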