Teaching Autonomous Systems Hands-On: Leveraging Modular Small-Scale Hardware in the Robotics Classroom
Although robotics courses are well established in higher education, they
often focus on theory and sometimes lack systematic coverage of the
techniques involved in developing, deploying, and applying software to real
hardware. Additionally, most hardware platforms for robotics teaching are
low-end toys aimed at younger, middle-school-level students. To address
this gap, an autonomous vehicle hardware platform, called F1TENTH, was
developed for teaching autonomous systems hands-on. This article describes the teaching
modules and software stack for teaching at various educational levels with the
theme of "racing" and competitions that replace exams. The F1TENTH vehicles
offer a modular hardware platform and related software for teaching the
fundamentals of autonomous driving algorithms. From basic reactive methods to
advanced planning algorithms, the teaching modules enhance students'
computational thinking through autonomous driving with the F1TENTH vehicle. The
F1TENTH car fills the gap between research platforms and low-end toy cars and
offers hands-on experience in learning the topics in autonomous systems. Four
universities have adopted the teaching modules for their semester-long
undergraduate and graduate courses for multiple years. Student feedback is used
to analyze the effectiveness of the F1TENTH platform. More than 80% of the
students strongly agree that the hardware platform and modules greatly motivate
their learning, and more than 70% of the students strongly agree that the
hardware enhanced their understanding of the subjects. The survey results show
that more than 80% of the students strongly agree that the competitions
motivate them for the course.
Comment: 15 pages, 12 figures, 3 tables
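The "basic reactive methods" mentioned above can be illustrated by a follow-the-gap-style controller, a common first exercise on F1TENTH-class vehicles. This is a minimal sketch, not the course's actual code: the scan layout, bubble size, and field of view are assumptions for the example.

```python
import numpy as np

def follow_the_gap(ranges, fov=np.pi, bubble_radius=5):
    """Pick a steering angle toward the widest free gap in a LIDAR scan.

    ranges: 1-D array of range readings spread evenly across `fov` radians.
    bubble_radius: number of beams to zero out around the closest obstacle.
    Returns a steering angle in radians (negative = left of scan center,
    by this sketch's convention).
    """
    r = np.array(ranges, dtype=float)
    # Zero out a safety bubble around the nearest obstacle.
    i = int(np.argmin(r))
    lo, hi = max(0, i - bubble_radius), min(len(r), i + bubble_radius + 1)
    r[lo:hi] = 0.0
    # Find the longest run of free (non-zero) beams.
    free = r > 0
    best_len, best_start, run_start = 0, 0, None
    for j, f in enumerate(np.append(free, False)):
        if f and run_start is None:
            run_start = j
        elif not f and run_start is not None:
            if j - run_start > best_len:
                best_len, best_start = j - run_start, run_start
            run_start = None
    # Steer toward the middle of the widest gap.
    mid = best_start + best_len // 2
    return (mid / (len(r) - 1) - 0.5) * fov
```

The method is purely reactive: it uses only the current scan, with no map or planner, which is what makes it a good first lab before the planning-based modules.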
Pre-Deployment Testing of Low Speed, Urban Road Autonomous Driving in a Simulated Environment
Low-speed autonomous shuttles emulating SAE Level 4 automated driving using
human driver assisted autonomy have been operating in geo-fenced areas in
several cities in the US and the rest of the world. These autonomous vehicles
(AV) are operated by small to mid-sized technology companies that do not have
the resources of automotive OEMs for carrying out exhaustive, comprehensive
testing of their AV technology solutions before public road deployment. Because
these shuttles operate at low speeds and do not operate on highways, their base
vehicles are not required to go through
rigorous certification tests. The way the driver assisted AV technology is
tested and allowed for public road deployment is continuously evolving but is
not standardized and shows differences between the different states where these
vehicles operate. Currently, AVs and AV shuttles deployed on public roads are
using these deployments for testing and improving their technology. However,
this is not the right approach. Safe and extensive testing in a lab and
controlled test environment including Model-in-the-Loop (MiL),
Hardware-in-the-Loop (HiL) and Autonomous-Vehicle-in-the-Loop (AViL) testing
should be the prerequisite to such public road deployments. This paper presents
three dimensional virtual modeling of an AV shuttle deployment site and
simulation testing in this virtual environment. These AV shuttles have two
deployment sites in Columbus through the Department of Transportation-funded
Smart City Challenge project named Smart Columbus. The Linden residential area
AV shuttle deployment site of Smart Columbus is used as the specific example
for illustrating the AV testing method proposed in this paper.
Autonomous vehicle navigation with deep reinforcement learning
The irruption of Autonomous Vehicles into the transportation sector is unstoppable. However, the transition from conventional vehicles to Autonomous Vehicles will not happen from one day to the other; instead, it will be a process of several years in which, gradually, new autonomous/automated functionalities will be added to vehicles and introduced to customers. These automated/autonomous functionalities are known today as ADAS (Advanced Driver Assistance Systems). The aim of this project is, through the combination of different ADAS functions, to make the vehicle navigate a highway autonomously while following traffic rules and regulatory requirements and guaranteeing safety on the road. To accomplish this objective, the proposed approach is to implement the Policy Gradient Reinforcement Learning method to select the proper function at each moment. The current regulatory framework for road safety is explained in order to understand the ADAS functions that the model combines, as well as how these will evolve in the near future. The algorithm is tested using a five-lane highway simulator, selected after a study of the state of the art in Autonomous Vehicle simulators. The vehicle is guided by LIDAR data coming from a sensor installed in the vehicle (a sensor equipped in most future Autonomous Vehicles). Results and performance of the model through experimentation are presented and evaluated using the simulator, as well as the different network morphologies.
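As a rough illustration of the policy-gradient idea described above, selecting one ADAS function per time step, here is a minimal REINFORCE sketch with a linear softmax policy. The action set, features, and reward rule are invented for the example and are not from the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ADAS function set; the project's actual actions may differ.
ACTIONS = ["keep_lane", "change_left", "change_right", "brake"]

class SoftmaxPolicy:
    """Linear softmax policy over a feature vector (e.g., LIDAR-derived)."""
    def __init__(self, n_features, n_actions, lr=0.5):
        self.w = np.zeros((n_actions, n_features))
        self.lr = lr

    def probs(self, x):
        z = self.w @ x
        e = np.exp(z - z.max())          # numerically stable softmax
        return e / e.sum()

    def sample(self, x):
        return rng.choice(len(ACTIONS), p=self.probs(x))

    def update(self, episode, G):
        """REINFORCE: w += lr * G * grad log pi(a|x) for each step taken."""
        for x, a in episode:
            p = self.probs(x)
            grad = -np.outer(p, x)       # -p_k * x for every action k
            grad[a] += x                 # +x for the action actually taken
            self.w += self.lr * G * grad

# Toy check: reward 1 when the policy brakes at low clearance ahead and
# keeps its lane otherwise; feature vector is [clearance, bias].
policy = SoftmaxPolicy(n_features=2, n_actions=len(ACTIONS))
for _ in range(2000):
    clearance = rng.uniform(0.0, 1.0)
    x = np.array([clearance, 1.0])
    a = policy.sample(x)
    correct = 3 if clearance < 0.3 else 0   # 3 = brake, 0 = keep_lane
    policy.update([(x, a)], 1.0 if a == correct else 0.0)
```

After training, the policy should prefer `brake` at low clearance and `keep_lane` at high clearance; the real project would replace the toy reward with returns from the highway simulator.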
Exploring the challenges and opportunities of image processing and sensor fusion in autonomous vehicles: A comprehensive review
Autonomous vehicles are at the forefront of future transportation solutions, but their success hinges on reliable perception. This review paper surveys image processing and sensor fusion techniques vital for ensuring vehicle safety and efficiency. The paper focuses on object detection, recognition, tracking, and scene comprehension via computer vision and machine learning methodologies. In addition, the paper explores challenges within the field, such as robustness in adverse weather conditions, the demand for real-time processing, and the integration of complex sensor data. Furthermore, we examine localization techniques specific to autonomous vehicles. The results show that while substantial progress has been made in each subfield, there are persistent limitations. These include a shortage of comprehensive large-scale testing, the absence of diverse and robust datasets, and occasional inaccuracies in certain studies. These issues impede the seamless deployment of this technology in real-world scenarios. This comprehensive literature review contributes to a deeper understanding of the current state and future directions of image processing and sensor fusion in autonomous vehicles, aiding researchers and practitioners in advancing the development of reliable autonomous driving systems.
BehAVExplor: Behavior Diversity Guided Testing for Autonomous Driving Systems
Testing Autonomous Driving Systems (ADSs) is a critical task for ensuring the
reliability and safety of autonomous vehicles. Existing methods mainly focus on
searching for safety violations while ignoring the diversity of the generated
test cases, which may produce many redundant test cases and failures. Such
redundant failures can reduce testing performance and increase failure analysis
costs. In this paper, we present a novel behavior-guided fuzzing technique
(BehAVExplor) to explore the different behaviors of the ego vehicle (i.e., the
vehicle controlled by the ADS under test) and detect diverse violations.
Specifically, we design an efficient unsupervised model, called BehaviorMiner,
to characterize the behavior of the ego vehicle. BehaviorMiner extracts the
temporal features from the given scenarios and performs a clustering-based
abstraction to group behaviors with similar features into abstract states. A
new test case is added to the seed corpus if it triggers new behaviors
(i.e., covers new abstract states). Due to the potential conflict between the
behavior diversity and the general violation feedback, we further propose an
energy mechanism to guide the seed selection and the mutation. The energy of a
seed quantifies how good it is. We evaluated BehAVExplor on Apollo, an
industrial-level ADS, in the LGSVL simulation environment. Empirical evaluation
results show that BehAVExplor can effectively find more diverse violations than
the state of the art.
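The seed-corpus idea, keeping a test case only if it covers new abstract behavior states, can be sketched as follows. For simplicity this stand-in discretizes per-step feature vectors on a fixed grid rather than using BehaviorMiner's learned, clustering-based abstraction, and all names here are hypothetical.

```python
import numpy as np

def behavior_signature(trace, bins=4, lo=-1.0, hi=1.0):
    """Abstract a temporal trace of ego-vehicle features (e.g., normalized
    speed and heading change per step) into a tuple of discrete states.

    Each time step's feature vector is mapped to a grid cell; the trace
    becomes the sequence of distinct cells it visits. This plays the role
    of BehaviorMiner's clustering-based abstraction in miniature.
    """
    t = np.clip(np.asarray(trace, dtype=float), lo, hi)
    cells = np.floor((t - lo) / (hi - lo) * bins).astype(int)
    cells = np.minimum(cells, bins - 1)          # keep t == hi in-range
    states, last = [], None
    for row in map(tuple, cells):
        if row != last:                          # collapse repeated cells
            states.append(row)
            last = row
    return tuple(states)

class SeedCorpus:
    """Keep a test case only if its abstracted behavior covers new states."""
    def __init__(self):
        self.covered = set()
        self.seeds = []

    def maybe_add(self, test_case, trace):
        sig = behavior_signature(trace)
        new = [s for s in sig if s not in self.covered]
        if new:
            self.covered.update(new)
            self.seeds.append(test_case)
            return True
        return False
```

A full fuzzer would add the paper's energy mechanism on top of this, biasing seed selection and mutation by how promising each kept seed is; that part is omitted here.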
Trusted autonomous vehicles: an interactive exhibit
Recent surveys about autonomous vehicles show that the public is concerned about the safety consequences of system or equipment failures and the vehicles' reactions to unexpected situations. We believe that informing the public about the technology and quality, e.g., safety and reliability, of autonomous vehicles is paramount to improving public expectations, perception and acceptance. In this paper, we report on the design of an interactive exhibit to illustrate (1) basic technologies employed in autonomous vehicles, i.e., sensors and object classification; and (2) basic principles for ensuring their quality, i.e., employing software testing and simulations. We subsequently report on a public engagement event involving this exhibit at the Royal Society Summer Science Exhibition 2019 in the exhibit titled "Trusted Autonomous Vehicles". We describe the process of designing and developing the artefacts used in our exhibit, the theoretical background associated with them, the design of our stand, and the lessons learned. The activities and findings of this study can be used by other educators and researchers interested in promoting trust in autonomous vehicles among the general public.
Autonomics: In Search of a Foundation for Next Generation Autonomous Systems
The potential benefits of autonomous systems have been driving intensive
development of such systems, and of supporting tools and methodologies.
However, there are still major issues to be dealt with before such development
becomes commonplace engineering practice, with accepted and trustworthy
deliverables. We argue that a solid, evolving, publicly available,
community-controlled foundation for developing next generation autonomous
systems is a must. We discuss what is needed for such a foundation, identify a
central aspect thereof, namely, decision-making, and focus on three main
challenges: (i) how to specify autonomous system behavior and the associated
decisions in the face of unpredictability of future events and conditions and
the inadequacy of current languages for describing these; (ii) how to carry out
faithful simulation and analysis of system behavior with respect to rich
environments that include humans, physical artifacts, and other systems; and
(iii) how to engineer systems that combine executable model-driven techniques
and data-driven machine learning techniques. We argue that autonomics, i.e.,
the study of unique challenges presented by next generation autonomous systems,
and research towards resolving them, can introduce substantial contributions
and innovations in system engineering and computer science.