882 research outputs found
Autonomous Vehicles: Open-Source Technologies, Considerations, and Development
Autonomous vehicles are the culmination of advances in many areas such as
sensor technologies, artificial intelligence (AI), networking, and more. This
paper will introduce the reader to the technologies that build autonomous
vehicles. It will focus on open-source tools and libraries for autonomous
vehicle development, making it cheaper and easier for developers and
researchers to participate in the field. The topics covered are as follows.
First, we will discuss the sensors used in autonomous vehicles and summarize
their performance in different environments, costs, and unique features. Then
we will cover Simultaneous Localization and Mapping (SLAM) and algorithms for
each modality. Third, we will review popular open-source driving simulators, a
cost-effective way to train machine learning models and test vehicle software
performance. We will then highlight embedded operating systems and the security
and development considerations when choosing one. After that, we will discuss
Vehicle-to-Vehicle (V2V) and Internet of Vehicles (IoV) communication, which are
areas that fuse networking technologies with autonomous vehicles to extend
their functionality. We will then review the five levels of vehicle automation,
commercial and open-source Advanced Driver Assistance Systems (ADAS), and their
features. Finally, we will touch on the major manufacturing and software
companies involved in the field, their investments, and their partnerships.
These topics will give the reader an understanding of the industry, its
technologies, active research, and the tools available for developers to build
autonomous vehicles.
Comment: 13 pages, 7 figures
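As an illustration of the localization building block behind the SLAM methods this paper surveys, here is a minimal sketch (pure Python; the function name and motion model are our illustrative assumptions, not from the paper) of the dead-reckoning prediction step that LiDAR- or camera-based SLAM pipelines then correct against sensor observations:

```python
import math

def dead_reckon(pose, v, omega, dt):
    """Integrate a unicycle motion model one step: the odometry
    prediction that SLAM back-ends refine with scan or feature matching.
    pose: (x, y, heading); v: speed (m/s); omega: yaw rate (rad/s)."""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta = (theta + omega * dt) % (2 * math.pi)
    return (x, y, theta)

# Drive straight for 1 s at 2 m/s, then turn in place by 90 degrees.
pose = dead_reckon((0.0, 0.0, 0.0), v=2.0, omega=0.0, dt=1.0)   # -> (2.0, 0.0, 0.0)
pose = dead_reckon(pose, v=0.0, omega=math.pi / 2, dt=1.0)
```

Pure dead reckoning drifts without bound, which is exactly why the correction step (the "mapping" half of SLAM) is needed.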
Increasing the Efficiency of Policy Learning for Autonomous Vehicles by Multi-Task Representation Learning
Driving in a dynamic, multi-agent, and complex urban environment is a
difficult task requiring a complex decision-making policy. The learning of such
a policy requires a state representation that can encode the entire
environment. Mid-level representations that encode a vehicle's environment as
images have become a popular choice. Still, they are quite high-dimensional,
limiting their use in data-hungry approaches such as reinforcement learning. In
this article, we propose to learn a low-dimensional and rich latent
representation of the environment by leveraging the knowledge of relevant
semantic factors. To do this, we train an encoder-decoder deep neural network
to predict multiple application-relevant factors such as the trajectories of
other agents and the ego car. Furthermore, we propose a hazard signal based on
other vehicles' future trajectories and the planned route, which is used in
conjunction with the learned latent representation as input to a downstream
policy. We demonstrate that using the multi-head encoder-decoder neural network
results in a more informative representation than a standard single-head model.
In particular, the proposed representation learning and the hazard signal help
reinforcement learning to learn faster, with increased performance and less
data than baseline methods.
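The hazard signal described above could, under simplifying assumptions (the paper's exact formulation is not given here; function name, distance threshold, and the linear mapping are our illustrative choices), be sketched as a per-timestep proximity check between the ego's planned route and other vehicles' predicted trajectories:

```python
import math

def hazard_signal(ego_route, other_trajectories, d_safe=5.0):
    """Scalar hazard in [0, 1]: saturates at 1 when any predicted
    trajectory of another vehicle comes within d_safe metres of the
    ego's planned route at the same future timestep, and decays
    linearly to 0 at 2 * d_safe.
    ego_route and each trajectory: list of (x, y) per future timestep."""
    d_min = float("inf")
    for traj in other_trajectories:
        for (ex, ey), (ox, oy) in zip(ego_route, traj):
            d_min = min(d_min, math.hypot(ex - ox, ey - oy))
    if d_min == float("inf"):          # no other agents predicted
        return 0.0
    return max(0.0, min(1.0, (2 * d_safe - d_min) / d_safe))
```

Feeding such a low-dimensional scalar alongside the learned latent vector gives the policy an explicit, easy-to-exploit danger cue instead of forcing it to rediscover proximity from pixels.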
Scene Detection Classification and Tracking for Self-Driven Vehicle
A number of traffic-related issues, including crashes, jams, and pollution, could be resolved by self-driving vehicles (SDVs). Several challenges still need to be overcome, particularly in the areas of precise environmental perception, object detection, and classification, to allow the safe navigation of autonomous vehicles (AVs) in crowded urban situations. This article offers a comprehensive examination of the application of deep learning techniques in self-driving cars for scene perception and object detection. The theoretical foundations of self-driving cars are examined in depth using a deep learning methodology, exploring the current applications of deep learning in this area and providing critical evaluations of their efficacy. The paper begins with an introduction to the ideas of computer vision, deep learning, and self-driving automobiles. It also gives a brief review of artificial general intelligence, highlighting its applicability to the subject at hand. The paper then categorises current, robust deep learning libraries, considers their critical contribution to the development of deep learning techniques, and describes the labelled dataset used for scene detection in self-driving vehicles. A sizeable portion of the work discusses strategies that explicitly handle the image-perception issues faced in real-time driving scenarios, including methods for object detection, recognition, and scene comprehension. Finally, self-driving automobile implementations and tests are critically assessed.
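As a small illustration of how object detections are typically scored against labels in the evaluations this article discusses, a standard Intersection-over-Union (IoU) check (not taken from this paper; the function name is ours) can be written as:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2): the standard overlap score used to match a
    predicted detection against a ground-truth label."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

A detection is commonly counted as correct when its IoU with a label exceeds a fixed threshold such as 0.5.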
End-to-end Autonomous Driving: Challenges and Frontiers
The autonomous driving community has witnessed a rapid growth in approaches
that embrace an end-to-end algorithm framework, utilizing raw sensor input to
generate vehicle motion plans, instead of concentrating on individual tasks
such as detection and motion prediction. End-to-end systems, in comparison to
modular pipelines, benefit from joint feature optimization for perception and
planning. This field has flourished due to the availability of large-scale
datasets, closed-loop evaluation, and the increasing need for autonomous
driving algorithms to perform effectively in challenging scenarios. In this
survey, we provide a comprehensive analysis of more than 250 papers, covering
the motivation, roadmap, methodology, challenges, and future trends in
end-to-end autonomous driving. We delve into several critical challenges,
including multi-modality, interpretability, causal confusion, robustness, and
world models, amongst others. Additionally, we discuss current advancements in
foundation models and visual pre-training, as well as how to incorporate these
techniques within the end-to-end driving framework. To facilitate future
research, we maintain an active repository that contains up-to-date links to
relevant literature and open-source projects at
https://github.com/OpenDriveLab/End-to-end-Autonomous-Driving
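The architectural contrast this survey draws between modular pipelines and end-to-end systems can be sketched as two interfaces (illustrative only; the types and function names are ours, not from any surveyed system):

```python
from typing import Callable, List, Tuple

SensorFrame = dict                 # e.g. {"camera": ..., "lidar": ...}
Waypoint = Tuple[float, float]     # a point on the planned motion path

def modular_drive(frame: SensorFrame,
                  detect: Callable, predict: Callable,
                  plan: Callable) -> List[Waypoint]:
    """Modular pipeline: hand-designed intermediate representations
    between independently optimized stages."""
    objects = detect(frame)        # perception
    futures = predict(objects)     # motion prediction
    return plan(frame, futures)    # planning consumes fixed outputs

def end_to_end_drive(frame: SensorFrame,
                     policy: Callable) -> List[Waypoint]:
    """End-to-end system: one learned function from raw sensor input
    to the motion plan, so features for perception and planning are
    optimized jointly against the driving objective."""
    return policy(frame)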
Safety of autonomous vehicles: A survey on Model-based vs. AI-based approaches
The growing advancements in Autonomous Vehicles (AVs) have emphasized the
critical need to prioritize the absolute safety of AV maneuvers, especially in
dynamic and unpredictable environments or situations. This objective becomes
even more challenging due to the uniqueness of every traffic
situation/condition. To cope with all these very constrained and complex
configurations, AVs must have appropriate control architectures with reliable
and real-time Risk Assessment and Management Strategies (RAMS). These RAMS
must drastically reduce navigation risks. However, the lack of provable safety
guarantees, which is one of the key challenges to be addressed, drastically
limits the ambition to introduce AVs more broadly on our roads and restricts
their use to very limited use cases. Therefore, the focus and ambition of this
paper are to survey research on autonomous vehicles while focusing on the
important topic of safety guarantees for AVs. For this purpose,
it is proposed to review research on relevant methods and concepts defining an
overall control architecture for AVs, with an emphasis on the safety assessment
and decision-making systems composing these architectures. This review also
highlights research that uses either model-based methods or AI-based
approaches, emphasizing the strengths and weaknesses of each methodology and
investigating research that proposes comprehensive multi-modal designs
combining model-based and AI approaches. The paper ends with a discussion of
the methods used to guarantee the safety of AVs, namely safety verification
techniques and the standardization/generalization of safety frameworks.
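As a minimal example of the model-based risk metrics such RAMS build on, a constant-speed time-to-collision (TTC) check (the function names and thresholds are our illustrative assumptions, not taken from this survey) might look like:

```python
def time_to_collision(gap_m, ego_speed, lead_speed):
    """Time in seconds until the ego vehicle closes the gap to a lead
    vehicle, assuming both hold constant speed. Returns None when the
    gap is not closing (no collision under this model)."""
    closing_speed = ego_speed - lead_speed
    return gap_m / closing_speed if closing_speed > 0 else None

def risk_level(ttc, brake_threshold_s=2.0, warn_threshold_s=4.0):
    """Map a TTC value onto a coarse maneuver decision."""
    if ttc is None or ttc > warn_threshold_s:
        return "safe"
    return "brake" if ttc <= brake_threshold_s else "warn"
```

The appeal of such model-based metrics is that their guarantees can be verified analytically; their weakness, as the survey notes, is the simplifying assumptions (here, constant speeds) that rarely hold in dense traffic.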
A computer vision system for detecting and analysing critical events in cities
Whether for commuting or leisure, cycling is a growing transport mode in many cities worldwide. However, it is still perceived as a dangerous activity. Although serious incidents related to cycling leading to major injuries are rare, the fear of getting hit or falling hinders the expansion of cycling as a major transport mode. Indeed, it has been shown that focusing on serious injuries only touches the tip of the iceberg. Near-miss data can provide much more information about potential problems and how to avoid risky situations that may lead to serious incidents. Unfortunately, there is a knowledge gap in identifying and analysing near misses. This hinders drawing statistically significant conclusions to provide measures for the built environment that ensure a safer environment for people on bikes. In this research, we develop a method to detect and analyse near misses and their risk factors using artificial intelligence. This is accomplished by analysing video streams linked to near-miss incidents within a novel framework relying on deep learning and computer vision. This framework automatically detects near misses and extracts their risk factors from video streams before analysing their statistical significance. It also provides practical solutions implemented in a camera with embedded AI (URBAN-i Box) and a cloud-based service (URBAN-i Cloud) to tackle the stated issue in real-world settings, for use by researchers, policy-makers, or citizens. The research aims to provide human-centred evidence that may enable policy-makers and planners to provide a safer built environment for cycling in London, or elsewhere. More broadly, this research aims to contribute to the scientific literature with the theoretical and empirical foundations of a computer vision system that can be utilised for detecting and analysing other critical events in a complex environment.
Such a system can be applied to a wide range of events, such as traffic incidents, crime, or overcrowding.
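A simplified sketch of the near-miss detection step, assuming road-user positions have already been tracked from video (the thresholds and function name are our illustrative assumptions, not the URBAN-i implementation):

```python
import math

def near_misses(track_a, track_b, near_m=1.5, hit_m=0.3):
    """Flag per-frame near misses between two tracked road users:
    frames where they come closer than near_m metres without actual
    contact (distance below hit_m would be a collision, not a near
    miss). Tracks are lists of (x, y) positions, one per video frame.
    Returns a list of (frame_index, distance) events."""
    events = []
    for t, ((ax, ay), (bx, by)) in enumerate(zip(track_a, track_b)):
        d = math.hypot(ax - bx, ay - by)
        if hit_m <= d < near_m:
            events.append((t, round(d, 2)))
    return events
```

Aggregating such events over many video streams is what makes the statistical analysis of risk factors possible, since near misses occur far more often than serious incidents.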
A comprehensive survey on cooperative intersection management for heterogeneous connected vehicles
Nowadays, with the advancement of technology, the world is trending toward high mobility and dynamics. In this context, intersection management (IM), as one of the most crucial elements of the transportation sector, demands high attention. Today, road entities, including infrastructure, vulnerable road users (VRUs) such as motorcycles, mopeds, scooters, pedestrians, and bicycles, and other types of vehicles such as trucks, buses, cars, emergency vehicles, and railway vehicles like trains or trams, are able to communicate cooperatively using vehicle-to-everything (V2X) communications and provide traffic safety, efficiency, infotainment, and ecological improvements. In this paper, we consider different types of intersections, namely signalized, semi-autonomous (hybrid), and autonomous intersections, and conduct a comprehensive survey of various intersection management methods for heterogeneous connected vehicles (CVs). We consider heterogeneous classes of vehicles, such as road and rail vehicles, as well as VRUs including bicycles, scooters, and motorcycles. All kinds of intersection goals, modeling, coordination architectures, and scheduling policies are thoroughly discussed. Signalized and semi-autonomous intersections are assessed with respect to these parameters. We especially focus on autonomous intersection management (AIM) and categorize this section based on four major goals: safety, efficiency, infotainment, and environment. Each intersection goal provides an in-depth investigation of the corresponding literature from the aforementioned perspectives.
Moreover, the robustness and resiliency of IM are explored from diverse points of view, encompassing sensors, information management and sharing, universal planning schemes, heterogeneous collaboration, vehicle classification, quality measurement, external factors, intersection types, localization faults, communication anomalies and channel optimization, synchronization, vehicle dynamics and model mismatch, model uncertainties, recovery, and security and privacy.
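As a sketch of the scheduling policies such surveys compare, a first-come-first-served reservation scheme, a classic baseline in autonomous intersection management (the function name, slot model, and parameters are our illustrative assumptions, not from this survey), can be written as:

```python
def fcfs_reservations(requests, slot_s=2.0):
    """First-come-first-served intersection reservations: each vehicle
    requests its conflict zone at an arrival time and is granted the
    earliest slot that does not overlap an existing reservation for
    that zone. requests: list of (vehicle_id, zone, arrival_time),
    processed in request order. Returns vehicle_id -> (zone, start)."""
    next_free = {}                       # zone -> earliest free time
    grants = {}
    for vid, zone, arrival in requests:
        start = max(arrival, next_free.get(zone, 0.0))
        grants[vid] = (zone, start)
        next_free[zone] = start + slot_s
    return grants
```

FCFS is simple and starvation-free, which is why it serves as the reference point against which optimization-based and auction-based AIM policies are usually measured.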