
    TalkyCars: A Distributed Software Platform for Cooperative Perception among Connected Autonomous Vehicles based on Cellular-V2X Communication

    Autonomous vehicles are required to operate among highly mixed traffic during their early market-introduction phase, relying solely on local sensors with limited range. Exhaustively comprehending and navigating complex urban environments may not be feasible with sufficient reliability using this approach alone. Addressing this challenge, intelligent vehicles can virtually extend their perception range beyond their line of sight by using Vehicle-to-Everything (V2X) communication with surrounding traffic participants to perform cooperative perception. Since existing solutions face a variety of limitations, including a lack of comprehensiveness, universality and scalability, this thesis aims to conceptualize, implement and evaluate an end-to-end cooperative perception system using novel techniques. A comprehensive yet extensible modeling approach for dynamic traffic scenes is proposed first, which is based on probabilistic entity-relationship models, accounts for uncertain environments and combines low-level attributes with high-level relational and semantic knowledge in a generic way. Second, the design of a holistic, distributed software architecture based on edge computing principles is proposed as a foundation for multi-vehicle high-level sensor fusion. In contrast to most existing approaches, the presented solution is designed to rely on Cellular-V2X communication in 5G networks and employs geographically distributed fusion nodes as part of a client-server configuration. A modular proof-of-concept implementation is evaluated in different simulated scenarios to assess the system's performance both qualitatively and quantitatively. Experimental results show that the proposed system scales adequately to meet certain minimum requirements and yields an average improvement in overall perception quality of approximately 27%.
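
    As a loose illustration of the geographically distributed, client-server fusion idea described above, the sketch below assigns vehicle reports to grid-tile fusion nodes and keeps the most confident report per object. The tile size, the field names and the confidence-based merge rule are illustrative assumptions, not the design from the thesis.

```python
# Minimal sketch: geographically partitioned fusion nodes (assumed grid tiles).
from collections import defaultdict

TILE_SIZE_M = 500.0  # assumed edge length of one fusion node's tile


def fusion_node_for(x_m: float, y_m: float) -> tuple:
    """Map a world position to the grid tile (fusion node) responsible for it."""
    return (int(x_m // TILE_SIZE_M), int(y_m // TILE_SIZE_M))


def fuse(observations):
    """Group per-vehicle observations by responsible fusion node and keep, per
    object id, the report with the highest confidence (a stand-in for a proper
    probabilistic fusion of the entity-relationship model)."""
    per_node = defaultdict(dict)
    for obs in observations:  # obs: dict with obj_id, x, y, confidence
        node = fusion_node_for(obs["x"], obs["y"])
        best = per_node[node].get(obs["obj_id"])
        if best is None or obs["confidence"] > best["confidence"]:
            per_node[node][obs["obj_id"]] = obs
    return per_node


# Example: two vehicles report the same pedestrian; the responsible edge node
# keeps the more confident report.
reports = [
    {"obj_id": "ped-1", "x": 120.0, "y": 80.0, "confidence": 0.6},
    {"obj_id": "ped-1", "x": 121.0, "y": 79.0, "confidence": 0.9},
]
print(fuse(reports))
```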

    Towards Vehicle-to-everything Autonomous Driving: A Survey on Collaborative Perception

    Vehicle-to-everything (V2X) autonomous driving opens up a promising direction for developing a new generation of intelligent transportation systems. Collaborative perception (CP), as an essential component to achieve V2X, can overcome the inherent limitations of individual perception, including occlusion and long-range perception. In this survey, we provide a comprehensive review of CP methods for V2X scenarios, bringing a profound and in-depth understanding to the community. Specifically, we first introduce the architecture and workflow of typical V2X systems, which affords a broader perspective on the entire V2X system and the role of CP within it. Then, we thoroughly summarize and analyze existing V2X perception datasets and CP methods. In particular, we introduce numerous CP methods from various crucial perspectives, including collaboration stages, roadside sensor placement, latency compensation, the performance-bandwidth trade-off, attack/defense, pose alignment, etc. Moreover, we conduct extensive experimental analyses to compare and examine current CP methods, revealing some essential and unexplored insights. Specifically, we analyze the performance changes of different methods under different bandwidths, providing a deep insight into the performance-bandwidth trade-off issue. We also examine methods under different LiDAR ranges. To study model robustness, we further investigate the effects of various simulated real-world noises on the performance of different CP methods, covering communication latency, lossy communication, localization errors, and mixed noises. In addition, we look into the sim-to-real generalization ability of existing CP methods. Finally, we thoroughly discuss issues and challenges, highlighting promising directions for future efforts. Our code for the experimental analysis will be made public at https://github.com/memberRE/Collaborative-Perception.
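
    To make the performance-bandwidth trade-off mentioned above concrete, the back-of-the-envelope sketch below compares the per-frame payload of the three common collaboration stages (sharing raw points, intermediate BEV features, or detected boxes). All sizes are illustrative assumptions, not figures from the survey.

```python
# Rough per-frame payload comparison for early / intermediate / late fusion.
POINTS_PER_SCAN = 100_000            # assumed LiDAR points per frame
BYTES_PER_POINT = 16                 # x, y, z, intensity as float32
FEATURE_MAP_SHAPE = (128, 200, 200)  # assumed C x H x W BEV feature map
BYTES_PER_FEATURE = 4                # float32, before any compression
DETECTIONS_PER_FRAME = 50
BYTES_PER_DETECTION = 32             # box center, size, yaw, score

early = POINTS_PER_SCAN * BYTES_PER_POINT
intermediate = (FEATURE_MAP_SHAPE[0] * FEATURE_MAP_SHAPE[1]
                * FEATURE_MAP_SHAPE[2] * BYTES_PER_FEATURE)
late = DETECTIONS_PER_FRAME * BYTES_PER_DETECTION

for name, size in [("early (raw points)", early),
                   ("intermediate (BEV features)", intermediate),
                   ("late (detected boxes)", late)]:
    print(f"{name:30s} ~{size / 1e6:8.2f} MB per frame per agent")
```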

    Automotive Intelligence Embedded in Electric Connected Autonomous and Shared Vehicles Technology for Sustainable Green Mobility

    The automotive sector digitalization accelerates the technology convergence of perception, computing processing, connectivity, propulsion, and data fusion for electric connected autonomous and shared (ECAS) vehicles. This brings cutting-edge computing paradigms with embedded cognitive capabilities into vehicle domains and data infrastructure to provide holistic intrinsic and extrinsic intelligence for new mobility applications. Digital technologies are a significant enabler in achieving the sustainability goals of the green transformation of the mobility and transportation sectors. Innovation occurs predominantly in ECAS vehicles’ architecture, operations, intelligent functions, and automotive digital infrastructure. The traditional ownership model is moving toward multimodal and shared mobility services. ECAS vehicle technology allows for the development of virtual automotive functions that run on shared hardware platforms with data unlocking value, and for introducing new, shared computing-based automotive features. Vehicle automation, vehicle electrification, and vehicle-to-everything (V2X) communication are facilitated by the convergence of artificial intelligence (AI), cellular/wireless connectivity, edge computing, the Internet of things (IoT), the Internet of intelligent things (IoIT), digital twins (DTs), virtual/augmented reality (VR/AR) and distributed ledger technologies (DLTs). Vehicles become more intelligent and connected, functioning as edge micro-servers on wheels, powered by sensors/actuators, hardware (HW), software (SW) and smart virtual functions that are integrated into the digital infrastructure. Electrification, automation, connectivity, digitalization, decarbonization, decentralization, and standardization are the main drivers that unlock intelligent vehicles' potential for sustainable green mobility applications. ECAS vehicles act as autonomous agents using swarm intelligence to communicate and exchange information, either directly or indirectly, with each other and the infrastructure, accessing independent services such as energy, high-definition maps, routes, infrastructure information, traffic lights, tolls, parking (micropayments), and finding emergent/intelligent solutions. The article gives an overview of the advances in AI technologies and applications to realize intelligent functions and optimize vehicle performance, control, and decision-making for future ECAS vehicles, to support the acceleration of deployment in various mobility scenarios. ECAS vehicles, systems, sub-systems, and components are subject to stringent regulatory frameworks, which set rigorous requirements for autonomous vehicles. An in-depth assessment of existing standards, regulations, and laws, including a thorough gap analysis, is required. Global guidelines must be provided on how to fulfill the requirements. The trustworthiness of ECAS vehicle technology, including AI-based HW/SW and algorithms, is necessary for developing ECAS systems across the entire automotive ecosystem. The safety and transparency of AI-based technology and the explainability of the purpose, use, benefits, and limitations of AI systems are critical for fulfilling trustworthiness requirements. The article presents the ECAS vehicles’ evolution toward domain-controller, zonal, and federated vehicle/edge/cloud-centric architectures based on distributed intelligence at the vehicle and infrastructure levels, and the role of AI techniques and methods in implementing the different autonomous driving and optimization functions for sustainable green mobility.

    V2X-AHD: Vehicle-to-Everything Cooperation Perception via Asymmetric Heterogenous Distillation Network

    Object detection is the central issue in intelligent traffic systems, and recent advancements in single-vehicle, lidar-based 3D detection indicate that it can provide accurate position information for intelligent agents to make decisions and plan. Compared with single-vehicle perception, multi-view vehicle-road cooperative perception has fundamental advantages, such as the elimination of blind spots and a broader perception range, and has become a research hotspot. However, current cooperative perception research focuses on increasing the complexity of fusion while ignoring the fundamental problems caused by the absence of single-view outlines. We propose a multi-view vehicle-road cooperative perception system, vehicle-to-everything cooperative perception (V2X-AHD), to enhance identification capability, particularly for predicting a vehicle's shape. First, we propose an asymmetric heterogeneous distillation network fed with different training data to improve the accuracy of contour recognition, transferring multi-view teacher features to single-view student features. Since point cloud data are sparse, we propose Spara Pillar, a sparse-convolution-based plug-in feature extraction backbone, to reduce the number of parameters and enhance feature extraction capabilities. Moreover, we leverage multi-head self-attention (MSA) to fuse the single-view features, and the lightweight design yields a smooth fused-feature representation. The results of applying our algorithm to the large-scale open dataset V2XSet demonstrate that our method achieves state-of-the-art results. According to this study, V2X-AHD can effectively improve the accuracy of 3D object detection and reduce the number of network parameters, serving as a benchmark for cooperative perception. The code for this article is available at https://github.com/feeling0414-lab/V2X-AHD
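
    The sketch below illustrates, in the spirit of the MSA fusion described above, how per-agent features for a single BEV cell could be fused with multi-head self-attention in PyTorch. The tensor shapes, the single attention layer, and the choice of the ego slot are illustrative assumptions, not the V2X-AHD architecture.

```python
# Minimal sketch: fusing per-agent features with multi-head self-attention.
import torch
import torch.nn as nn

num_agents, embed_dim, num_heads = 3, 256, 8
# One feature vector per agent for a single spatial location (e.g. one BEV cell).
agent_feats = torch.randn(1, num_agents, embed_dim)  # (batch, agents, dim)

attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
fused, weights = attn(agent_feats, agent_feats, agent_feats)

# Reduce to a single ego-centric feature, e.g. by taking the ego agent's slot.
ego_fused = fused[:, 0]  # (batch, dim)
print(ego_fused.shape, weights.shape)
```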

    Augmenting CCAM Infrastructure for Creating Smart Roads and Enabling Autonomous Driving

    Autonomous vehicles and smart roads are not new concepts, and the ongoing development to empower vehicles with higher levels of automation has achieved initial milestones. However, the transportation industry and relevant research communities still need to make considerable efforts to create smart, intelligent roads for autonomous driving. In these efforts, CCAM (Cooperative, Connected and Automated Mobility) infrastructure is a game changer and plays a key role in achieving higher levels of autonomous driving. In this paper, we present smart infrastructure and autonomous driving capabilities enhanced by CCAM infrastructure. Specifically, we lay down the technical requirements of the CCAM infrastructure: we identify the right set of sensory infrastructure, its interfacing, the integration platform, and the communication interfaces needed to interconnect it with upstream and downstream solution components. Then, we parameterize the road and network infrastructures (and automated vehicles) to be advanced and evaluated during the research work under distinct scenarios and conditions. For validation, we demonstrate machine learning algorithms in mobility applications such as predicting traffic flow and mobile communication demand. We train multiple linear regression models and achieve an accuracy of over 94% in predicting the aforementioned demands on a daily basis. This research therefore equips readers with the technical information required for enhancing CCAM infrastructure, and it encourages and guides the relevant research communities to implement CCAM infrastructure towards creating smart, intelligent roads for autonomous driving.
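
    As a minimal illustration of the multiple-linear-regression approach mentioned above, the sketch below fits an ordinary-least-squares model on synthetic daily features and reports a held-out R² score. The feature set, the synthetic data, and the use of R² as the accuracy metric are assumptions, not details taken from the paper.

```python
# Minimal sketch: multiple linear regression for a daily demand prediction task.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 365
day_of_week = rng.integers(0, 7, n)          # assumed calendar feature
hourly_peak = rng.uniform(0.3, 1.0, n)       # assumed normalized peak load
connected_share = rng.uniform(0.0, 0.5, n)   # assumed share of connected vehicles
X = np.column_stack([day_of_week, hourly_peak, connected_share])

# Synthetic daily traffic-flow target with a weekend dip and some noise.
y = 5000 + 3000 * hourly_peak - 200 * (day_of_week >= 5) + rng.normal(0, 50, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(f"R^2 on held-out days: {r2_score(y_test, model.predict(X_test)):.3f}")
```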

    Cooperative Perception for Connected and Automated Vehicles: Evaluation and Impact of Congestion Control

    Automated vehicles make use of multiple sensors to detect their surroundings. Sensors have improved significantly over the years but still face challenges due to, among others, the presence of obstacles or adverse weather conditions. Cooperative or collective perception has been proposed to help mitigate these challenges through the exchange of sensor data among vehicles using V2X (Vehicle-to-Everything) communications. Recent studies have shown that cooperative perception can complement on-board sensors and increase a vehicle's awareness beyond its sensors' field of view. However, cooperative perception significantly increases the amount of information exchanged by vehicles, which can degrade the V2X communication performance and ultimately the effectiveness of cooperative perception itself. In this context, this study first conducts a dimensioning analysis to evaluate the impact of the sensors' characteristics and the market penetration rate on the operation and performance of cooperative perception. The study then investigates the impact of congestion control on cooperative perception using the Decentralized Congestion Control (DCC) framework defined by ETSI. The study demonstrates that congestion control can negatively impact the perception and latency of cooperative perception if not adequately configured. It then demonstrates for the first time that combining congestion control functions at the Access and Facilities layers can improve the perception achieved with cooperative perception and ensure timely transmission of the information. The results obtained demonstrate the importance of an adequate DCC configuration for the development of connected and automated vehicles.
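
    For readers unfamiliar with DCC, the sketch below shows, purely as an illustration, how a reactive congestion-control policy can map the measured channel busy ratio (CBR) to a minimum transmission interval for cooperative perception messages. The states, thresholds and intervals are assumptions and not the normative values of the ETSI DCC specification.

```python
# Illustrative reactive DCC-style policy: CBR selects a state that caps the
# message rate. Thresholds and intervals below are assumed, not normative.
DCC_STATES = [
    # (upper CBR bound, state name, minimum packet interval in ms)
    (0.30, "RELAXED", 100),
    (0.60, "ACTIVE", 200),
    (1.01, "RESTRICTIVE", 1000),
]


def min_tx_interval_ms(cbr: float) -> int:
    """Return the smallest allowed gap between transmissions for a given CBR."""
    for threshold, _name, interval_ms in DCC_STATES:
        if cbr < threshold:
            return interval_ms
    return DCC_STATES[-1][2]


for cbr in (0.15, 0.45, 0.80):
    print(f"CBR={cbr:.2f} -> at most one message every {min_tx_interval_ms(cbr)} ms")
```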

    Safe Intelligent Driver Assistance System in V2X Communication Environments based on IoT

    In the modern world, the power and speed of cars have increased steadily, as has the volume of traffic. At the same time, highway-related fatalities and injuries due to road incidents are constantly growing, and safety has become a primary concern. Therefore, the development of Driver Assistance Systems (DAS) has become a major issue. Numerous innovations, systems and technologies have been developed to improve road transportation and safety. Modern computer vision algorithms enable cars to understand the road environment with low miss rates. A number of Intelligent Transportation Systems (ITSs) and Vehicular Ad-Hoc Networks (VANETs) have been deployed in different cities around the world. Recently, a new global paradigm known as the Internet of Things (IoT) has brought new ideas for updating existing solutions. Vehicle-to-Infrastructure communication based on IoT technologies would be a next step in intelligent transportation towards the future Internet of Vehicles (IoV). The overall purpose of this research was to develop a scalable IoT solution for driver assistance that combines safety-relevant information for a driver from different types of in-vehicle sensors, in-vehicle DAS, vehicle networks and the driver's gadgets. This study reviews the evolution and state of the art of vehicle systems; existing ITSs, VANETs and DASs are evaluated in the research. The study proposes a design approach for the future development of transport systems that applies the IoT paradigm to transport safety applications, enabling driver assistance to become part of the IoV. The research proposes the architecture of the Safe Intelligent DAS (SiDAS) based on IoT V2X communications to combine different types of data from the available devices and vehicle systems, along with an IoT ARM structure for SiDAS, data flow diagrams and protocols. The study proposes several IoT system structures for vehicle-pedestrian and vehicle-vehicle collision prediction as case studies for the flexible SiDAS framework architecture. The research demonstrates a significant increase in driver situational awareness when using IoT SiDAS, especially in non-line-of-sight (NLOS) conditions. Moreover, a time analysis taking into account IoT, cloud, LTE and DSRC latency is provided for different collision scenarios to evaluate the overall system latency and ensure applicability for real-time driver emergency notification. Experimental results demonstrate that the proposed SiDAS improves traffic safety.
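
    The following sketch illustrates the kind of latency budget such a time analysis adds up: the end-to-end notification delay is the sum of per-hop latencies and must stay below the time-to-collision of the scenario. The hop names and all numbers are illustrative assumptions, not measurements from the study.

```python
# Minimal latency-budget sketch for a driver-notification path (assumed values).
LATENCY_MS = {
    "in-vehicle sensing":     20,
    "LTE uplink":             50,
    "IoT/cloud processing":   30,
    "LTE downlink":           50,
    "driver notification UI": 100,
}


def end_to_end_ms(budget: dict) -> int:
    """Sum the per-hop latencies of the notification chain."""
    return sum(budget.values())


time_to_collision_ms = 1500  # assumed pedestrian-crossing scenario
total = end_to_end_ms(LATENCY_MS)
print(f"end-to-end latency: {total} ms "
      f"({'OK' if total < time_to_collision_ms else 'too slow'} "
      f"for TTC {time_to_collision_ms} ms)")
```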

    Collaborative Perception in Autonomous Driving: Methods, Datasets and Challenges

    Collaborative perception is essential to address occlusion and sensor failure issues in autonomous driving. In recent years, theoretical and experimental investigations of novel works for collaborative perception have increased tremendously. So far, however, few reviews have focused on systematic collaboration modules and large-scale collaborative perception datasets. This work reviews recent achievements in the field to bridge this gap and motivate future research. We start with a brief overview of collaboration schemes. After that, we systematically summarize collaborative perception methods for ideal scenarios and for real-world issues; the former focuses on collaboration modules and efficiency, while the latter is devoted to addressing the problems encountered in actual applications. Furthermore, we present large-scale public datasets and summarize quantitative results on these benchmarks. Finally, we highlight gaps and overlooked challenges between current academic research and real-world applications. The survey has been accepted by IEEE Intelligent Transportation Systems Magazine, and the project page is https://github.com/CatOneTwo/Collaborative-Perception-in-Autonomous-Driving
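
    As one concrete example of the collaboration schemes such surveys cover, the sketch below illustrates a simple late-fusion scheme: detections shared by another agent are transformed into the ego frame and merged with the ego detections by distance thresholding. The poses, the threshold and the averaging rule are illustrative assumptions, not a method from the survey.

```python
# Minimal late-fusion sketch: transform shared detections into the ego frame,
# then greedily merge detections that fall within a distance threshold.
import numpy as np


def to_ego_frame(points, pose):
    """pose = (x, y, yaw) of the sender expressed in the ego frame."""
    x, y, yaw = pose
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    return points @ rot.T + np.array([x, y])


def merge(detections, radius=1.0):
    """Greedily average detections closer than `radius` metres."""
    merged = []
    for det in detections:
        for i, kept in enumerate(merged):
            if np.linalg.norm(det - kept) < radius:
                merged[i] = (kept + det) / 2.0
                break
        else:
            merged.append(det)
    return merged


ego_dets = np.array([[10.0, 2.0]])
other_dets = to_ego_frame(np.array([[5.0, -1.0]]), pose=(4.8, 3.2, 0.1))
print(merge(list(ego_dets) + list(other_dets)))
```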

    Shared Situational Awareness with V2X Communication and Set-membership Estimation

    The ability to perceive and comprehend a traffic situation and to estimate the state of the vehicles and road users in the surroundings of the ego-vehicle is known as situational awareness. Situational awareness for a heavy-duty autonomous vehicle is a critical part of the automation platform and depends on the ego-vehicle's field of view. In urban scenarios, however, the ego-vehicle's field of view is likely to be affected by occlusion and blind spots caused by infrastructure, moving vehicles, and parked vehicles. This paper proposes a framework to improve situational awareness using set-membership estimation and Vehicle-to-Everything (V2X) communication. The framework provides safety guarantees, can adapt to dynamically changing scenarios, and is integrated into an existing complex autonomous platform. A detailed description of the framework implementation and real-time results are presented in this paper.
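
    To give a flavor of set-membership estimation, the sketch below propagates an interval that is guaranteed to contain another road user's position under a bounded disturbance and intersects it with the set consistent with a bounded-noise V2X measurement. The scalar dynamics, the bounds and the numbers are illustrative assumptions, not the formulation used in the paper.

```python
# Minimal interval-based set-membership estimator (scalar position example).
def predict(lo, hi, velocity, dt, disturbance):
    """Propagate the interval [lo, hi] with constant velocity and a bounded disturbance."""
    return lo + velocity * dt - disturbance, hi + velocity * dt + disturbance


def update(lo, hi, measurement, noise_bound):
    """Intersect the predicted interval with the measurement-consistent interval."""
    new_lo = max(lo, measurement - noise_bound)
    new_hi = min(hi, measurement + noise_bound)
    if new_lo > new_hi:
        raise ValueError("empty intersection: model assumptions violated")
    return new_lo, new_hi


lo, hi = 9.0, 11.0  # prior set for another vehicle's position [m]
lo, hi = predict(lo, hi, velocity=5.0, dt=0.1, disturbance=0.2)
lo, hi = update(lo, hi, measurement=10.6, noise_bound=0.3)
print(lo, hi)  # guaranteed enclosure of the true position under the stated bounds
```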

    Artificial Intelligence-based Cybersecurity for Connected and Automated Vehicles

    The damaging effects of cyberattacks on an industry like Cooperative, Connected and Automated Mobility (CCAM) can be tremendous. Ranging from the least to the most severe, examples include damage to the reputation of vehicle manufacturers, increased reluctance of customers to adopt CCAM, the loss of working hours (with a direct impact on the European GDP), material damage, increased environmental pollution due, e.g., to traffic jams or malicious modifications of sensor firmware, and, ultimately, grave danger to human lives, whether drivers, passengers or pedestrians. Connected vehicles will soon become a reality on our roads, bringing along new services and capabilities, but also technical challenges and security threats. To overcome these risks, the CARAMEL project has developed several anti-hacking solutions for the new generation of vehicles. CARAMEL (Artificial Intelligence-based Cybersecurity for Connected and Automated Vehicles), a research project co-funded by the European Union under the Horizon 2020 framework programme, is carried out by a consortium of 15 organizations from 8 European countries together with 3 Korean partners. The project applies a proactive approach based on Artificial Intelligence and Machine Learning techniques to detect and prevent potential cybersecurity threats to autonomous and connected vehicles. This approach is structured around four fundamental pillars, namely Autonomous Mobility, Connected Mobility, Electromobility, and Remote Control Vehicle. This book presents theory and results from each of these technical directions.