
    The OpenCDA Open-source Ecosystem for Cooperative Driving Automation Research

    Advances in single-vehicle intelligence for automated driving have encountered significant challenges because of limited capabilities in perception and interaction with complex traffic environments. Cooperative Driving Automation (CDA) has been considered a pivotal solution to next-generation automated driving and intelligent transportation. Though CDA has attracted much attention from both academia and industry, exploration of its potential is still in its infancy. In industry, companies tend to build in-house data collection pipelines and research tools to tailor to their needs and protect intellectual property. Reinventing the wheel, however, wastes resources and limits the generalizability of the developed approaches, since no standardized benchmarks exist. In academia, on the other hand, the absence of real-world traffic data and computation resources means researchers often investigate CDA topics in simplified, mostly simulated environments, restricting the possibility of scaling the research outputs to real-world scenarios. There is therefore an urgent need to establish an open-source ecosystem (OSE) that addresses the demands of different communities for CDA research, particularly in the early exploratory research stages, and provides a bridge to an integrated development and testing pipeline that diverse communities can share. In this paper, we introduce the OpenCDA research ecosystem, a unified OSE integrated with a model zoo, a suite of driving simulators at various resolutions, large-scale real-world and simulated datasets, complete development toolkits for benchmark training/testing, and a scenario database/generator. We also demonstrate the effectiveness of the OpenCDA OSE through example use cases, including cooperative 3D LiDAR detection, cooperative merging, cooperative camera-based map prediction, and adversarial scenario generation.

    Cosmic cookery: making a stereoscopic 3D animated movie.

    This paper describes our experience making a short stereoscopic movie visualizing the development of structure in the universe during the 13.7 billion years from the Big Bang to the present day. Aimed at a general audience for the Royal Society's 2005 Summer Science Exhibition, the movie illustrates how the latest cosmological theories based on dark matter and dark energy are capable of producing structures as complex as spiral galaxies, and allows the viewer to directly compare observations from the real universe with theoretical results. 3D is an inherent feature of the cosmology data sets, and stereoscopic visualization provides a natural way to present the images to the viewer, in addition to allowing researchers to visualize these vast, complex data sets. The presentation of the movie used passive, linearly polarized projection onto a 2 m wide screen, but it was also required to play back on a Sharp RD3D display and in anaglyph projection at venues without dedicated stereoscopic display equipment. Additionally, lenticular prints were made from key images in the movie. We discuss the following technical challenges during the stereoscopic production process: 1) controlling the depth presentation, 2) editing the stereoscopic sequences, and 3) generating compressed movies in display-specific formats. We conclude that the generation of high-quality stereoscopic movie content using desktop tools and equipment is feasible. This does require careful quality control and manual intervention, but we believe these overheads are worthwhile when presenting inherently 3D data, as the result is significantly increased impact and better understanding of complex 3D scenes.
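    For parallel stereo cameras whose images are horizontally shifted so that the zero-parallax plane falls at a chosen convergence distance, the on-screen parallax of a point follows directly from the camera separation. The sketch below is a standard textbook relation, not code from the paper; the 65 mm interaxial value is an illustrative assumption, chosen only to show the kind of depth-control calculation the authors describe.

```python
def screen_parallax(depth_m: float, convergence_m: float, interaxial_m: float) -> float:
    """Horizontal screen parallax for a point at `depth_m`, given a parallel
    stereo rig with interaxial separation `interaxial_m` whose images are
    shifted so that zero parallax falls at `convergence_m`.
    Positive values appear behind the screen, negative values in front of it."""
    return interaxial_m * (1.0 - convergence_m / depth_m)

# A hypothetical rig with 65 mm separation, converged at 10 m:
print(screen_parallax(10.0, 10.0, 0.065))    # 0.0 (point sits on the screen plane)
print(screen_parallax(1000.0, 10.0, 0.065))  # approaches the interaxial separation
print(screen_parallax(5.0, 10.0, 0.065))     # negative: appears in front of the screen
```

    Points at the convergence distance sit on the screen plane, while distant points approach the full interaxial separation; checking this depth budget before rendering is one way to keep parallax within comfortable viewing limits.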

    TalkyCars: A Distributed Software Platform for Cooperative Perception among Connected Autonomous Vehicles based on Cellular-V2X Communication

    Autonomous vehicles are required to operate among highly mixed traffic during their early market-introduction phase, relying solely on local sensors with limited range. Exhaustively comprehending and navigating complex urban environments with sufficient reliability is potentially not feasible using this approach alone. Addressing this challenge, intelligent vehicles can virtually extend their perception range beyond their line of sight by using Vehicle-to-Everything (V2X) communication with surrounding traffic participants to perform cooperative perception. Since existing solutions face a variety of limitations, including a lack of comprehensiveness, universality, and scalability, this thesis aims to conceptualize, implement, and evaluate an end-to-end cooperative perception system using novel techniques. A comprehensive yet extensible modeling approach for dynamic traffic scenes is proposed first; it is based on probabilistic entity-relationship models, accounts for uncertain environments, and combines low-level attributes with high-level relational and semantic knowledge in a generic way. Second, the design of a holistic, distributed software architecture based on edge-computing principles is proposed as a foundation for multi-vehicle high-level sensor fusion. In contrast to most existing approaches, the presented solution is designed to rely on Cellular-V2X communication in 5G networks and employs geographically distributed fusion nodes as part of a client-server configuration. A modular proof-of-concept implementation is evaluated in different simulated scenarios to assess the system's performance both qualitatively and quantitatively. Experimental results show that the proposed system scales adequately to meet certain minimum requirements and yields an average improvement in overall perception quality of approximately 27%.
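    At a fusion node, high-level sensor fusion amounts to merging the object lists reported by different vehicles in a shared world frame. The following sketch is not the thesis's implementation; it assumes a hypothetical `Detection` record and a simple distance-gated, confidence-weighted merge, purely to illustrate what a fusion node in such a client-server configuration does.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float          # position in a shared world frame (metres)
    y: float
    confidence: float

def fuse(detections: list, radius: float = 2.0) -> list:
    """Greedy fusion at an edge node: detections from different vehicles that
    lie within `radius` metres of each other are treated as the same physical
    object and merged by confidence-weighted averaging of their positions."""
    fused, remaining = [], list(detections)
    while remaining:
        seed = remaining.pop(0)
        group, keep = [seed], []
        for d in remaining:
            if (d.x - seed.x) ** 2 + (d.y - seed.y) ** 2 <= radius ** 2:
                group.append(d)
            else:
                keep.append(d)
        remaining = keep
        w = sum(d.confidence for d in group)
        fused.append(Detection(
            sum(d.x * d.confidence for d in group) / w,
            sum(d.y * d.confidence for d in group) / w,
            max(d.confidence for d in group),
        ))
    return fused

# Two vehicles report the same pedestrian from different viewpoints,
# plus one object only a single vehicle can see:
reports = [Detection(10.0, 5.0, 0.9), Detection(10.4, 5.2, 0.6), Detection(40.0, 2.0, 0.8)]
fused = fuse(reports)
print(len(fused))  # 2
```

    A production system would additionally carry object class, velocity, and covariance, and would track identities over time; the gating-and-averaging core, however, is the essence of late (high-level) fusion.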

    Image-guided Landmark-based Localization and Mapping with LiDAR

    Mobile robots must be able to determine their position to operate effectively in diverse environments. The presented work proposes a system that integrates LiDAR and camera sensors and utilizes the YOLO object detection model to identify objects in the robot's surroundings. The system, developed in ROS, groups detected objects into triangles and uses them as landmarks to determine the robot's position. A triangulation algorithm is employed to obtain the robot's position; it generates a set of nonlinear equations that are solved using the Levenberg-Marquardt algorithm. The presented work comprehensively discusses the study, design, and implementation of the proposed system. The investigation begins with an overview of current SLAM techniques. Next, the system design considers the requirements for localization and mapping tasks, together with an analysis comparing the proposed approach to contemporary SLAM methods. Finally, we evaluate the system's effectiveness and accuracy through experimentation in the Gazebo simulation environment, which allows the various disturbances that a real scenario can introduce to be controlled.
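    The localization step described above, solving the nonlinear range equations to known landmarks with Levenberg-Marquardt, can be sketched as follows. The landmark coordinates and measurements are illustrative assumptions, and SciPy's `least_squares` with `method="lm"` stands in for the algorithm; the abstract does not say which solver implementation the authors used.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(pose, landmarks, measured_dists):
    """Difference between predicted and measured distances to each landmark."""
    x, y = pose
    predicted = np.hypot(landmarks[:, 0] - x, landmarks[:, 1] - y)
    return predicted - measured_dists

# Three hypothetical landmarks (the vertices of one landmark triangle) at
# known map positions, and noise-free range measurements from the true pose:
landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pose = np.array([1.0, 1.0])
measured = np.hypot(landmarks[:, 0] - true_pose[0], landmarks[:, 1] - true_pose[1])

# Levenberg-Marquardt solves the resulting nonlinear least-squares system.
result = least_squares(residuals, x0=[0.5, 0.5], args=(landmarks, measured), method="lm")
print(result.x)  # ≈ [1. 1.]
```

    With noisy ranges the same solver returns the pose minimizing the sum of squared range residuals, which is why three or more non-collinear landmarks are needed for a well-conditioned fix.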

    Modeling and Simulation in Engineering

    This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world about various applications of modeling and simulation in the product design process, across various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to enable real-time simulation) without altering the precision of the results.

    Discrete anisotropic radiative transfer (DART 5) for modeling airborne and satellite spectroradiometer and LIDAR acquisitions of natural and urban landscapes

    Satellite and airborne optical sensors are increasingly used by scientists, policy makers, and managers for studying and managing forests, agricultural crops, and urban areas. Their data, acquired with given instrumental specifications (spectral resolution, viewing direction, sensor field-of-view, etc.) and for a specific experimental configuration (surface and atmosphere conditions, sun direction, etc.), are commonly translated into qualitative and quantitative Earth surface parameters. However, atmosphere properties and Earth surface 3D architecture often confound their interpretation. Radiative transfer models capable of simulating the Earth and atmosphere complexity are, therefore, ideal tools for linking remotely sensed data to surface parameters. Still, many existing models oversimplify the Earth-atmosphere system interactions, and their parameterization of sensor specifications is often neglected or poorly considered. The Discrete Anisotropic Radiative Transfer (DART) model, in development since 1992, is one of the most comprehensive physically based 3D models simulating the Earth-atmosphere radiation interaction from visible to thermal infrared wavelengths. It models optical signals at the entrance of imaging radiometers and laser scanners on board satellites and airplanes, as well as the 3D radiative budget, of urban and natural landscapes for any experimental configuration and instrumental specification. It is freely distributed for research and teaching activities. This paper presents the physical basis of DART and its latest functionality for simulating imaging spectroscopy of natural and urban landscapes with atmosphere, including the perspective projection of airborne acquisitions and LIght Detection And Ranging (LIDAR) waveform and photon-counting signals.

    Towards Vehicle-to-everything Autonomous Driving: A Survey on Collaborative Perception

    Vehicle-to-everything (V2X) autonomous driving opens up a promising direction for developing a new generation of intelligent transportation systems. Collaborative perception (CP), an essential component for achieving V2X, can overcome the inherent limitations of individual perception, including occlusion and limited long-range perception. In this survey, we provide a comprehensive review of CP methods for V2X scenarios, bringing a profound and in-depth understanding to the community. Specifically, we first introduce the architecture and workflow of typical V2X systems, which affords a broader perspective on the entire V2X system and the role of CP within it. Then, we thoroughly summarize and analyze existing V2X perception datasets and CP methods. In particular, we introduce numerous CP methods from various crucial perspectives, including collaboration stages, roadside sensor placement, latency compensation, the performance-bandwidth trade-off, attack/defense, pose alignment, etc. Moreover, we conduct extensive experimental analyses to compare and examine current CP methods, revealing some essential and unexplored insights. Specifically, we analyze the performance changes of different methods under different bandwidths, providing deep insight into the performance-bandwidth trade-off. We also examine methods under different LiDAR ranges. To study model robustness, we further investigate the effects of various simulated real-world noises on the performance of different CP methods, covering communication latency, lossy communication, localization errors, and mixed noises. In addition, we look into the sim-to-real generalization ability of existing CP methods. Finally, we thoroughly discuss issues and challenges, highlighting promising directions for future efforts. Our code for the experimental analysis will be made public at https://github.com/memberRE/Collaborative-Perception.
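    The performance-bandwidth trade-off discussed in the survey stems from how much data each collaboration stage transmits. The back-of-envelope sketch below uses illustrative payload sizes (assumptions, not figures from the survey) to show why early, intermediate, and late fusion differ by orders of magnitude in required link bandwidth.

```python
def bandwidth_mbps(bytes_per_frame: float, fps: float) -> float:
    """Required link bandwidth in megabits per second for a given frame rate."""
    return bytes_per_frame * fps * 8 / 1e6

# Illustrative per-frame payload sizes for the three collaboration stages:
raw_points  = 100_000 * 16        # early: 100k LiDAR points x 16 B (x, y, z, intensity, float32)
feature_map = 64 * 128 * 128 * 1  # intermediate: a BEV feature tensor, int8-quantized
object_list = 50 * 32             # late: 50 detected boxes x 32 B each

for name, size in [("early (raw points)", raw_points),
                   ("intermediate (features)", feature_map),
                   ("late (object list)", object_list)]:
    print(f"{name:24s} {bandwidth_mbps(size, fps=10):10.2f} Mbit/s")
```

    Under these assumptions, early fusion needs roughly a thousand times the bandwidth of late fusion at the same frame rate, which is why intermediate (feature-level) collaboration, compression, and message scheduling dominate the methods the survey reviews.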