
    A Review of Sensor Technologies for Perception in Automated Driving

    After more than 20 years of research, ADAS are common in modern vehicles available on the market. Automated Driving systems, still in the research phase and limited in their capabilities, are starting early commercial tests on public roads. These systems rely on the information provided by on-board sensors, which describe the state of the vehicle, its environment, and other actors. The selection and arrangement of sensors are a key factor in the design of the system. This survey reviews existing, novel, and upcoming sensor technologies as applied to common perception tasks for ADAS and Automated Driving. They are put in context through a historical review of the most relevant demonstrations of Automated Driving, focused on their sensing setups. Finally, the article presents a snapshot of the future challenges for sensing technologies and perception, finishing with an overview of the commercial initiatives and manufacturer alliances that indicate future market trends in sensor technologies for Automated Vehicles. This work has been partly supported by ECSEL Project ENABLE-S3 (grant agreement number 692455-2) and by the Spanish Government through CICYT projects TRA2015-63708-R and TRA2016-78886-C3-1-R.

    Perception architecture exploration for automotive cyber-physical systems

    In emerging autonomous and semi-autonomous vehicles, accurate environmental perception by automotive cyber-physical platforms is critical for achieving safety and driving performance goals. An efficient perception solution capable of high-fidelity environment modeling can improve Advanced Driver Assistance System (ADAS) performance and reduce the number of lives lost to traffic accidents caused by human driving errors. Enabling robust perception for vehicles with ADAS requires solving multiple complex problems related to the selection and placement of sensors, object detection, and sensor fusion. Current methods address these problems in isolation, which leads to inefficient solutions. For instance, there is an inherent accuracy-versus-latency trade-off between one-stage and two-stage object detectors, which makes selecting an enhanced object detector from a diverse range of choices difficult. Further, even if a perception architecture were equipped with an ideal object detector performing high-accuracy, low-latency inference, the relative position and orientation of the selected sensors (e.g., cameras, radars, lidars) determine whether static or dynamic targets are inside the field of view of each sensor or the combined field of view of the sensor configuration. If the combined field of view is too small or contains redundant overlap between individual sensors, important events and obstacles can go undetected. Conversely, if the combined field of view is too large, the number of false positive detections will be high in real time, and appropriate sensor fusion algorithms are required for filtering. Sensor fusion algorithms also enable tracking of non-ego vehicles in situations where traffic is highly dynamic or there are many obstacles on the road. Position and velocity estimation using sensor fusion algorithms has a lower margin for error when the trajectories of other vehicles in traffic are in the vicinity of the ego vehicle, as incorrect measurements can cause accidents. Due to the various complex inter-dependencies between design decisions, constraints, and optimization goals, a framework capable of synthesizing perception solutions for automotive cyber-physical platforms is not trivial. We present a novel perception architecture exploration framework for automotive cyber-physical platforms capable of global co-optimization of deep learning and sensing infrastructure. The framework is capable of exploring the synthesis of heterogeneous sensor configurations towards achieving vehicle autonomy goals. As our first contribution, we propose a novel optimization framework called VESPA that explores the design space of sensor placement locations and orientations to find the optimal sensor configuration for a vehicle. We demonstrate how our framework can obtain optimal sensor configurations for heterogeneous sensors deployed across two contemporary real vehicles. We then utilize VESPA to create a comprehensive perception architecture synthesis framework called PASTA. This framework enables robust perception for vehicles with ADAS, requiring solutions to multiple complex problems related not only to the selection and placement of sensors but also to object detection and sensor fusion. Experimental results with the Audi TT and BMW Mini Cooper vehicles show how PASTA can intelligently traverse the perception design space to find robust, vehicle-specific solutions.
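    To make the coverage-versus-overlap tension concrete, here is a minimal sketch of the kind of objective a sensor-placement search might score: reward targets covered by the combined field of view, penalize targets seen redundantly by many sensors. The sensor model, scoring weights, and example configuration are illustrative assumptions, not VESPA's actual formulation.

```python
import math

def covers(sensor, point):
    """sensor = (x, y, heading_rad, fov_rad, range_m); True if point
    lies inside this sensor's field of view."""
    sx, sy, heading, fov, rng = sensor
    dx, dy = point[0] - sx, point[1] - sy
    if math.hypot(dx, dy) > rng:
        return False
    bearing = math.atan2(dy, dx)
    # Wrap the bearing difference into [-pi, pi] before comparing.
    diff = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov / 2

def score(config, targets, overlap_penalty=0.25):
    """Reward covered targets; penalize targets seen by 3+ sensors."""
    total = 0.0
    for p in targets:
        hits = sum(covers(s, p) for s in config)
        if hits >= 1:
            total += 1.0
        if hits >= 3:
            total -= overlap_penalty * (hits - 2)
    return total

# Two forward-facing sensors near the front of a vehicle at the origin,
# graded against a ring of hypothetical target points 20 m away.
config = [(1.0, 0.5, 0.0, math.radians(60), 80.0),
          (1.0, -0.5, 0.0, math.radians(60), 80.0)]
targets = [(20.0 * math.cos(a), 20.0 * math.sin(a))
           for a in [math.radians(d) for d in range(-90, 91, 10)]]
print(score(config, targets))
```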

    RadChat: Spectrum Sharing for Automotive Radar Interference Mitigation

    In the automotive sector, both radar and wireless communication systems are susceptible to interference. However, combining the radar and communication systems, i.e., radio frequency (RF) communications and sensing convergence, has the potential to mitigate interference in both systems. This article analyses the mutual interference of spectrally coexistent frequency modulated continuous wave (FMCW) radar and communication systems in terms of occurrence probability and impact, and introduces RadChat, a distributed networking protocol for the mitigation of interference among FMCW-based automotive radars, including self-interference, using radar and communication cooperation. The results show that RadChat can significantly reduce radar mutual interference in single-hop vehicular networks in less than 80 ms.
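    As a back-of-the-envelope illustration of when two FMCW chirps mutually interfere, a victim radar is corrupted while the interferer's instantaneous frequency falls within the victim receiver's IF bandwidth of its own chirp. The sketch below computes that time window for two linear chirps; all parameter values are illustrative assumptions, not those used in the RadChat evaluation.

```python
def interference_window(slope_victim, slope_int, f_offset, if_bw):
    """Chirps f_v(t) = slope_victim * t and f_i(t) = slope_int * t + f_offset
    (slopes in Hz/s, offsets in Hz). Returns the (start, end) times where
    |f_i - f_v| < if_bw, or None if they never come that close."""
    ds = slope_int - slope_victim
    if ds == 0:
        # Parallel chirps: either always or never within the IF bandwidth.
        return (0.0, float("inf")) if abs(f_offset) < if_bw else None
    t1 = (-if_bw - f_offset) / ds
    t2 = (if_bw - f_offset) / ds
    return (min(t1, t2), max(t1, t2))

# Example: victim sweeps 150 MHz/us, interferer 148 MHz/us, chirps start
# 10 MHz apart, victim IF bandwidth 5 MHz.
win = interference_window(150e12, 148e12, 10e6, 5e6)
print(f"interference from {win[0] * 1e6:.2f} us to {win[1] * 1e6:.2f} us")
```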

    Merge recommendations for driver assistance: A cross-modal, cost-sensitive approach

    In this study, we present novel work focused on assisting the driver during merge maneuvers. We use an automotive testbed instrumented with sensors for monitoring critical regions in the vehicle's surround. Fusing information from multiple sensor modalities, we integrate measurements into a contextually relevant, intuitive, general representation, which we term the Dynamic Probabilistic Drivability Map (DPDM). We formulate the DPDM for driver assistance as a compact representation of the surround environment, integrating vehicle tracking information, lane information, road geometry, obstacle detection, and ego-vehicle dynamics. Given a robust understanding of the ego-vehicle's dynamics, other vehicles, and the on-road environment, our system recommends merge maneuvers to the driver, formulating the maneuver as a dynamic programming problem over the DPDM and searching for the minimum-cost solution for merging. Based on the configuration of the road, lanes, and other vehicles, the system recommends the appropriate acceleration or deceleration for merging into the adjacent lane, specifying when and how to merge.
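    The minimum-cost search idea can be sketched as a standard dynamic program over a time-by-position grid of drivability costs. The grid, cost values, and transition model below are illustrative assumptions, not the paper's DPDM formulation.

```python
def min_cost_merge(cost, moves=(-1, 0, 1)):
    """cost[t][x]: drivability cost of occupying longitudinal cell x at
    time step t (e.g. high near tracked vehicles, low in free space).
    Returns (total cost, cheapest sequence of cells over time)."""
    T, X = len(cost), len(cost[0])
    best = [row[:] for row in cost]      # best[t][x]: min cost to reach (t, x)
    back = [[0] * X for _ in range(T)]   # backpointers for path recovery
    for t in range(1, T):
        for x in range(X):
            prev = [(best[t - 1][x + m], x + m)
                    for m in moves if 0 <= x + m < X]
            c, px = min(prev)
            best[t][x] = cost[t][x] + c
            back[t][x] = px
    # Recover the minimum-cost trajectory from the final time step.
    x = min(range(X), key=lambda i: best[-1][i])
    total, path = best[-1][x], [x]
    for t in range(T - 1, 0, -1):
        x = back[t][x]
        path.append(x)
    return total, path[::-1]

# Toy example: 4 time steps, 5 cells; cell 2 is blocked by another
# vehicle until it passes at t = 2, so the cheap merge waits.
grid = [[1, 1, 9, 1, 1],
        [1, 1, 9, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1]]
print(min_cost_merge(grid))
```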

    High-Gain Millimeter-Wave Planar Array Antennas with Traveling-Wave Excitation


    Sensor Technologies for Intelligent Transportation Systems

    Modern society faces serious problems with transportation systems, including but not limited to traffic congestion, safety, and pollution. Information and communication technologies have gained increasing attention and importance in modern transportation systems. Automotive manufacturers are developing in-vehicle sensors and their applications in different areas, including safety, traffic management, and infotainment. Government institutions are implementing roadside infrastructures such as cameras and sensors to collect data about environmental and traffic conditions. By seamlessly integrating vehicles and sensing devices, their sensing and communication capabilities can be leveraged to achieve smart and intelligent transportation systems. We discuss how sensor technology can be integrated with the transportation infrastructure to achieve a sustainable Intelligent Transportation System (ITS) and how safety, traffic control, and infotainment applications can benefit from multiple sensors deployed in different elements of an ITS. Finally, we discuss some of the challenges that need to be addressed to enable a fully operational and cooperative ITS environment.

    MEMS based radar sensor for automotive collision avoidance

    This dissertation presents the architecture of a new MEMS-based 77 GHz frequency modulated continuous wave (FMCW) automotive long-range radar sensor. It covers the design, modeling, and fabrication of a novel MEMS-based TE10-mode Rotman lens, MEMS-based single-pole-triple-throw (SP3T) RF switches, and an inset-feed-type microstrip antenna array, which form the core components of the newly developed radar sensor. The novel silicon-based Rotman lens exploits the principle of a TE10-mode rectangular waveguide, which made it possible to realize the lens in silicon using conventional microfabrication techniques, with a cavity depth of 50 μm and a footprint area of 27 mm x 36.2 mm for 77 GHz operation. The microfabricated Rotman lens replaces the conventional microelectronics-based analog or digital beamformers used in state-of-the-art automotive long-range radars, resulting in a radar sensor with a smaller form factor, superior performance, lower complexity, and lower cost. The developed Rotman lens has 3 beam ports, 5 array ports, and 6 dummy ports, and HFSS simulation exhibits better than -2 dB insertion loss and better than -20 dB return loss between the beam ports and the array ports. A MEMS-based 77 GHz SP3T cantilever-type RF switch with conventional ground connecting bridges (GCB) has been designed, modeled, and fabricated to sequentially switch the FMCW signal among the beam ports of the Rotman lens. A new continuous ground (CG) SP3T switch has been designed and modeled that shows a 4 dB improvement in return loss, a 0.5 dB improvement in insertion loss, and a 3.5 dB improvement in isolation over the conventional GCB-type switch. The fabrication of the CG-type switch is in progress. Both switches have a footprint area of 500 μm x 500 μm. An inset-feed-type 77 GHz microstrip antenna array has been designed, modeled, and fabricated on a Duroid 5880 substrate using a laser ablation technique. The 12 mm x 35 mm antenna array consists of 5 sub-arrays with 12 microstrip patches in each sub-array. HFSS simulation results show a gain of 18.3 dB, an efficiency of 77%, and a half-power beamwidth of 9°.
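    The sensor above ranges targets via the standard FMCW principle: a target at range R produces a beat frequency proportional to R through the sweep slope. A minimal sketch of that relation follows; the sweep bandwidth and duration are illustrative assumptions, not the fabricated sensor's specification.

```python
C = 299_792_458.0  # speed of light, m/s

def beat_to_range(f_beat_hz, bandwidth_hz, sweep_s):
    """Convert a measured beat frequency to target range for a linear
    FMCW sweep: R = c * f_beat * T_sweep / (2 * B)."""
    return C * f_beat_hz * sweep_s / (2.0 * bandwidth_hz)

def range_resolution(bandwidth_hz):
    """FMCW range resolution: c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

# Example: 300 MHz sweep over 1 ms; a 100 kHz beat tone maps to ~50 m,
# with ~0.5 m range resolution.
print(beat_to_range(100e3, 300e6, 1e-3))
print(range_resolution(300e6))
```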

    3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection

    Cameras are a crucial exteroceptive sensor for self-driving cars, as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field of view around the car. In this way, we avoid blind spots, which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treat each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, and detect obstacles based on real-time depth map extraction.
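    One reason fisheye images need dedicated handling is that the pinhole projection breaks down near and beyond a 90-degree field of view. A minimal sketch of an equidistant fisheye projection (r = f * theta) is shown below; the V-Charge pipeline uses its own calibrated camera models, so this model and its parameters are only an illustrative stand-in.

```python
import math

def project_equidistant(point_cam, f, cx, cy):
    """Project (X, Y, Z) in the camera frame (Z forward) to pixel (u, v)
    with an equidistant fisheye model; returns None for a point directly
    behind the camera."""
    X, Y, Z = point_cam
    r_xy = math.hypot(X, Y)
    theta = math.atan2(r_xy, Z)    # angle from the optical axis
    if theta >= math.pi:           # degenerate: exactly behind the camera
        return None
    if r_xy == 0.0:
        return (cx, cy)            # on the optical axis
    r = f * theta                  # equidistant mapping, valid past 90 deg
    return (cx + r * X / r_xy, cy + r * Y / r_xy)

# A point 2 m ahead and 1 m to the right, with a 300 px focal length
# and the principal point at the center of a 1280x800 image.
print(project_equidistant((1.0, 0.0, 2.0), 300.0, 640.0, 400.0))
```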

    Towards an Automatic Parking System using Bio-Inspired 1-D Optical Flow Sensors

    Although several (semi-)automatic parking systems have been presented throughout the years [1]–[12], car manufacturers are still looking for low-cost sensors providing redundant information about the obstacles around the vehicle, as well as efficient methods of processing this information, in the hope of achieving a very high level of robustness. We therefore investigated how Local Motion Sensors (LMSs) [13], [14], comprising only a few pixels giving 1-D optical flow (OF) measurements, could be used to improve automatic parking maneuvers. For this purpose, we developed a low-computational-cost method of detecting and tracking a parking spot in real time using 1-D OF measurements around the vehicle as well as the vehicle's longitudinal velocity and steering angle. The algorithm used is composed of 5 processing steps, which are described here in detail. In this initial report, we first present some results obtained in a highly simplified 2-D parking simulation performed using Matlab/Simulink software, before giving some preliminary experimental results obtained with the first step of the algorithm on a vehicle equipped with two 6-pixel LMSs. The results of the closed-loop simulation show that, up to a certain noise level, the simulated vehicle detected and tracked the parking spot in real time. The preliminary experimental results show that the average refresh frequency obtained with the LMSs was about 2-3 times higher than that obtained with standard ultrasonic sensors and cameras, and that these LMSs therefore constitute a promising alternative basis for designing new automatic parking systems.
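    The "time of travel" principle behind such 1-D local motion sensors can be sketched simply: two neighboring photoreceptors see the same contrast with a delay, and the optical flow is the inter-receptor angle divided by that delay. The signal generation and thresholding below are illustrative assumptions, not the LMS circuitry from the paper.

```python
import numpy as np

def one_d_optical_flow(sig_a, sig_b, dt_s, delta_phi_rad, threshold=0.5):
    """Estimate 1-D optical flow (rad/s) from two photoreceptor traces
    sampled every dt_s seconds, delta_phi_rad apart on the retina."""
    # Index of the first threshold crossing on each receptor.
    t_a = next(i for i, v in enumerate(sig_a) if v > threshold)
    t_b = next(i for i, v in enumerate(sig_b) if v > threshold)
    delay = (t_b - t_a) * dt_s          # time of travel of the contrast
    return delta_phi_rad / delay if delay != 0 else float("inf")

# A contrast edge crosses receptor A at 10 ms and receptor B at 14 ms;
# the receptors are 4 degrees apart, sampled at 1 kHz.
dt = 1e-3
a = np.zeros(50); a[10:] = 1.0
b = np.zeros(50); b[14:] = 1.0
print(np.degrees(one_d_optical_flow(a, b, dt, np.radians(4.0))))  # ~1000 deg/s
```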