2,370 research outputs found

    Quantitative Performance Assessment of LiDAR-based Vehicle Contour Estimation Algorithms for Integrated Vehicle Safety Applications

    Get PDF
    Many nations and organizations are committing to the goal of 'Vision Zero' and to eliminating road traffic related deaths around the world. Industry continues to develop integrated safety systems to make vehicles safer, smarter and more capable in safety critical scenarios. Passive safety systems are now focusing on pre-crash deployment of restraint systems to better protect vehicle passengers. Current commonly used bounding box methods for shape estimation of crash partners lack the fidelity required for edge case collision detection and advanced crash modeling. This research presents a novel algorithm for robust and accurate contour estimation of opposing vehicles. The presented method is evaluated via a developed framework for key performance metrics and compared to alternative algorithms found in the literature.
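
    To make the fidelity gap concrete, the short Python sketch below (not the paper's algorithm; the point values and function names are invented for illustration) contrasts an axis-aligned bounding box with a convex-hull contour computed over a small 2D LiDAR cluster: the hull traces the visible outline of the scanned object, while the box only reports its extents.

        # Illustrative sketch (not the paper's algorithm): compare an axis-aligned
        # bounding box with a convex-hull contour for a 2D LiDAR point cluster.

        def bounding_box(points):
            """Axis-aligned bounding box as (min_x, min_y, max_x, max_y)."""
            xs = [p[0] for p in points]
            ys = [p[1] for p in points]
            return min(xs), min(ys), max(xs), max(ys)

        def convex_hull(points):
            """Monotone-chain convex hull; returns hull vertices in order."""
            pts = sorted(set(points))
            if len(pts) <= 2:
                return pts

            def cross(o, a, b):
                return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

            lower, upper = [], []
            for p in pts:
                while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                    lower.pop()
                lower.append(p)
            for p in reversed(pts):
                while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                    upper.pop()
                upper.append(p)
            return lower[:-1] + upper[:-1]

        # Hypothetical points scanned along the visible faces of a vehicle corner.
        cluster = [(0.0, 0.0), (1.8, 0.1), (1.9, 0.9), (1.0, 1.0), (0.1, 0.8)]
        print(bounding_box(cluster))   # coarse rectangle around the cluster
        print(convex_hull(cluster))    # tighter contour of the scanned outline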

    A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles

    Get PDF
    This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is the driver towards future mobility. Obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review looks at technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and IR, and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, when looking at technological solutions for obstacle detection in extreme weather conditions (rain, snow, fog) and in some specific situations in urban areas (shadows, reflections, potholes, insufficient illumination), the current developments, although already quite advanced, do not yet appear sophisticated enough to guarantee 100% precision and accuracy, hence further substantial effort is needed.
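
    As a rough illustration of why combining technologies can help (a hypothetical Python sketch, not a method from the paper; the weights and condition labels are invented), one simple scheme weights each sensor's detection confidence by how well that sensor copes with the current conditions before fusing:

        # Hypothetical reliability weights per sensor and condition (0..1); the
        # numbers are illustrative only, not measured values from the review.
        SENSOR_WEIGHTS = {
            "camera":     {"clear": 0.9, "night": 0.4, "fog": 0.2},
            "radar":      {"clear": 0.7, "night": 0.7, "fog": 0.7},
            "lidar":      {"clear": 0.9, "night": 0.9, "fog": 0.4},
            "ultrasonic": {"clear": 0.5, "night": 0.5, "fog": 0.5},
        }

        def fused_confidence(detections, condition):
            """detections: {sensor: confidence in [0, 1]} for one candidate obstacle."""
            num = sum(SENSOR_WEIGHTS[s][condition] * c for s, c in detections.items())
            den = sum(SENSOR_WEIGHTS[s][condition] for s in detections)
            return num / den if den else 0.0

        # In fog, a weak camera detection is reinforced by radar and lidar returns.
        print(fused_confidence({"camera": 0.3, "radar": 0.8, "lidar": 0.6}, "fog"))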

    A Review of Sensor Technologies for Perception in Automated Driving

    Get PDF
    After more than 20 years of research, ADAS are common in modern vehicles available in the market. Automated Driving systems, still in the research phase and limited in their capabilities, are starting early commercial tests on public roads. These systems rely on the information provided by on-board sensors, which make it possible to describe the state of the vehicle, its environment and other actors. Selection and arrangement of sensors represent a key factor in the design of the system. This survey reviews existing, novel and upcoming sensor technologies, applied to common perception tasks for ADAS and Automated Driving. They are put in context through a historical review of the most relevant demonstrations of Automated Driving, focused on their sensing setup. Finally, the article presents a snapshot of the future challenges for sensing technologies and perception, finishing with an overview of the commercial initiatives and manufacturer alliances that will show future market trends in sensor technologies for Automated Vehicles. This work has been partly supported by ECSEL Project ENABLE-S3 (grant agreement number 692455-2) and by the Spanish Government through CICYT projects TRA2015-63708-R and TRA2016-78886-C3-1-R.

    Challenges in passenger use of mixed reality headsets in cars and other transportation

    Get PDF
    This paper examines key challenges in supporting passenger use of augmented and virtual reality headsets in transit. These headsets will allow passengers to break free from the restraints of physical displays placed in constrained environments such as cars, trains and planes. Moreover, they have the potential to allow passengers to make better use of their time by making travel more productive and enjoyable, supporting both privacy and immersion. However, there are significant barriers to headset usage by passengers in transit contexts. These barriers range from impediments that would entirely prevent safe usage and function (e.g. motion sickness) to those that might impair their adoption (e.g. social acceptability). We identify the key challenges that need to be overcome and discuss the necessary resolutions and research required to facilitate adoption and realize the potential advantages of using mixed reality headsets in transit

    Application of radar for automotive collision avoidance. Volume 1: Technical report

    Get PDF
    The purpose of this project was research and development of an automobile collision avoidance radar system. The major finding was that the application of radar to the automobile collision avoidance problem deserves continued research even though the specific approach investigated in this effort did not perform adequately in its angle measurement capability. Additional findings were that: (1) preliminary performance requirements of a candidate radar system are not unreasonable; (2) the number and severity of traffic accidents could be reduced by using a collision avoidance radar system which observes a fairly wide (at least ±10 deg) field of view ahead of the vehicle; (3) the health radiation hazards of a probable radar design are not significant even when a large number of radar-equipped vehicles are considered; (4) effects of inclement weather on radar operation can be accommodated in most cases; (5) the phase monopulse radar technique as implemented demonstrated inferior angle measurement performance, which warrants the recommendation of investigating alternative radar techniques; and (6) extended target and multipath effects, which presumably distort the amplitude and phase distribution across the antenna aperture, are responsible for the observed inadequate phase monopulse radar performance.
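
    For context on finding (5), phase monopulse recovers target bearing from the phase difference measured between two receive antennas separated by a known baseline; the sketch below uses the standard textbook relation sin(theta) = lambda * delta_phi / (2 * pi * d) (not reproduced from this report, and the numeric values are illustrative), which also makes clear why multipath or extended targets that distort the measured phase corrupt the angle estimate directly.

        import math

        def monopulse_bearing(delta_phi_rad, wavelength_m, baseline_m):
            """Estimate target bearing (radians) from the inter-antenna phase difference."""
            s = wavelength_m * delta_phi_rad / (2.0 * math.pi * baseline_m)
            s = max(-1.0, min(1.0, s))  # clamp against noisy phase measurements
            return math.asin(s)

        # Illustrative numbers: a 24 GHz radar (wavelength ~12.5 mm) with a
        # half-wavelength antenna baseline and a 0.3 rad phase difference.
        wavelength = 0.0125
        baseline = wavelength / 2.0
        print(math.degrees(monopulse_bearing(0.3, wavelength, baseline)))  # ~5.5 degrees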

    Perception architecture exploration for automotive cyber-physical systems

    Get PDF
    2022 Spring. Includes bibliographical references. In emerging autonomous and semi-autonomous vehicles, accurate environmental perception by automotive cyber-physical platforms is critical for achieving safety and driving performance goals. An efficient perception solution capable of high fidelity environment modeling can improve Advanced Driver Assistance System (ADAS) performance and reduce the number of lives lost to traffic accidents as a result of human driving errors. Enabling robust perception for vehicles with ADAS requires solving multiple complex problems related to the selection and placement of sensors, object detection, and sensor fusion. Current methods address these problems in isolation, which leads to inefficient solutions. For instance, there is an inherent accuracy versus latency trade-off between one stage and two stage object detectors which makes selecting an enhanced object detector from a diverse range of choices difficult. Further, even if a perception architecture were equipped with an ideal object detector performing high-accuracy, low-latency inference, the relative position and orientation of selected sensors (e.g., cameras, radars, lidars) determine whether static or dynamic targets are inside the field of view of each sensor or in the combined field of view of the sensor configuration. If the combined field of view is too small or contains redundant overlap between individual sensors, important events and obstacles can go undetected. Conversely, if the combined field of view is too large, the number of false positive detections will be high in real time, and appropriate sensor fusion algorithms are required for filtering. Sensor fusion algorithms also enable tracking of non-ego vehicles in situations where traffic is highly dynamic or there are many obstacles on the road. Position and velocity estimation using sensor fusion algorithms has a lower margin for error when trajectories of other vehicles in traffic are in the vicinity of the ego vehicle, as incorrect measurements can cause accidents. Due to the various complex inter-dependencies between design decisions, constraints, and optimization goals, building a framework capable of synthesizing perception solutions for automotive cyber-physical platforms is not trivial. We present a novel perception architecture exploration framework for automotive cyber-physical platforms capable of global co-optimization of deep learning and sensing infrastructure. The framework is capable of exploring the synthesis of heterogeneous sensor configurations towards achieving vehicle autonomy goals. As our first contribution, we propose a novel optimization framework called VESPA that explores the design space of sensor placement locations and orientations to find the optimal sensor configuration for a vehicle. We demonstrate how our framework can obtain optimal sensor configurations for heterogeneous sensors deployed across two contemporary real vehicles. We then utilize VESPA to create a comprehensive perception architecture synthesis framework called PASTA. This framework enables robust perception for vehicles with ADAS by addressing not only the selection and placement of sensors but also object detection and sensor fusion. Experimental results with the Audi-TT and BMW Minicooper vehicles show how PASTA can intelligently traverse the perception design space to find robust, vehicle-specific solutions.
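
    As a toy illustration of the coverage question described above (a hypothetical Python sketch, not the VESPA or PASTA formulation; the sensor poses and parameters are invented), each sensor can be modeled by its mounting position, orientation, field of view and range, and a target point can be checked against every sensor to expose blind spots or redundant overlap:

        import math

        def sees(sensor, target):
            """sensor: dict with x, y, yaw_deg, fov_deg, range_m; target: (x, y) in metres."""
            dx, dy = target[0] - sensor["x"], target[1] - sensor["y"]
            if math.hypot(dx, dy) > sensor["range_m"]:
                return False
            bearing = math.degrees(math.atan2(dy, dx)) - sensor["yaw_deg"]
            bearing = (bearing + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
            return abs(bearing) <= sensor["fov_deg"] / 2.0

        # Invented example configuration: wide front camera, narrow long-range
        # front radar, wide short-range rear sensor.
        sensors = [
            {"x": 2.0,  "y": 0.0, "yaw_deg": 0.0,   "fov_deg": 90.0,  "range_m": 80.0},
            {"x": 2.0,  "y": 0.0, "yaw_deg": 0.0,   "fov_deg": 20.0,  "range_m": 150.0},
            {"x": -2.0, "y": 0.0, "yaw_deg": 180.0, "fov_deg": 120.0, "range_m": 40.0},
        ]

        target = (30.0, 5.0)
        coverage = sum(sees(s, target) for s in sensors)
        print(f"{coverage} sensor(s) cover the target")  # 0 = blind spot, >1 = overlap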

    A perspective on emerging automotive safety applications, derived from lessons learned through participation in the DARPA Grand Challenges

    Full text link
    This paper reports on various aspects of the Intelligent Vehicle Systems (IVS) team's involvement in the recent 2007 DARPA Urban Challenge, wherein our platform, the autonomous “XAV-250,” competed as one of the 11 finalists qualifying for the event. We provide a candid discussion of the hardware and software design process that led to our team's entry, along with lessons learned at this event and derived from participation in the two previous Grand Challenges. In addition, we give an overview of our vision-, radar-, and LIDAR-based perceptual sensing suite, its fusion with a military-grade inertial navigation package, and the map-based control and planning architectures used leading up to and during the event. The underlying theme of this article is to elucidate how the development of future automotive safety systems can potentially be accelerated by tackling the technological challenges of autonomous ground vehicle robotics. Of interest, we will discuss how a production manufacturing mindset imposes a unique set of constraints upon approaching the problem and how this worked for and against us, given the very compressed timeline of the contests. © 2008 Wiley Periodicals, Inc.