
    Perception architecture exploration for automotive cyber-physical systems

    In emerging autonomous and semi-autonomous vehicles, accurate environmental perception by automotive cyber-physical platforms is critical for achieving safety and driving performance goals. An efficient perception solution capable of high-fidelity environment modeling can improve Advanced Driver Assistance System (ADAS) performance and reduce the number of lives lost to traffic accidents caused by human driving errors. Enabling robust perception for vehicles with ADAS requires solving multiple complex, interrelated problems: the selection and placement of sensors, object detection, and sensor fusion. Current methods address these problems in isolation, which leads to inefficient solutions. For instance, there is an inherent accuracy-versus-latency trade-off between one-stage and two-stage object detectors, which makes selecting a suitable object detector from a diverse range of choices difficult. Further, even if a perception architecture were equipped with an ideal object detector performing high-accuracy, low-latency inference, the relative position and orientation of the selected sensors (e.g., cameras, radars, lidars) determine whether static or dynamic targets fall inside the field of view of each sensor, or inside the combined field of view of the sensor configuration. If the combined field of view is too small or contains redundant overlap between individual sensors, important events and obstacles can go undetected. Conversely, if the combined field of view is too large, the number of false positive detections rises at runtime and appropriate sensor fusion algorithms are required for filtering. Sensor fusion algorithms also enable tracking of non-ego vehicles in situations where traffic is highly dynamic or the road contains many obstacles. Position and velocity estimation using sensor fusion has a lower margin for error when the trajectories of other vehicles pass close to the ego vehicle, as incorrect measurements can cause accidents. Due to the complex inter-dependencies between design decisions, constraints, and optimization goals, synthesizing perception solutions for automotive cyber-physical platforms is not trivial.

    We present a novel perception architecture exploration framework for automotive cyber-physical platforms capable of globally co-optimizing the deep learning and sensing infrastructure. The framework can explore the synthesis of heterogeneous sensor configurations towards achieving vehicle autonomy goals. As our first contribution, we propose an optimization framework called VESPA that explores the design space of sensor placement locations and orientations to find the optimal sensor configuration for a vehicle. We demonstrate how our framework obtains optimal configurations of heterogeneous sensors deployed across two contemporary real vehicles. We then build on VESPA to create a comprehensive perception architecture synthesis framework called PASTA, which jointly addresses sensor selection and placement, object detection, and sensor fusion. Experimental results with the Audi TT and BMW Mini Cooper vehicles show how PASTA can intelligently traverse the perception design space to find robust, vehicle-specific solutions.
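
    The abstract does not spell out VESPA's objective function, but the core idea of scoring a candidate sensor configuration by its combined field of view can be sketched in a few lines of Python. In the sketch below, the 2D sensor model (position, yaw, horizontal field of view, range), the grid of target points, and the overlap penalty are all illustrative assumptions, not the thesis's actual formulation.

        import math
        from dataclasses import dataclass
        from itertools import product

        @dataclass
        class Sensor:
            x: float    # mounting position on the vehicle (m)
            y: float
            yaw: float  # orientation (rad)
            fov: float  # horizontal field of view (rad)
            rng: float  # maximum range (m)

            def covers(self, px, py):
                # True if point (px, py) lies inside this sensor's field of view.
                dx, dy = px - self.x, py - self.y
                if math.hypot(dx, dy) > self.rng:
                    return False
                bearing = math.atan2(dy, dx) - self.yaw
                bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
                return abs(bearing) <= self.fov / 2

        def score(config, targets, overlap_penalty=0.1):
            # Stand-in objective: reward covered target points, penalize points
            # seen by more than two sensors at once (redundant overlap).
            covered = redundant = 0
            for px, py in targets:
                n = sum(s.covers(px, py) for s in config)
                covered += n > 0
                redundant += n > 2
            return (covered - overlap_penalty * redundant) / len(targets)

        # Target points on a grid around the ego vehicle at the origin.
        targets = [(x, y) for x, y in product(range(-30, 31, 5), repeat=2) if (x, y) != (0, 0)]

        # Two hypothetical layouts to compare.
        front_only = [Sensor(2.0, 0.0, 0.0, math.radians(120), 60.0)]
        surround = front_only + [
            Sensor(-2.0, 0.0, math.pi, math.radians(120), 30.0),       # rear camera
            Sensor(0.0, 0.9, math.pi / 2, math.radians(150), 20.0),    # left corner radar
            Sensor(0.0, -0.9, -math.pi / 2, math.radians(150), 20.0),  # right corner radar
        ]
        print(f"front only: {score(front_only, targets):.2f}  surround: {score(surround, targets):.2f}")

    A search procedure (e.g., a genetic algorithm) would then vary the placement and orientation parameters to maximize such a score subject to mounting constraints; the abstract does not state which search strategy VESPA uses.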

    Scenario Generation for Testing of Automated Driving Functions based on Real Data

    Scenario-based testing is the state of the art for testing Advanced Driver Assistance Systems and Autonomous Driving (ADAS/AD) functions. The challenge in scenario-based testing is the generation and selection of scenarios. To generate reproducible scenarios and test ADAS/AD efficiently, simulation environments are used, because there the environment is fully under control. An open research question, however, is the realism of the scenarios that emerge within the simulation. Realism matters because the ADAS/AD must ultimately function in the real world. To address this challenge, we contribute a concept (1) to use a simulation environment to generate realistic synthetic scenarios and (2) to evaluate their realism. We focus our research on the dynamic objects within the scenarios. We parameterize the microscopic traffic simulation environment SUMO and generate synthetic scenarios by simulation. We base the evaluation of realism on real scenarios observed by the Test Bed Lower Saxony. To measure realism, we define ten characteristics covering different aspects and compare them against the real data. As a prototype, we implement this concept and compare three methods of parameterization with respect to their realism: (a) expert-based, (b) optimization-based, and (c) clustering-based. Based on our evaluation, we find that parameterization has a strong influence on the realism of criticality metrics such as the Time To Collision (TTC), whereas its influence on other aspects is comparatively low. We observe that realism depends on both the parameterization and the capabilities of the simulation model. Expert-based parameterization generates the most realistic scenes of the three methods, and about 2.5 times as many realistic scenes in the same period as simulation without parameterization. Each parameterization has its own strengths concerning different aspects of realism. We conclude that SUMO generates realistic dynamic objects in scenarios in many respects.
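
    The TTC metric referenced above has a standard car-following definition: the gap between follower and leader divided by their closing speed, defined only while the follower is actually closing in. A minimal Python sketch (the function name and units are our own, not the paper's):

        def time_to_collision(gap_m, v_follow, v_lead):
            # TTC = gap / (v_follow - v_lead), in seconds.
            # If the follower is not closing in, there is no collision
            # course and TTC is conventionally infinite.
            closing = v_follow - v_lead
            return gap_m / closing if closing > 0 else float("inf")

        # Follower at 30 m/s, leader at 25 m/s, 40 m apart -> TTC = 8 s
        print(time_to_collision(40.0, 30.0, 25.0))

    Comparing the distribution of such TTC values between simulated and real trajectories is one way to quantify the realism of criticality, in the spirit of the characteristics described above.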

    Analysis of automotive camera sensor noise factors and impact on object detection

    Assisted and automated driving functions are increasingly deployed to improve safety and efficiency and to enhance the driver experience. However, key technical challenges still need to be overcome, such as the degradation of perception sensor data due to noise factors. The quality of the data generated by sensors directly impacts the planning and control of the vehicle, and thereby vehicle safety. This work builds on a recently proposed framework for analysing noise factors on automotive LiDAR sensors and applies it to camera sensors, focusing on the specific disturbed sensor outputs via a detailed analysis and classification of automotive-camera-specific noise sources (30 noise factors are identified and classified in this work). Moreover, the noise factor analysis identifies two omnipresent and independent noise factors (i.e., obstruction and windshield distortion). These noise factors have been modelled to generate noisy camera data, and their impact on the perception step, based on deep neural networks, has been evaluated with the noise factors applied independently and simultaneously. It is demonstrated that the performance degradation under a combination of noise factors is not simply the accumulated degradation from each single factor, which underlines the importance of analysing multiple noise factors simultaneously. Thus, the framework can support and enhance the use of simulation for the development and testing of automated vehicles through careful consideration of the noise factors affecting camera data.
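
    The paper's exact noise models are not reproduced here, but the kind of perturbation pipeline it describes can be approximated with standard image operations: blot out part of the frame to mimic an obstruction, geometrically warp it as a stand-in for windshield distortion, and re-run the same detector on clean, singly perturbed, and jointly perturbed frames. Everything below (file names, parameter values, the radial-warp trick) is an illustrative assumption, not the authors' implementation.

        import cv2
        import numpy as np

        def add_obstruction(img, center, radius):
            # Blot out a circular region, mimicking dirt or an occluder on the lens.
            out = img.copy()
            cv2.circle(out, center, radius, color=(40, 40, 40), thickness=-1)
            return out

        def add_windshield_warp(img, k1=-0.3):
            # Geometric warp via a one-coefficient radial model. Running undistort
            # on a clean frame is a crude stand-in for forward distortion; a more
            # faithful model would build an explicit forward map and use cv2.remap.
            h, w = img.shape[:2]
            f = float(w)  # rough focal length in pixels (assumption, not a calibration)
            K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1]], dtype=np.float32)
            dist = np.array([k1, 0, 0, 0], dtype=np.float32)
            return cv2.undistort(img, K, dist)

        img = cv2.imread("frame.png")  # any test frame
        noisy = add_windshield_warp(add_obstruction(img, (320, 240), 60))
        cv2.imwrite("frame_noisy.png", noisy)
        # Run the detector on the clean frame, each singly perturbed frame, and
        # the jointly perturbed frame to compare per-factor vs. combined loss,
        # since the paper shows the combined degradation is not simply additive.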