
    Fundamental Design Criteria for Logical Scenarios in Simulation-based Safety Validation of Automated Driving Using Sensor Model Knowledge

    Scenario-based virtual validation of automated driving functions is a promising method to reduce testing effort in real traffic. In this work, a method for deriving scenario design criteria from a sensor modeling point of view is proposed. Using basic sensor-technology-specific equations as rough but effective boundary conditions, the information accessible to the system under test is determined. Subsequently, initial conditions such as the initial poses of dynamic objects are calculated from the derived boundary conditions for designing logical scenarios. Further attention is given to triggers that start object movements during scenarios and that are object dependent rather than time dependent. The approach is demonstrated using the example of the radar equation, and first exemplary results identifying relevance regions are shown.
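
    To make the role of the radar equation as a rough boundary condition more concrete, the following minimal Python sketch solves the monostatic radar equation for the maximum detection range, which could serve as the radius of a relevance region. All parameter values are hypothetical placeholders and are not taken from the work itself.

```python
import math

# Hypothetical parameter values for illustration only; not taken from the paper.
P_T = 10.0            # transmit power [W]
G = 10 ** (25 / 10)   # antenna gain, 25 dBi, same antenna for Tx and Rx
WAVELENGTH = 0.0039   # ~77 GHz automotive radar [m]
RCS = 10.0            # radar cross section of a passenger car [m^2]
P_MIN = 1e-10         # minimum detectable received power [W]


def max_detection_range(p_t, g, wavelength, rcs, p_min):
    """Maximum range from the monostatic radar equation
    P_r = P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4),
    solved for R with P_r = P_min."""
    return (p_t * g ** 2 * wavelength ** 2 * rcs
            / ((4 * math.pi) ** 3 * p_min)) ** 0.25


if __name__ == "__main__":
    r_max = max_detection_range(P_T, G, WAVELENGTH, RCS, P_MIN)
    print(f"radius of the relevance region: {r_max:.1f} m")
```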

    Towards a Generally Accepted Validation Methodology for Sensor Models - Challenges, Metrics, and First Results

    In order to significantly reduce the testing effort for autonomous vehicles, simulation-based testing in combination with a scenario-based approach is a major part of the overall test concept. However, for sophisticated simulations, all applied models have to be validated beforehand, which is the focus of this paper. The presented validation methodology for sensor system simulation is based on a state-of-the-art analysis and the improvements derived from it. The lack of experience in formulating requirements and providing adequate metrics for their use in sensor model validation, in contrast to e.g. vehicle dynamics simulation, is addressed. Additionally, the importance of valid measurement and reference data is pointed out, and in particular the challenges of repeatability and reproducibility of trajectories and measurements of perception sensors in dynamic multi-object scenarios are shown. The process of finding relevant scenarios and the resulting parameter space to be examined is described. Using the example of lidar point clouds, the derivation of metrics with respect to the requirements is explained and exemplary evaluation results are summarized. Based on this, extensions to the state-of-the-art model validation method are provided.
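
    As one possible illustration of a point-cloud-level metric (not necessarily one of the metrics derived in the paper), the sketch below computes the mean nearest-neighbor distance between a measured and a simulated lidar point cloud; the toy data only stand in for real and simulated detections of the same scenario.

```python
import numpy as np
from scipy.spatial import cKDTree


def mean_nearest_neighbor_distance(real_points, simulated_points):
    """Mean distance from each real detection to its closest simulated
    detection; small values indicate good geometric agreement."""
    tree = cKDTree(simulated_points)
    distances, _ = tree.query(real_points, k=1)
    return float(np.mean(distances))


# Toy data standing in for two point clouds of the same scenario.
rng = np.random.default_rng(0)
real = rng.normal(loc=[10.0, 0.0, 0.5], scale=0.05, size=(500, 3))
sim = rng.normal(loc=[10.0, 0.0, 0.5], scale=0.05, size=(500, 3))
print(f"mean NN distance: {mean_nearest_neighbor_distance(real, sim):.3f} m")
```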

    Entwicklung eines Radar-Sensormodells (Development of a Radar Sensor Model)

    This work presents a radar sensor model based on a Fourier tracing approach. To simulate the radar radiation in the frequency domain, a ray tracing method is modified so that radar measurement data such as range, relative velocity, azimuth angle, and power are available at the output of the ray tracer. These data are then used to simulate the characteristics of a fast Fourier transform (FFT). The radar model is implemented in Vires Virtual Test Drive (VTD). After an introduction to the theory of radar technology and ray tracing, a simple radar model based on Fourier tracing is presented. Since Fourier tracing simulates the signal in the frequency domain, various characteristics of an FFT are taken into account, such as the application of window functions or ambiguities arising from a large measurement range. In addition, power attenuation effects, e.g. due to the range law or the antenna characteristics, are implemented. Further adaptation of the ray tracer realizes the multipath propagation of the radar beams. In this way, for example, ground reflections are taken into account and targets outside the radar's direct field of view are reached by the radiation. In a further step, the back-scattered power under multipath propagation is examined in more detail. To this end, a reflectivity model for asphalt is implemented first. To optimize specular reflections on metallic surfaces, an optimization procedure is employed that searches the surface area equivalent to a ray for an ideal reflection point. Each modeling step is implemented in VTD and verified individually against real sensor data. Finally, the radar cube available at the output of the overall model is compared with measurement data of the same scenarios by simulating typical traffic scenarios. In this way, the radar model is validated and the limits of the modeling are identified.
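
    The following sketch illustrates, with purely hypothetical FMCW parameters, the kind of FFT processing such a frequency-domain model has to reproduce: the beat signal of a single static target is windowed to suppress spectral leakage and transformed into a range spectrum. It is a simplified stand-in, not the Fourier tracing implementation described in the work.

```python
import numpy as np

# Hypothetical FMCW parameters, chosen only to illustrate the signal chain.
C = 3e8              # speed of light [m/s]
BANDWIDTH = 1e9      # chirp bandwidth [Hz]
CHIRP_TIME = 50e-6   # chirp duration [s]
F_SAMPLE = 20e6      # ADC sampling rate [Hz]
N_SAMPLES = 1024
TARGET_RANGE = 30.0  # [m]

# Beat frequency of a single static target: f_b = 2 * R * B / (c * T_chirp).
f_beat = 2 * TARGET_RANGE * BANDWIDTH / (C * CHIRP_TIME)
t = np.arange(N_SAMPLES) / F_SAMPLE
signal = np.cos(2 * np.pi * f_beat * t)

# Window the signal before the FFT to suppress spectral leakage.
window = np.hanning(N_SAMPLES)
spectrum = np.abs(np.fft.rfft(signal * window))

# Map FFT bins back to range and read off the strongest range cell.
freqs = np.fft.rfftfreq(N_SAMPLES, d=1 / F_SAMPLE)
ranges = freqs * C * CHIRP_TIME / (2 * BANDWIDTH)
print(f"estimated range: {ranges[np.argmax(spectrum)]:.1f} m")
```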

    Analysis of Environmental Influences for Simulation of Active Perception Sensors

    Automated vehicles pose an inherent safety risk for passengers and other traffic participants. Rigorous testing and safeguarding are needed before their operation on public roads can be approved. But testing in the real world is not only very time-consuming and expensive, it is also dangerous for the participants and engineers involved. Therefore, more and more tests are relocated to the virtual world before they are performed on proving grounds and eventually in real traffic. In the real world, the perception sensors of automated vehicles are subjected to a variety of adverse environmental conditions, such as fog, rain, snow, glaring sunlight, or road spray from other vehicles. As previous research has already shown a severe impact on perception sensors, especially lidar, these influences need to be accurately represented in the virtual world and in models of the perception sensors. To systematically quantify the influence of the named conditions, they are first sorted into two main categories: object-independent conditions, such as fog, rain, and snow, and object-dependent conditions, like wet pavement and road spray. For the first category, measurements in a stationary setup are recorded over a period of six months. Multiple lidar sensors with additional reference sensors for rain rate, temperature, sun brightness, visibility, etc. deliver the data in this experiment. The measurements are sorted according to the weather condition, and lidar values such as the number of detections in the atmosphere are aggregated. This yields expectation values with respect to quantified environmental conditions. As a prominent example of object-dependent conditions, road spray is examined in a second experimental setup. Measurements are taken with objects driving over artificially watered pavement on a proving ground. Water film and object velocities are systematically varied between experiment repetitions. The most prominent phenomenon in the recorded data is the clustering of detections in the spray plume due to the turbulent nature of the spray. The clustering as well as the detection probabilities within these clusters are used as expectation values for modeling. The gathered expectation values are then utilized to develop stochastic simulation models. The models are integrated into a lidar base model in a modular approach compliant with the Open Simulation Interface. The two main modeling approaches are adding false-positive atmospheric detections and attenuating or removing detections generated by the base model. Finally, the experience gained from the measurements and model development is used to derive requirements for ground truth data quantifying the environmental conditions. The specified ground truth data serve as input to the simulation models.
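
    A minimal sketch of the two modeling ideas named above, with made-up rate parameters rather than the fitted expectation values from the measurements: detections produced by a base model are attenuated, and Poisson-distributed atmospheric false positives are added, both as a function of the rain rate.

```python
import numpy as np

rng = np.random.default_rng(42)


def apply_rain_model(detections, rain_rate_mm_h,
                     clutter_rate_per_mm_h=2.0, attenuation_per_mm_h=0.01):
    """Illustrative stochastic rain model on top of a lidar base model:
    drop detections with a rain-dependent probability and add
    Poisson-distributed atmospheric false positives near the sensor.
    All rate parameters are hypothetical placeholders, not fitted values."""
    # Attenuation: each base-model detection survives with probability p_keep.
    p_keep = np.clip(1.0 - attenuation_per_mm_h * rain_rate_mm_h, 0.0, 1.0)
    kept = detections[rng.random(len(detections)) < p_keep]

    # False positives: number of atmospheric detections ~ Poisson(lambda).
    n_clutter = rng.poisson(clutter_rate_per_mm_h * rain_rate_mm_h)
    clutter = rng.uniform(low=[0.0, -2.0, -1.0], high=[10.0, 2.0, 1.0],
                          size=(n_clutter, 3))
    return np.vstack([kept, clutter]) if n_clutter else kept


# Example: a small base-model point cloud processed at 5 mm/h rain rate.
base_cloud = rng.normal(loc=[20.0, 0.0, 0.0], scale=0.1, size=(200, 3))
print(apply_rain_model(base_cloud, rain_rate_mm_h=5.0).shape)
```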

    Refining Object-Based Lidar Sensor Modeling — Challenging Ray Tracing as the Magic Bullet

    Sensor and perception simulation is key to simulation-based testing of automated driving functions. Depending on the testing use case, different cause-effect chains of the specific sensor technology become relevant and the demanded computation times differ. In this work, a novel approach to object-based lidar simulation is introduced, identifying and modeling major sensor effects while balancing effect fidelity and computation time. With an explicitly designed static experiment, simple object-based models are falsified, showing the need for a novel approach. Therefore, refined bounding boxes are designed and integrated into the occlusion calculation. This new approach is compared to an advanced ray tracing simulation on the object output level. The comparison is conducted on the cause-effect chain of partial object occlusion, which has been identified as highly relevant. The modeling approach challenges ray tracing as the magic bullet for high-fidelity lidar front-end simulation. In direct comparison with the ray tracing approach, our novel approach stands out due to its significantly lower computation time. The newly developed object-based model is open source and publicly available at https://gitlab.com/tuda-fzd/perception-sensor-modeling/object-based-generic-perception-object-model.
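
    To give a rough idea of what an object-level occlusion calculation can look like (a simplified stand-in, not the refined-bounding-box model from the paper), the following sketch estimates the visible azimuth fraction of a target footprint that lies behind an occluding one, as seen from a sensor at the origin.

```python
import numpy as np


def azimuth_interval(corners_xy):
    """Azimuth interval [min, max] subtended by an object's footprint
    corners as seen from the sensor at the origin (no wrap-around
    handling, small field of view assumed)."""
    angles = np.arctan2(corners_xy[:, 1], corners_xy[:, 0])
    return angles.min(), angles.max()


def visible_fraction(target_corners, occluder_corners):
    """Fraction of the target's azimuth interval not covered by the
    occluder's interval; the occluder is assumed to be closer to the
    sensor than the target."""
    t_lo, t_hi = azimuth_interval(target_corners)
    o_lo, o_hi = azimuth_interval(occluder_corners)
    overlap = max(0.0, min(t_hi, o_hi) - max(t_lo, o_lo))
    return 1.0 - overlap / (t_hi - t_lo)


# Toy scene: a car 30 m ahead, partially hidden behind a car 15 m ahead.
target = np.array([[29.0, -1.0], [31.0, -1.0], [29.0, 1.0], [31.0, 1.0]])
occluder = np.array([[14.0, 0.2], [16.0, 0.2], [14.0, 2.2], [16.0, 2.2]])
print(f"visible fraction of target: {visible_fraction(target, occluder):.2f}")
```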

    Road Spray in Lidar and Radar Data for Individual Moving Objects

    Simulation-based testing supports the challenging task of safety validation of automated driving functions. Virtual testing always entails the modeling of automotive perception sensors and their environment. In the real world, these sensors are not only exposed to weather conditions like rain, fog, snow, etc., but environmental influences also appear locally. Road spray is one of the more challenging occurrences, because it involves other moving objects in the scenario. This data set is designed to systematically analyze the influence of road spray on lidar and radar sensors. It consists of sensor measurements of two vehicle classes driving over asphalt with three water levels in order to differentiate multiple influence factors.

    Environmental Conditions in Lidar and Radar Data

    Safety validation of automated driving functions is a major challenge that is partly tackled by means of simulation-based testing. The virtual validation approach always entails the modeling of automotive perception sensors and their environment. In the real world, these sensors are exposed to adverse influences from environmental conditions like rain, fog, snow, etc. Therefore, such influences need to be reflected in the simulation models. In this publication, a novel data set is introduced. This data set contains lidar data with synchronized reference measurements of weather conditions from a stationary long-term experiment. Recorded weather conditions comprise fog, rain, snow, and direct sunlight. In addition to the named funding projects, the data set was also funded by VIVID, promoted by the German Federal Ministry of Education and Research, based on a decision of the German Bundestag.

    Multi-Session Visual Roadway Mapping

    This paper proposes an algorithm for camera-based roadway mapping in urban areas. A convolutional neural network detects the roadway in images taken by a camera mounted in the vehicle. The detected roadway masks from all images of one driving session are combined according to their corresponding GPS positions to create a probabilistic grid map of the roadway. Finally, maps from several driving sessions are merged by a feature matching algorithm to compensate for errors in the roadway detection and for localization inaccuracies. Hence, this approach relies solely on low-cost sensors common in standard production vehicles and can generate highly detailed roadway maps from crowd-sourced data.
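
    A minimal sketch of how per-image roadway masks could be fused into a probabilistic grid map via a log-odds update; the update probabilities and grid layout are placeholders and not taken from the paper.

```python
import numpy as np


def logodds(p):
    """Convert a probability into its log-odds representation."""
    return np.log(p / (1.0 - p))


def update_grid(log_odds_map, cell_indices, is_roadway,
                p_hit=0.7, p_miss=0.4):
    """Illustrative log-odds update of a roadway grid map: cells observed
    as roadway are reinforced, observed non-roadway cells are attenuated.
    The probabilities are placeholder values, not those of the paper."""
    delta = np.where(is_roadway, logodds(p_hit), logodds(p_miss))
    np.add.at(log_odds_map, cell_indices, delta)
    return log_odds_map


# Toy example: a 100 x 100 grid updated with one projected roadway mask.
grid = np.zeros((100, 100))
rows, cols = np.meshgrid(np.arange(40, 60), np.arange(0, 100), indexing="ij")
idx = (rows.ravel(), cols.ravel())
grid = update_grid(grid, idx, is_roadway=np.ones(rows.size, dtype=bool))
prob = 1.0 / (1.0 + np.exp(-grid))  # back to probabilities
print(f"mean roadway probability in the updated band: {prob[40:60].mean():.2f}")
```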

    Measuring the Influence of Environmental Conditions on Automotive Lidar Sensors

    Safety validation of automated driving functions is a major challenge that is partly tackled by means of simulation-based testing. The virtual validation approach always entails the modeling of automotive perception sensors and their environment. In the real world, these sensors are exposed to adverse influences from environmental conditions such as rain, fog, snow, etc. Therefore, such influences need to be reflected in the simulation models. In this publication, a novel data set is introduced and analyzed. This data set contains lidar data with synchronized reference measurements of weather conditions from a stationary long-term experiment. Recorded weather conditions comprise fog, rain, snow, and direct sunlight. The data are analyzed by pairing lidar values, such as the number of detections in the atmosphere, with weather parameters such as the rain rate in mm/h. This results in expectation values, which can directly be utilized for stochastic modeling or for model calibration and validation. The results show vast differences in the number of atmospheric detections, range distribution, and attenuation between the different sensors of the data set.
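
    The pairing of lidar values with weather parameters can be pictured as a simple binned aggregation, as in the sketch below; the bin edges and the toy data are illustrative and not those used in the publication.

```python
import numpy as np


def expectation_by_rain_rate(rain_rate_mm_h, n_atmospheric_detections,
                             bin_edges=(0, 1, 2, 5, 10, 20)):
    """Bin synchronized weather reference data and lidar values, and return
    the mean number of atmospheric detections per rain-rate bin.
    Bin edges are illustrative, not those used in the publication."""
    bins = np.asarray(bin_edges, dtype=float)
    bin_idx = np.digitize(rain_rate_mm_h, bins) - 1
    return {
        f"{bins[i]:.0f}-{bins[i + 1]:.0f} mm/h":
            float(np.mean(n_atmospheric_detections[bin_idx == i]))
        for i in range(len(bins) - 1)
        if np.any(bin_idx == i)
    }


# Toy data standing in for one sensor's aggregated measurements.
rng = np.random.default_rng(1)
rain = rng.uniform(0, 20, size=5000)
atm_detections = rng.poisson(5 * rain)  # more rain, more atmospheric clutter
print(expectation_by_rain_rate(rain, atm_detections))
```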