    Between environmental perception and decision-making: compositional engineering of safe automated driving systems

    Development of autonomous vehicles has slowed in recent years. This slump is caused by the so-called approval trap for autonomous vehicles: while the industry has largely mastered the methods for building autonomous vehicles, reliable mechanisms for ensuring their safety are still missing. It is generally accepted that the brute-force approach of driving enough mileage to document the higher safety of autonomous vehicles (compared to human drivers) is not feasible, and, as of today, no established alternative strategies for the safety approval of autonomous vehicles exist. One promising strategy is the decomposition of safety validation into many sub-tasks with compositional sub-goals (akin to safety cases, but for a vehicle's intended functionality), replacing raw mileage with a combination of validation tasks that together document safety. A prerequisite for this strategy is that the required performance of each component can be specified and shown. Specifying how accurate environmental perception needs to be, however, is a non-trivial task. Whether perceptual inaccuracies, such as a wrongly classified or missing object, lead to hazardous behavior can only be evaluated by considering both the residual processing chain and the operational situation the autonomous vehicle is in. This thesis proposes a formal approach for the validation of perception components consisting of three consecutive steps: creation of a taxonomy of perception-component inaccuracies, elicitation of verifiable requirements for perception components regarding these inaccuracies, and evaluation of the elicited requirements. To that end, we first address the specification of perception errors and propose an approach to determine the relevance of objects in urban areas. Second, we elicit verifiable perception requirements, subject to a given decision-making module, in different scenarios by structured testing in a simulation framework. Finally, we deal with the evaluation of perception components. This includes our approach for the generation of dimension and classification reference values and an exemplary evaluation of an object detection module regarding relevant errors and our previously elicited requirements. To the best of our knowledge, this is the first time that a coherent, formal approach for a decomposed safety validation of perception components is proposed and demonstrated. We conclude that our contributions provide a novel perspective on the interface between perception and decision-making and thus further support the idea of a decomposed safety validation for automated driving systems.
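
    The abstract's central point, that a perceptual inaccuracy is only hazardous in the context of the downstream decision-making and the current operational situation, can be made concrete with a toy relevance check. The following Python sketch is purely illustrative (the corridor geometry, planning horizon, and all names are our own assumptions, not the thesis's method): a missed object counts as a potentially hazardous false negative only if it could enter the ego vehicle's driving corridor within the planning horizon.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class TrackedObject:
            x: float      # longitudinal distance ahead of the ego vehicle [m]
            y: float      # lateral offset from the ego lane center [m]
            speed: float  # absolute object speed [m/s]

        def is_relevant(obj, ego_speed, horizon_s=4.0, half_width_m=2.0):
            """Crude relevance test: the object matters to the planner only if
            it can be inside the ego driving corridor within the horizon."""
            reach = (ego_speed + obj.speed) * horizon_s  # worst-case closing distance
            return 0.0 <= obj.x <= reach and abs(obj.y) <= half_width_m

        def hazardous_false_negatives(ground_truth, perceived, ego_speed):
            """A missed object is a candidate requirement violation only if it
            is relevant; irrelevant misses are tolerated."""
            return [o for o in ground_truth
                    if o not in perceived and is_relevant(o, ego_speed)]

        # Example: only the in-corridor miss is flagged as hazardous.
        gt = [TrackedObject(20.0, 0.5, 3.0),   # object inside the corridor
              TrackedObject(30.0, 8.0, 0.0)]   # parked object far to the side
        print(hazardous_false_negatives(gt, perceived=set(), ego_speed=10.0))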

    On Statistical Methods for Safety Validation of Automated Vehicles

    Automated vehicles (AVs) are expected to bring safer and more convenient transport in the future. Consequently, before introducing AVs at scale to the general public, the required levels of safety should be shown with evidence. However, statistical evidence generated by brute-force testing using safety drivers in real traffic does not scale well. Therefore, more efficient methods are needed to evaluate whether an AV exhibits acceptable levels of risk. This thesis studies the use of two methods to evaluate the AV's safety performance efficiently. Both methods are based on assessing near-collisions using threat metrics to estimate the frequency of actual collisions. The first method, called subset simulation, is here used to search the scenario parameter space in a simulation environment to estimate the probability of collision for an AV under development. More specifically, this thesis explores how the choice of threat metric, used to guide the search, affects the precision of the failure-rate estimation. The results show significant differences between the metrics and that some provide precise and accurate estimates. The second method is based on Extreme Value Theory (EVT), which is used to model the behavior of rare events. In this thesis, near-collision scenarios are identified using threat metrics and then extrapolated to estimate the frequency of actual collisions. The collision-frequency estimates from different types of threat metrics are assessed when used with EVT for AV safety validation. Results show that a metric relating to the point where a collision is unavoidable works best and provides credible estimates. In addition, this thesis proposes how EVT and threat metrics can be used as a proactive safety monitor for AVs deployed in real traffic. The concept is evaluated in a fictitious development case and compared to a reactive approach of counting the actual events. It is found that the risk exposure of releasing a non-safe function can be significantly reduced by applying the proposed EVT monitor.
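
    For the EVT part, the standard mechanics are a peaks-over-threshold fit: exceedances of a threat metric above a high threshold are fitted with a generalized Pareto distribution and extrapolated to the level representing an actual collision. The sketch below uses synthetic data and an inverse-TTC-style severity metric; the threshold, collision level, and data are illustrative assumptions, not the thesis's actual setup.

        import numpy as np
        from scipy.stats import genpareto

        # Hypothetical threat-metric samples, e.g. an inverse-TTC-style severity
        # recorded once per time step; larger values mean closer to a collision.
        rng = np.random.default_rng(0)
        severity = rng.exponential(scale=0.1, size=50_000)

        u = np.quantile(severity, 0.99)           # peaks-over-threshold level
        exceedances = severity[severity > u] - u  # tail samples above u
        p_exceed = exceedances.size / severity.size

        # Fit a generalized Pareto distribution to the tail (location fixed at 0).
        c, _, scale = genpareto.fit(exceedances, floc=0)

        # Extrapolate to the (rarely or never observed) collision level.
        x_collision = 1.0
        p_collision = p_exceed * genpareto.sf(x_collision - u, c, loc=0, scale=scale)
        print(f"estimated collision probability per sample: {p_collision:.2e}")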

    Enhancing Real-time Embedded Image Processing Robustness on Reconfigurable Devices for Critical Applications

    Nowadays, image processing is increasingly used in several application fields, such as biomedical, aerospace, or automotive. Within these fields, image processing serves both non-critical and critical tasks. For example, in the automotive field, cameras are becoming key sensors for increasing car safety, driving assistance, and driving comfort. They have been employed for infotainment (non-critical) as well as for driver-assistance tasks (critical), such as Forward Collision Avoidance, Intelligent Speed Control, or Pedestrian Detection. The complexity of these algorithms poses a challenge for real-time image processing systems, requiring computing capacity usually not available in embedded processors. Hardware acceleration is therefore crucial, and devices such as Field Programmable Gate Arrays (FPGAs) best fit the growing demand for computational capability. These devices can assist embedded processors by significantly speeding up computationally intensive software algorithms. Moreover, critical applications introduce strict requirements not only in terms of real-time constraints but also in terms of device reliability and algorithm robustness. Technology scaling is highlighting reliability problems related to aging phenomena and to the increasing sensitivity of digital devices to external radiation events, which can cause transient or even permanent faults. These faults can lead to wrong information being processed or, in the worst case, to a dangerous system failure. In this context, the reconfigurable nature of FPGA devices can be exploited to increase system reliability and robustness by leveraging Dynamic Partial Reconfiguration features. The research work presented in this thesis focuses on the development of techniques for implementing efficient and robust real-time embedded image processing hardware accelerators and systems for mission-critical applications. Three main challenges have been faced and will be discussed, along with proposed solutions, throughout the thesis: (i) achieving real-time performance, (ii) enhancing algorithm robustness, and (iii) increasing overall system dependability. To ensure real-time performance, efficient FPGA-based hardware accelerators implementing selected image processing algorithms have been developed. The features offered by the target technology and the algorithms' characteristics were constantly taken into account while designing these accelerators, in order to tailor the algorithms' operations efficiently to the available hardware resources. The key idea for increasing the robustness of image processing algorithms, in turn, is to introduce self-adaptivity at the algorithm level, in order to maintain or improve the quality of results over a wide range of input conditions that are not always fully predictable at design time (e.g., noise-level variations). This has been accomplished by measuring some characteristics of the input images at run time and then tuning the algorithm parameters based on these estimates. The dynamic reconfiguration features of modern FPGAs have been extensively exploited to integrate run-time adaptivity into the designed hardware accelerators. Tools and methodologies have also been developed to increase overall system dependability during reconfiguration, thus providing safe run-time adaptation mechanisms. In addition, taking into account the target technology and the environments in which the developed hardware accelerators and systems may be employed, dependability issues have been analyzed, leading to the development of a platform for quickly assessing the reliability and characterizing the behavior of hardware accelerators implemented on reconfigurable FPGAs when affected by such faults.
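
    As a software analogue of the self-adaptivity idea (in the thesis, the tuning selects among hardware configurations via Dynamic Partial Reconfiguration), a minimal sketch might estimate the input noise level at run time and derive a filter parameter from it. The estimator, the parameter mapping, and all names below are illustrative assumptions, not the thesis's implementation.

        import numpy as np
        from scipy import ndimage

        def estimate_noise(img):
            """Rough run-time noise estimate: robust std of the high-pass residual."""
            residual = img - ndimage.uniform_filter(img, size=3)
            mad = np.median(np.abs(residual - np.median(residual)))
            return 1.4826 * mad  # MAD -> std under a Gaussian noise assumption

        def adaptive_denoise(img):
            """Tune the filter strength from the measured noise level; on an FPGA
            this choice would instead select a pre-built filter configuration."""
            sigma_n = estimate_noise(img)
            smoothing = min(3.0, max(0.5, 10.0 * sigma_n))  # illustrative mapping
            return ndimage.gaussian_filter(img, sigma=smoothing)

        # Example: the same pipeline reacts differently to clean and noisy inputs.
        rng = np.random.default_rng(0)
        clean = ndimage.gaussian_filter(rng.random((64, 64)), sigma=2.0)
        noisy = clean + 0.1 * rng.standard_normal((64, 64))
        print(estimate_noise(clean), estimate_noise(noisy))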

    Measurable Safety of Automated Driving Functions in Commercial Motor Vehicles

    With the further development of automated driving, functional performance increases, resulting in the need for new and comprehensive testing concepts. This doctoral work aims to enable the transition from quantitative mileage to qualitative test coverage by aggregating the results of both knowledge-based and data-driven test platforms. The validity of the test domain can thereby be extended cost-effectively throughout the software development process to achieve meaningful test termination criteria.

    Measurable Safety of Automated Driving Functions in Commercial Motor Vehicles - Technological and Methodical Approaches

    Driver assistance systems and automated driving make a substantial contribution to improving the road safety of motor vehicles, in particular commercial vehicles. As automated driving advances, functional performance increases, creating requirements for new, holistic testing concepts. Novel verification and validation methods are required to guarantee the safety of higher levels of automated driving functions. The goal of this work is to enable the transition from a quantitative mileage count to a qualitative test coverage by aggregating test results from knowledge-based and data-driven test platforms. The adaptive test coverage thus aims at a trade-off between efficiency and effectiveness criteria for the validation of automated driving functions in commercial-vehicle product development. This work comprises the design and implementation of a modular framework for the customer-oriented validation of automated driving functions at reasonable cost. Starting from conflict management for the requirements of the test strategy, highly automated test approaches are developed. Each test approach is integrated with its respective test objectives to form the basis of a context-driven test concept. The main contributions of this work address four focal points:
    * First, a co-simulation approach is presented with which the sensor inputs of a hardware-in-the-loop test bench can be simulated and/or stimulated using synthetic driving scenarios. The presented setup offers a phenomenological modeling approach that strikes a compromise between model granularity and the computational cost of real-time simulation. This method is used for the modular integration of simulation components, such as traffic simulation and vehicle dynamics, in order to model relevant phenomena in critical driving scenarios.
    * Second, a measurement and data-analysis concept for the worldwide validation of automated driving functions is presented, which scales from the recording of vehicle-sensor and/or environment-sensor data for specific driving events to permanent data for statistical validation and software development. Measurement data from country-specific field trials are recorded and stored centrally in a cloud database.
    * Third, an ontology-based approach for integrating a complementary knowledge source from field observations into a knowledge management system is described. Recordings are grouped by means of an event-based time-series analysis with hierarchical clustering and normalized cross-correlation (a toy version is sketched after this abstract). From each extracted cluster and its parameter space, the probability of occurrence of each logical scenario and the probability distributions of the associated parameters can be derived. Through correlation analysis of synthetic and naturalistic driving scenarios, the requirements-based test coverage is extended adaptively and systematically with executable scenario specifications.
    * Finally, a prospective risk assessment is carried out as an inverted confidence level of measurable safety using sensitivity and reliability analyses. The failure region can be identified in the parameter space in order to predict the probability of failure for each extracted logical scenario with different sampling methods, such as Monte Carlo simulation and adaptive importance sampling. The estimated probability of a safety violation for each clustered logical scenario then yields a measurable safety prediction.
    The presented framework makes it possible to close the gap between knowledge-based and data-driven test platforms in order to consistently extend the knowledge base covering the Operational Design Domains. In summary, the results demonstrate the benefits and challenges of the developed framework for measurable safety via a confidence measure of the risk assessment. This enables a cost-efficient extension of the validity of the test domain throughout the software development process in order to reach the required test termination criteria.
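
    As flagged in the third bullet above, the event-grouping step, hierarchical clustering over a normalized cross-correlation distance, can be sketched in a few lines of Python. The synthetic traces, the linkage threshold, and all names below are illustrative assumptions, not the work's implementation.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        def ncc(a, b):
            """Peak normalized cross-correlation between two event signals."""
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float(np.correlate(a, b, mode="full").max() / len(a))

        # Hypothetical recordings: speed traces of braking events, shifted in time.
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 5.0, 100)
        events = [np.exp(-(t - rng.uniform(1, 4)) ** 2)
                  + 0.05 * rng.standard_normal(t.size) for _ in range(20)]

        # Pairwise NCC distance matrix, then average-linkage clustering.
        n = len(events)
        dist = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                dist[i, j] = dist[j, i] = 1.0 - ncc(events[i], events[j])

        labels = fcluster(linkage(squareform(dist), method="average"),
                          t=0.3, criterion="distance")

        # Occurrence probability of each "logical scenario" (cluster).
        for label, count in zip(*np.unique(labels, return_counts=True)):
            print(f"cluster {label}: P = {count / n:.2f}")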

    Emerging research directions in computer science: contributions from the young informatics faculty in Karlsruhe

    In order to build better human-friendly human-computer interfaces, such interfaces need to be equipped with capabilities to perceive the user: his location, identity, and activities, and in particular his interaction with others and with the machine. Only with these perception capabilities can smart systems (for example, human-friendly robots or smart environments) become possible. In my research I am thus focusing on the development of novel techniques for the visual perception of humans and their activities, in order to facilitate perceptive multimodal interfaces, humanoid robots, and smart environments. My work includes research on person tracking, person identification, recognition of pointing gestures, estimation of head orientation and focus of attention, as well as audio-visual scene and activity analysis. Application areas are human-friendly humanoid robots, smart environments, content-based image and video analysis, as well as safety- and security-related applications. This article gives a brief overview of my ongoing research activities in these areas.

    Survivability modeling for cyber-physical systems subject to data corruption

    Cyber-physical critical infrastructures are created when traditional physical infrastructure is supplemented with advanced monitoring, control, computing, and communication capability. More intelligent decision support and improved efficacy, dependability, and security are expected. Quantitative models and evaluation methods are required to determine the extent to which a cyber-physical infrastructure improves on its physical predecessors. It is essential that these models reflect both the cyber and physical aspects of operation and failure. In this dissertation, we propose quantitative models for dependability attributes, in particular survivability, of cyber-physical systems. Any malfunction or security breach, whether cyber or physical, that causes the system operation to depart from specifications will affect these dependability attributes. Our focus is on data corruption, which compromises decision support -- the fundamental role played by cyber infrastructure. The first research contribution of this work is a Petri net model for information exchange in cyber-physical systems, which i) facilitates evaluation of the extent of data corruption at a given time, and ii) illuminates the service degradation caused by the propagation of corrupt data through the cyber infrastructure. In the second research contribution, we propose metrics and an evaluation method for survivability, which captures the extent of functionality retained by a system after a disruptive event. We illustrate the application of our methods through case studies on smart grids, intelligent water distribution networks, and intelligent transportation systems. Data, cyber infrastructure, and intelligent control are part and parcel of nearly every critical infrastructure that underpins daily life in developed countries. Our work provides means for quantifying and predicting the service degradation caused when cyber infrastructure fails to serve its intended purpose. It can also serve as the foundation for efforts to fortify critical systems and mitigate inevitable failures.
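
    To make the Petri net idea tangible, here is a deliberately tiny token-game sketch in Python in which corrupt data propagates into a bad decision. The places, transitions, and marking are our own illustrative simplification, not the dissertation's actual model.

        from dataclasses import dataclass

        @dataclass
        class PetriNet:
            """Minimal place/transition net with an integer marking."""
            marking: dict
            transitions: dict  # name -> (tokens consumed, tokens produced)

            def enabled(self, name):
                pre, _ = self.transitions[name]
                return all(self.marking.get(p, 0) >= n for p, n in pre.items())

            def fire(self, name):
                pre, post = self.transitions[name]
                assert self.enabled(name), f"transition {name} not enabled"
                for p, n in pre.items():
                    self.marking[p] -= n
                for p, n in post.items():
                    self.marking[p] = self.marking.get(p, 0) + n

        # Toy net: sensor messages are delivered intact or corrupted, and
        # decisions made on corrupt data degrade the service.
        net = PetriNet(
            marking={"sensor_msg": 3, "good_data": 0, "corrupt_data": 0,
                     "bad_decision": 0},
            transitions={
                "deliver": ({"sensor_msg": 1}, {"good_data": 1}),
                "corrupt": ({"sensor_msg": 1}, {"corrupt_data": 1}),
                "decide_on_corrupt": ({"corrupt_data": 1}, {"bad_decision": 1}),
            },
        )
        net.fire("corrupt")
        net.fire("decide_on_corrupt")
        print(net.marking)  # one corrupted message became a bad decision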

    Automotive Intelligence Embedded in Electric Connected Autonomous and Shared Vehicles Technology for Sustainable Green Mobility

    The digitalization of the automotive sector accelerates the technology convergence of perception, computing, connectivity, propulsion, and data fusion for electric connected autonomous and shared (ECAS) vehicles. This brings cutting-edge computing paradigms with embedded cognitive capabilities into vehicle domains and data infrastructure to provide holistic intrinsic and extrinsic intelligence for new mobility applications. Digital technologies are a significant enabler in achieving the sustainability goals of the green transformation of the mobility and transportation sectors. Innovation occurs predominantly in ECAS vehicles' architecture, operations, intelligent functions, and automotive digital infrastructure. The traditional ownership model is moving toward multimodal and shared mobility services. ECAS vehicle technology allows for the development of virtual automotive functions that run on shared hardware platforms with data unlocking value, and for the introduction of new, shared computing-based automotive features. Vehicle automation, vehicle electrification, and vehicle-to-everything (V2X) communication are facilitated by the convergence of artificial intelligence (AI), cellular/wireless connectivity, edge computing, the Internet of Things (IoT), the Internet of Intelligent Things (IoIT), digital twins (DTs), virtual/augmented reality (VR/AR), and distributed ledger technologies (DLTs). Vehicles become more intelligent and connected, functioning as edge micro-servers on wheels, powered by sensors/actuators, hardware (HW), software (SW), and smart virtual functions that are integrated into the digital infrastructure. Electrification, automation, connectivity, digitalization, decarbonization, decentralization, and standardization are the main drivers that unlock intelligent vehicles' potential for sustainable green mobility applications. ECAS vehicles act as autonomous agents that use swarm intelligence to communicate and exchange information, either directly or indirectly, with each other and the infrastructure, accessing independent services such as energy, high-definition maps, routes, infrastructure information, traffic lights, tolls, and parking (micropayments), and finding emergent/intelligent solutions. The article gives an overview of the advances in AI technologies and applications for realizing intelligent functions and optimizing vehicle performance, control, and decision-making for future ECAS vehicles, to support the acceleration of deployment in various mobility scenarios. ECAS vehicles, systems, sub-systems, and components are subject to stringent regulatory frameworks, which set rigorous requirements for autonomous vehicles. An in-depth assessment of existing standards, regulations, and laws, including a thorough gap analysis, is required, and global guidelines must be provided on how to fulfill the requirements. The trustworthiness of ECAS vehicle technology, including AI-based HW/SW and algorithms, is necessary for developing ECAS systems across the entire automotive ecosystem. The safety and transparency of AI-based technology and the explainability of the purpose, use, benefits, and limitations of AI systems are critical for fulfilling trustworthiness requirements. The article also presents the evolution of ECAS vehicles toward domain-controller, zonal, and federated vehicle/edge/cloud-centric architectures based on distributed intelligence at the vehicle and infrastructure levels, and the role of AI techniques and methods in implementing the different autonomous driving and optimization functions for sustainable green mobility.