50 research outputs found

    Assessing the Adherence of an Industrial Autonomous Driving Framework to ISO 26262 Software Guidelines

    The complexity and size of Autonomous Driving (AD) software are considerably higher than those of software implementing other (standard) functionalities in the car. To make things worse, a large fraction of AD software is not specifically designed for the automotive (or any other critical) domain, but for the mainstream market. This brings uncertainty as to what extent AD software adheres to the guidelines in safety standards. In this paper, we present our experience in applying the software safety guidelines of ISO 26262, the applicable functional safety standard for road vehicles, to industrial AD software, in particular Apollo, a heterogeneous Autonomous Driving framework used extensively in industry. We provide quantitative and qualitative metrics of compliance for many ISO 26262 recommendations on software design, implementation, and testing. This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773). This work has also been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. MINECO partially supported Jaume Abella under a Ramon y Cajal postdoctoral fellowship (RYC-2013-14717), and Leonidas Kosmidis under a Juan de la Cierva-Formación postdoctoral fellowship (FJCI-2017-34095).
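    For illustration only (not the paper's tooling or data): a minimal Python sketch of how a quantitative compliance metric of this kind can be computed from per-function complexity measurements. The thresholds, function names, and metric values below are assumptions.

```python
# Illustrative sketch: given per-function complexity metrics extracted by any
# static-analysis tool, report the fraction of functions that stay within an
# assumed ISO 26262-style threshold. Thresholds and sample data are assumptions.

from dataclasses import dataclass

@dataclass
class FunctionMetrics:
    name: str
    cyclomatic_complexity: int
    nesting_depth: int

# Hypothetical metrics for a handful of functions of an AD module.
functions = [
    FunctionMetrics("UpdateTrajectory", 12, 3),
    FunctionMetrics("FuseLidarFrames", 25, 5),
    FunctionMetrics("PublishControlCmd", 6, 2),
]

MAX_COMPLEXITY = 10   # assumed recommended limit
MAX_NESTING = 4       # assumed recommended limit

def compliant(f: FunctionMetrics) -> bool:
    return f.cyclomatic_complexity <= MAX_COMPLEXITY and f.nesting_depth <= MAX_NESTING

n_ok = sum(compliant(f) for f in functions)
print(f"compliant functions: {n_ok}/{len(functions)} "
      f"({100.0 * n_ok / len(functions):.1f}%)")
for f in functions:
    if not compliant(f):
        print(f"  non-compliant: {f.name} "
              f"(complexity={f.cyclomatic_complexity}, nesting={f.nesting_depth})")
```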

    SAFEXPLAIN: Safe and Explainable Critical Embedded Systems Based on AI

    Deep Learning (DL) techniques are at the heart of most future advanced software functions in Critical Autonomous AI-based Systems (CAIS), where they also represent a major competitive factor. Hence, the economic success of CAIS industries (e.g., automotive, space, railway) depends on their ability to design, implement, qualify, and certify DL-based software products under bounded effort/cost. However, there is a fundamental gap between Functional Safety (FUSA) requirements on CAIS and the nature of DL solutions. This gap stems from the development process of DL libraries and affects high-level safety concepts such as (1) explainability and traceability, (2) suitability for varying safety requirements, (3) FUSA-compliant implementations, and (4) real-time constraints. As a matter of fact, the data-dependent and stochastic nature of DL algorithms clashes with current FUSA practice, which instead builds on deterministic, verifiable, and pass/fail test-based software. The SAFEXPLAIN project tackles these challenges by providing a flexible approach to allow the certification, and hence adoption, of DL-based solutions in CAIS, building on: (1) DL solutions that provide end-to-end traceability, with specific approaches to explain whether predictions can be trusted and strategies to reach (and prove) correct operation, in accordance with certification standards; (2) alternative and increasingly sophisticated design safety patterns for DL with varying criticality and fault tolerance requirements; (3) DL library implementations that adhere to safety requirements; and (4) computing platform configurations, to regain determinism, and probabilistic timing analyses, to handle the remaining non-determinism. The research leading to these results has received funding from the Horizon Europe Programme under the SAFEXPLAIN Project (www.safexplain.eu), grant agreement No. 101069595. BSC authors have also been supported by the Spanish Ministry of Science and Innovation under grant PID2019-107255GBC21/AEI/10.13039/501100011033. Article signed by 22 authors: Jaume Abella, Jon Perez, Cristofer Englund, Bahram Zonooz, Gabriele Giordana, Carlo Donzella, Francisco J. Cazorla, Enrico Mezzetti, Isabel Serra, Axel Brando, Irune Agirre, Fernando Eizaguirre, Thanh Hai Bui, Elahe Arani, Fahad Sarfraz, Ajay Balasubramaniam, Ahmed Badar, Ilaria Bloise, Lorenzo Feruglio, Ilaria Cinelli, Davide Brighenti, Davide Cunial.

    En-route: on enabling resource usage testing for autonomous driving frameworks

    Software resource usage testing, including execution time bounds and memory, is a mandatory validation step during the integration of safety-related real-time systems. However, the inherent complexity of Autonomous Driving (AD) systems challenges current practice for resource usage testing. This paper exposes the difficulties of performing resource usage testing for AD frameworks by analyzing a complex and critical module of an AD framework, and provides some guidelines and practical evidence on how resource usage testing can be effectively performed, thus enabling end users to validate their safety-related real-time AD frameworks. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P, the UP2DATE project under the European Union's Horizon 2020 (H2020) research and innovation programme (grant agreement No 871465), and the HiPEAC Network of Excellence. MINECO partially supported Jaume Abella under a Ramon y Cajal postdoctoral fellowship (RYC-2013-14717) and Leonidas Kosmidis under a Juan de la Cierva-Formación postdoctoral fellowship (FJCI-2017-34095).
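    A minimal sketch of the kind of resource usage testing described above, assuming a Python harness: it repeatedly runs a unit under test, records execution times and peak heap usage, and compares the observed maxima against assumed budgets. run_module() and the budget values are placeholders, not part of any AD framework.

```python
# Repeatedly execute a software unit, record execution times and peak Python
# heap usage, and check the observed maxima against assumed budgets.

import time
import tracemalloc
import statistics

def run_module(frame_id: int) -> None:
    # Placeholder workload standing in for the module under test.
    sum(i * i for i in range(10_000 + frame_id % 100))

TIME_BUDGET_MS = 5.0      # assumed per-activation execution-time budget
MEM_BUDGET_KIB = 512.0    # assumed peak-memory budget

times_ms = []
tracemalloc.start()
for frame in range(1000):
    t0 = time.perf_counter()
    run_module(frame)
    times_ms.append((time.perf_counter() - t0) * 1e3)
_, peak_bytes = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"avg={statistics.mean(times_ms):.3f} ms  "
      f"p99={sorted(times_ms)[int(0.99 * len(times_ms))]:.3f} ms  "
      f"max={max(times_ms):.3f} ms")
print(f"peak heap={peak_bytes / 1024:.1f} KiB")
print("time budget respected:", max(times_ms) <= TIME_BUDGET_MS)
print("memory budget respected:", peak_bytes / 1024 <= MEM_BUDGET_KIB)
```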

    On Provably Correct Decision-Making for Automated Driving

    The introduction of driving automation in road vehicles can potentially reduce road traffic crashes and significantly improve road safety. Automation in road vehicles also brings several other benefits, such as the possibility to provide independent mobility for people who cannot and/or should not drive. Many different hardware and software components (e.g. sensing, decision-making, actuation, and control) interact to solve the autonomous driving task. Correctness of such automated driving systems is crucial, as incorrect behaviour may have catastrophic consequences. Autonomous vehicles operate in complex and dynamic environments, which requires decision-making and planning at different levels. The aim of the decision-making components in these systems is to make safe decisions at all times. Safety verification of these systems is crucial for the commercial deployment of full autonomy in vehicles. Testing for safety is expensive, impractical, and can never guarantee the absence of errors. In contrast, formal methods, which are techniques that use rigorous mathematical models to build hardware and software systems, can provide a mathematical proof of the correctness of the system. The focus of this thesis is to address some of the challenges in the safety verification of decision-making in automated driving systems. A central question here is how to establish formal verification as an efficient tool for automated driving software development. A key finding is the need for an integrated formal approach to prove correctness and to provide a complete safety argument. This thesis provides insights into how three different formal verification approaches, namely supervisory control theory, model checking, and deductive verification, differ in their application to automated driving, and identifies the challenges associated with each method. It identifies the need for more rigour in the requirement refinement process and presents one possible solution using a formal model-based safety analysis approach. To address challenges in the manual modelling process, a possible solution that automatically learns formal models directly from code is proposed.
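    As a toy illustration of one of the approaches mentioned (model checking), the sketch below exhaustively explores a small discretized state space and checks a safe-distance invariant for a deliberately simplistic gap-acceptance policy; the policy, dynamics, and bounds are assumptions for illustration, not the thesis's models.

```python
# Bounded, exhaustive exploration of a toy decision-making model: check that a
# naive gap-acceptance policy never lets the gap drop below SAFE_GAP while the
# ego vehicle is still moving. Counterexamples, if any, are reported.

SAFE_GAP = 2          # minimum allowed gap (discrete cells)
MAX_GAP = 20
SPEEDS = range(0, 4)  # ego speed in cells per step

def policy(gap: int, speed: int) -> int:
    """Assumed controller: decelerate when the gap gets small, else accelerate."""
    return max(speed - 1, 0) if gap < SAFE_GAP + speed else min(speed + 1, 3)

def step(gap: int, speed: int) -> tuple[int, int]:
    """Assumed worst-case dynamics: the lead vehicle may stop entirely."""
    new_speed = policy(gap, speed)
    return max(gap - new_speed, 0), new_speed

violations = []
for gap0 in range(SAFE_GAP, MAX_GAP + 1):
    for speed0 in SPEEDS:
        gap, speed = gap0, speed0
        for _ in range(50):                 # bounded exploration horizon
            gap, speed = step(gap, speed)
            if gap < SAFE_GAP and speed > 0:
                violations.append((gap0, speed0))
                break

print("invariant holds for all explored states" if not violations
      else f"counterexamples found from initial states: {violations[:5]}")
```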

    Timing of autonomous driving software: problem analysis and prospects for future solutions

    The software used to implement advanced functionalities in critical domains (e.g. autonomous operation) impairs software timing predictability. This is not only due to the complexity of the underlying high-performance hardware deployed to provide the required levels of computing performance, but also to the complexity, non-deterministic nature, and huge input space of the artificial intelligence (AI) algorithms used. In this paper, we focus on Apollo, an industrial-quality Autonomous Driving (AD) software framework: we statistically characterize its observed execution time variability and reason about the sources behind it. We discuss the main challenges and limitations in finding a satisfactory software timing analysis solution for Apollo, and also show the main traits for the acceptability of statistical timing analysis techniques as a feasible path. While providing a consolidated solution for the software timing analysis of Apollo is a huge effort, far beyond the scope of a single research paper, our work aims to set the basis for future, more elaborate techniques for the timing analysis of AD software. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P, the SuPerCom European Research Council (ERC) project under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773), and the HiPEAC Network of Excellence. MINECO partially supported Enrico Mezzetti under a Juan de la Cierva-Incorporación postdoctoral fellowship (IJCI-2016-27396), and Leonidas Kosmidis under a Juan de la Cierva-Formación postdoctoral fellowship (FJCI-2017-34095).
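    The following is a small illustrative sketch, not the paper's analysis: from a sample of observed execution times (synthetic here), it computes high percentiles and an empirical exceedance probability for a candidate timing budget, which is the kind of statistical characterization the abstract refers to.

```python
# Empirical characterization of execution-time variability: high percentiles
# and exceedance probability P(T > budget) from a (synthetic) sample.

import numpy as np

rng = np.random.default_rng(0)
# Synthetic, heavy-tailed execution times in milliseconds (placeholder data).
samples_ms = 20.0 + rng.lognormal(mean=0.0, sigma=0.6, size=10_000)

for p in (0.90, 0.99, 0.999, 0.9999):
    print(f"p{p * 100:g} = {np.quantile(samples_ms, p):.2f} ms")

# Empirical exceedance probability for a candidate timing budget.
budget_ms = 30.0
exceedance = np.mean(samples_ms > budget_ms)
print(f"P(exec time > {budget_ms} ms) ~ {exceedance:.4f}")
```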

    Performance analysis and optimization of automotive GPUs

    © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD) have drastically increased the performance demands of automotive systems. Suitable high-performance platforms building upon Graphics Processing Units (GPUs) have been developed to respond to this demand, with the NVIDIA Jetson TX2 being a relevant representative. However, whether high-performance GPU configurations are appropriate for automotive setups remains an open question. This paper aims to shed light on this question by modelling an automotive GPU (Jetson TX2), analyzing its microarchitectural parameters against relevant benchmarks, and identifying specific configurations able to meaningfully increase performance within similar cost envelopes, or to decrease costs while preserving the original performance levels. Overall, our analysis opens the door to the optimization of automotive GPUs for further system efficiency. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773), and the HiPEAC Network of Excellence. Pedro Benedicte and Jaume Abella have been partially supported by MINECO under FPU grant FPU15/01394 and Ramon y Cajal postdoctoral fellowship RYC-2013-14717, respectively, and Leonidas Kosmidis under a Juan de la Cierva-Formación postdoctoral fellowship (FJCI-2017-34095).
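    The sketch below is a toy analytical sweep loosely inspired by this kind of design-space exploration; the performance/cost model, configuration values, and constants are illustrative assumptions and do not reflect the paper's model or the Jetson TX2.

```python
# Toy design-space sweep: estimate relative performance and a crude cost proxy
# for hypothetical GPU configurations (SM count, L2 size). All numbers assumed.

def relative_perf(num_sms: int, l2_kib: int,
                  parallel_fraction: float = 0.85,
                  baseline_sms: int = 2, baseline_l2_kib: int = 512) -> float:
    # Amdahl-style scaling with SM count plus a mild benefit from a larger L2.
    amdahl = 1.0 / ((1.0 - parallel_fraction)
                    + parallel_fraction * baseline_sms / num_sms)
    cache_gain = 1.0 + 0.05 * (l2_kib / baseline_l2_kib - 1.0)
    return amdahl * cache_gain

def relative_cost(num_sms: int, l2_kib: int) -> float:
    # Crude area proxy: SMs dominate, L2 contributes linearly.
    return 0.4 * num_sms + 0.2 * (l2_kib / 512)

configs = [(1, 256), (2, 512), (2, 1024), (4, 512), (4, 2048)]
for sms, l2 in configs:
    perf = relative_perf(sms, l2)
    cost = relative_cost(sms, l2)
    print(f"SMs={sms:2d}  L2={l2:4d} KiB  perf={perf:.2f}x  "
          f"cost={cost:.2f}  perf/cost={perf / cost:.2f}")
```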

    Novel Validation Techniques for Autonomous Vehicles

    The automotive industry is facing challenges in producing electrical, connected, and autonomous vehicles. Even if these challenges are, from a technical point of view, independent of each other, the market and regulatory bodies require them to be developed and integrated simultaneously. The development of autonomous vehicles implies the development of highly dependable systems. This is a multidisciplinary activity involving knowledge from robotics, computer science, electrical and mechanical engineering, psychology, social studies, and ethics. Nowadays, many Advanced Driver Assistance Systems (ADAS), like the Emergency Braking System, Lane Keep Assistant, and Park Assist, are available. Newer luxury cars can drive by themselves on highways or park automatically, but the end goal is to develop completely autonomous vehicles, able to drive by themselves, without needing human intervention in any situation. The more vehicles become autonomous, the greater the difficulty in keeping them reliable. This increases the challenges in terms of development processes, since their misbehaviors can lead to catastrophic consequences and, differently from the past, there is no longer a human driver to mitigate the effects of erroneous behaviors. The primary threats to dependability come from three sources: misuse by the drivers, systematic design errors, and random hardware failures. These safety threats are addressed under various aspects, depending on the particular type of item to be designed. In particular, for the sake of this work, we analyze those related to Functional Safety (FuSa), viewed as the ability of a system to react on time and in the proper way to the external environment. From the technological point of view, these behaviors are implemented by electrical and electronic items. Various standards to achieve FuSa have been released over the years. The first, released in 1998, was IEC 61508; its latest version was released in 2010. This standard mainly defines: • a Functional Safety Management System (FSMS); • methods to determine a Safety Integrity Level (SIL); • methods to determine the probability of failures. To adapt IEC 61508 to the peculiarities of the automotive industry, a newer standard, ISO 26262, was released in 2011 and updated in 2018. This standard provides guidelines about the FSMS, called in this case the Safety Lifecycle, describing how to develop software and hardware components suitable for functional safety. It also provides a different way to compute the SIL, called in this case the Automotive SIL (ASIL), which allows considering the average driver's ability to control the vehicle in case of failure. Moreover, it describes a way to determine the probability of random hardware failures through Failure Mode, Effects, and Diagnostic Analysis (FMEDA). This dissertation contains contributions to three topics: • mitigation of random hardware failures; • improvement of the ISO 26262 Hazard Analysis and Risk Assessment (HARA); • real-time verification of the embedded software. As the main contribution of this dissertation, I address the safety threats due to random hardware failures (RHFs). For this purpose, I propose a novel simulation-based approach to aid the Failure Mode, Effects, and Diagnostic Analysis (FMEDA) required by the ISO 26262 standard. Thanks to a SPICE-level model of the item and the adoption of fault injection techniques, it is possible to simulate its behaviors and obtain useful information to classify the various failure modes.
The proposed approach evolved from a mere simulation of the item, allowing only an item-level failure mode classification, up to a vehicle-level analysis. Propagating the effects of the failure modes to the whole vehicle makes it possible to assess their impact on the vehicle's drivability, improving the quality of the classification. This is particularly advantageous where it is difficult to predict how item-level misbehaviors propagate to the vehicle level, as in the case of a virtual differential gear or the mobility system of a robot. The latter has been chosen as a case study since it can be considered similar to novel light vehicles, such as electric scooters, that are becoming more and more popular, and since my research group has complete access to its design, as it is built by our university's DIANA students' team. When a SPICE-level simulation would take too long, or when a complete model of the item cannot be developed due to intellectual property protection rules, the process can be aided by behavioral models of the item. A simulation of this kind has been performed on a mobile robotic system: behavioral models of the electronic components were used, alongside mechanical simulations, to assess the software's failure mitigation capabilities. Another contribution was obtained by modifying the main one: the idea was to also aid the Hazard Analysis and Risk Assessment (HARA). This assessment is performed during the concept phase, before the design of the item implementation begins. Its goal is to determine the hazards involved in the item functionality and their associated levels of risk; its outcome is a list of safety goals, and an ASIL has to be determined for each of them. Since HARA relies only on the designers' expertise and knowledge, it lacks objectivity and repeatability. Thanks to the simulation results, it is possible to predict the effects of the failures on the vehicle's drivability, allowing us to improve the severity and controllability assessments and thus the objectivity. Moreover, since the simulation conditions can be stored, it is possible at any time to recheck the results and to add new scenarios, improving repeatability. The third group of contributions is about the real-time verification of embedded software. Through Hardware-In-the-Loop (HIL), software integration verification has been performed to test a fundamental automotive component, mixed-criticality applications, and multi-agent robots. The first of these contributions concerns real-time tests on Body Control Modules (BCM). These modules manage various electronic accessories in the vehicle's body, like power windows and mirrors, air conditioning, the immobilizer, and central locking. The main characteristics of BCMs are the communication with other embedded computers via the car's vehicle bus (Controller Area Network) and a high number (hundreds) of low-speed I/Os. As the second contribution, I propose a methodology to assess the effects of the error recovery system on mixed-criticality applications in terms of deadline misses. The system runs two tasks: a critical airplane longitudinal control and a non-critical image compression algorithm. I start by presenting the approach on a benchmark application containing an instrumented bug in the lower-criticality task; then, we improve it by injecting random errors into the lower-criticality task's memory space through a debugger.
In the latter case, thanks to the HIL, it is possible to pause the time-domain simulation while the debugger operates and resume it once the injection is complete. In this way, it is possible to interact with the target without interfering with the simulation results, combining full control of the target with an accurate time-domain assessment. The last contribution of this third group is a methodology to verify, on multi-agent robots, the synchronization between two agents in charge of moving the end effector of a delta robot: the correct position and speed of the end effector at any time are strongly affected by a loss of synchronization. The last two contributions may seem unrelated to the automotive industry, but interest in these applications is growing: mixed-criticality systems allow reducing the number of ECUs inside cars (for cost reduction), while the multi-agent approach helps improve the cooperation of connected cars with other vehicles and the infrastructure. The fourth contribution, contained in the appendix, is a machine learning application to improve the social acceptance of autonomous vehicles. The idea is to improve passenger comfort by recognizing their emotions. I started with the idea of modifying the vehicle's driving style based on a real-time emotion recognition system but, due to the difficulty of performing such operations in an experimental setup, I moved to analyzing the emotions offline. The emotions are determined from volunteers' facial expressions, recorded while viewing 3D representations showing different calibrations. Thanks to the passengers' emotional responses, it is possible to choose the best calibration from the comfort point of view.
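    As a hypothetical illustration of the bookkeeping behind a simulation-aided FMEDA (not the dissertation's tool or data): each injected fault is classified from its simulated effect and detectability, and an aggregate diagnostic-coverage figure is derived. The failure classes and the sample campaign below are assumptions.

```python
# Classify fault-injection outcomes and derive a diagnostic-coverage metric.
# Each tuple stands for one injection run: (fault_id, effect, detected).

from collections import Counter

campaign = [
    ("F001", "no_effect",  False),
    ("F002", "degraded",   True),
    ("F003", "loss_of_fn", True),
    ("F004", "loss_of_fn", False),
    ("F005", "degraded",   False),
    ("F006", "no_effect",  False),
]

def classify(effect: str, detected: bool) -> str:
    if effect == "no_effect":
        return "safe"
    return "dangerous_detected" if detected else "dangerous_undetected"

counts = Counter(classify(effect, detected) for _, effect, detected in campaign)
dangerous = counts["dangerous_detected"] + counts["dangerous_undetected"]
coverage = counts["dangerous_detected"] / dangerous if dangerous else 1.0

print(dict(counts))
print(f"diagnostic coverage of dangerous faults: {coverage:.0%}")
```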

    Novel Validation Techniques for Autonomous Vehicles

    The abstract is in the attachment.

    Exploring Strategies for Adapting Traditional Vehicle Design Frameworks to Autonomous Vehicle Design

    Fully autonomous vehicles are expected to revolutionize transportation, reduce the cost of ownership, contribute to a cleaner environment, and prevent the majority of traffic accidents and related fatalities. Even though promising approaches for achieving full autonomy exist, developers and manufacturers have to overcome a multitude of challenges before these systems can find widespread adoption. This multiple case study explored the strategies some IT hardware and software developers of self-driving cars use to adapt traditional vehicle design frameworks to address consumer and regulatory requirements in autonomous vehicle designs. The population consisted of autonomous driving technology software and hardware developers who are currently working on fully autonomous driving technologies from or within the United States, regardless of their specialization. The theory of dynamic capabilities was the conceptual framework used for the study. Interviews with 7 autonomous vehicle hardware and software engineers, together with 15 archival documents, provided the data points for the study. A thematic analysis was used to code and group the results by themes. When looking at the results through the lens of dynamic capability theory, notable themes included regulatory uncertainty, functional safety, rapid iteration, and achieving a competitive advantage. Based on the findings of the study, implications for social change include the need for better regulatory frameworks to provide certainty, consumer education to manage expectations, and universal development standards that could integrate regulatory and design needs into a single approach.

    Development and certification of mixed-criticality embedded systems based on probabilistic timing analysis

    An increasing variety of emerging systems relentlessly replaces or augments the functionality of mechanical subsystems with embedded electronics. Given their quantity, complexity, and use, the safety of such subsystems is an increasingly important matter. Accordingly, those systems are subject to safety certification to demonstrate the system's safety through rigorous development processes and hardware/software constraints. The massive increase in the complexity of embedded processors renders the already arduous certification task significantly harder to achieve. The focus of this thesis is to address the certification challenges in multicore architectures: despite their potential to integrate several applications on a single platform, their inherent complexity imperils their timing predictability and certification. Recently, the Measurement-Based Probabilistic Timing Analysis (MBPTA) technique emerged as an alternative to deal with hardware/software complexity. The innovation that MBPTA brings about is, however, a major departure from current certification procedures and standards. The particular contributions of this thesis include: (i) the definition of certification arguments for mixed-criticality integration upon multicore processors; in particular, we propose a set of safety mechanisms and procedures as required to comply with functional safety standards. For timing predictability, (ii) we present a quantitative approach to assess the likelihood of execution-time exceedance events with respect to the risk reduction requirements of safety standards; to this end, we build upon the MBPTA approach and present the design of a safety-related source of randomization (SoR), which plays a key role in the platform-level randomization needed by MBPTA. And (iii) we evaluate current certification guidance with respect to emerging high-performance design trends such as caches. Overall, this thesis pushes the certification limits in the use of multicore and MBPTA technology in Critical Real-Time Embedded Systems (CRTES) and paves the way towards their adoption in industry.
An increasing variety of emerging systems replaces or augments the functionality of mechanical subsystems with embedded electronic components. The growth in the quantity and complexity of these electronic subsystems, as well as in their roles, makes their safety a matter of increasing importance, so much so that the commercialization of these critical systems is subject to rigorous certification processes in which the safety of the system is guaranteed through strict constraints on the development and design of their hardware and software. This thesis addresses the new challenges and difficulties brought by the introduction of multicore processors in such critical systems: although their higher performance attracts the industry's interest in integrating multiple applications on a single platform, they entail greater complexity. Their architecture defies timing analysis by traditional methods and, likewise, their certification becomes increasingly complex and costly. To deal with these limitations, a novel Measurement-Based Probabilistic Timing Analysis (MBPTA) technique has recently been developed; its innovation, however, represents a major cultural change with respect to traditional certification standards and procedures. Along these lines, the contributions of this thesis are grouped into three main axes: (i) the definition of safety arguments for the certification of mixed-criticality applications on multicore platforms; in particular, safety mechanisms and fault diagnosis and reaction techniques compliant with the IEC 61508 standard are defined on a reference multicore architecture. Regarding timing analysis, (ii) we quantify the probability of exceeding a timing bound and relate it to the risk reduction requirements derived from functional safety standards; to this end, we build on the MBPTA technique and present the design of a safe random number source, a key component for achieving the random properties required by MBPTA at platform level. Finally, (iii) we extrapolate current guidance for the certification of multicore architectures to a commercial 8-core solution and evaluate it against emerging high-performance design trends (caches). With these contributions, this thesis addresses the challenges that the use of multicore processors and MBPTA implies for the certification process of critical real-time systems and thereby facilitates their adoption by industry.
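    A minimal sketch of the core MBPTA idea referred to above, under the assumption that scipy is available and using synthetic measurements: an extreme-value (Generalized Pareto) model is fitted to the tail of the execution-time sample and an execution-time bound is read off at a target exceedance probability. The data, threshold choice, and target probability are illustrative.

```python
# Peaks-over-threshold fit of a Generalized Pareto distribution to the tail of
# execution-time measurements, and derivation of a probabilistic timing bound.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
samples_ms = 10.0 + rng.gamma(shape=2.0, scale=0.5, size=50_000)  # placeholder measurements

# Fit the GPD to the excesses above a high empirical quantile.
threshold = np.quantile(samples_ms, 0.95)
excesses = samples_ms[samples_ms > threshold] - threshold
shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)

# Bound with exceedance probability 1e-9 per run (target chosen for illustration).
p_target = 1e-9
p_exceed_threshold = excesses.size / samples_ms.size
bound = threshold + stats.genpareto.isf(p_target / p_exceed_threshold,
                                        shape, loc=0.0, scale=scale)
print(f"threshold = {threshold:.2f} ms, pWCET bound @ {p_target:g} ~ {bound:.2f} ms")
```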