    Novel Validation Techniques for Autonomous Vehicles

    The automotive industry is facing challenges in producing electrical, connected, and autonomous vehicles. Even if these challenges are, from a technical point of view, independent of each other, the market and regulatory bodies require them to be developed and integrated simultaneously. The development of autonomous vehicles implies the development of highly dependable systems. This is a multidisciplinary activity involving knowledge from robotics, computer science, electrical and mechanical engineering, psychology, social studies, and ethics. Nowadays, many Advanced Driver Assistance Systems (ADAS), like the Emergency Braking System, Lane Keep Assistant, and Park Assist, are available. Newer luxury cars can drive by themselves on highways or park automatically, but the end goal is to develop completely autonomous vehicles, able to drive by themselves, without needing human intervention in any situation. The more autonomous vehicles become, the greater the difficulty in keeping them reliable. This increases the challenges in terms of development processes, since their misbehaviors can lead to catastrophic consequences and, differently from the past, there is no longer a human driver to mitigate the effects of erroneous behaviors. Primary threats to dependability come from three sources: misuse by the drivers, systematic design errors, and random hardware failures. These safety threats are addressed under various aspects, considering the particular type of item to be designed. In particular, for the sake of this work, we analyze those related to Functional Safety (FuSa), viewed as the ability of a system to react on time and in the proper way to the external environment. From the technological point of view, these behaviors are implemented by electrical and electronic items. Various standards to achieve FuSa have been released over the years. The first, released in 1998, was IEC 61508; its latest version was released in 2010. This standard mainly defines:
• a Functional Safety Management System (FSMS);
• methods to determine a Safety Integrity Level (SIL);
• methods to determine the probability of failures.
To adapt IEC 61508 to the automotive industry's peculiarities, a newer standard, ISO 26262, was released in 2011 and then updated in 2018. This standard provides guidelines about the FSMS, called in this case the Safety Lifecycle, describing how to develop software and hardware components suitable for functional safety. It also provides a different way to compute the SIL, called in this case the Automotive SIL (ASIL), allowing us to consider the average driver's ability to control the vehicle in case of failures. Moreover, it describes a way to determine the probability of random hardware failures through Failure Mode, Effects, and Diagnostic Analysis (FMEDA). This dissertation contains contributions to three topics:
• random hardware failures mitigation;
• improvement of the ISO 26262 Hazard Analysis and Risk Assessment (HARA);
• real-time verification of the embedded software.
As the main contribution of this dissertation, I address the safety threats due to random hardware failures (RHFs). For this purpose, I propose a novel simulation-based approach to aid the Failure Mode, Effects, and Diagnostic Analysis (FMEDA) required by the ISO 26262 standard. Thanks to a SPICE-level model of the item and the adoption of fault injection techniques, it is possible to simulate its behaviors, obtaining useful information to classify the various failure modes.
The proposed approach evolved from a mere simulation of the item, allowing only an item-level failure mode classification, up to a vehicle-level analysis. The propagation of the failure modes' effects on the whole vehicle enables us to assess the impacts on the vehicle's drivability, improving the quality of the classifications. It can be advantageous where it is difficult to predict how the item-level misbehaviors propagate to the vehicle level, as in the case of a virtual differential gear or the mobility system of a robot. The latter has been chosen since it can be considered similar to novel light vehicles, such as electric scooters, which are becoming more and more popular. Moreover, my research group has complete access to its design, since it is realized by our university's DIANA students' team. When a SPICE-level simulation takes too long to be performed, or it is not possible to develop a complete model of the item due to intellectual property protection rules, it is possible to aid this process through behavioral models of the item. A simulation of this kind has been performed on a mobile robotic system. Behavioral models of the electronic components were used, alongside mechanical simulations, to assess the software failure mitigation capabilities. Another contribution has been obtained by modifying the main one. The idea was to make it possible to also aid the Hazard Analysis and Risk Assessment (HARA). This assessment is performed during the concept phase, that is, before starting to design the item implementation. Its goal is to determine the hazards involved in the item functionality and their associated levels of risk. The end goal of this phase is a list of safety goals. For each of these safety goals, an ASIL has to be determined. Since the HARA relies only on the designers' expertise and knowledge, it lacks objectivity and repeatability. Thanks to the simulation results, it is possible to predict the effects of the failures on the vehicle's drivability, allowing us to improve the severity and controllability assessment, thus improving the objectivity. Moreover, since simulation conditions can be stored, it is possible, at any time, to recheck the results and to add new scenarios, improving the repeatability. The third group of contributions is about the real-time verification of embedded software. Through Hardware-In-the-Loop (HIL), a software integration verification has been performed to test a fundamental automotive component, mixed-criticality applications, and multi-agent robots. The first of these contributions is about real-time tests on Body Control Modules (BCM). These modules manage various electronic accessories in the vehicle's body, like power windows and mirrors, air conditioning, the immobilizer, and central locking. The main characteristics of BCMs are the communication with other embedded computers via the car's vehicle bus (Controller Area Network) and the presence of a high number (hundreds) of low-speed I/Os. As the second contribution, I propose a methodology to assess the error recovery system's effects on mixed-criticality applications regarding deadline misses. The system runs two tasks: a critical airplane longitudinal control and a non-critical image compression algorithm. I start by presenting the approach on a benchmark application containing an instrumented bug in the lower-criticality task; then, I improve it by injecting random errors inside the lower-criticality task's memory space through a debugger.
In the latter case, thanks to the HIL, it is possible to pause the time-domain simulation when the debugger operates and resume it once the injection is complete. In this way, it is possible to interact with the target without interfering with the simulation results, combining full control of the target with an accurate time-domain assessment. The last contribution of this third group is about a methodology to verify, on multi-agent robots, the synchronization between two agents in charge of moving the end effector of a delta robot: the correct position and speed of the end effector at any time are strongly affected by a loss of synchronization. The last two contributions may seem unrelated to the automotive industry, but interest in these applications is growing. Mixed-criticality systems allow reducing the number of ECUs inside cars (for cost reduction), while the multi-agent approach is helpful to improve the cooperation of connected cars with other vehicles and with the infrastructure. The fourth contribution, contained in the appendix, is about a machine learning application to improve the social acceptance of autonomous vehicles. The idea is to improve the comfort of the passengers by recognizing their emotions. I started with the idea of modifying the vehicle's driving style based on a real-time emotion recognition system but, due to the difficulties of performing such operations in an experimental setup, I moved to analyzing the emotions offline. The emotions are determined from volunteers' facial expressions recorded while viewing 3D representations showing different calibrations. Thanks to the passengers' emotional responses, it is possible to choose the best calibration from the comfort point of view.
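
The HARA described in this abstract assigns an ASIL to each safety goal from three ratings defined by ISO 26262: severity (S), exposure (E), and controllability (C). The following is a rough, illustrative sketch of that classification step; the helper name and the additive encoding are assumptions of this note, not the dissertation's tooling, and the encoding reproduces the standard's ASIL table only for the regular S1–S3, E1–E4, C1–C3 cases.

```python
# Illustrative sketch: map ISO 26262 S/E/C ratings to an ASIL.
# The additive encoding below reproduces the standard's ASIL table
# for S1-S3, E1-E4, C1-C3 (S0/E0/C0 special cases are out of scope).

def determine_asil(severity: int, exposure: int, controllability: int) -> str:
    """Return 'QM', 'A', 'B', 'C' or 'D' for the given S/E/C ratings."""
    if not (1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3):
        raise ValueError("expected S1-S3, E1-E4, C1-C3")
    score = severity + exposure + controllability
    # score 10 -> D, 9 -> C, 8 -> B, 7 -> A, anything lower -> QM
    return {10: "D", 9: "C", 8: "B", 7: "A"}.get(score, "QM")

if __name__ == "__main__":
    # A highly severe (S3), frequent (E4) hazard that most drivers
    # cannot control (C3) yields the most stringent level.
    print(determine_asil(3, 4, 3))  # -> D
    print(determine_asil(2, 3, 2))  # -> A
```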

    Multilevel Simulation Methodology for FMECA Study Applied to a Complex Cyber-Physical System

    Complex systems are composed of numerous interconnected subsystems, each designed to perform specific functions. The different subsystems use many technological items that work together, as in the case of cyber-physical systems. Typically, a cyber-physical system is composed of different mechanical actuators driven by electrical power devices and monitored by sensors. Several approaches are available for designing and validating complex systems, and among them, behavioral-level modeling is becoming one of the most popular. When such cyber-physical systems are employed in mission- or safety-critical applications, it is mandatory to understand the impacts of faults on them and how failures in subsystems can propagate through the overall system. In this paper, we propose a methodology for supporting the failure mode, effects, and criticality analysis (FMECA) aimed at identifying the critical faults and assessing their effects on the overall system. The end goal is to analyze how a fault affecting a single subsystem possibly propagates through the whole cyber-physical system, considering also the embedded software and the mechanical elements. In particular, our approach allows analyzing, at a high level, how a fault injected at a low level propagates through the whole system. This paper provides a solution to automate the FMECA process (until now mainly performed manually) for complex cyber-physical systems. It improves the failure classification effectiveness: considering our test case, it reduced the number of critical faults from 10 to 6. The other four faults are mitigated by the cyber-physical system architecture. The proposed approach has been tested on a real cyber-physical system in charge of driving a three-phase motor for industrial compressors, showing its feasibility and effectiveness.
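
As a purely illustrative sketch of the kind of automation described above (the function names, thresholds, and classification rules are assumptions, not the paper's actual tooling), a fault-injection campaign for FMECA can be organized as: simulate each low-level fault, observe its effect on a system-level quantity, and classify it as safe, mitigated, or critical.

```python
# Hypothetical skeleton of an automated FMECA fault-injection campaign.
# run_simulation() stands in for the multilevel (behavioral + mechanical)
# simulation of the cyber-physical system and is passed in by the caller.
from dataclasses import dataclass

@dataclass
class FaultResult:
    fault_id: str
    deviation: float   # deviation of a system-level observable (e.g. motor speed error, %)
    detected: bool     # whether a diagnostic/mitigation mechanism reacted

def classify(result: FaultResult, safe_threshold: float = 2.0) -> str:
    """Classify a simulated fault by its system-level effect."""
    if result.deviation <= safe_threshold:
        return "safe"        # no functional impact at system level
    if result.detected:
        return "mitigated"   # effect handled by the architecture
    return "critical"        # undetected, out-of-spec behavior

def fmeca_campaign(fault_list, run_simulation):
    """Run every fault through the simulator and tabulate the classification."""
    report = {"safe": [], "mitigated": [], "critical": []}
    for fault in fault_list:
        result = run_simulation(fault)  # returns a FaultResult
        report[classify(result)].append(result.fault_id)
    return report
```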

    Formal verification of automotive embedded UML designs

    Software applications are increasingly dominating safety-critical domains. Safety-critical domains are domains where the failure of any application could impact human lives. Software application safety has been overlooked for quite some time, but more focus and attention are currently directed to this area due to the exponential growth of embedded software applications. Software systems have continuously faced challenges in managing the complexity associated with functional growth, the flexibility of systems so that they can be easily modified, the scalability of solutions across several product lines, the quality and reliability of systems, and finally the ability to detect defects early in the design phases. AUTOSAR was established to develop open standards to address these challenges. ISO-26262, the automotive functional safety standard, aims to ensure the functional safety of automotive systems by providing requirements and processes to govern the software lifecycle. Each functional system needs to be classified in terms of safety goals, risks, and an Automotive Safety Integrity Level (ASIL: A, B, C, and D), with ASIL D denoting the most stringent safety level. As the risk of the system increases, the ASIL increases, and the standard mandates more stringent methods to ensure safety. ISO-26262 mandates that systems classified as ASIL C and D utilize walkthroughs, semi-formal verification, inspection, control flow analysis, data flow analysis, static code analysis, and semantic code analysis techniques to verify software unit design and implementation. Ensuring software specification compliance via formal methods has remained an academic endeavor for quite some time. Several factors discourage formal methods adoption in the industry. One major factor is the complexity of using formal methods. Software specification compliance in automotive remains, for the most part, heavily dependent on traceability matrices, human-based reviews, and testing activities conducted either at the actual production software level or at the simulation level. The ISO-26262 automotive safety standard recommends, although not strongly, using formal notations in automotive systems that exhibit high risk in case of failure, yet the industry still heavily relies on semi-formal notations such as UML. The use of semi-formal notations makes specification compliance still heavily dependent on manual processes and testing efforts. In this research, we propose a framework where UML finite state machines are compiled into formal notations, specification requirements are mapped into formal model theorems, and SAT/SMT solvers are utilized to validate implementation compliance with the specification. The framework will allow semi-formal verification of AUTOSAR UML designs via an automated formal framework backbone. This semi-formal verification framework will allow automotive software to comply with the ISO-26262 ASIL C and D unit design and implementation formal verification guidelines. Semi-formal UML finite state machines are automatically compiled into formal notations based on the Symbolic Analysis Laboratory (SAL) formal notation. Requirements are captured in the UML design and compiled automatically into theorems. Model checkers are run against the compiled formal model and theorems to detect counterexamples that violate the requirements in the UML model. Semi-formal verification of the design allows us to uncover issues that would previously be detected only in the testing and production stages.
The methodology is applied to several automotive systems to show how the framework automates the verification of UML-based designs, the de-facto standard for automotive system design, based on an implicit formal methodology, while hiding the drawbacks that discouraged the industry from using formal methods. Additionally, the framework automates the ISO-26262 system design verification guideline, which would otherwise be verified via human-error-prone approaches.
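
As a small, hedged illustration of the SAT/SMT idea described above (using the z3-solver Python package as a stand-in for the SAL toolchain actually used, with an invented state machine and requirement), a requirement can be checked by unrolling the transition relation for a bounded number of steps and asking the solver for a counterexample.

```python
# Illustrative bounded model check of a tiny finite state machine with Z3.
# The machine, inputs, and requirement are invented for illustration only.
from z3 import Solver, Int, And, Or, If, sat

IDLE, ACTIVE, ERROR = 0, 1, 2
K = 10  # unrolling depth

s = Solver()
state = [Int(f"state_{i}") for i in range(K + 1)]
request = [Int(f"req_{i}") for i in range(K)]  # 0/1 input per step

s.add(state[0] == IDLE)
for i in range(K):
    s.add(Or(request[i] == 0, request[i] == 1))
    # Transition relation: IDLE -(req)-> ACTIVE, ACTIVE -> IDLE, never ERROR.
    s.add(state[i + 1] == If(And(state[i] == IDLE, request[i] == 1), ACTIVE, IDLE))

# Requirement (theorem): the ERROR state is unreachable within K steps.
# We assert its negation and look for a counterexample trace.
s.add(Or([state[i] == ERROR for i in range(K + 1)]))

if s.check() == sat:
    print("counterexample found:", s.model())
else:
    print("requirement holds up to depth", K)
```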

    Integration of security standards in DevOps pipelines: An industry case study

    In the last decade, companies adopted DevOps as a fast path to deliver software products according to customer expectations, with well-aligned teams and in continuous cycles. As a basic practice, DevOps relies on pipelines that simulate factory swim-lanes. The more automation in the pipeline, the shorter the lead time is supposed to be. However, applying DevOps is challenging, particularly for industrial control systems (ICS) that support critical infrastructures and that must comply with rigorous requirements from security regulations and standards. Current research on security-compliant DevOps presents open gaps for this particular domain and, in general, for the systematic application of security standards. In this paper, we present a systematic approach to integrate standard-based security activities into DevOps pipelines and highlight their automation potential. Our intention is to share our experiences and help practitioners to overcome the trade-off between adding security activities into the development process and keeping a short lead time. We conducted an evaluation of our approach at a large industrial company considering the IEC 62443-4-1 security standard that regulates ICS. The results strengthen our confidence in the usefulness of our approach and artefacts, and indicate that they can support practitioners in achieving security compliance while preserving agility, including short lead times.

    Development and certification of mixed-criticality embedded systems based on probabilistic timing analysis

    An increasing variety of emerging systems relentlessly replaces or augments the functionality of mechanical subsystems with embedded electronics. Given their quantity, complexity, and use, the safety of such subsystems is an increasingly important matter. Accordingly, those systems are subject to safety certification to demonstrate the system's safety through rigorous development processes and hardware/software constraints. The massive increase in embedded processors' complexity renders the arduous certification task significantly harder to achieve. The focus of this thesis is to address the certification challenges in multicore architectures: despite their potential to integrate several applications on a single platform, their inherent complexity imperils their timing predictability and certification. Recently, the Measurement-Based Probabilistic Timing Analysis (MBPTA) technique emerged as an alternative to deal with hardware/software complexity. The innovation that MBPTA brings about is, however, a major departure from current certification procedures and standards. The particular contributions of this thesis include: (i) the definition of certification arguments for mixed-criticality integration upon multicore processors; in particular, we propose a set of safety mechanisms and procedures as required to comply with functional safety standards. For timing predictability, (ii) we present a quantitative approach to assess the likelihood of execution-time exceedance events with respect to the risk reduction requirements of safety standards. To this end, we build upon the MBPTA approach and present the design of a safety-related source of randomization (SoR), which plays a key role in the platform-level randomization needed by MBPTA. And (iii) we evaluate current certification guidance with respect to emerging high-performance design trends like caches. Overall, this thesis pushes the certification limits in the use of multicore and MBPTA technology in Critical Real-Time Embedded Systems (CRTES) and paves the way towards their adoption in industry. An increasing variety of emerging systems replace or augment the functionality of mechanical subsystems with embedded electronic components. The increase in the quantity and complexity of such electronic subsystems, as well as their role, makes their safety a matter of growing importance. So much so that the commercialization of these critical systems is subject to rigorous certification processes in which the system's safety is guaranteed through strict constraints on the development and design process of its hardware and software. This thesis addresses the new challenges and difficulties introduced by multicore processors in such critical systems: although their higher performance attracts the industry's interest in integrating multiple applications on a single platform, they entail greater complexity. Their architecture challenges their timing analysis through traditional methods and, likewise, their certification is increasingly complex and costly. To deal with these limitations, a novel Measurement-Based Probabilistic Timing Analysis (MBPTA) technique has recently been developed. The innovation of this technique, however, represents a major cultural change with respect to traditional certification standards and procedures.
Along these lines, the contributions of this thesis are grouped into three main axes: (i) the definition of safety arguments for the certification of mixed-criticality applications on multicore platforms; in particular, safety mechanisms and fault diagnosis and reaction techniques are defined in accordance with the IEC 61508 standard on a reference multicore architecture. Regarding timing analysis, (ii) we present the quantification of the probability of exceeding a timing bound and its relation to the risk reduction requirements derived from functional safety standards. To this end, we build upon the MBPTA technique and present the design of a safety-related random number source, a key component to achieve the randomization properties required by MBPTA at the platform level. Finally, (iii) we extrapolate the current certification guidance for multicore architectures to a commercial 8-core solution and evaluate it with respect to emerging high-performance design trends (caches). With these contributions, this thesis addresses the challenges that the use of multicore processors and MBPTA implies in the certification process of critical real-time systems and thus facilitates their adoption by industry.
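
As a rough illustration of the MBPTA idea discussed in this abstract (not the thesis's exact procedure), the following sketch fits a Gumbel model to execution-time measurements with a simple method-of-moments estimate and reads off a probabilistic WCET at a target exceedance probability; real MBPTA additionally requires i.i.d. hypothesis tests and more careful extreme-value fitting.

```python
# Minimal MBPTA-style sketch: fit a Gumbel (EVT) model to measured
# execution times and extract a pWCET at a target exceedance probability.
# Method-of-moments fitting is used for brevity only.
import math
import random

def pwcet_estimate(samples, exceedance_prob=1e-9):
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    scale = math.sqrt(var) * math.sqrt(6) / math.pi  # Gumbel scale (beta)
    loc = mean - 0.5772156649 * scale                # Gumbel location (mu)
    # Gumbel inverse CDF evaluated at probability 1 - exceedance_prob.
    return loc - scale * math.log(-math.log(1.0 - exceedance_prob))

if __name__ == "__main__":
    random.seed(0)
    # Stand-in for execution-time measurements (in cycles) taken on a
    # time-randomized platform; in practice these come from the target board.
    measurements = [random.gauss(10_000, 250) for _ in range(1_000)]
    print(f"pWCET @ 1e-9 exceedance: {pwcet_estimate(measurements):.0f} cycles")
```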

    Reliable and Energy-Efficient Mixed-Criticality Real-Time On-Chip Systems

    Multi- and many-core embedded systems are increasingly becoming the target for many applications that require high performance under varying conditions. A resulting challenge is the control and reliable operation of such complex multiprocessing architectures under changes, e.g., high temperature and degradation. In mixed-criticality systems, where many applications with varying criticalities are consolidated on the same execution platform, fundamental isolation requirements to guarantee non-interference of critical functions are crucially important. While Networks-on-Chip (NoCs) are the prevalent solution to provide scalable and efficient interconnects for multiprocessing architectures, their associated energy consumption has immensely increased. Specifically, hard real-time NoCs must exhibit limited energy consumption, as thermal runaway in such a central shared resource jeopardizes the guarantees of the whole system. Thus, dynamic energy management of NoCs, as opposed to the static solutions of related work, is highly necessary to save energy and decrease temperature while preserving essential temporal requirements. In this thesis, we introduce a centralized management scheme to provide energy-aware NoCs for hard real-time systems. The design relies on an energy control network, developed on top of an existing switch arbitration network, to allow isolation between energy optimization and data transmission. The energy control layer includes local units, called Power-Aware NoC controllers, that dynamically optimize NoC energy depending on the global state and the applications' temporal requirements. Furthermore, to adapt to abnormal situations that might occur in the system due to degradation, we extend the concept of NoC energy control to the entire system scope. That is, online resource management employing hierarchical control layers to treat system degradation (imminent core failures) is supported. The mechanism applies system reconfiguration that involves workload migration. For mixed-criticality systems, it allows flexible boundaries between safety-critical and non-critical subsystems to safely apply the reconfiguration, preserving fundamental safety requirements and temporal predictability. Simulation- and formal-analysis-based experiments on various realistic use cases and benchmarks are conducted, showing significant improvements in NoC energy savings and in the treatment of system degradation for mixed-criticality systems, improving dependability over the status quo. Embedded many- and multi-core systems are increasingly becoming the target for applications that require high performance under varying conditions. For such highly complex multiprocessor systems, it is a major challenge to ensure reliable operation, in particular when environmental conditions change. In mixed-criticality systems, in which many applications with different criticalities must be served on the same execution platform, fundamental isolation requirements to guarantee the non-interference of critical functions are of decisive importance. While on-chip networks (NoCs) are frequently used as a scalable interconnect for multiprocessor architectures, their associated energy consumption has risen immensely. Therefore, dynamic platform management, in contrast to static approaches, is strictly necessary to adapt a system to the aforementioned changes while guaranteeing timing.
In this work, we develop energy-efficient NoCs for hard real-time systems. The design is based on an energy control network, built on top of an existing switch arbitration network, to enable isolation between energy optimization and data transmission. The energy control layer comprises local units, called Power-Aware NoC controllers, which optimize the NoC energy depending on the global state and the applications' temporal requirements. Furthermore, the concept of NoC energy control is extended to the entire system scope in order to adapt to anomalies that can occur due to degradation. Online resource management employing hierarchical control layers to treat degradation (imminent core failures) is provided. For mixed-criticality systems, it allows flexible boundaries between safety-critical and non-critical subsystems so that the reconfiguration can be applied safely, preserving fundamental safety requirements and timing predictability. Experiments based on simulations and formal analyses are conducted on various realistic use cases and benchmarks, showing significant improvements in NoC energy savings and in the treatment of degradation for mixed-criticality systems, improving dependability over the previous status quo.
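
Purely as an illustration of the kind of decision a Power-Aware NoC controller could take (the interface, frequency levels, and numbers below are invented; the thesis's actual policies are more elaborate), the sketch selects the lowest link frequency whose worst-case latency bound still meets the deadline of the critical traffic crossing the link.

```python
# Toy sketch of a power-aware NoC link controller: pick the lowest
# frequency level whose worst-case latency bound still meets the
# deadline of the critical traffic on the link. All numbers and
# interfaces are invented for illustration.

FREQ_LEVELS_MHZ = [200, 400, 600, 800]  # lowest first -> least energy

def wc_latency_us(freq_mhz: float, wcl_cycles: int) -> float:
    """Worst-case traversal latency in microseconds at a given frequency."""
    return wcl_cycles / freq_mhz  # cycles / (cycles per microsecond)

def choose_frequency(wcl_cycles: int, deadline_us: float) -> int:
    """Return the most energy-efficient frequency that preserves the deadline."""
    for freq in FREQ_LEVELS_MHZ:
        if wc_latency_us(freq, wcl_cycles) <= deadline_us:
            return freq
    return FREQ_LEVELS_MHZ[-1]  # no slack: run at full speed

if __name__ == "__main__":
    # A flow with a 1200-cycle worst-case path and a 4 us deadline can be
    # served at 400 MHz (3 us bound) instead of 800 MHz, saving energy.
    print(choose_frequency(wcl_cycles=1200, deadline_us=4.0))  # -> 400
```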

    MC2: Multicore and Cache Analysis via Deterministic and Probabilistic Jitter Bounding

    In critical domains, reliable software execution increasingly involves aspects related to the timing dimension. This is due to the advent of high-performance (complex) hardware, used to provide the rising levels of guaranteed performance needed in those domains. Caches and multicores are two of the hardware features that have the potential to significantly reduce WCET estimates, yet they pose new challenges to current-practice measurement-based timing analysis (MBTA) approaches. In this paper we propose MC2, a technique for multilevel-cache multicores that combines deterministic and probabilistic jitter-bounding approaches to reliably handle both the variability in execution time generated by caches and the contention in accessing shared hardware resources. We evaluate MC2 on a COTS quad-core LEON-based board, and our initial results show how it effectively captures cache and multicore contention in pWCET estimates with respect to actual observed values. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by the MINECO under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717. Carles Hernández is jointly funded by the MINECO and FEDER funds through grant TIN2014-60404-JIN.
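
One possible reading of the deterministic half of such a combination, sketched here only as an assumption (the exact MC2 formulation is in the paper), is to pad each execution-time measurement taken in isolation with a bound on the contention delay its shared-resource accesses could suffer from co-runners, before the probabilistic analysis is applied.

```python
# Hypothetical sketch of deterministic contention padding: each isolated
# execution-time measurement is inflated by a bound on the delay that
# shared-resource (e.g. bus) accesses could suffer from co-runners.
# Constants and the padding model are illustrative assumptions.

def pad_measurement(exec_time_cycles: int,
                    shared_accesses: int,
                    per_access_delay_cycles: int,
                    contenders: int) -> int:
    """Upper-bound the execution time under worst-case contention."""
    contention = shared_accesses * per_access_delay_cycles * contenders
    return exec_time_cycles + contention

if __name__ == "__main__":
    isolated_runs = [10_000, 10_250, 10_100]
    padded = [pad_measurement(t, shared_accesses=300,
                              per_access_delay_cycles=8, contenders=3)
              for t in isolated_runs]
    # The padded sample (here +7200 cycles each) would then feed the
    # probabilistic (MBPTA-style) analysis that bounds cache jitter.
    print(padded)
```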

    Maximising microprocessor reliability through game theory and heuristics

    Embedded systems are becoming ever more pervasive in our society, with most routine daily tasks now involving their use in some form and the market predicted to be worth USD 220 billion, a rise of 300%, by 2018. Consumers expect more functionality with each design iteration, but with no detriment to perceived performance. These devices can range from simple low-cost chips to expensive and complex systems and are a major cost driver in the equipment design phase. For more than 35 years, designers have kept pace with Moore's Law, but as device size approaches the atomic limit, layouts are becoming so complicated that current scheduling techniques are also reaching their limit, meaning that more resources must be reserved to manage and deliver reliable operation. With the advent of many-core systems and further sources of unpredictability, such as changeable power supplies and energy harvesting, this reservation of capability may become so large that systems will not be operating at their peak efficiency. These complex systems can be controlled through many techniques, with jobs scheduled either offline, prior to execution beginning, or online, at each time or event change. Increased processing power and the variety of job types mean that current online scheduling methods that employ exhaustive search techniques will not be suitable to define schedules for such enigmatic task lists, and that new techniques using statistics-based methods must be investigated to preserve Quality of Service. A new paradigm of scheduling through complex heuristics is one way to administer these next levels of processors effectively and allow the use of simpler devices in complex systems, thus reducing unit cost while retaining reliability, a key goal identified by the International Technology Roadmap for Semiconductors for Embedded Systems in Critical Environments. These changes would be beneficial in terms of cost reduction and system flexibility within the next generation of devices. This thesis investigates the use of heuristics and statistical methods in the operation of real-time systems, examining the feasibility of Game Theory and Statistical Process Control for the successful supervision of high-load and critical jobs. Heuristics are identified as an effective method of controlling complex real-time issues, with two-person non-cooperative games delivering Nash-optimal solutions where these exist. The simplified algorithms for creating and solving Game Theory events allow for their use within small embedded RISC devices and an increase in reliability for systems operating at the apex of their limits. Within this thesis, heuristic and game-theoretic algorithms for a variety of real-time scenarios are postulated, investigated, refined, and tested against existing schedule types, initially through MATLAB simulation and then on an ARM Cortex-M3 architecture functioning as a simplified automotive Electronic Control Unit. This work was funded by a Doctoral Teaching Account from the EPSRC.
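
To make the game-theoretic angle concrete, here is a small, self-contained sketch (an invented example, not the thesis's actual formulation) that enumerates the pure-strategy Nash equilibria of a two-player, two-action non-cooperative game; this is the kind of Nash-optimal solution the scheduling approach described above looks for.

```python
# Minimal pure-strategy Nash equilibrium search for a 2-player game.
# Players could represent, e.g., a scheduler and an "environment" choosing
# load; the payoff values are invented for illustration.

# payoffs[i][j] = (payoff to player 1, payoff to player 2) when
# player 1 picks action i and player 2 picks action j.
payoffs = [
    [(3, 3), (0, 4)],
    [(4, 0), (1, 1)],
]

def pure_nash_equilibria(payoffs):
    """Return all (i, j) pairs from which neither player gains by deviating."""
    equilibria = []
    rows, cols = len(payoffs), len(payoffs[0])
    for i in range(rows):
        for j in range(cols):
            p1, p2 = payoffs[i][j]
            best_for_1 = all(payoffs[k][j][0] <= p1 for k in range(rows))
            best_for_2 = all(payoffs[i][l][1] <= p2 for l in range(cols))
            if best_for_1 and best_for_2:
                equilibria.append((i, j))
    return equilibria

if __name__ == "__main__":
    # For this prisoner's-dilemma-like matrix the unique equilibrium is
    # (1, 1): neither player can improve by changing action alone.
    print(pure_nash_equilibria(payoffs))  # -> [(1, 1)]
```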