
    Dependability for declarative mechanisms: neural networks in autonomous vehicles decision making.

    Despite being introduced in 1958, neural networks have appeared in numerous applications across different fields over the last decade. This change was made possible by the reduced cost of the computing power required for deep neural networks and by the growing availability of data providing examples for training sets. The 2012 ImageNet image classification competition is often cited as an example of when neural networks became good candidates for applications: that year, a neural-network-based solution won for the first time, and in all following editions the winning solutions were based on neural networks. Since then, neural networks have shown strong results in several non-critical applications (image recognition, sound recognition, text analysis, etc.). There is growing interest in using them in critical applications, as their ability to generalize makes them good candidates for domains such as autonomous vehicles, but standards do not yet allow this.

    Autonomous driving functions are currently being researched by the industry with the final objective of producing, in the near future, fully autonomous vehicles as defined by the fifth level of the SAE International (Society of Automotive Engineers) classification. The autonomous driving process is usually decomposed into four parts: perception, where sensors gather information from the environment; fusion, where the data from the different sensors are merged into a single representation of the environment; decision, which uses this representation to determine the vehicle's behavior and the commands to send to the actuators; and control, which implements these commands. In this thesis, following the interest of the company Stellantis, we focus on the decision part of this process, considering neural-network-based solutions. Since automotive is a safety-critical domain, the dependability of its systems must be implemented and ensured, and this is why the use of neural networks is not allowed at the moment: their lack of demonstrated safety forbids it. Dependability methods for classical software systems are well known, but neural networks do not yet have similar mechanisms to guarantee trust in them. This problem has several causes, among them the difficulty of testing applications whose operational domain is quasi-infinite and whose functions are hard to define exhaustively in specifications. Here lies the motivation of this thesis: how can we ensure the dependability of neural networks in the context of decision making for autonomous vehicles?

    Research is now being conducted on the dependability and safety of neural networks, with several approaches under consideration; ours is motivated by the great potential of the safety-critical applications mentioned above. In this thesis, we focus on one category of methods that seems a good candidate to ensure the dependability of neural networks by solving some of the problems of testing: formal verification for neural networks. These methods aim to prove that a neural network respects a safety property over an entire range of its input and output domains. Formal verification is already used in other domains and is regarded as a trusted way to build confidence in a system, but for neural networks it remains a research topic with no industrial applications yet.
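    To give an idea of what such a proof involves, the sketch below applies interval bound propagation, one simple abstraction technique used by several neural network verification tools, to a toy ReLU network. This is a minimal illustration only: the network, its weights, the input box, and the output threshold are hypothetical and not taken from the thesis.

        # Interval bound propagation (IBP) sketch: prove that, for all inputs x
        # with l <= x <= u, every output of a small ReLU network stays <= threshold.
        # All weights and bounds below are illustrative assumptions.
        import numpy as np

        def ibp_layer(lower, upper, W, b):
            # Positive weights map lower bounds to lower bounds; negative weights
            # swap them, hence the split into W+ and W-.
            W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
            return (W_pos @ lower + W_neg @ upper + b,
                    W_pos @ upper + W_neg @ lower + b)

        def verify_output_bound(layers, x_lower, x_upper, threshold):
            lo, hi = np.asarray(x_lower, float), np.asarray(x_upper, float)
            for i, (W, b) in enumerate(layers):
                lo, hi = ibp_layer(lo, hi, W, b)
                if i < len(layers) - 1:  # ReLU on hidden layers (monotone)
                    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
            return bool(np.all(hi <= threshold))

        # Toy 2-2-1 network on the input box [-1, 1] x [-1, 1].
        layers = [(np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([0.1, -0.2])),
                  (np.array([[0.7, -1.2]]), np.array([0.05]))]
        print(verify_output_bound(layers, [-1, -1], [1, 1], threshold=2.0))  # True

    The method is sound but incomplete: a True result proves the property on the whole input box, while a False result may be a false alarm caused by the over-approximation, which is why practical tools combine such abstractions with refinement or exact solvers.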
    The main contributions of this thesis are the following: a characterization of neural networks from a software development perspective, with a corresponding classification of their faults, errors and failures; the identification of a potential threat to the use of formal verification, namely the erroneous neural network model problem, which may lead one to trust a formally verified safety property that does not hold in real life; and the realization of an experiment implementing formal verification for neural networks in an autonomous driving application that is, to the best of our knowledge, the closest to industrial use. For this application, we chose an ACC (Adaptive Cruise Control) function, an autonomous driving function that performs the longitudinal control of a vehicle. The experiment is conducted with a simulator and a neural network formal verification tool. The other contributions of the thesis are: a theoretical example of the erroneous neural network model problem and a practical example of it in our autonomous driving experiment; a proposal of detection and recovery mechanisms as a solution to this problem; an implementation of these mechanisms in our autonomous driving experiment; and a discussion of the difficulties and possible processes for implementing formal verification for neural networks, developed during our experiments.
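    To make the detection and recovery idea concrete, here is a minimal sketch of a runtime monitor wrapped around a neural ACC controller. Everything in it (the state variables, the headway threshold, the verified input domain, and the fallback braking law) is a hypothetical illustration under assumed values, not the mechanisms defined in the thesis.

        # Detection-and-recovery wrapper around a neural ACC command.
        # The safety property was proven on a *model* over a bounded input box;
        # at runtime we detect cases the proof does not cover (or where the
        # property visibly fails) and recover with a conservative fallback.
        from dataclasses import dataclass

        @dataclass
        class AccState:
            distance: float    # gap to lead vehicle [m]
            ego_speed: float   # ego vehicle speed [m/s]

        MIN_TIME_HEADWAY = 1.5   # assumed property: distance / ego_speed >= 1.5 s
        MAX_BRAKE = -6.0         # assumed actuator limit [m/s^2]

        def inside_verified_domain(s: AccState) -> bool:
            # Detection: the formal proof only covers this input box; outside it,
            # the verified property gives no guarantee.
            return 0.0 <= s.distance <= 150.0 and 0.0 <= s.ego_speed <= 40.0

        def safe_fallback(s: AccState) -> float:
            # Recovery: a conservative hand-written law (brake until the gap
            # reaches the headway target).
            return MAX_BRAKE if s.distance < MIN_TIME_HEADWAY * s.ego_speed else 0.0

        def monitored_acc(network_command: float, s: AccState) -> float:
            headway = s.distance / max(s.ego_speed, 0.1)
            if not inside_verified_domain(s) or headway < MIN_TIME_HEADWAY:
                return safe_fallback(s)   # erroneous-model / out-of-domain case
            return network_command        # proof applies: trust the network

        # Tailgating at 20 m/s with a 10 m gap: the monitor overrides the network.
        print(monitored_acc(0.5, AccState(distance=10.0, ego_speed=20.0)))  # -6.0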
