1,049 research outputs found

    A Review of Formal Methods applied to Machine Learning

    We review state-of-the-art formal methods applied to the emerging field of the verification of machine learning systems. Formal methods can provide rigorous correctness guarantees on hardware and software systems. Thanks to the availability of mature tools, their use is well established in industry, in particular to check safety-critical applications as they undergo a stringent certification process. As machine learning is becoming more popular, machine-learned components are now considered for inclusion in critical systems. This raises the question of their safety and their verification. Yet, established formal methods are limited to classic, i.e., non-machine-learned software. Applying formal methods to verify systems that include machine learning has only been considered recently and poses novel challenges in soundness, precision, and scalability. We first recall established formal methods and their current use in an exemplar safety-critical field, avionic software, with a focus on abstract-interpretation-based techniques as they provide a high level of scalability. This provides a gold standard and sets high expectations for machine learning verification. We then provide a comprehensive and detailed review of the formal methods developed so far for machine learning, highlighting their strengths and limitations. The large majority of them verify trained neural networks and employ either SMT, optimization, or abstract interpretation techniques. We also discuss methods for support vector machines and decision tree ensembles, as well as methods targeting training and data preparation, which are critical but often neglected aspects of machine learning. Finally, we offer perspectives for future research directions towards the formal verification of machine learning systems.
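    To make the abstract-interpretation-based techniques discussed in the review concrete, the following minimal sketch propagates an input box through a tiny ReLU network using the interval domain; the architecture, weights, and input box are illustrative assumptions, not taken from the paper.

# Minimal sketch: interval abstract interpretation (interval bound
# propagation) through a hypothetical 2-2-1 ReLU network. If the
# certified output interval satisfies the property of interest, the
# property holds for every input in the box.
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate the box [lo, hi] through x -> W @ x + b.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps interval bounds directly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical weights of a trained network (assumption for illustration).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

# Input box: each feature may deviate by 0.1 around (0.5, 0.5).
lo, hi = np.array([0.4, 0.4]), np.array([0.6, 0.6])
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(f"certified output bounds: [{lo[0]:.3f}, {hi[0]:.3f}]")

    The tools surveyed in the review refine this basic interval domain with tighter abstractions (e.g., zonotopes or polyhedra) to reduce the over-approximation while keeping scalability.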

    Artificial Intelligence and Bank Soundness: Between the Devil and the Deep Blue Sea - Part 2

    Banks have experienced chronic weaknesses as well as frequent crises over the years. As bank failures are costly and affect global economies, banks are constantly under intense scrutiny by regulators. This makes banking the most highly regulated industry in the world today. As banks grow into the 21st century framework, they need to embrace Artificial Intelligence (AI) not only to provide personalized, world-class service to their large customer bases but, most importantly, to survive. The chapter provides a taxonomy of bank soundness in the face of AI through the lens of CAMELS, where C stands for Capital, A for Assets, M for Management, E for Earnings, L for Liquidity, and S for Sensitivity. The taxonomy partitions the AI-related challenges along the CAMELS dimensions into 1 (C), 4 (A), 17 (M), 8 (E), 1 (L), and 2 (S) distinct categories that banks and regulatory teams need to consider when evaluating AI use in banks. Although AI offers numerous opportunities for banks to operate more efficiently and effectively, banks also need to give assurance that AI ‘does no harm’ to stakeholders. With many unresolved questions, it seems that banks are trapped between the devil and the deep blue sea for now.

    Dependability for declarative mechanisms: neural networks in autonomous vehicles decision making.

    Despite being introduced in 1958, neural networks have appeared in numerous applications across different fields in the last decade. This change was made possible by the reduced cost of the computing power required for deep neural networks and by the increasing availability of data providing examples for training sets. The 2012 ImageNet image classification competition is often used as an example of how neural networks became good candidates for applications at that time: during this competition, a neural-network-based solution won for the first time, and in the following editions all winning solutions were based on neural networks. Since then, neural networks have shown great results in several non-critical applications (image recognition, sound recognition, text analysis, etc.). There is growing interest in using them in critical applications, as their ability to generalize makes them good candidates for applications such as autonomous vehicles, but standards do not allow that yet. Autonomous driving functions are currently researched by the industry with the final objective of producing, in the near future, fully autonomous vehicles as defined by the fifth level of the SAE International (Society of Automotive Engineers) classification. The autonomous driving process is usually decomposed into four parts: the perception part, where sensors get information from the environment; the fusion part, where the data from the different sensors is merged into one representation of the environment; the decision part, which uses the representation of the environment to decide what the vehicle's behavior should be and which commands to send to the actuators; and finally the part that implements these commands. In this thesis, following the interest of the company Stellantis, we focus on the decision part of this process, considering neural-network-based solutions. As automotive is a safety-critical domain, the dependability of its systems must be ensured, and this is why the use of neural networks is not allowed at the moment: their lack of safety guarantees forbids their use in such applications. Dependability methods for classical software systems are well known, but neural networks do not yet have similar mechanisms to guarantee that they can be trusted. This is due to several reasons, among them the difficulty of testing applications that have a quasi-infinite operational domain and whose functions are hard to define exhaustively in the specifications. Here lies the motivation of this thesis: how can we ensure the dependability of neural networks in the context of decision making for autonomous vehicles? Research is now being conducted on the dependability and safety of neural networks, with several approaches being considered, and our work is motivated by the great potential in the safety-critical applications mentioned above. In this thesis, we focus on one category of methods that seems to be a good candidate for ensuring the dependability of neural networks by solving some of the problems of testing: formal verification for neural networks. These methods aim to prove that a neural network respects a safety property on an entire range of its input and output domains. Formal verification is already used in other domains and is seen as a trusted method for giving confidence in a system, but for neural networks it remains a research topic with currently no industrial applications.
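    As a minimal illustration of such a verification query, the sketch below encodes a hypothetical one-neuron ReLU network with an SMT solver (Z3) and asks whether any input in the allowed range can violate a simple output bound; an unsat answer means the property holds on the whole range. The network, ranges, and bound are assumptions chosen for illustration, not the thesis's actual ACC network.

# Minimal sketch: posing "the output stays below 0.5 for every input in
# [0, 1]" as a satisfiability query over a hypothetical ReLU network.
from z3 import Real, Solver, And, If, unsat

x, h, y = Real("x"), Real("h"), Real("y")

s = Solver()
s.add(h == If(2 * x - 1 > 0, 2 * x - 1, 0))  # h = relu(2x - 1)
s.add(y == 0.5 * h - 1)                      # output layer
s.add(And(x >= 0, x <= 1))                   # input domain of the property
s.add(y >= 0.5)                              # negation of the safety property

if s.check() == unsat:
    print("property holds on the entire input range")
else:
    print("counterexample:", s.model())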
The main contributions of this thesis are the following: a proposal of a characterization of neural networks from a software development perspective, together with a corresponding classification of their faults, errors, and failures; the identification of a potential threat to the use of formal verification, namely the erroneous neural network model problem, which may lead to trusting a formally validated safety property that does not hold in real life; and the realization of an experiment that implements formal verification for neural networks in an autonomous driving application and that is, to the best of our knowledge, the closest to industrial use. For this application, we chose to work with an ACC (Adaptive Cruise Control) function, an autonomous driving function that performs the longitudinal control of a vehicle. The experiment is conducted with a simulator and a neural network formal verification tool. The other contributions of the thesis are the following: a theoretical example of the erroneous neural network model problem and a practical example in our autonomous driving experiment; a proposal of detection and recovery mechanisms as a solution to the erroneous model problem mentioned above; an implementation of these detection and recovery mechanisms in our autonomous driving experiment; and a discussion of the difficulties and possible processes for implementing formal verification for neural networks that we developed during our experiments.
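    The following sketch gives a minimal, hypothetical illustration of detection and recovery mechanisms of this kind for an ACC-like function: the neural command is used only while the live situation stays inside the envelope on which the safety property was formally verified, and a conservative fallback takes over otherwise. All names, ranges, and the fallback law are stand-ins, not the mechanisms actually implemented in the thesis.

# Minimal sketch of a runtime detection-and-recovery monitor against the
# erroneous-model problem: detect when reality leaves the formally
# verified domain and recover with a conservative command.
from dataclasses import dataclass

@dataclass
class VerifiedDomain:
    min_gap_m: float = 5.0        # relative distance range covered by the proof
    max_gap_m: float = 150.0
    max_rel_speed: float = 40.0   # |relative speed| bound assumed by the proof

def detect_model_mismatch(gap_m: float, rel_speed: float, dom: VerifiedDomain) -> bool:
    # Detection: is the observed situation outside the verified envelope?
    return not (dom.min_gap_m <= gap_m <= dom.max_gap_m
                and abs(rel_speed) <= dom.max_rel_speed)

def fallback_acceleration(gap_m: float) -> float:
    # Recovery: a deliberately conservative longitudinal command.
    return -3.0 if gap_m < 20.0 else 0.0

def acc_command(nn_accel: float, gap_m: float, rel_speed: float,
                dom: VerifiedDomain = VerifiedDomain()) -> float:
    # Use the neural ACC output only while the verified assumptions hold.
    if detect_model_mismatch(gap_m, rel_speed, dom):
        return fallback_acceleration(gap_m)
    return nn_accel

# Example: a gap closer than the verified envelope triggers recovery (-3.0).
print(acc_command(nn_accel=1.2, gap_m=3.0, rel_speed=5.0))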

    Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum to researchers in academia and industry for presenting and discussing groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design including verification, specification, synthesis, and testing.

