15 research outputs found

    Formal description of ML models for unambiguous implementation

    Full text link
    Implementing deep neural networks in safety-critical systems, in particular in the aeronautical domain, will require adequate specification paradigms to preserve the semantics of the trained model on the final hardware platform. We propose to extend the NNEF language to allow traceable distribution and parallelisation optimisations of a trained model. We show how such a specification can be implemented in CUDA on a Xavier platform.
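The core requirement above is that a distributed or parallelised execution of a trained model must produce exactly the reference result. A minimal sketch of that property, with an invented layer and a row-wise partition standing in for the devices an extended NNEF specification would describe:

```python
# Hypothetical sketch: a trained layer's matrix-vector product is split
# row-wise across "devices"; the distributed result must match the
# reference semantics exactly, which is the property a traceable
# parallelisation specification must guarantee.

def dense(weights, x):
    """Reference semantics: y[i] = sum_j weights[i][j] * x[j]."""
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

def dense_distributed(weights, x, n_parts=2):
    """Row-wise partition of the same layer across n_parts devices."""
    chunk = (len(weights) + n_parts - 1) // n_parts
    parts = [weights[i:i + chunk] for i in range(0, len(weights), chunk)]
    out = []
    for part in parts:               # each part would run on one device
        out.extend(dense(part, x))
    return out

W = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
x = [1.0, -1.0]
assert dense(W, x) == dense_distributed(W, x)
```

On real hardware the two computations can diverge through floating-point reassociation, which is why the paper argues for making such optimisations explicit and traceable in the specification.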

    LARD -- Landing Approach Runway Detection -- Dataset for Vision Based Landing

    Full text link
    As the interest in autonomous systems continues to grow, one of the major challenges is collecting sufficient and representative real-world data. Despite the strong practical and commercial interest in autonomous landing systems in the aerospace field, there is a lack of open-source datasets of aerial images. To address this issue, we present a dataset, LARD, of high-quality aerial images for the task of runway detection during approach and landing phases. Most of the dataset is composed of synthetic images, but we also provide manually labelled images from real landing footage, to extend the detection task to a more realistic setting. In addition, we offer the generator, which can produce such synthetic front-view images and enables automatic annotation of the runway corners through geometric transformations. This dataset paves the way for further research such as the analysis of dataset quality or the development of models to cope with the detection task. Find data, code and more up-to-date information at https://github.com/deel-ai/LAR
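The automatic annotation mentioned above rests on a standard idea: when the scene is synthetic, the 3D positions of the runway corners are known, so their pixel coordinates follow from a camera projection. A hedged sketch with a simple pinhole model (the focal length, principal point, and corner coordinates below are made-up values, not the LARD generator's actual parameters):

```python
# Illustrative sketch (not the LARD generator's code): with known 3D
# runway-corner positions in the camera frame, pixel-level labels come
# for free from a pinhole projection.

def project(point, focal=1000.0, cx=960.0, cy=540.0):
    """Pinhole projection of a camera-frame point (x, y, z) to pixels."""
    x, y, z = point
    return (cx + focal * x / z, cy + focal * y / z)

# Four runway corners in the camera frame (metres), z = distance ahead.
corners_3d = [(-22.5, 30.0, 2000.0), (22.5, 30.0, 2000.0),
              (-22.5, 30.0, 5000.0), (22.5, 30.0, 5000.0)]
corners_px = [project(c) for c in corners_3d]
```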

    ACETONE: Predictable Programming Framework for ML Applications in Safety-Critical Systems

    Get PDF
    Machine learning applications have been gaining considerable attention in the field of safety-critical systems. Nonetheless, there is as yet no accepted development process that reaches classical safety confidence levels. This is the reason why we have developed a generic programming framework called ACETONE that is compliant with safety objectives (including traceability and WCET computation) for machine learning. More practically, the framework generates C code from a detailed description of off-line-trained feed-forward deep neural networks; the generated code preserves the semantics of the original trained model, and its WCET can be assessed with OTAWA. We have compared our results with Keras2c and uTVM with static runtime on a realistic set of benchmarks.
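The generation step described above can be pictured as freezing trained weights into constant C arrays and a fixed loop nest, so the result is traceable back to the model and amenable to static WCET analysis. A hedged sketch of that idea (the emitter, names, and layout are hypothetical, not ACETONE's actual output format):

```python
# Hypothetical code-generation sketch: an off-line trained dense layer
# is emitted as plain C with constant weights and a fixed loop nest.

def emit_dense_layer_c(name, weights, bias):
    rows, cols = len(weights), len(weights[0])
    flat = ", ".join(f"{v}f" for row in weights for v in row)
    b = ", ".join(f"{v}f" for v in bias)
    return (
        f"static const float {name}_w[{rows}][{cols}] = {{{flat}}};\n"
        f"static const float {name}_b[{rows}] = {{{b}}};\n"
        f"void {name}(const float in[{cols}], float out[{rows}]) {{\n"
        f"    for (int i = 0; i < {rows}; i++) {{\n"
        f"        out[i] = {name}_b[i];\n"
        f"        for (int j = 0; j < {cols}; j++)\n"
        f"            out[i] += {name}_w[i][j] * in[j];\n"
        f"    }}\n"
        f"}}\n"
    )

code = emit_dense_layer_c("layer0", [[0.5, -1.0]], [0.1])
```

Because the loop bounds are compile-time constants and there is no dynamic allocation, a WCET analyser such as OTAWA can bound such code statically.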

    ACETONE: Predictable Programming Framework for ML Applications in Safety-Critical Systems (Artifact)

    Get PDF
    Machine learning applications have been gaining considerable attention in the field of safety-critical systems. Nonetheless, there is as yet no accepted development process that reaches classical safety confidence levels. This is the reason why we have developed a generic programming framework called ACETONE that is compliant with safety objectives (including traceability and WCET computation) for machine learning. More practically, the framework generates C code from a detailed description of off-line-trained feed-forward deep neural networks; the generated code preserves the semantics of the original trained model, and its WCET can be assessed with OTAWA. We have compared our results with Keras2c and uTVM with static runtime on a realistic set of benchmarks.

    Overestimation learning with guarantees

    No full text
    We describe a complete method that learns a neural network which is guaranteed to overestimate a reference function on a given domain. The neural network can then be used as a surrogate for the reference function. The method involves two steps. In the first step, we construct an adaptive set of Majoring Points. In the second step, we optimize a well-chosen neural network to overestimate the Majoring Points. In order to extend the guarantee on the Majoring Points to the whole domain, we necessarily have to make an assumption on the reference function. In this study, we assume that the reference function is monotonic. We provide experiments on synthetic and real problems. The experiments show that the density of the Majoring Points concentrates where the reference function varies. The learned overestimations are both guaranteed to overestimate the reference function and are proven empirically to provide good approximations of it. Experiments on real data show that the method makes it possible to use the surrogate function in embedded systems for which an underestimation is critical, when computing the reference function requires too many resources.
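The role of monotonicity in the argument above can be shown with a minimal sketch: for a nondecreasing reference function, the value at the right end of each grid cell majorises the function on the whole cell, so finitely many "majoring points" yield a domain-wide guarantee. (The paper trains a neural network to overestimate these points; the lookup-table bound below only illustrates why the assumption is needed.)

```python
# Minimal sketch, assuming a nondecreasing reference function f on the
# grid's interval: f(right end of a cell) >= f(x) for every x in that
# cell, so the piecewise-constant surrogate overestimates f everywhere.
import bisect

def build_majoring_points(f, grid):
    """Values of f at the grid points (f assumed nondecreasing)."""
    return [f(g) for g in grid]

def overestimate(x, grid, majors):
    """Return the majoring value of the cell containing x."""
    i = bisect.bisect_left(grid, x)
    return majors[min(i, len(majors) - 1)]

grid = [0.0, 0.5, 1.0, 1.5, 2.0]
majors = build_majoring_points(lambda t: t * t, grid)
assert all(overestimate(x / 100, grid, majors) >= (x / 100) ** 2
           for x in range(0, 201))
```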

    Hardware/Software Codesign for Embedded Implementations of Neural Networks

    No full text
    The performance of configurable digital circuits such as Field Programmable Gate Arrays (FPGA) increases at a very fast rate. Their fine-grain parallelism shows great similarities with connectionist models. This is the motivation for numerous works of neural network implementations on FPGAs, targeting applications such as autonomous robotics, ambulatory medical systems, etc. Nevertheless, such implementations are performed with an ASPC approach that requires strong hardware expertise. In this paper a hardware/software codesign framework devoted to FPGA-based design and implementations of neural networks from high-level specifications is presented. Such a framework aims at providing the connectionist community with efficient automatic FPGA implementations of their models without any advanced knowledge of hardware. The framework is capable of representing most standard or classical neural topologies. The internal representation of a neural model is bound to commonly used hardware computing units in a library to create the hardware model. The software platform currently under development, NNetWARE-Builder, handles multilayer feedforward neural networks and graphically-designed networks of neurons and automatically compiles them onto FPGA devices with third-party tools. Experimental results are presented to evaluate design and implementation tradeoffs.
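The binding step described above, from a high-level neural description to units in a hardware library, can be sketched as follows (all unit names and resource figures are invented for illustration, not drawn from NNetWARE-Builder):

```python
# Purely illustrative sketch of binding layers of a network description
# to hardware computing units, yielding a rough resource estimate before
# third-party tools place the design on the FPGA.

UNIT_LIBRARY = {
    "dense": {"dsp_per_mac": 1},       # hypothetical MAC-based unit
    "activation": {"dsp_per_mac": 0},  # hypothetical LUT-based unit
}

def bind(network):
    """Return (unit name, DSP-block estimate) for each layer."""
    plan = []
    for layer in network:
        unit = UNIT_LIBRARY[layer["type"]]
        macs = layer.get("inputs", 0) * layer.get("outputs", 0)
        plan.append((layer["type"], macs * unit["dsp_per_mac"]))
    return plan

net = [{"type": "dense", "inputs": 8, "outputs": 4},
       {"type": "activation"},
       {"type": "dense", "inputs": 4, "outputs": 2}]
plan = bind(net)
```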

    Autonomous drone interception with Deep Reinforcement Learning

    No full text
    Co-located with the 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI 2022). Driven by recent successes in artificial intelligence, new autonomous navigation systems are emerging in the urban space. The adoption of such systems raises questions about certification criteria and their vulnerability to external threats. This work focuses on the automated anti-collision systems designed for autonomous drones evolving in an urban context, less controlled than the conventional airspace and more vulnerable to potential intruders. In particular, we highlight the vulnerabilities of such systems to hijacking, taking as an example the scenario of an autonomous delivery drone diverted from its mission by a malicious agent. We demonstrate the possibility of training Reinforcement Learning agents to deflect a drone equipped with an automated anti-collision system. Our contribution is threefold. Firstly, we illustrate the security vulnerabilities of these systems. Secondly, we demonstrate the effectiveness of Reinforcement Learning for automatic detection of security flaws. Thirdly, we provide the community with an original benchmark based on an industrial use case.
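The threat model above can be made concrete with a toy sketch (all dynamics are invented and far simpler than the paper's benchmark): a delivery drone progresses toward a goal, but an anti-collision reflex makes it back away from any nearby intruder; an attacker steering the intruder is rewarded for the drone's deviation from its mission, which is exactly the signal a Reinforcement Learning agent would maximise.

```python
# Toy 1-D illustration of hijacking via an anti-collision reflex.
def step(drone, intruder, attacker_action, goal=10.0, safe_dist=2.0):
    """One step of the toy dynamics; returns (drone, intruder, reward)."""
    intruder += attacker_action            # attacker moves the intruder
    if abs(drone - intruder) < safe_dist:  # anti-collision reflex wins
        drone -= 1.0 if intruder > drone else -1.0
    else:                                  # nominal mission progress
        drone += 1.0 if goal > drone else -1.0
    return drone, intruder, abs(goal - drone)  # reward: mission deviation

drone, intruder = 0.0, 5.0
# A naive hand-written attack policy: keep the intruder just ahead of
# the drone, so the reflex pushes it ever further from its goal.
for _ in range(5):
    drone, intruder, reward = step(drone, intruder, drone + 1.5 - intruder)
```

In the paper this hand-written policy is replaced by a trained agent, which is what turns the sketch into an automatic search for such security flaws.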

    Sheep in wolf's Clothing: Implementation Models for Dataflow Multi-Threaded Software

    Get PDF
    Concurrent programming is notoriously difficult, especially in constrained embedded contexts. Threads, in particular, are wildly nondeterministic as a model of computation, and difficult to analyze in the general case. Fortunately, it is often the case that multi-threaded, semaphore-synchronized embedded software implements high-level functional specifications written in a deterministic data-flow language such as Scade or (safe subsets of) Simulink. We claim that in this case the implementation process should build not just the multi-threaded C code, but (first and foremost) a richer model exposing the data-flow organization of the computations performed by the implementation. From this model, the C code is extracted through selective pretty-printing, while knowledge of the data-flow organization facilitates analysis. We propose a language for describing such implementation models that expose the data-flow behavior (the sheep) hiding under the form of a multi-threaded program (the wolf). The language allows the representation of efficient implementations featuring pipelined scheduling and optimized memory allocation and synchronization. We show applicability on a large-scale industrial avionics case study and on a commercial many-core.
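The wolf-and-sheep picture above can be sketched in miniature: a two-node dataflow graph (produce, then double) implemented as semaphore-synchronised threads over a bounded buffer. The thread code is the "wolf"; the deterministic dataflow behaviour it implements (the "sheep") is what a richer implementation model would expose to analysis. (Python rather than C, purely for illustration.)

```python
# Two-node dataflow pipeline hidden inside semaphore-synchronised
# threads: whatever the interleaving, the output is deterministic.
import threading

SLOTS = 2                       # bounded buffer between the two nodes
buf = [None] * SLOTS
free = threading.Semaphore(SLOTS)
full = threading.Semaphore(0)
results = []

def producer(n):
    for i in range(n):
        free.acquire()          # wait for a free slot
        buf[i % SLOTS] = i      # emit dataflow token i
        full.release()

def doubler(n):
    for i in range(n):
        full.acquire()          # wait for a token
        results.append(buf[i % SLOTS] * 2)
        free.release()

t1 = threading.Thread(target=producer, args=(4,))
t2 = threading.Thread(target=doubler, args=(4,))
t1.start(); t2.start(); t1.join(); t2.join()
assert results == [0, 2, 4, 6]  # deterministic despite concurrency
```

The semaphores enforce exactly the token dependencies of the dataflow graph, which is why the nondeterministic scheduler cannot change the result, and why exposing that graph makes the implementation analyzable.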

    Toward the certification of safety-related systems using ML techniques: the ACAS-Xu experience

    No full text
    In the context of the use of Machine Learning (ML) techniques in the development of safety-critical applications for both airborne and ground aeronautical products, this paper proposes elements of reasoning for conformity to the future industrial standard. Indeed, this contribution is based on the EUROCAE WG-114/SAE G-34 ongoing standardization work that will produce the guidance to support the future certification/approval objectives. The proposed argumentation is structured using assurance case patterns that will support the demonstration of compliance with assurance objectives of the new standard. Finally, these patterns are applied to the ACAS-Xu use case to contribute to a future conformity demonstration using evidence from ML development process outputs. Disclaimer: This paper is based on the EUROCAE WG-114/SAE G-34 standardization results at the time of writing. Though some of the authors are active members of the working group, it is a free interpretation of the current draft work and only reflects the authors' view. As the working group has not published any released outcomes yet, some parts of the described argumentation may have to be modified in the future to conform to the final standard objectives.