
    On Using Blockchains for Safety-Critical Systems

    Innovation today is driven mainly by software: companies must continuously rejuvenate their product portfolios with new features to stay ahead of their competitors. Recent trends, for example, explore the application of blockchains to domains other than finance. This paper analyzes the state of the art for safety-critical systems, as found in modern vehicles such as self-driving cars, in smart energy systems, and in home automation, focusing on specific challenges where key ideas behind blockchains might be applicable. Next, the potential benefits unlocked by applying such ideas are presented and discussed for each usage scenario. Finally, a research agenda is outlined, summarizing the remaining challenges for successfully applying blockchains to safety-critical cyber-physical systems.

    Error Checking for Sparse Systolic Tensor Arrays

    Structured sparsity is an efficient way to prune the complexity of modern Machine Learning (ML) applications and to simplify the handling of sparse data in hardware. In such cases, the acceleration of structured-sparse ML models is handled by sparse systolic tensor arrays. The increasing prevalence of ML in safety-critical systems requires enhancing sparse tensor arrays with online error detection to manage random hardware failures. Algorithm-based fault tolerance has been proposed as a low-cost mechanism to check the results of computations online against random hardware failures. In this work, we address a key architectural challenge with structured-sparse tensor arrays: how to provide online error checking for a range of structured sparsity levels while maintaining high utilization of the hardware. Experimental results highlight the minimal hardware overhead incurred by the proposed checking logic and its error detection properties after injecting random hardware faults into sparse tensor arrays executing layers of the ResNet50 CNN.
    Comment: AICAS 202
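
    To make the checksum idea behind algorithm-based fault tolerance concrete, the sketch below shows the classic dense-matrix version of the technique. It is a minimal NumPy illustration, not the paper's hardware checking logic for sparse tensor arrays; the function name and tolerance are illustrative assumptions.

        import numpy as np

        def abft_matmul_check(A, B, tol=1e-6):
            # ABFT for C = A @ B: append a column-sum checksum row to A
            # and a row-sum checksum column to B, then verify that the
            # checksums of the product match the sums of C.
            A_enc = np.vstack([A, A.sum(axis=0)])
            B_enc = np.hstack([B, B.sum(axis=1, keepdims=True)])

            C_enc = A_enc @ B_enc  # in hardware, done by the tensor array
            C = C_enc[:-1, :-1]

            # The last row/column of C_enc must equal the sums of C.
            row_ok = np.allclose(C_enc[-1, :-1], C.sum(axis=0), atol=tol)
            col_ok = np.allclose(C_enc[:-1, -1], C.sum(axis=1), atol=tol)
            return C, (row_ok and col_ok)

    A mismatch in either checksum signals a fault in the corresponding row or column of the computation, which is what enables online detection at the cost of only one extra row and column of multiply-accumulates.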

    Towards Structured Evaluation of Deep Neural Network Supervisors

    Deep Neural Networks (DNNs) have improved the quality of several non-safety-related products in recent years. However, before DNNs are deployed to safety-critical applications, their robustness needs to be systematically analyzed. A common challenge for DNNs occurs when input is dissimilar to the training set, which might lead to high confidence predictions despite the network lacking proper knowledge of the input. Several previous studies have proposed complementing DNNs with a supervisor that detects when inputs are outside the scope of the network. Most of these supervisors, however, are developed and tested for a selected scenario using a specific performance metric. In this work, we emphasize the need to assess and compare the performance of supervisors in a structured way. We present a framework comprising four datasets organized into six test cases, combined with seven evaluation metrics. The test cases provide varying complexity and include data from publicly available sources as well as a novel dataset of images from simulated driving scenarios, which we plan to make publicly available. Our framework can be used to support DNN supervisor evaluation, which in turn could be used to motivate the development, validation, and deployment of DNNs in safety-critical applications.
    Comment: Preprint of paper accepted for presentation at The First IEEE International Conference on Artificial Intelligence Testing, April 4-9, 2019, San Francisco East Bay, California, US
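
    As an example of the kind of supervisor and metric such a framework could evaluate, the sketch below implements a common baseline (flagging inputs whose maximum softmax probability falls below a threshold) and scores it with AUROC. This is a minimal illustration under assumed inputs, not the supervisors or metrics from the paper, and the threshold is an arbitrary placeholder.

        import numpy as np

        def softmax_confidence(logits):
            # Max softmax probability per input; low values suggest the
            # input may be outside the scope of the network.
            z = logits - logits.max(axis=1, keepdims=True)  # stability
            probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
            return probs.max(axis=1)

        def supervisor_reject(logits, threshold=0.9):
            # Flag an input as out-of-scope when confidence is low.
            return softmax_confidence(logits) < threshold

        def auroc(inlier_scores, outlier_scores):
            # Mann-Whitney form of AUROC: the probability that a random
            # outlier scores above a random inlier (ties count half).
            diff = outlier_scores[:, None] - inlier_scores[None, :]
            return (diff > 0).mean() + 0.5 * (diff == 0).mean()

        # Usage: score inputs by uncertainty (1 - confidence), so
        # out-of-scope data should score higher than in-scope data:
        # auroc(1 - softmax_confidence(in_logits),
        #       1 - softmax_confidence(out_logits))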