
    Functional Uncertainty in Real-Time Safety-Critical Systems

    Safety-critical cyber-physical systems increasingly use components that cannot provide deterministic guarantees of the correctness of their functional outputs; instead, they characterize each outcome of a computation with an associated uncertainty regarding its correctness. The problem of assuring correctness in such systems is considered. A model is proposed in which components are characterized by bounds on the degree of uncertainty under both worst-case and typical circumstances; the objective is to assure safety under all circumstances while optimizing performance under typical circumstances. The problem of selecting components for execution so as to obtain a result of a specified minimum uncertainty as soon as possible, while guaranteeing to do so within a specified deadline, is considered. An optimal semi-adaptive algorithm for solving this problem is derived, and its scalability is investigated via simulation experiments comparing the semi-adaptive scheme with a purely static approach.
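To make the component-selection problem concrete, here is a minimal sketch, not the paper's optimal semi-adaptive algorithm. It assumes each component carries a worst-case execution time (`wcet`) and an upper bound `u` on the probability its output is incorrect, and that components err independently, so running a set of them leaves a combined uncertainty equal to the product of their individual bounds. The greedy ranking by time spent per nat of uncertainty removed is an illustrative heuristic, not taken from the source.

```python
import math

def static_selection(components, target_u, deadline):
    """Greedy static sketch: rank components by worst-case time spent
    per nat of uncertainty removed (wcet / -ln(u)), then take them in
    that order until the combined uncertainty bound meets target_u.
    Returns the chosen component names, or None if the deadline would
    be exceeded before the target is reached."""
    order = sorted(components, key=lambda c: c["wcet"] / -math.log(c["u"]))
    chosen, total_wcet, combined_u = [], 0.0, 1.0
    for c in order:
        if combined_u <= target_u:
            break
        if total_wcet + c["wcet"] > deadline:
            return None  # cannot meet the uncertainty target in time
        chosen.append(c["name"])
        total_wcet += c["wcet"]
        combined_u *= c["u"]  # independence assumption
    return chosen if combined_u <= target_u else None
```

For example, with components A (wcet 2, u 0.1), B (wcet 1, u 0.3), and C (wcet 4, u 0.05), a target uncertainty of 0.01 is reachable within a deadline of 10 but not within a deadline of 5.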

    Optimally ordering IDK classifiers subject to deadlines

    A classifier is a software component, often based on Deep Learning, that categorizes each input provided to it into one of a fixed set of classes. An IDK classifier may additionally output “I Don’t Know” (IDK) for certain inputs. Multiple distinct IDK classifiers may be available for the same classification problem, offering different trade-offs between effectiveness, i.e. the probability of successful classification, and efficiency, i.e. execution time. Optimal offline algorithms are proposed for sequentially ordering IDK classifiers such that the expected duration to successfully classify an input is minimized, optionally subject to a hard deadline on the maximum time permitted for classification. Solutions are provided for independent and dependent relationships between pairs of classifiers, as well as a mix of the two.
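The unconstrained independent case admits a clean closed form that is worth seeing. If classifier i takes time t_i and succeeds (returns a non-IDK answer) with probability p_i, and classifier i runs only when all earlier ones returned IDK, the expected time of a sequence is E = sum_i t_i * prod_{j<i}(1 - p_j). A standard adjacent-exchange argument then shows that sorting by the ratio t_i / p_i minimizes E when the classifiers are independent and there is no deadline; the deadline-constrained and dependent cases treated in the paper need more machinery. A minimal sketch under those assumptions:

```python
def expected_time(seq):
    """Expected execution time of a sequence of (t, p) pairs, where
    classifier i runs only if all earlier ones returned IDK and
    successes are independent: E = sum_i t_i * prod_{j<i}(1 - p_j)."""
    e, reach = 0.0, 1.0
    for t, p in seq:
        e += reach * t       # we reach this classifier w.p. `reach`
        reach *= 1.0 - p     # ...and fall through if it says IDK
    return e

def optimal_order(classifiers):
    """Independent, deadline-free case: sorting by t/p (time paid per
    unit of success probability) minimizes the expected time, by an
    adjacent-exchange argument."""
    return sorted(classifiers, key=lambda tp: tp[0] / tp[1])
```

For instance, for classifiers (t=5, p=0.9), (t=1, p=0.5), (t=3, p=0.8), the t/p order runs the cheap-per-success classifier first and yields an expected time of 3.0, versus 5.25 for the listed order.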

    Unanimous Prediction for 100% Precision with Application to Learning Semantic Mappings

    © 2016 Association for Computational Linguistics. Can we train a system that, on any new input, either says "don't know" or makes a prediction that is guaranteed to be correct? We answer the question in the affirmative provided our model family is well-specified. Specifically, we introduce the unanimity principle: only predict when all models consistent with the training data predict the same output. We operationalize this principle for semantic parsing, the task of mapping utterances to logical forms. We develop a simple, efficient method that reasons over the infinite set of all consistent models by only checking two of the models. We prove that our method obtains 100% precision even with a modest amount of training data from a possibly adversarial distribution. Empirically, we demonstrate the effectiveness of our approach on the standard GeoQuery dataset.
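The unanimity principle is easy to illustrate over a finite hypothesis class, even though the paper's contribution is handling an infinite class (for semantic parsing) by checking only two representative models. A minimal sketch, assuming a small explicitly enumerated class: keep every hypothesis consistent with the training data, and predict only when they all agree on the test input, otherwise abstain with "IDK". If the true model is in the class, every emitted prediction is correct, which is the 100% precision guarantee.

```python
def unanimous_predict(hypotheses, train, x):
    """Unanimity principle over a finite hypothesis class: predict on
    x only if every hypothesis consistent with the training data
    agrees; otherwise abstain with "IDK"."""
    consistent = [h for h in hypotheses
                  if all(h(xi) == yi for xi, yi in train)]
    preds = {h(x) for h in consistent}
    return preds.pop() if len(preds) == 1 else "IDK"
```

For example, with threshold classifiers h_t(x) = (x >= t) for t in 0..4 and training data {(1, False), (3, True)}, the consistent thresholds are t = 2 and t = 3: both predict True on x = 4 and False on x = 0, but they disagree on x = 2, so the method answers "IDK" there.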