
    Optimizing for confidence - Costs and opportunities at the frontier between abstraction and reality

    Is there a relationship between computing costs and the confidence people place in the behavior of computing systems? What are the tuning knobs one can use to optimize systems for human confidence rather than for correctness in purely abstract models? This report explores these questions by reviewing the mechanisms by which people build confidence in the match between the physical-world behavior of machines and their abstract intuition of this behavior according to models or programming language semantics. We highlight in particular that a bottom-up approach relies on arbitrary trust in the accuracy of I/O devices, and that there exist clear cost trade-offs in the use of I/O devices in computing systems. We also show several methods that alleviate the need to trust I/O devices arbitrarily and instead build confidence incrementally "from the outside" by treating systems as black-box entities. We highlight cases where these approaches can reach a given confidence level at a lower cost than bottom-up approaches.
    Comment: 11 pages, 1 figure
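    The abstract stops short of giving an algorithm, but the cost/confidence trade-off it describes can be made concrete with a standard statistical device. The sketch below is a hypothetical illustration (not taken from the report) using the "rule of three": after n consecutive failure-free black-box trials, 3/n is an approximate 95% upper confidence bound on the per-trial failure probability, so each extra decimal of confidence costs ten times as many trials.

```python
def confidence_cost(n_trials: int, cost_per_trial: float) -> tuple[float, float]:
    """After n_trials consecutive failure-free black-box runs, the
    'rule of three' gives ~3/n as a 95% upper confidence bound on the
    per-run failure probability.  Returns (bound, total_cost)."""
    bound = 3.0 / n_trials
    return bound, n_trials * cost_per_trial

# Cost of reaching successively tighter confidence bounds "from the outside":
for n in (30, 300, 3000):
    bound, cost = confidence_cost(n, cost_per_trial=0.5)
    print(f"{n:5d} trials: failure rate < {bound:.3f} (95%), cost = {cost:7.1f}")
```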

    Semantic Service Substitution in Pervasive Environments

    A computing infrastructure where everything is a service offers many new system and application possibilities. Among the main challenges, however, is service substitution, needed to keep applications executing in such heterogeneous environments. An application should continue to execute even when a service disappears, and should be able to exploit the environment by switching to services with better QoS when they become available. In this article, we define a generic service model and describe equivalence relations between services based on the functionalities they propose and their non-functional QoS properties. We define semantic equivalence relations between services and degrees of equivalence between non-functional QoS properties. Using these relations, we propose semantic substitution mechanisms, triggered by the appearance and disappearance of services, that fit the application's needs. We developed a prototype as a proof of concept and evaluated its efficiency on a real use case.
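    The abstract does not spell the mechanism out, so the following is a rough illustration only: it reduces semantic equivalence to an exact match on a functionality label and ranks candidates by a single hypothetical QoS property (latency_ms), whereas the article defines richer equivalence relations and degrees over non-functional properties.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Service:
    name: str
    functionality: str    # semantic description (e.g. an ontology concept)
    latency_ms: float     # one stand-in non-functional QoS property

class Registry:
    def __init__(self):
        self.available: list[Service] = []

    def appear(self, svc: Service) -> None:
        self.available.append(svc)

    def disappear(self, svc: Service) -> Service | None:
        """On disappearance, substitute the equivalent service with best QoS."""
        self.available.remove(svc)
        candidates = [s for s in self.available
                      if s.functionality == svc.functionality]
        return min(candidates, key=lambda s: s.latency_ms, default=None)

reg = Registry()
printer_a = Service("printer-a", "print", latency_ms=40.0)
printer_b = Service("printer-b", "print", latency_ms=15.0)
for s in (printer_a, printer_b):
    reg.appear(s)
print(reg.disappear(printer_a))   # printer-b substitutes for printer-a
```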

    User-friendly Support for Common Concepts in a Lightweight Verifier

    Machine verification of formal arguments can only increase our confidence in the correctness of those arguments, but the costs of employing machine verification still outweigh the benefits for some common kinds of formal reasoning activities. As a result, usability is becoming increasingly important in the design of formal verification tools. We describe the "aartifact" lightweight verification system, designed for processing formal arguments involving basic, ubiquitous mathematical concepts. The system is a prototype for investigating potential techniques for improving the usability of formal verification systems. It leverages techniques drawn both from existing work and from our own efforts. In addition to a parser for a familiar concrete syntax and a mechanism for automated syntax lookup, the system integrates (1) a basic logical inference algorithm, (2) a database of propositions governing common mathematical concepts, and (3) a data structure that computes congruence closures of expressions involving relations found in this database. Together, these components allow the system to better accommodate the expectations of users interested in verifying formal arguments involving algebraic and logical manipulations of numbers, sets, vectors, and related operators and predicates. We demonstrate the reasonable performance of this system on typical formal arguments and briefly discuss how the system's design contributed to its usability in two case studies.
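    Of the three components, the congruence closure data structure is the most self-contained, so here is a minimal, naive sketch of the idea (union-find over terms plus a quadratic fixpoint; the names and term encoding are ours, and a real verifier such as aartifact would use a more efficient structure): from a = b it deduces f(a) = f(b) without any axiom about f.

```python
class CongruenceClosure:
    def __init__(self):
        self.parent = {}   # union-find parent pointers over terms
        self.apps = []     # application terms seen, e.g. ("f", "a")

    def _add(self, t):
        if t not in self.parent:
            self.parent[t] = t
            if isinstance(t, tuple):          # f(x, y) encoded as ("f", x, y)
                self.apps.append(t)
                for arg in t[1:]:
                    self._add(arg)

    def _find(self, t):
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]   # path halving
            t = self.parent[t]
        return t

    def union(self, a, b):
        self._add(a); self._add(b)
        ra, rb = self._find(a), self._find(b)
        if ra != rb:
            self.parent[ra] = rb

    def equal(self, a, b):
        """Query a ~ b under the congruence closure of asserted equalities."""
        self._add(a); self._add(b)
        changed = True
        while changed:   # naive fixpoint: merge applications whose heads
            changed = False              # match and whose arguments are equal
            for i, s in enumerate(self.apps):
                for t in self.apps[i + 1:]:
                    if (s[0] == t[0] and len(s) == len(t)
                            and self._find(s) != self._find(t)
                            and all(self._find(x) == self._find(y)
                                    for x, y in zip(s[1:], t[1:]))):
                        self.union(s, t)
                        changed = True
        return self._find(a) == self._find(b)

# From a = b it follows that f(a) = f(b), hence g(f(a)) = g(f(b)).
cc = CongruenceClosure()
cc.union("a", "b")
print(cc.equal(("g", ("f", "a")), ("g", ("f", "b"))))   # True
```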

    Validate implementation correctness using simulation: the TASTE approach

    High-integrity systems operate in hostile environments and must guarantee continuous operation, even when unexpected events happen. In addition, these systems have stringent requirements that must be validated and correctly translated from high-level specifications down to code. All these constraints make the overall development process more time-consuming, especially as the number of system functions keeps increasing over the years. As a result, engineers must validate the system implementation and check that its execution conforms to the specifications. To do so, a traditional approach consists of manually instrumenting the implementation code to trace system activity while operating. However, this is error-prone because the modifications are made by hand, and they may affect the actual behavior of the system. In this paper, we present an approach to validate a system implementation by comparing execution against simulation. For that purpose, we adapt TASTE, a set of tools that eases system development by automating each step as much as possible. In particular, TASTE automates system implementation from functional models (descriptions of system functions and their properties: period, deadline, priority, etc.) and deployment models (processors, buses, and devices to be used). We tailored this tool-chain to create traces during system execution. The generated output shows the activation time of each task, the usage of communication ports (queue sizes, instants at which events are pushed/pulled, etc.), and other relevant execution metrics to be monitored. As a consequence, system engineers can check implementation correctness by comparing simulation and execution metrics.
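    The abstract does not give TASTE's actual trace format, so the sketch below invents a minimal one (task name, activation time in ms) just to show the final comparison step: pair up simulated and observed activations in order and flag those that drift beyond a jitter tolerance.

```python
def check_trace(simulated, executed, jitter_ms=1.0):
    """Compare (task, activation_time_ms) records pairwise and return the
    pairs that disagree on task identity or drift by more than jitter_ms."""
    mismatches = []
    for (s_task, s_t), (e_task, e_t) in zip(simulated, executed):
        if s_task != e_task or abs(s_t - e_t) > jitter_ms:
            mismatches.append((s_task, s_t, e_task, e_t))
    return mismatches

sim  = [("sensor", 0.0), ("control", 2.0), ("sensor", 10.0)]
real = [("sensor", 0.1), ("control", 2.4), ("sensor", 11.8)]
print(check_trace(sim, real))   # [('sensor', 10.0, 'sensor', 11.8)]
```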

    DataHub: Collaborative Data Science & Dataset Version Management at Scale

    Relational databases have limited support for data collaboration, where teams collaboratively curate and analyze large datasets. Inspired by software version control systems like git, we propose (a) a dataset version control system, giving users the ability to create, branch, merge, difference, and search large, divergent collections of datasets, and (b) a platform, DataHub, that gives users the ability to perform collaborative data analysis building on this version control system. We outline the challenges in providing dataset version control at scale.
    Comment: 7 pages
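    As a toy illustration of the version-control half (our own sketch; DataHub itself cannot store full snapshots per commit if it is to work at scale), a git-style dataset repository needs little more than a commit graph, branch heads, and a record-level diff:

```python
import hashlib
import json

class DatasetRepo:
    """Toy sketch of git-style dataset versioning: each commit stores a
    full snapshot (a dict of record-id -> row) plus its parent commits."""
    def __init__(self):
        self.commits = {}                 # commit id -> (snapshot, parents)
        self.branches = {"master": None}  # branch name -> head commit id

    def commit(self, branch, snapshot):
        parent = self.branches[branch]
        cid = hashlib.sha1(json.dumps([snapshot, parent], sort_keys=True)
                           .encode()).hexdigest()[:8]
        self.commits[cid] = (dict(snapshot), [parent] if parent else [])
        self.branches[branch] = cid
        return cid

    def branch(self, new, frm):
        self.branches[new] = self.branches[frm]

    def diff(self, a, b):
        """Records added, removed, or changed between two commits."""
        sa, sb = self.commits[a][0], self.commits[b][0]
        return {"added":   {k: sb[k] for k in sb.keys() - sa.keys()},
                "removed": {k: sa[k] for k in sa.keys() - sb.keys()},
                "changed": {k: (sa[k], sb[k])
                            for k in sa.keys() & sb.keys() if sa[k] != sb[k]}}

repo = DatasetRepo()
c1 = repo.commit("master", {1: "alice", 2: "bob"})
repo.branch("clean", "master")
c2 = repo.commit("clean", {1: "alice", 2: "robert", 3: "carol"})
print(repo.diff(c1, c2))
# {'added': {3: 'carol'}, 'removed': {}, 'changed': {2: ('bob', 'robert')}}
```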