
    On model checking data-independent systems with arrays without reset

    A system is data-independent with respect to a data type X iff the operations it can perform on values of type X are restricted to just equality testing. The system may also store, input and output values of type X. We study model checking of systems which are data-independent with respect to two distinct type variables X and Y, and may in addition use arrays with indices from X and values from Y. Our main interest is the following parameterised model-checking problem: whether a given program satisfies a given temporal-logic formula for all non-empty finite instances of X and Y. Initially, we consider instead the abstraction where X and Y are infinite and where partial functions with finite domains are used to model arrays. Using a translation to data-independent systems without arrays, we show that the mu-calculus model-checking problem is decidable for these systems. From this result, we can deduce properties of all systems with finite instances of X and Y. We show that there is a procedure for the above parameterised model-checking problem of the universal fragment of the mu-calculus, such that it always terminates but may give false negatives. We also deduce that the parameterised model-checking problem of the universal disjunction-free fragment of the mu-calculus is decidable. Practical motivations for model checking data-independent systems with arrays include verification of memory and cache systems, where X is the type of memory addresses, and Y the type of storable values. As an example we verify a fault-tolerant memory interface over a set of unreliable memories. Comment: Appeared in Theory and Practice of Logic Programming, vol. 4, no. 5&6, 200
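    As a hedged illustration of the setting described above, the sketch below models a data-independent system with an array: values of the index type X and the value type Y are only stored, copied, and compared for equality, and the array is a finite partial function from X to Y with no reset operation. The class and method names are invented for this example; it is not the paper's formalism.

```python
# Illustrative sketch (not the paper's formalism) of a system that is
# data-independent in an index type X and a value type Y: it may store,
# input, output, and equality-test values of X and Y, and it uses an
# array modelled as a finite partial function from X to Y.

class DataIndependentMemory:
    def __init__(self):
        self.array = {}          # finite partial function X -> Y

    def write(self, x, y):
        self.array[x] = y        # store a Y value at index x

    def read(self, x):
        # returns None for an index that has never been written;
        # note there is no whole-array reset operation
        return self.array.get(x)

    def holds(self, x, y):
        # the only operation permitted on X and Y values: equality testing
        return self.array.get(x) == y


# Any concrete finite instantiation of X and Y behaves the same way up to
# renaming of values, which is what makes the parameterised question
# (over all finite X and Y) approachable.
mem = DataIndependentMemory()
mem.write("addr0", "v1")
assert mem.holds("addr0", "v1")
assert mem.read("addr1") is None
```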

    Schedulability analysis of timed CSP models using the PAT model checker

    Timed CSP can be used to model and analyse real-time and concurrent behaviour of embedded control systems. Practical CSP implementations combine the CSP model of a real-time control system with prioritized scheduling to achieve efficient and orderly use of limited resources. Schedulability analysis of a timed CSP model of a system with respect to a scheduling scheme and a particular execution platform is important to ensure that the system design satisfies its timing requirements. In this paper, we propose a framework to analyse schedulability of CSP-based designs for non-preemptive fixed-priority multiprocessor scheduling. The framework is based on the PAT model checker and the analysis is done with dense-time model checking on timed CSP models. We also provide a schedulability analysis workflow to construct and analyse, using the proposed framework, a timed CSP model with scheduling from an initial untimed CSP model without scheduling. We demonstrate our schedulability analysis workflow on a case study of control software design for a mobile robot. The proposed approach provides non-pessimistic schedulability results.
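    To make the scheduling policy concrete, the sketch below is a rough discrete-event simulation of non-preemptive fixed-priority scheduling on multiple processors, with an invented job set; it only illustrates the policy being analysed and is not the PAT/timed-CSP analysis itself.

```python
import heapq

# Hypothetical job set: (name, release, wcet, deadline, priority);
# a lower priority number means higher priority.  Values are invented.
JOBS = [
    ("sense", 0, 2,  6, 1),
    ("plan",  0, 3, 10, 2),
    ("drive", 1, 2,  8, 1),
    ("log",   2, 4, 20, 3),
]
PROCESSORS = 2

def simulate_np_fixed_priority(jobs, m):
    """Crude simulation of non-preemptive fixed-priority scheduling on m
    identical processors; returns each job's finish time and deadline."""
    pending = sorted(jobs, key=lambda j: j[1])     # sort by release time
    ready, finish, busy_until = [], {}, [0] * m
    t = 0
    while pending or ready:
        # move released jobs into the priority-ordered ready queue
        while pending and pending[0][1] <= t:
            name, rel, wcet, dl, prio = pending.pop(0)
            heapq.heappush(ready, (prio, rel, name, wcet, dl))
        # dispatch to idle processors; once started, a job runs to completion
        for p in range(m):
            if busy_until[p] <= t and ready:
                prio, rel, name, wcet, dl = heapq.heappop(ready)
                busy_until[p] = t + wcet
                finish[name] = (t + wcet, dl)
        t += 1
    return finish

for name, (end, deadline) in simulate_np_fixed_priority(JOBS, PROCESSORS).items():
    print(f"{name}: finishes at {end}, deadline {deadline} ->",
          "OK" if end <= deadline else "MISS")
```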

    Configurable 3D-integrated focal-plane sensor-processor array architecture

    A mixed-signal Cellular Visual Microprocessor architecture with digital processors is described. An ASIC implementation is also demonstrated. The architecture is composed of a regular sensor readout circuit array, prepared for 3D face-to-face type integration, and one or several cascaded arrays of mainly identical (SIMD) processing elements. The individual array elements are derived from the same general HDL description and can differ in size, aspect ratio, and computing resources.
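    The core idea of the architecture, a regular array of mainly identical processing elements all executing the same instruction on locally read-out pixel data, can be mimicked in software. The sketch below is an illustrative analogue only, not the ASIC or its HDL, and the smoothing operation is an arbitrary example instruction.

```python
# Illustrative software analogue of a SIMD focal-plane processor array:
# every processing element (PE) runs the same instruction on its own
# pixel and its 4-neighbourhood.

def simd_step(frame, instruction):
    """Apply one broadcast instruction at every pixel of a 2D frame."""
    rows, cols = len(frame), len(frame[0])

    def neighbourhood(r, c):
        return [frame[r + dr][c + dc]
                for dr, dc in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= r + dr < rows and 0 <= c + dc < cols]

    return [[instruction(neighbourhood(r, c)) for c in range(cols)]
            for r in range(rows)]

# Example instruction (arbitrary): local averaging over the neighbourhood.
frame = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
for row in simd_step(frame, lambda px: sum(px) // len(px)):
    print(row)
```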

    The natural history of bugs: using formal methods to analyse software related failures in space missions

    Space missions force engineers to make complex trade-offs between many different constraints including cost, mass, power, functionality and reliability. These constraints create a continual need to innovate. Many advances rely upon software, for instance to control and monitor the next generation ‘electron cyclotron resonance’ ion-drives for deep space missions. Programmers face numerous challenges. It is extremely difficult to conduct valid ground-based tests for the code used in space missions. Abstract models and simulations of satellites can be misleading. These issues are compounded by the use of ‘band-aid’ software to fix design mistakes and compromises in other aspects of space systems engineering. Programmers must often re-code missions in flight. This introduces considerable risks. It should, therefore, not be a surprise that so many space missions fail to achieve their objectives. The costs of failure are considerable. Small launch vehicles, such as the U.S. Pegasus system, cost around $18 million. Payloads range from $4 million up to $1 billion for security-related satellites. These costs do not include consequent business losses. In 2005, Intelsat wrote off $73 million from the failure of a single uninsured satellite. It is clearly important that we learn as much as possible from those failures that do occur. The following pages examine the roles that formal methods might play in the analysis of software failures in space missions.

    A Silicon Surface Code Architecture Resilient Against Leakage Errors

    Spin qubits in silicon quantum dots are one of the most promising building blocks for large scale quantum computers thanks to their high qubit density and compatibility with the existing semiconductor technologies. High fidelity single-qubit gates exceeding the threshold of error correction codes like the surface code have been demonstrated, while two-qubit gates have reached 98% fidelity and are improving rapidly. However, there are other types of error, such as charge leakage and propagation, that may occur in quantum dot arrays and which cannot be corrected by quantum error correction codes, making them potentially damaging even when their probability is small. We propose a surface code architecture for silicon quantum dot spin qubits that is robust against leakage errors by incorporating multi-electron mediator dots. Charge leakage in the qubit dots is transferred to the mediator dots via charge relaxation processes and then removed using charge reservoirs attached to the mediators. A stabiliser-check cycle, optimised for our hardware, then removes the correlations between the residual physical errors. Through simulations we obtain the surface code threshold for the charge leakage errors and show that in our architecture the damage due to charge leakage errors is reduced to a level similar to that of the usual depolarising gate noise. Spin leakage errors in our architecture are constrained to only ancilla qubits and can be removed during quantum error correction via reinitialisations of ancillae, which ensures the robustness of our architecture against spin leakage as well. Our use of elongated mediator dots creates space throughout the quantum dot array for charge reservoirs, measuring devices and control gates, providing scalability in the design.
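    The threshold behaviour referred to above (below a critical physical error rate, increasing the code distance suppresses logical errors) can be illustrated with a far simpler toy than the surface code: a classical repetition code under independent bit flips with majority-vote decoding. The Monte Carlo sketch below is only that analogy; it is not the paper's simulation and contains no model of charge or spin leakage.

```python
import random

def logical_error_rate(p, distance, trials=20000):
    """Estimate the logical error rate of a distance-d repetition code with
    majority-vote decoding under i.i.d. bit-flip noise of probability p.
    A logical error occurs when more than half of the bits flip."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(distance))
        if flips > distance // 2:
            failures += 1
    return failures / trials

# Below the threshold of this toy model, larger distances help;
# near and above it, they stop helping.
for p in (0.05, 0.20, 0.45):
    rates = [logical_error_rate(p, d) for d in (3, 7, 11)]
    print(f"physical p = {p:.2f}:",
          ", ".join(f"d={d}: {r:.4f}" for d, r in zip((3, 7, 11), rates)))
```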

    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chips (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design, in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would impact heavily, even prohibitively, the design, manufacturing, and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. In our attempt to reduce the overheads of fault-tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints.

    On Decidable Growth-Rate Properties of Imperative Programs

    In 2008, Ben-Amram, Jones and Kristiansen showed that for a simple "core" programming language - an imperative language with bounded loops, and arithmetics limited to addition and multiplication - it was possible to decide precisely whether a program had certain growth-rate properties, namely polynomial (or linear) bounds on computed values, or on the running time. This work emphasized the role of the core language in mitigating the notorious undecidability of program properties, so that one deals with decidable problems. A natural and intriguing problem was whether more elements could be added to the core language, improving its utility, while keeping the growth-rate properties decidable. In particular, the method presented could not handle a command that resets a variable to zero. This paper shows how to handle resets. The analysis is given in a logical style (proof rules), and its complexity is shown to be PSPACE-complete (in contrast, without resets, the problem was PTIME). The analysis algorithm evolved from the previous solution in an interesting way: focus was shifted from proving a bound to disproving it, and the algorithm works top-down rather than bottom-up.
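    A hedged example of the kind of program the core language captures is given below, written in Python syntax purely for readability (the paper defines its own imperative core language). Loops are bounded, arithmetic is limited to addition and multiplication, and the assignment of zero plays the role of the reset command whose analysis is this paper's contribution.

```python
# Core-language-style program in Python syntax (illustrative only).
# Bounded loops, + and * only; "x = 0" acts as the reset command.

def core_program(n):
    x, y, z = 1, 1, 0
    for _ in range(n):      # bounded loop: at most n iterations
        y = y + 1           # y is linearly bounded in n
        z = z + y           # z is polynomially (quadratically) bounded in n
        x = x * 2           # x is NOT polynomially bounded: it doubles each time
    x = 0                   # reset: from here on x is trivially bounded again
    for _ in range(n):
        x = x + z           # x is now polynomial in n
    return x, y, z

print(core_program(5))      # e.g. n = 5
```

    The growth-rate question the paper decides is exactly which of these variables admit a polynomial bound in the input, and the reset in the middle is the construct that the earlier 2008 analysis could not handle.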

    New results on rewrite-based satisfiability procedures

    Program analysis and verification require decision procedures to reason on theories of data structures. Many problems can be reduced to the satisfiability of sets of ground literals in a theory T. If a sound and complete inference system for first-order logic is guaranteed to terminate on T-satisfiability problems, any theorem-proving strategy with that system and a fair search plan is a T-satisfiability procedure. We prove termination of a rewrite-based first-order engine on the theories of records, integer offsets, integer offsets modulo and lists. We give a modularity theorem stating sufficient conditions for termination on a combination of theories, given termination on each. The above theories, as well as others, satisfy these conditions. We introduce several sets of benchmarks on these theories and their combinations, including both parametric synthetic benchmarks to test scalability, and real-world problems to test performance on huge sets of literals. We compare the rewrite-based theorem prover E with the validity checkers CVC and CVC Lite. Contrary to the folklore that a general-purpose prover cannot compete with reasoners with built-in theories, the experiments are overall favorable to the theorem prover, showing that the rewriting approach is not only elegant and conceptually simple but also has important practical implications. Comment: To appear in the ACM Transactions on Computational Logic, 49 pages
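    To make the underlying task concrete, deciding T-satisfiability of a set of ground literals can be illustrated in its very simplest form, pure equality over constants, with a union-find check. The sketch below is a drastically simplified stand-in for exposition; it is not the rewrite-based inference system and does not cover the theories of records, offsets, or lists treated in the paper.

```python
# Toy satisfiability check for ground equality/disequality literals over
# constants (pure equality, no function symbols) -- a much-simplified
# illustration of the kind of problem a T-satisfiability procedure decides.

def find(parent, x):
    while parent.setdefault(x, x) != x:
        parent[x] = parent[parent[x]]        # path halving
        x = parent[x]
    return x

def satisfiable(equalities, disequalities):
    parent = {}
    for a, b in equalities:                  # merge the classes of a and b
        parent[find(parent, a)] = find(parent, b)
    # unsatisfiable iff some disequality relates two members of one class
    return all(find(parent, a) != find(parent, b) for a, b in disequalities)

# {x = y, y = z, z != x} is unsatisfiable; dropping y = z makes it satisfiable.
print(satisfiable([("x", "y"), ("y", "z")], [("z", "x")]))   # False
print(satisfiable([("x", "y")], [("z", "x")]))               # True
```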

    Ground terminal expert (GTEX). Part 2: Expert system diagnostics for a 30/20 Gigahertz satellite transponder

    A research effort was undertaken to investigate how expert system technology could be applied to a satellite communications system. The focus of the expert system is the satellite earth station. A proof-of-concept expert system called the Ground Terminal Expert (GTEX) was developed at the University of Akron in collaboration with the NASA Lewis Research Center. With the increasing demand for satellite earth stations, maintenance is becoming a vital issue. Vendors of such systems will be looking for cost-effective means of maintaining them. The objective of GTEX is to aid in the diagnosis of faults occurring in the digital earth station. GTEX was developed on a personal computer using the Automated Reasoning Tool for Information Management (ART-IM) developed by the Inference Corporation. Developed for the Phase 2 digital earth station, GTEX is a part of the Systems Integration Test and Evaluation (SITE) facility located at the NASA Lewis Research Center.
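    At its core, a diagnostic expert system of this kind forward-chains over fault rules until no new conclusions can be drawn. The toy sketch below illustrates that mechanism only; it does not reproduce GTEX or the ART-IM tool, and the symptoms and rules are invented for the example.

```python
# Toy forward-chaining diagnosis in the spirit of a rule-based expert
# system for an earth-station signal chain.  Rules, symptoms, and
# conclusions are invented; they are NOT taken from GTEX or ART-IM.

RULES = [
    ({"no_carrier_lock", "upconverter_power_ok"},    "suspect_modem_fault"),
    ({"no_carrier_lock", "upconverter_power_low"},   "suspect_upconverter_fault"),
    ({"high_bit_error_rate", "low_eb_no"},           "suspect_antenna_pointing"),
    ({"suspect_modem_fault", "loopback_test_fails"}, "replace_modem_card"),
]

def diagnose(observations):
    """Forward-chain over RULES until no new facts can be derived."""
    facts = set(observations)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts - set(observations)         # only the derived conclusions

print(diagnose({"no_carrier_lock", "upconverter_power_ok", "loopback_test_fails"}))
# derives suspect_modem_fault and replace_modem_card (set order may vary)
```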
