282 research outputs found

    Author index volume 49 (1987)


    Annotated bibliography on global states and times in distributed systems


    A suite of definitions for consistency criteria in distributed shared memories

    A shared memory built on top of a distributed system constitutes a distributed shared memory (DSM). Although many protocols implementing DSMs in various contexts have been proposed, no homogeneous set of definitions has been given for the many semantics offered by these implementations. This paper provides a suite of such definitions for the atomic, sequential, causal, PRAM and a few other consistency criteria. These definitions are based on a single framework: a parallel computation is defined as a partial order on the set of read and write operations invoked by processes, and a consistency criterion is defined as a constraint on this partial order. Such an approach provides a simple classification of consistency criteria, from the most to the least constrained. This paper can also be considered a survey of consistency criteria for DSMs.
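
    As a rough illustration of the framework (not code from the paper), the sketch below brute-forces a sequential-consistency check for a tiny two-process history: a computation is a set of read/write operations with a per-process program order, and the criterion holds if some legal total order extends it. The history and helper names are invented for the example.

    from itertools import permutations

    # An operation is (process, kind, variable, value); this history is a
    # made-up example, not one taken from the paper.
    history = {
        "p1": [("p1", "w", "x", 1), ("p1", "r", "x", 2)],
        "p2": [("p2", "w", "x", 2), ("p2", "r", "x", 1)],
    }

    def legal(sequence):
        # A total order is legal if every read returns the last value written.
        last = {}
        for _proc, kind, var, val in sequence:
            if kind == "w":
                last[var] = val
            elif last.get(var) != val:
                return False
        return True

    def sequentially_consistent(history):
        # Look for a legal total order that preserves every program order.
        ops = [op for prog in history.values() for op in prog]
        for perm in permutations(ops):
            if legal(perm) and all(
                tuple(op for op in perm if op[0] == p) == tuple(prog)
                for p, prog in history.items()
            ):
                return True
        return False

    print(sequentially_consistent(history))   # False: no legal interleaving exists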

    Causal reasoning about distributed programs

    We present an integrated approach to the specification, verification and testing of distributed programs. We show how global properties defined by transition axiom specifications can be interpreted as definitions of causal relationships between process states. We explain why reasoning about causal rather than global relationships yields a clearer picture of distributed processing.

    We present a proof system for showing the partial correctness of CSP programs that places strict restrictions on assertions. It admits no global assertions. A process annotation may reference only local state. Glue predicates relate pairs of process states at points of interprocess communication. No assertion references auxiliary variables; appropriate use of control predicates and vector clock values eliminates the need for them. Our proof system emphasizes causality. We do not prove processes correct in isolation. We instead track causality as we write our annotations. When we come to a send or receive, we consider all the statements that could communicate with it, and use the semantics of CSP message passing to derive its postcondition. We show that our CSP proof system is sound and relatively complete, and that we need only recursive assertions to prove that any program in our fragment of CSP is partially correct. Our proof system is, therefore, as powerful as other proof systems for CSP.

    We extend our work to develop proof systems for asynchronous communication. For each proof system, our motivation is to be able to write proofs that show that code satisfies its specification, while making only assertions we can use to define the aspects of process state that we should trace during test runs and check during postmortem analysis. We can trace the assertions we make without having to modify program code or add synchronization or message passing.

    Why, if we verify correctness, would we want to test? We observe that a proof, like a program, is susceptible to error. By tracing and analyzing program state during testing, we can build our confidence that our proof is valid.
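
    For readers unfamiliar with the vector clock values the proof system relies on, the following minimal Python sketch (the process setup and method names are assumptions, not taken from the dissertation) shows the standard rules: increment your own entry on a local event, piggyback the clock on sends, and take the componentwise maximum on receives; happened-before then reduces to a componentwise comparison.

    class VectorClock:
        def __init__(self, pid, n):
            self.pid = pid              # index of the owning process
            self.v = [0] * n            # one counter per process

        def local_event(self):
            self.v[self.pid] += 1

        def send(self):
            self.local_event()
            return list(self.v)         # timestamp piggybacked on the message

        def receive(self, ts):
            self.v = [max(a, b) for a, b in zip(self.v, ts)]
            self.local_event()

    def happened_before(u, w):
        # u -> w  iff  u <= w componentwise and u != w
        return all(a <= b for a, b in zip(u, w)) and u != w

    p0, p1 = VectorClock(0, 2), VectorClock(1, 2)
    t = p0.send()                       # p0 sends a message
    p1.receive(t)                       # p1 receives it
    print(happened_before(t, p1.v))     # True: the send causally precedes the receive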

    Fifty years of Hoare's Logic

    We present a history of Hoare's logic. (79 pages; to appear in Formal Aspects of Computing.)

    Specification-driven design of custom hardware in HOP

    Technical report. We present a language, "Hardware viewed as Objects and Processes" (HOP), for specifying the structure, behavior, and timing of hardware systems. HOP embodies a simple process model for lock-step synchronous processes. Processes may be described both as a black box and as a collection of interacting sub-processes. The latter can be statically simplified using an algorithm called PARCOMP, which symbolically simulates a collection of interacting processes. The advantages claimed for HOP include simple semantics, intuitiveness, high expressive power, and numerous provisions to support easily verifiable designs all the way to VLSI layout. After introducing HOP and presenting some of the results obtained from experimenting with the HOP design system, we present the design of a large hardware system (the "Utah Simulation Engine") currently being developed to speed up distributed discrete-event simulation using Time Warp. Issues in the specification-driven design of this system are discussed and illustrated using HOP.
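
    The following toy sketch, written in Python rather than HOP and using invented counter/latch processes, conveys the idea behind PARCOMP: two lock-step synchronous processes are composed into a single product process that advances both in the same clock step, with one process's output wired to the other's input.

    def counter(state, _inp):
        # Increments every clock step and exposes its value.
        return state + 1, state + 1

    def latch(state, inp):
        # Stores last step's input and outputs the previously stored value.
        return inp, state

    def parcomp(proc_a, proc_b):
        # Product process: both sub-processes advance in the same clock step,
        # with proc_a's output wired to proc_b's input.
        def composed(state, inp):
            sa, sb = state
            sa2, out_a = proc_a(sa, inp)
            sb2, out_b = proc_b(sb, out_a)
            return (sa2, sb2), out_b
        return composed

    system = parcomp(counter, latch)
    state = (0, 0)
    for step in range(4):
        state, out = system(state, None)
        print(step, out)   # the latch lags the counter by one clock step: 0 1 2 3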

    A Survey and analysis of algorithms for the detection of termination in a distributed system

    This paper looks at algorithms for the detection of termination in a distributed system and analyzes them for effectiveness and efficiency. A survey of the published algorithms for distributed termination detection is given and each algorithm is evaluated. Both centralized distributed systems and fully distributed systems are reviewed. The algorithms are analyzed for their overhead, and conclusions are drawn about the situations in which each can be used, e.g. an operating system, a real-time system, or a user application. An original algorithm is presented for the asynchronous case with first-in-first-out message ordering. It allows any process to initiate detection of termination and makes use of multiple tokens.
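
    For context, the sketch below shows the classic Dijkstra-Feijen-van Gasteren ring-token scheme, not the original multi-token algorithm presented in the paper: a circulating token announces termination only if it returns white after visiting nothing but passive, white processes.

    class Proc:
        def __init__(self):
            self.passive = True
            self.colour = "white"   # turns black after receiving a basic message

    def detect(ring):
        # Process 0 circulates a white token; termination is announced only if
        # the token comes back white with every process passive.  (In a real run
        # the initiator would wait and retry instead of returning False.)
        while True:
            token = "white"
            for p in ring[1:] + ring[:1]:      # token visits 1, 2, ..., n-1, 0
                if not p.passive:
                    return False               # computation is still running
                if p.colour == "black":
                    token = "black"            # activity seen since the last round
                p.colour = "white"
            if token == "white":
                return True                    # no active process, nothing in flight
            # otherwise start another round with a fresh white token

    ring = [Proc() for _ in range(4)]
    print(detect(ring))                        # True: all processes are idle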

    Causal distributed assert statements

    Monitoring a program's execution is fundamental to the debugging, testing and maintenance phases of program development. This research addresses the issue of monitoring the execution of a distributed program. In particular, we are concerned with efficient techniques for evaluating global state predicates for distributed programs. The global state of a distributed program is not well-defined, making the monitoring task complex compared to that of a sequential program. Processes of a distributed program execute concurrently, and the events of the program cannot be totally ordered. Each process has its own local memory, and the local memories are physically separate.

    Despite the difficulties of defining a distributed computation's states, monitoring a distributed program requires reasoning about the constituent processes' execution as a single collective entity. We have extrapolated the semantics of the sequential program's assert statement into the distributed context. A distributed assert statement is a global predicate that is anchored at a control point of one process, and that is evaluated when that process executes the assert.

    We have developed a runtime method for monitoring both stable and unstable properties that does not disrupt the computation of the distributed system. A distributed assert statement is evaluated with that statement's causal global state, which incorporates the state of the system as a whole insofar as it may have causal impact on the assert statement. A runtime protocol has been implemented that constructs the causal global state and evaluates the assert statement. No additional synchronization or message passing is imposed on the distributed application, although some message sizes are increased to propagate state information. The causal global state is immediately available, providing real-time feedback.
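
    A minimal sketch of the piggybacking idea, with invented names and a simple per-process version counter standing in for the full protocol: each process attaches its current view of every process's latest monitored state to outgoing messages, so an assert can be evaluated against the causally most recent states known at the asserting process, without extra messages or synchronization.

    class Monitor:
        def __init__(self, pid):
            self.pid = pid
            self.view = {}                      # pid -> (version, monitored state)
            self.version = 0

        def update_state(self, state):
            self.version += 1
            self.view[self.pid] = (self.version, state)

        def on_send(self, payload):
            return {"payload": payload, "view": dict(self.view)}   # piggyback only

        def on_receive(self, msg):
            for pid, (ver, st) in msg["view"].items():
                if pid not in self.view or ver > self.view[pid][0]:
                    self.view[pid] = (ver, st)  # keep causally newer information

        def causal_assert(self, predicate):
            # Evaluate a global predicate over the causal view at this control point.
            return predicate({pid: st for pid, (_, st) in self.view.items()})

    p0, p1 = Monitor(0), Monitor(1)
    p0.update_state({"x": 1})
    p1.update_state({"y": 2})
    p1.on_receive(p0.on_send("hello"))          # p0's state rides on the message
    print(p1.causal_assert(lambda s: s[0]["x"] + s[1]["y"] == 3))   # True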

    Empirical Study of Concurrent Programming Paradigms

    Various concurrent programming paradigms have been proposed by language designers in an effort to simplify some of the unique constructs required to handle concurrent programming tasks. Despite these different approaches, however, no clear winner has been generally accepted by software developers, and the different paradigms are regarded as having strengths and weaknesses in certain areas. This thesis was motivated by the desire to investigate whether there are measurable differences between two widely differing paradigms for concurrent programming: threads vs. Communicating Sequential Processes (CSP). The mechanism for observing and comparing these paradigms was a randomized controlled trial of two groups of participants who completed identical tasks in one of the two paradigms. The study was run in Fall 2015 with 88 student participants, primarily from the Department of Computer Science at UNLV. I examined programming accuracy and comprehension rates among participants in three common shared-memory problem areas introduced by concurrent programming. The results were measured using a token accuracy map algorithm, which matches the token strings of a participant's answer against a correct solution. The overall results show that for two relatively straightforward tasks using shared processes and memory, both paradigms were reasonably well understood, with a possible small learning advantage in favor of CSP in two of the tasks. In a more complex example combining task coordination and memory sharing, however, the participants in the CSP group struggled to grasp the guarded blocking and communication channels needed in the CSP model and performed measurably worse.
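
    To make the contrast concrete (these are illustrative snippets, not the study's actual tasks), the sketch below implements the same counter in both styles in Python: the thread paradigm guards shared memory with a lock, while the CSP-style version gives the counter to a single process that is reached only through channels, modelled here with queues.

    import threading, queue

    # Threads paradigm: shared memory plus a lock.
    counter, lock = 0, threading.Lock()

    def worker():
        global counter
        for _ in range(1000):
            with lock:                       # mutual exclusion on shared state
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("threads:", counter)               # 4000

    # CSP paradigm: no sharing, only message passing over a channel.
    requests = queue.Queue()

    def counter_process():
        # Owns the count; other processes may only send it messages.
        count = 0
        while True:
            msg, reply = requests.get()
            if msg == "inc":
                count += 1
            elif msg == "read":
                reply.put(count)
            elif msg == "stop":
                break

    server = threading.Thread(target=counter_process)
    server.start()
    for _ in range(4000):
        requests.put(("inc", None))
    reply = queue.Queue()
    requests.put(("read", reply))
    print("csp:", reply.get())                # 4000
    requests.put(("stop", None))
    server.join()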