
    The derivation of distributed termination detection algorithms from garbage collection schemes

    It is shown that the termination detection problem for distributed computations can be modelled as an instance of the garbage collection problem. Consequently, algorithms for the termination detection problem are obtained by applying transformations to garbage collection algorithms. The transformation can be applied to collectors of the "mark-and-sweep" type as well as to reference counting garbage collectors. As examples, the scheme is used to transform the distributed reference counting protocol of Lermen and Maurer, the weighted reference counting protocol, the local reference counting protocol, and Ben-Ari's mark-and-sweep collector into termination detection algorithms. Known termination detection algorithms as well as new variants are obtained.
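
    The weighted reference counting scheme named above translates into a compact termination detector: a controller hands out a total weight of 1, a task splits its weight whenever it spawns work, and a finishing task returns its weight, so termination is detected exactly when the full weight is back at the controller. A minimal single-process simulation sketch (illustrative names, not the paper's protocol):

```python
# Simulation sketch of weighted-reference-counting termination detection.
# Illustrative names; not the protocol derived in the paper.
from collections import deque
from fractions import Fraction
import random

def detect_termination(max_spawns=10):
    total = Fraction(1)        # all the weight the controller ever hands out
    returned = Fraction(0)     # weight sent back by finished tasks
    tasks = deque([total])     # the root task carries the whole weight
    spawns = 0
    while tasks:
        weight = tasks.popleft()
        if spawns < max_spawns and random.random() < 0.5:
            # spawning: split this task's weight; no new weight is created
            tasks.append(weight / 2)
            tasks.append(weight / 2)
            spawns += 1
        else:
            # finishing: return this task's weight to the controller
            returned += weight
    # termination is declared exactly when all weight has come back
    assert returned == total
    print("termination detected, returned weight =", returned)

detect_termination()
```

    Exact rationals are used so that repeated halving cannot lose weight to floating-point rounding, which would make the final equality test unreliable.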

    Dynamic Race Prediction in Linear Time

    Writing reliable concurrent software remains a huge challenge for today's programmers. Programmers rarely reason about their code by explicitly considering different possible interleavings of its execution. We consider the problem of detecting data races from individual executions in a sound manner. The classical approach to this problem uses Lamport's happens-before (HB) relation, and to date HB remains the only approach that runs in linear time. Previous efforts to improve on HB, such as causally-precedes (CP) and maximal causal models, fall short because they cannot be implemented efficiently and hence must compromise their race detection ability by restricting their techniques to bounded-size fragments of the execution. We present a new relation, weak-causally-precedes (WCP), that is provably better than CP in that it detects more races while remaining sound. Moreover, WCP admits a linear-time algorithm that works on the entire execution without having to fragment it. Comment: 22 pages, 8 figures, 1 algorithm, 1 table.
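
    For context, the classical HB baseline that the paper improves on is easy to sketch with plain vector clocks; the WCP relation itself is subtler. A minimal detector over an explicit event list (the event encoding and all names are illustrative):

```python
# Minimal happens-before (HB) race detector using vector clocks.
# Sketches the classical baseline discussed above, not WCP itself.
from collections import defaultdict

def hb_races(trace, n_threads):
    """trace: list of (op, thread, obj), op in {'rd', 'wr', 'acq', 'rel'}."""
    C = [[0] * n_threads for _ in range(n_threads)]  # per-thread vector clocks
    for t in range(n_threads):
        C[t][t] = 1
    L = defaultdict(lambda: [0] * n_threads)         # per-lock clocks
    R = defaultdict(lambda: [0] * n_threads)         # last read of each variable
    W = defaultdict(lambda: [0] * n_threads)         # last write of each variable
    join = lambda a, b: [max(x, y) for x, y in zip(a, b)]
    races = []
    for i, (op, t, x) in enumerate(trace):
        if op == 'acq':                # acquiring a lock imports its clock
            C[t] = join(C[t], L[x])
        elif op == 'rel':              # releasing exports the thread's clock
            L[x] = list(C[t])
            C[t][t] += 1
        elif op == 'rd':               # read races with unordered prior writes
            if any(W[x][u] > C[t][u] for u in range(n_threads) if u != t):
                races.append(i)
            R[x][t] = C[t][t]
        elif op == 'wr':               # write races with unordered reads/writes
            if any(W[x][u] > C[t][u] or R[x][u] > C[t][u]
                   for u in range(n_threads) if u != t):
                races.append(i)
            W[x][t] = C[t][t]
    return races

# two unsynchronized writes to x by threads 0 and 1: race at event index 1
print(hb_races([('wr', 0, 'x'), ('wr', 1, 'x')], 2))
```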

    On Verifying Causal Consistency

    Causal consistency is one of the most widely adopted consistency criteria for distributed implementations of data structures. It ensures that operations are executed at all sites according to their causal precedence. We address the issue of automatically verifying whether the executions of an implementation of a data structure are causally consistent. We consider two problems: (1) checking whether one single execution is causally consistent, which is relevant for developing testing and bug-finding algorithms, and (2) verifying whether all the executions of an implementation are causally consistent. We show that the first problem is NP-complete. This holds even for the read-write memory abstraction, which is a building block of many modern distributed systems. Indeed, such systems often store data in key-value stores, which are instances of the read-write memory abstraction. Moreover, we prove that, surprisingly, the second problem is undecidable, and again this holds even for the read-write memory abstraction. However, we show that for the read-write memory abstraction these negative results can be circumvented if the implementations are data independent, i.e., if their behavior does not depend on the data values that are written or read at each moment, which is a realistic assumption. Comment: extended version of a POPL 2017 paper.
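
    To make problem (1) concrete: under one standard formalization, a single history is causally consistent if some causal order containing program order lets every read return a write that can be last among the writes to its variable in the read's causal past. The brute-force checker below searches all candidate causal orders for tiny histories and is exponential, in line with the NP-completeness result; all names are illustrative:

```python
# Brute-force causal consistency check for tiny read-write histories,
# under one standard formalization; a sketch, not the paper's procedure.
from itertools import product

def causally_consistent(history, init=0):
    """history: {site: [('wr'|'rd', var, val), ...]} in program order."""
    ops = [(s, i) for s, seq in history.items() for i in range(len(seq))]
    n = len(ops)
    kind = lambda k: history[ops[k][0]][ops[k][1]]
    po = {(a, b) for a in range(n) for b in range(n)
          if ops[a][0] == ops[b][0] and ops[a][1] < ops[b][1]}
    pairs = [(a, b) for a in range(n) for b in range(n) if a != b]
    for bits in product((False, True), repeat=len(pairs)):
        co = {p for p, keep in zip(pairs, bits) if keep}
        if not po <= co:                           # must contain program order
            continue
        if any((b, a) in co for (a, b) in co):     # must be antisymmetric
            continue
        if any((a, b) in co and (b, c) in co and (a, c) not in co
               for a in range(n) for b in range(n) for c in range(n)):
            continue                               # must be transitive
        if all(read_ok(k, co, kind, n, init) for k in range(n)):
            return True
    return False

def read_ok(r, co, kind, n, init):
    op, var, val = kind(r)
    if op != 'rd':
        return True
    past_writes = [w for w in range(n) if (w, r) in co
                   and kind(w)[0] == 'wr' and kind(w)[1] == var]
    if not past_writes:
        return val == init    # nothing written to var in the causal past
    # some write of val must be placeable last among var-writes in r's past
    return any(kind(w)[2] == val
               and not any((w, w2) in co for w2 in past_writes if w2 != w)
               for w in past_writes)

# concurrent writes, each site reads the other's variable as 0:
# causally consistent (though not sequentially consistent)
h = {'A': [('wr', 'x', 1), ('rd', 'y', 0)],
     'B': [('wr', 'y', 1), ('rd', 'x', 0)]}
print(causally_consistent(h))        # True

# reads that oppose the causal order of two writes: a genuine violation
h_bad = {'A': [('wr', 'x', 1), ('wr', 'x', 2)],
         'B': [('rd', 'x', 2), ('rd', 'x', 1)]}
print(causally_consistent(h_bad))    # False
```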

    IMPAC: An Integrated Methodology for Propulsion and Airframe Control

    The National Aeronautics and Space Administration is actively involved in the development of enabling technologies that will lead toward aircraft with new or enhanced maneuver capabilities, such as Short Take-Off Vertical Landing (STOVL) and high-angle-of-attack performance. Because of the high degree of dynamic coupling between the airframe and propulsion systems of these types of aircraft, one key technology is the integration of flight and propulsion control. The NASA Lewis Research Center approach to developing Integrated Flight Propulsion Control (IFPC) technologies is an in-house research program referred to as IMPAC (Integrated Methodology for Propulsion and Airframe Control). The goals of IMPAC are to develop a viable alternative to existing integrated control design methodologies that allows for improved system performance and simpler control law synthesis and implementation, and to demonstrate the applicability of the methodology to a supersonic STOVL fighter aircraft. Based on preliminary control design studies that included an evaluation of existing methodologies, the IFPC design methodology emerging at the Lewis Research Center treats the airframe and propulsion system as one integrated system for an initial centralized controller design, and then partitions the centralized controller into separate airframe and propulsion subcontrollers to ease implementation and to set meaningful design requirements for detailed subsystem control design and evaluation. An overview of IMPAC is provided, and the various important design and evaluation steps in the methodology are discussed in detail.
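
    The centralize-then-partition step lends itself to a toy numerical illustration: design one controller for a coupled two-subsystem model, then keep only each subsystem's own feedback gains. The model, the LQR design, and every number below are invented for illustration and are not the IMPAC design:

```python
# Toy centralize-then-partition illustration: an LQR gain is designed for a
# coupled airframe+propulsion model, then split into block-diagonal
# subcontrollers. Invented 2-state model, not the IMPAC methodology itself.
import numpy as np
from scipy.linalg import solve_continuous_are

# state = [airframe, propulsion]; off-diagonal entries model the coupling
A = np.array([[-0.5,  0.8],
              [ 0.3, -1.0]])
B = np.eye(2)
Q, R = np.eye(2), np.eye(2)

# centralized design: u = -K x from the algebraic Riccati equation
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

# partitioning: each subcontroller keeps only its own subsystem's gain
K_part = np.diag(np.diag(K))

for name, gain in (("centralized", K), ("partitioned", K_part)):
    eig = np.linalg.eigvals(A - B @ gain)
    print(name, "closed-loop eigenvalues:", np.round(eig, 3))
```

    Comparing the printed eigenvalues shows what partitioning can cost: the block-diagonal gains discard the cross-feedback the centralized design used to counter the airframe-propulsion coupling, which is why the methodology sets explicit design requirements for the subsystem controllers after partitioning.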

    I2PA, U-prove, and Idemix: An Evaluation of Memory Usage and Computing Time Efficiency in an IoT Context

    The Internet of Things (IoT), in spite of its innumerable advantages, brings many challenges, notably issues of user privacy preservation and constraints that call for lightweight cryptography. Lightweight cryptography is of central importance since IoT devices are typically resource-constrained. To address these challenges, several Attribute-Based Credentials (ABC) schemes have been designed, including I2PA, U-prove, and Idemix. Even though these schemes rest on strong cryptographic foundations, their performance on resource-constrained devices is a question that deserves special attention. This paper conducts a performance evaluation of these schemes' issuance and verification protocols with regard to memory usage and computing time. Recorded results show that both I2PA and U-prove perform well in memory usage and computing time, while Idemix performs poorly with regard to computing time.
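
    A harness for this kind of measurement is straightforward to sketch in Python; issue and verify below are hypothetical placeholders standing in for a scheme's issuance and verification protocols, not the actual I2PA, U-prove, or Idemix implementations:

```python
# Generic harness measuring computing time and peak memory of issuance and
# verification; `issue` and `verify` are placeholders, not real ABC schemes.
import time
import tracemalloc

def measure(fn, *args, repeats=100):
    tracemalloc.start()
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    elapsed = (time.perf_counter() - t0) / repeats
    _, peak = tracemalloc.get_traced_memory()    # (current, peak) in bytes
    tracemalloc.stop()
    return elapsed, peak

def issue(attrs):       # placeholder for a scheme's issuance protocol
    return {"cred": sorted(attrs)}

def verify(cred):       # placeholder for the matching verification protocol
    return cred["cred"] == sorted(cred["cred"])

t, mem = measure(issue, ["age>18", "member"])
print(f"issuance:     {t * 1e6:8.1f} us/op, peak {mem} B")
t, mem = measure(verify, issue(["age>18", "member"]))
print(f"verification: {t * 1e6:8.1f} us/op, peak {mem} B")
```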

    Link Prediction with Social Vector Clocks

    State-of-the-art link prediction utilizes combinations of complex features derived from network panel data. We show here that computationally less expensive features can achieve the same performance in the common scenario in which the data is available as a sequence of interactions. Our features are based on social vector clocks, an adaptation of the vector-clock concept from distributed computing to social interaction networks. In fact, our experiments suggest that, by taking into account the order and spacing of interactions, social vector clocks exploit different aspects of link formation, so that their combination with previous approaches yields the most accurate predictor to date. Comment: 9 pages, 6 figures.
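
    The bookkeeping behind such features can be sketched compactly: each node keeps a clock recording, for every other node, the latest time at which information originating there could have reached it along time-respecting contact paths, and two nodes merge clocks whenever they interact. A minimal sketch under this reading of the adaptation (the paper's feature construction on top of the clocks is richer):

```python
# Minimal social vector clocks over a timestamped interaction stream:
# svc[u][w] = latest time at which information originating at w could have
# reached u. A sketch of the bookkeeping; illustrative, not the paper's code.
from collections import defaultdict

def social_vector_clocks(interactions):
    """interactions: time-ordered list of (t, u, v) contacts."""
    svc = defaultdict(dict)                  # node -> {source: latest time}
    for t, u, v in interactions:
        svc[u][u] = t
        svc[v][v] = t
        # a contact lets u and v exchange everything they have heard so far
        merged = {w: max(svc[u].get(w, float('-inf')),
                         svc[v].get(w, float('-inf')))
                  for w in set(svc[u]) | set(svc[v])}
        svc[u].update(merged)
        svc[v].update(merged)
    return svc

svc = social_vector_clocks([(1, 'a', 'b'), (2, 'b', 'c')])
print(svc['c'].get('a'))   # 1: a's state as of t=1 reached c via b at t=2
print(svc['a'].get('c'))   # None: no time-respecting path from c back to a
```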