    Logical Concurrency Control from Sequential Proofs

    We are interested in identifying and enforcing the isolation requirements of a concurrent program, i.e., concurrency control that ensures that the program meets its specification. The thesis of this paper is that this can be done systematically starting from a sequential proof, i.e., a proof of correctness of the program in the absence of concurrent interleavings. We illustrate our thesis by presenting a solution to the problem of making a sequential library thread-safe for concurrent clients. We consider a sequential library annotated with assertions, along with a proof that these assertions hold in a sequential execution. We show how we can use the proof to derive concurrency control that ensures that any execution of the library methods, when invoked by concurrent clients, satisfies the same assertions. We also present an extension to guarantee that the library methods are linearizable or atomic.
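
    A minimal sketch of the outcome such an approach targets, assuming a hypothetical toy counter library with a single assertion: guarding every method with one lock, the coarsest possible concurrency control, preserves an assertion that was proved for sequential executions. This is not the paper's derivation algorithm, which infers finer-grained control from the sequential proof.

    // Hypothetical sketch: a sequential library method whose assertion, proved
    // for sequential runs, stays valid for concurrent clients because a single
    // lock serializes every method invocation.
    import java.util.concurrent.locks.ReentrantLock;

    final class GuardedCounter {
        private final ReentrantLock lock = new ReentrantLock();
        private int value = 0;

        int increment() {
            lock.lock();
            try {
                value = value + 1;
                assert value > 0;   // the assertion proved for the sequential library
                return value;
            } finally {
                lock.unlock();
            }
        }
    }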

    Composite Learning Control With Application to Inverted Pendulums

    Composite adaptive control (CAC), which integrates direct and indirect adaptive control, can achieve smaller tracking errors and faster parameter convergence than either technique alone. However, the condition of persistent excitation (PE) still has to be satisfied to guarantee parameter convergence in CAC. This paper proposes a novel model reference composite learning control (MRCLC) strategy for a class of affine nonlinear systems with parametric uncertainties that guarantees parameter convergence without the PE condition. In the composite learning, an integral over a moving time window is utilized to construct a prediction error, a linear filter is applied to avoid differentiating the plant states, and both the tracking error and the prediction error are used to update the parameter estimates. It is proven that the closed-loop system achieves global exponential-like stability under interval excitation rather than PE of the regression functions. The effectiveness of the proposed MRCLC is verified by application to an inverted pendulum control problem. Comment: 5 pages, 6 figures, conference submission.
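
    As a hedged sketch of the general shape of such a composite update law (the exact MRCLC law, filters, and gains in the paper differ), write e for the tracking error, \phi for the regressor, \phi_f and y_f for filtered versions of the regressor and the plant output, \tau for the window length, \hat{\theta} for the parameter estimate, and \Gamma, k_w for adaptation gains:

    \epsilon(t) = \int_{t-\tau}^{t} \phi_f(s)\,\bigl( y_f(s) - \phi_f(s)^{\top} \hat{\theta}(t) \bigr)\, ds,
    \qquad
    \dot{\hat{\theta}}(t) = \Gamma \bigl( \phi(t)\, e(t) + k_w\, \epsilon(t) \bigr)

    The estimate is thus driven both by the instantaneous tracking error and by a prediction error accumulated over the moving window, which is what allows interval excitation to replace persistent excitation.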

    IST Austria Technical Report

    We present an algorithmic method for the synthesis of concurrent programs that are optimal with respect to quantitative performance measures. The input consists of a sequential sketch, that is, a program that does not contain synchronization constructs, and of a parametric performance model that assigns costs to actions such as locking, context switching, and idling. The quantitative synthesis problem is to automatically introduce synchronization constructs into the sequential sketch so that both correctness is guaranteed and worst-case (or average-case) performance is optimized. Correctness is formalized as race freedom or linearizability. We show that for worst-case performance, the problem can be modeled as a 2-player graph game with quantitative (limit-average) objectives, and for average-case performance, as a 2 1/2-player graph game (with probabilistic transitions). In both cases, the optimal correct program is derived from an optimal strategy in the corresponding quantitative game. We prove that the respective game problems are computationally expensive (NP-complete), and present several techniques that overcome the theoretical difficulty in cases of concurrent programs of practical interest. We have implemented a prototype tool and used it for the automatic synthesis of programs that access a concurrent list. For certain parameter values, our method automatically synthesizes various classical synchronization schemes for implementing a concurrent list, such as fine-grained locking or a lazy algorithm. For other parameter values, a new, hybrid synchronization style is synthesized, which uses both the lazy approach and coarse-grained locks (instead of standard fine-grained locks). The trade-off occurs because while fine-grained locking tends to decrease the cost that is due to waiting for locks, it increases cache size requirements.
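
    For the worst-case setting, one way to get a feel for the limit-average (mean-payoff) game underlying this reduction is finite-horizon value iteration: the optimal k-step cost divided by k converges to the limit-average value. The sketch below is illustrative and is not the authors' tool; the graph, edge costs, and player assignment are made-up stand-ins for a sketch plus performance model, with the synthesizer minimizing cost and the scheduler (adversary) maximizing it.

    // Illustrative sketch: approximate the value of a two-player mean-payoff
    // graph game by finite-horizon value iteration; v_k(start)/k converges to
    // the limit-average value as the horizon k grows.
    final class MeanPayoffGame {
        // edges[s] lists moves {successor, cost}; minPlayer[s] marks the
        // synthesizer's states (minimizer), the rest belong to the scheduler.
        static double value(int[][][] edges, boolean[] minPlayer, int start, int horizon) {
            int n = edges.length;
            double[] v = new double[n];
            for (int k = 0; k < horizon; k++) {
                double[] next = new double[n];
                for (int s = 0; s < n; s++) {
                    double best = minPlayer[s] ? Double.POSITIVE_INFINITY
                                               : Double.NEGATIVE_INFINITY;
                    for (int[] e : edges[s]) {
                        double candidate = e[1] + v[e[0]];
                        best = minPlayer[s] ? Math.min(best, candidate)
                                            : Math.max(best, candidate);
                    }
                    next[s] = best;
                }
                v = next;
            }
            return v[start] / horizon;   // approximate limit-average value
        }

        public static void main(String[] args) {
            // Made-up game: at state 0 the synthesizer either keeps a lock
            // (cost 2, stay at 0) or skips it (cost 0, move to 1); at state 1
            // the scheduler picks the costlier continuation.
            int[][][] edges = {
                { {0, 2}, {1, 0} },
                { {0, 5}, {1, 1} }
            };
            boolean[] minPlayer = { true, false };
            System.out.println(value(edges, minPlayer, 0, 10_000));   // prints ~2.0
        }
    }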

    Lock-free Concurrent Data Structures

    Concurrent data structures are the data sharing side of parallel programming. Data structures give the program the means to store data, and they also provide operations to access and manipulate these data. These operations are implemented through algorithms that have to be efficient. In the sequential setting, data structures are crucially important for the performance of the respective computation. In the parallel programming setting, their importance becomes even more crucial because of the increased use of data and resource sharing for utilizing parallelism. The first and main goal of this chapter is to provide sufficient background and intuition to help the interested reader navigate the complex research area of lock-free data structures. The second goal is to give the programmer enough familiarity with the subject to use truly concurrent methods. Comment: To appear in "Programming Multi-core and Many-core Computing Systems", eds. S. Pllana and F. Xhafa, Wiley Series on Parallel and Distributed Computing.
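
    As one concrete flavour of the techniques the chapter surveys, the classic Treiber stack replaces a lock with a retried compare-and-set on the top pointer. The sketch below is illustrative rather than taken from the chapter, and it relies on Java's garbage collector to sidestep the memory-reclamation and ABA issues that the chapter treats in detail.

    // Illustrative lock-free (Treiber) stack: push and pop retry a CAS on the
    // top pointer instead of blocking, so no thread ever holds a lock.
    import java.util.concurrent.atomic.AtomicReference;

    final class LockFreeStack<T> {
        private static final class Node<T> {
            final T value;
            Node<T> next;
            Node(T value) { this.value = value; }
        }

        private final AtomicReference<Node<T>> top = new AtomicReference<>();

        void push(T value) {
            Node<T> node = new Node<>(value);
            do {
                node.next = top.get();                       // snapshot current top
            } while (!top.compareAndSet(node.next, node));   // retry if it changed
        }

        T pop() {
            Node<T> head;
            do {
                head = top.get();
                if (head == null) return null;               // empty stack
            } while (!top.compareAndSet(head, head.next));
            return head.value;
        }
    }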