3,119 research outputs found

    On universality of concurrent expressions with synchronization primitives

    Get PDF
    Abstract: Concurrent expressions are a class of extended regular expressions with a shuffle operator (‖) and its closure. The class of concurrent expressions with synchronization primitives, called synchronized concurrent expressions, is introduced as an extended model of Shaw's flow expressions. This paper discusses some formal properties of synchronized concurrent expressions from a formal-language-theoretic point of view. It is shown that synchronized concurrent expressions with three signal/wait operations are universal, in the sense that they can simulate any semaphore-controlled concurrent expression and they can describe the class of recursively enumerable sets. Some results on semaphore-controlled regular expressions are also included to give a taste of more positive results.
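
    To make the shuffle operator concrete, the Java sketch below (not from the paper; the class and method names are invented) enumerates the interleavings that u ‖ v denotes for two fixed words u and v.

```java
import java.util.LinkedHashSet;
import java.util.Set;

/** Illustrative sketch only: the shuffle (interleaving) of two concrete words,
 *  i.e. the finite language denoted by u ‖ v for fixed strings u and v. */
public class Shuffle {
    // Every interleaving of a and b that preserves the internal order of each word.
    static Set<String> shuffle(String a, String b) {
        Set<String> out = new LinkedHashSet<>();
        if (a.isEmpty()) { out.add(b); return out; }
        if (b.isEmpty()) { out.add(a); return out; }
        for (String s : shuffle(a.substring(1), b)) out.add(a.charAt(0) + s);
        for (String s : shuffle(a, b.substring(1))) out.add(b.charAt(0) + s);
        return out;
    }

    public static void main(String[] args) {
        // "ab" ‖ "c"  ->  [abc, acb, cab]
        System.out.println(shuffle("ab", "c"));
    }
}
```

    The closure of ‖ then iterates this interleaving over arbitrarily many copies of a word, which is what gives the expressions their concurrent flavour.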

    Liveness of extended control structure nets

    Get PDF
    Journal Article: A new subclass of Petri nets is defined that allows the control-structure representation of parallel programs, including arbitrary semaphore operations. Structural properties of these "Extended Control Structure Nets" are discussed, and seven necessary and sufficient conditions for their liveness are proved.
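
    As a reminder of how a semaphore appears in a place/transition net, here is a minimal Java sketch (not the paper's construction; all names are invented): a place holds the semaphore token, a P transition consumes it on entry to the critical section, and a V transition returns it.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch only: a tiny place/transition net modelling a binary
 *  semaphore guarding a critical section. */
public class SemaphoreNet {
    // Marking: number of tokens on each place.
    static Map<String, Integer> marking = new HashMap<>(Map.of("sem", 1, "critical", 0));

    // A transition is enabled iff every input place holds a token; firing moves tokens.
    static boolean fire(String[] inputs, String[] outputs) {
        for (String p : inputs) if (marking.get(p) == 0) return false;   // not enabled
        for (String p : inputs)  marking.merge(p, -1, Integer::sum);
        for (String p : outputs) marking.merge(p, +1, Integer::sum);
        return true;
    }

    public static void main(String[] args) {
        // P (wait): consume the semaphore token and enter the critical section.
        System.out.println("P: " + fire(new String[]{"sem"}, new String[]{"critical"}));
        // A second P is not enabled while the token is gone; liveness conditions
        // characterise nets in which every transition can eventually fire again.
        System.out.println("P again: " + fire(new String[]{"sem"}, new String[]{"critical"}));
        // V (signal): leave the critical section and return the semaphore token.
        System.out.println("V: " + fire(new String[]{"critical"}, new String[]{"sem"}));
    }
}
```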

    Higher levels of process synchronisation

    Get PDF
    Four new synchronisation primitives (SEMAPHOREs, RESOURCEs, EVENTs and BUCKETs) were introduced in the KRoC 0.8beta release of occam for SPARC (SunOS/Solaris) and Alpha (OSF/1) UNIX workstations [1][2][3]. This paper reports on the rationale, application and implementation of two of these (SEMAPHOREs and EVENTs). Details on the other two may be found on the web [4]. The new primitives are designed to support higher-level mechanisms of SHARING between parallel processes and give us greater powers of expression. They will also let greater levels of concurrency be safely exploited from future parallel architectures, such as those providing (virtual) shared memory. They demonstrate that occam is neutral in any debate between the merits of message-passing versus shared-memory parallelism, enabling applications to take advantage of whichever paradigm (or mixture of paradigms) is the most appropriate. The new primitives could be (but are not) implemented in terms of traditional channels, but only at the expense of increased complexity and computational overhead. The primitives are immediately useful even for uni-processors; for example, the cost of a fair ALT can be reduced from O(n) to O(1). In fact, all the operations associated with the new primitives have constant space and time complexities, and the constants are very low. The KRoC release provides an Abstract Data Type interface to the primitives. However, direct use of such mechanisms still allows the user to misuse them: they must be used in the ways prescribed (in this paper and in [4]), or else their semantics become unpredictable. No tool is provided to check correct usage at this level. The intention is to bind those primitives found to be useful into higher-level versions of occam. Some of the primitives (e.g. SEMAPHOREs) may never themselves be made visible in the language, but may be used to implement bindings of higher-level paradigms (such as SHARED channels and BLACKBOARDs). The compiler will perform the relevant usage checking on all new language bindings, closing the security loopholes opened by raw use of the primitives. The paper closes by relating this work to the notions of virtual transputers, microcoded schedulers, object orientation and Java threads.
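
    As a rough illustration of that last idea, the Java sketch below (invented names; it is not KRoC or occam code) uses a fair semaphore to let several writers claim one end of a channel in turn, the kind of SHARED-channel binding the paper says can be built on the SEMAPHORE primitive.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Semaphore;

/** Rough Java analogue only: many writers CLAIM a shared channel end through a
 *  semaphore, so the reader at the other end sees one writer at a time. */
public class SharedChannelDemo {
    static final BlockingQueue<String> channel = new ArrayBlockingQueue<>(1);
    static final Semaphore claim = new Semaphore(1, true);   // fair: FIFO queue of claimants

    static void writer(int id) {
        try {
            claim.acquire();                         // CLAIM the shared writing end
            channel.put("message from writer " + id);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            claim.release();                         // RELEASE it for the next writer
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 3; i++) {
            final int id = i;
            new Thread(() -> writer(id)).start();
        }
        for (int i = 0; i < 3; i++) {
            System.out.println(channel.take());      // single reader at the other end
        }
    }
}
```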

    A comprehensive approach in performance evaluation for modern real-time operating systems

    Get PDF
    In real-time computing, the accurate characterization of the performance and determinism that a particular real-time operating system/hardware combination can provide for real-time applications is essential. This issue is not properly addressed by existing performance metrics, mainly due to their lack of completeness and generalization. In this paper we present a set of comprehensive, easy-to-implement and useful metrics covering three basic real-time operating system features: response to external events, intertask synchronization and resource sharing, and intertask data transfer. The evaluation of real-time operating systems using a set of fine-grained metrics is fundamental to guaranteeing that we can reach the required determinism in real-world applications.
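
    As an illustration of the kind of fine-grained metric meant here (a hypothetical Java sketch, not the paper's benchmark, and a JVM on a general-purpose OS gives no real-time guarantees), the following measures intertask synchronization latency as the time between a semaphore post in one task and the wake-up of the task blocked on it.

```java
import java.util.concurrent.Semaphore;

/** Illustrative sketch only: average and worst-case semaphore hand-off latency
 *  between two tasks, measured with a ping/pong pair of semaphores. */
public class SyncLatency {
    static final int ROUNDS = 10_000;
    static final Semaphore ping = new Semaphore(0);   // poster -> waiter
    static final Semaphore pong = new Semaphore(0);   // waiter -> poster (acknowledge)
    static volatile long postedAt;                    // timestamp taken just before the post

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            long worst = 0, total = 0;
            try {
                for (int i = 0; i < ROUNDS; i++) {
                    ping.acquire();                               // block until the post
                    long latency = System.nanoTime() - postedAt;  // wake-up latency
                    total += latency;
                    if (latency > worst) worst = latency;
                    pong.release();                               // let the poster continue
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.printf("average %.1f ns, worst %d ns%n", (double) total / ROUNDS, worst);
        });
        waiter.start();

        for (int i = 0; i < ROUNDS; i++) {
            postedAt = System.nanoTime();
            ping.release();                                       // post the semaphore
            pong.acquire();                                       // wait for the hand-off to finish
        }
        waiter.join();
    }
}
```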

    Verification of Shared-Reading Synchronisers

    Get PDF
    Synchronisation classes are an important building block for shared memory concurrent programs. Thus to reason about such programs, it is important to be able to verify the implementation of these synchronisation classes, considering atomic operations as the synchronisation primitives on which the implementations are built. For synchronisation classes controlling exclusive access to a shared resource, such as locks, a technique has been proposed to reason about their behaviour. This paper proposes a technique to verify implementations of both exclusive access and shared-reading synchronisers. We use permission-based Separation Logic to describe the behaviour of the main atomic operations, and the basis for our technique is formed by a specification for class AtomicInteger, which is commonly used to implement synchronisation classes in java.util.concurrent. To demonstrate the applicability of our approach, we mechanically verify the implementation of various synchronisation classes like Semaphore, CountDownLatch and Lock. Comment: In Proceedings MeTRiD 2018, arXiv:1806.0933
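
    To show the style of implementation such a specification targets, here is a minimal sketch (the class name AtomicSemaphore is invented; this is not the verified java.util.concurrent code) of a counting semaphore built directly on AtomicInteger's atomic operations.

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Minimal sketch only: a counting semaphore built on AtomicInteger,
 *  the kind of code a specification for AtomicInteger has to support. */
public class AtomicSemaphore {
    private final AtomicInteger permits;

    public AtomicSemaphore(int initialPermits) {
        permits = new AtomicInteger(initialPermits);
    }

    /** Spin until a permit can be taken; compareAndSet makes the decrement atomic. */
    public void acquire() {
        while (true) {
            int available = permits.get();
            if (available > 0 && permits.compareAndSet(available, available - 1)) {
                return;
            }
            Thread.onSpinWait();   // busy-wait; real synchronisers park the thread instead
        }
    }

    /** Return a permit; getAndIncrement is already atomic. */
    public void release() {
        permits.getAndIncrement();
    }
}
```

    A production synchroniser would park waiting threads rather than spin, but the compareAndSet loop is the part that reasoning about the atomic operations has to capture.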

    A Machine-Independent port of the MPD language run time system to NetBSD

    Full text link
    SR (Synchronizing Resources) is a Pascal-style language enhanced with constructs for concurrent programming, developed at the University of Arizona in the late 1980s. MPD (presented in Gregory Andrews' book Foundations of Multithreaded, Parallel, and Distributed Programming) is its successor, providing the same language primitives with a different, more C-style, syntax. The run-time system of these languages (in theory identical, but not designed for sharing) provides the illusion of a multiprocessor machine on a single Unix-like system or a (local area) network of Unix-like machines. Chair V of the Computer Science Department of the University of Bonn operates a laboratory for a practical course in parallel programming, consisting of computing nodes running NetBSD/arm and normally used via PVM, MPI etc. We are considering offering SR and MPD for this as well. As the original language distributions were targeted at only a few commercial Unix systems, some porting effort is needed; however, some of the effort from our earlier SR port should be reusable. The integrated POSIX threads support of NetBSD-2.0 and later allows us to use the library primitives provided by NetBSD's pthread system to implement the primitives needed by the SR run-time system, thus covering 13 target CPU architectures at once and automatically making use of SMP on VAX, Alpha, PowerPC, SPARC, 32-bit Intel and 64-bit AMD CPUs. We'll present some methods used for the implementation and compare some performance values to the traditional implementation. Comment: 6 pages
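
    The shape of such a mapping, counting-semaphore semantics layered on a mutex plus condition variable (the pthread_mutex_t / pthread_cond_t pair a pthread-based port has available), can be sketched as follows. This is an illustrative Java analogue using the intrinsic lock and wait/notify, not the MPD run-time source.

```java
/** Illustrative sketch only: a counting semaphore layered on a mutex and a
 *  condition variable, as a pthread-based run-time port would build it.
 *  Java's intrinsic lock and wait/notify stand in for the pthread primitives. */
public class CondVarSemaphore {
    private int value;

    public CondVarSemaphore(int initial) {
        value = initial;
    }

    /** P / wait: block while no units are available. */
    public synchronized void p() throws InterruptedException {
        while (value == 0) {
            wait();               // analogous to pthread_cond_wait
        }
        value--;
    }

    /** V / signal: add a unit and wake one blocked task. */
    public synchronized void v() {
        value++;
        notify();                 // analogous to pthread_cond_signal
    }
}
```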