
    An Object-Oriented Model for Extensible Concurrent Systems: the Composition-Filters Approach

    Applying the object-oriented paradigm to the development of large and complex software systems offers several advantages, of which increased extensibility and reusability are the most prominent. The object-oriented model is also quite suitable for modeling concurrent systems. However, extensibility and reusability of concurrent applications turn out to be far from trivial. The problems that arise, the so-called inheritance anomalies, are analyzed and presented in this paper, and a set of requirements for extensible concurrent languages is formulated. As a solution to the identified problems, an extension to the object-oriented model is presented: composition filters. Composition filters capture messages and can express certain constraints and operations on these messages, for example buffering. In this paper we explain the composition-filters approach, demonstrate its expressive power through a number of examples, and show that composition filters do not suffer from the inheritance anomalies and fulfill the established requirements.
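
    A minimal Java sketch of the general idea, assuming a filter object that fronts a regular object: every incoming message passes through the filter, which dispatches messages that are currently acceptable and buffers the rest (the buffering case mentioned in the abstract). The Message and BufferingFilter names are illustrative; the paper's actual notation differs.

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Sketch of the composition-filters idea: messages are intercepted
        // before reaching the object and may be dispatched or buffered.
        // All names here are assumptions, not the paper's syntax.
        interface Message { String selector(); void dispatch(); }

        class BufferingFilter {
            private final Deque<Message> buffered = new ArrayDeque<>();

            // Condition under which a message is currently acceptable,
            // e.g. "get" only while a buffer object is non-empty.
            private final java.util.function.Predicate<Message> acceptable;

            BufferingFilter(java.util.function.Predicate<Message> acceptable) {
                this.acceptable = acceptable;
            }

            // Filter entry point: accepted messages reach the object's
            // methods; the rest are held until the state changes.
            synchronized void receive(Message m) {
                if (acceptable.test(m)) m.dispatch();
                else buffered.addLast(m);
            }

            // Re-examine buffered messages after a state change.
            synchronized void retryBuffered() {
                for (int i = buffered.size(); i > 0; i--) {
                    Message m = buffered.removeFirst();
                    if (acceptable.test(m)) m.dispatch();
                    else buffered.addLast(m);
                }
            }
        }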

    Capsule-oriented programming

    "Explicit concurrency should be abolished from all higher-level programming languages (i.e., everything except, perhaps, plain machine code)." Dijkstra [1] (paraphrased). A promising class of concurrency abstractions replaces explicit concurrency mechanisms with a single linguistic mechanism that combines state and control and uses asynchronous messages for communication, e.g., active objects or actors; but this does not remove the hurdle of understanding non-local control transfer. What if the programming model enabled programmers to simply do what they do best, that is, to describe a system in terms of its modular structure and write sequential code to implement the operations of those modules, while the model handles the details of concurrency? In a recently sponsored NSF project we are developing such a model, which we call capsule-oriented programming, and its realization in the Panini project. This model favors modularity over explicit concurrency, encourages concurrency correctness by construction, and exploits the modular structure of programs to expose implicit concurrency.
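
    As an illustration of the model (not Panini syntax; all names below are hypothetical), this sketch wraps a sequential module so that each public operation becomes an asynchronous message processed by a single internal thread:

        import java.util.concurrent.CompletableFuture;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        // The author of Counter writes ordinary sequential code; the
        // capsule-like wrapper turns calls into asynchronous messages
        // handled by one internal thread, so no explicit synchronization
        // appears in user code.
        class Counter {                       // sequential module code
            private int count = 0;
            int increment() { return ++count; }
        }

        class CounterCapsule {
            private final Counter state = new Counter();
            private final ExecutorService mailbox = Executors.newSingleThreadExecutor();

            // Invocations become messages; results come back as futures.
            CompletableFuture<Integer> increment() {
                return CompletableFuture.supplyAsync(state::increment, mailbox);
            }

            void shutdown() { mailbox.shutdown(); }
        }

    Because every message is serialized through one thread, the sequential reasoning used when writing Counter remains valid, which is the "write sequential code" half of the bargain described above.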

    Object-Oriented Programming Language Facilities for Concurrency Control

    Concurrent object-oriented programming systems require support for concurrency control, to enforce consistent commitment of changes and to support program-initiated rollback after application-specific failures. We have explored three different concurrency control models -- atomic blocks, serializable transactions, and commit-serializable transactions -- as part of the MELD programming language. We present our designs, discuss certain programming problems and implementation issues, and compare our work on MELD to other concurrent object-based systems.
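
    A hedged sketch of the first of these models, the atomic block, in plain Java rather than MELD (whose syntax the abstract does not show): the block runs under mutual exclusion, and an application-specific failure triggers program-initiated rollback to the saved state.

        // Illustrative only; the Account example and its details are
        // assumptions, not taken from the paper.
        class Account {
            private int balance = 100;

            synchronized void withdraw(int amount) {
                int snapshot = balance;            // state saved on entry
                try {
                    balance -= amount;
                    if (balance < 0)               // application-specific failure
                        throw new IllegalStateException("overdraft");
                } catch (IllegalStateException e) {
                    balance = snapshot;            // program-initiated rollback
                    throw e;
                }
            }
        }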

    An Implementation of the Object-Oriented Concurrent Programming Language SINA

    SINA is an object-oriented language for distributed and concurrent programming. The primary focus of this paper is on SINA's object-oriented concurrent programming mechanisms and their implementation. We present the SINA constructs for concurrent programming and inter-object communication, some illustrative examples, and the message-based implementation model used in our current implementation.
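
    A generic sketch of a message-based implementation model of the kind the abstract mentions, written in Java for uniformity with the other examples (SINA's actual constructs differ): each active object owns a mailbox, and a per-object loop executes one message at a time, so intra-object code stays sequential.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // Hypothetical runtime-side view: inter-object communication is a
        // message enqueued on the receiver's mailbox; the receiver's loop
        // drains the mailbox one message at a time.
        class ActiveObject implements Runnable {
            private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
            private volatile boolean running = true;

            void send(Runnable message) { mailbox.add(message); }
            void stop() { running = false; send(() -> {}); }    // unblock the loop

            @Override public void run() {
                try {
                    while (running) mailbox.take().run();       // one at a time
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }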

    Concurrent Scheme

    This paper describes an evolution of the Scheme language to support parallelism with tight coupling of control and data. Mechanisms are presented to address the difficult and related problems of mutual exclusion and data sharing that arise in concurrent language systems. The mechanisms are tailored to preserve Scheme semantics as much as possible while allowing for efficient implementation. Two completed prototype implementations of the resulting language are described. A third implementation is underway for the Mayfly, a distributed-memory parallel processor with a twisted-torus communication topology, under development at Hewlett-Packard Research Laboratories. As will be shown, the language model is particularly well suited to the Mayfly processor.
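
    The abstract names mutual exclusion and data sharing as the central problems; the fragment below (in Java for consistency with the other sketches, though the paper's mechanisms are Scheme forms) shows the baseline discipline such mechanisms enforce: shared data is touched only while holding the lock that guards it.

        import java.util.concurrent.locks.ReentrantLock;

        // Illustrative fragment, not the paper's mechanism: a shared cell
        // whose every access is a critical section.
        class SharedCell {
            private final ReentrantLock lock = new ReentrantLock();
            private int value;

            int update(int delta) {
                lock.lock();
                try { return value += delta; }   // critical section
                finally { lock.unlock(); }
            }
        }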

    Abstraction and performance, together at last: auto-tuning message-passing concurrency on the Java virtual machine

    Performance tuning is the leading justification for breaking abstraction boundaries. We target this problem for message-passing concurrency (MPC) abstractions on the Java Virtual Machine (JVM). Efficient mapping of MPC abstractions to threads is critical for performance, scalability, and CPU utilization, but tedious and time-consuming to perform manually. We solve this problem by putting forth a technique for automatically mapping MPC abstractions to JVM threads. In general, this mapping cannot be found in polynomial time. Our surprising observation is that the characteristics of MPC abstractions and their communication patterns can be very revealing and can help determine the mapping. Our technique addresses a number of challenges that lead to improved performance: i) balancing the computations across JVM threads, ii) reducing communication overheads, iii) utilizing information about cache locality, and iv) mapping MPC abstractions to threads in a way that reduces contention between JVM threads. We have realized our technique in the Panini language, which has capsules as its MPC abstraction. We also compare our mapping technique against four default mapping techniques: thread-all, round-robin-task-all, random-task-all, and work-stealing. Our evaluation on a wide range of benchmark programs shows that our mapping technique can improve performance by 30%-60% over the default mapping techniques.
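
    One plausible reading of the balancing ingredient (i), sketched under the assumption that each capsule has an estimated computation weight: greedily assign the heaviest remaining capsule to the currently least-loaded thread. This is the classic LPT heuristic, not the paper's full technique, which also accounts for communication, cache locality, and contention.

        import java.util.Comparator;
        import java.util.PriorityQueue;

        // Hypothetical load-balancing step: capsuleWeights[i] is an assumed
        // estimate of capsule i's computation; the result maps each capsule
        // to one of `threads` JVM threads.
        class GreedyMapper {
            static int[] map(double[] capsuleWeights, int threads) {
                // Min-heap of {currentLoad, threadIndex} pairs.
                PriorityQueue<double[]> pool =
                    new PriorityQueue<>(Comparator.comparingDouble(t -> t[0]));
                for (int t = 0; t < threads; t++) pool.add(new double[] {0.0, t});

                int[] assignment = new int[capsuleWeights.length];
                for (int i : indicesByWeightDesc(capsuleWeights)) {
                    double[] least = pool.poll();       // least-loaded thread
                    assignment[i] = (int) least[1];
                    least[0] += capsuleWeights[i];      // update its load
                    pool.add(least);
                }
                return assignment;
            }

            private static Integer[] indicesByWeightDesc(double[] w) {
                Integer[] idx = new Integer[w.length];
                for (int i = 0; i < w.length; i++) idx[i] = i;
                java.util.Arrays.sort(idx, (a, b) -> Double.compare(w[b], w[a]));
                return idx;
            }
        }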

    Effectively Mapping Linguistic Abstractions for Message-passing Concurrency to Threads on the Java Virtual Machine

    Efficient mapping of message-passing concurrency (MPC) abstractions to Java Virtual Machine (JVM) threads is critical for performance, scalability, and CPU utilization, but tedious and time-consuming to perform manually. In general, this mapping cannot be found in polynomial time, but we show that by exploiting the local characteristics of MPC abstractions and their communication patterns the mapping can be determined effectively. We describe our technique for mapping MPC abstractions to threads, its realization in two frameworks (Panini and Akka), and its rigorous evaluation using several benchmarks from representative MPC frameworks. We also compare our technique against four default mapping techniques: thread-all, round-robin-task-all, random-task-all, and work-stealing. Our evaluation shows that our mapping technique can improve performance by 30%-60% over the default mapping techniques. These improvements are due to a number of challenges addressed by our technique, namely: i) balancing the computations across JVM threads, ii) reducing communication overheads, iii) utilizing information about cache locality, and iv) mapping MPC abstractions to threads in a way that reduces contention between JVM threads.
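
    For contrast with the tuned mapper sketched earlier, here are two of the four default mappings, interpreted from their names (the paper may define them differently): thread-all gives each abstraction its own JVM thread, and round-robin-task-all deals abstractions out to a fixed pool in order; neither considers weights or communication patterns.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        // Assumed readings of two baseline mappings.
        class DefaultMappings {
            // thread-all: one JVM thread per MPC abstraction.
            static List<Thread> threadAll(List<Runnable> abstractions) {
                List<Thread> threads = new ArrayList<>();
                for (Runnable a : abstractions) {
                    Thread t = new Thread(a);
                    t.start();
                    threads.add(t);
                }
                return threads;
            }

            // round-robin-task-all: abstraction i runs on thread i mod n.
            static List<ExecutorService> roundRobinTaskAll(List<Runnable> abstractions, int n) {
                List<ExecutorService> pool = new ArrayList<>();
                for (int t = 0; t < n; t++) pool.add(Executors.newSingleThreadExecutor());
                for (int i = 0; i < abstractions.size(); i++)
                    pool.get(i % n).execute(abstractions.get(i));
                return pool;
            }
        }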