    module-2.5 - Process Coordination - Basic Mechanisms

    Concurrent object-oriented programming: The MP-Eiffel approach

    This article evaluates several possible approaches for integrating concurrency into object-oriented programming languages and then presents a new language named MP-Eiffel. MP-Eiffel was designed to include all the essential properties of both concurrent and object-oriented programming with simplicity and safety. Special care was taken to make all the language mechanisms orthogonal, allowing their joint use without unsafe side effects (such as inheritance anomalies).
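
    The kind of safety the abstract aims at can be illustrated outside MP-Eiffel itself. The sketch below is plain Python, not MP-Eiffel; the BoundedCounter class and its routines are invented for illustration. It shows a monitor-style object whose exported operations run under a per-object lock, so concurrent callers cannot corrupt the encapsulated state.

        # A minimal sketch (plain Python, not MP-Eiffel): a monitor-style object
        # whose exported routines are serialized by a per-object lock.
        import threading

        class BoundedCounter:
            def __init__(self, limit: int) -> None:
                self._lock = threading.Lock()
                self._value = 0
                self._limit = limit

            def increment(self) -> int:
                with self._lock:              # serialize concurrent callers
                    if self._value < self._limit:
                        self._value += 1
                    return self._value

            def value(self) -> int:
                with self._lock:
                    return self._value

        if __name__ == "__main__":
            c = BoundedCounter(limit=1000)
            workers = [threading.Thread(target=lambda: [c.increment() for _ in range(100)])
                       for _ in range(8)]
            for w in workers:
                w.start()
            for w in workers:
                w.join()
            print(c.value())                  # 800: no lost updates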

    Temporal Localization of Error Recovery in Operating Systems by Restricting Information Flow

    This study focuses on how to confine error recovery to the immediate environment of a failed computation (process) by restricting information flow through the system. A module called a manager, which restricts the access of operations (procedures) to a shared data representation, is proposed. The use of descriptors to represent address variables (pointers) and procedure parameters is also proposed, to restrict the amount of information available to a procedure. A linguistic mechanism to define recoverable data and inverse procedures (procedures that reverse the actions of another procedure) to undo completed actions is presented. A system data structure that defines a recovery environment to support system-implemented recovery is also presented.
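
    A minimal sketch of the manager idea, under the assumption that a manager is simply the only code permitted to touch a shared representation and that it logs an inverse procedure for every completed update. AccountManager, deposit, and recover are illustrative names, not from the study.

        # Illustrative sketch: a manager as the sole guardian of a shared
        # representation, logging an inverse procedure for every completed
        # update so recovery stays confined to this module.
        class AccountManager:
            def __init__(self) -> None:
                self._balances = {}     # shared representation, hidden from callers
                self._undo_log = []     # stack of inverse procedures

            def deposit(self, account: str, amount: int) -> None:
                old = self._balances.get(account, 0)
                self._balances[account] = old + amount
                # inverse procedure: restore the previous value
                self._undo_log.append(lambda a=account, v=old: self._balances.__setitem__(a, v))

            def balance(self, account: str) -> int:
                return self._balances.get(account, 0)

            def recover(self) -> None:
                # undo completed actions in reverse order
                while self._undo_log:
                    self._undo_log.pop()()

        mgr = AccountManager()
        mgr.deposit("x", 10)
        mgr.deposit("x", 5)
        mgr.recover()
        print(mgr.balance("x"))         # 0: both deposits undone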

    Scalable, reliable, power-efficient communication for hardware transactional memory

    In a hardware transactional memory system with lazy versioning and lazy conflict detection, the process of transaction commit can emerge as a bottleneck. This is especially true for a large-scale distributed memory system, where multiple transactions may attempt to commit simultaneously and coordination is required before commits can proceed in parallel. In this paper, we propose novel commit algorithms that are more scalable (in terms of delay and energy) and are free of deadlocks and livelocks. We show that these algorithms have similarities with the token cache coherence concept and leverage these similarities to extend the algorithms to handle message loss and starvation scenarios. The proposed algorithms improve upon the state of the art by yielding up to a 7X reduction in commit delay and up to a 48X reduction in network messages. These translate into overall performance improvements of up to 66% (for synthetic workloads with an average transaction length of 200 cycles), 35% (average transaction length of 1000 cycles), 8% (average transaction length of 4000 cycles), and 41% (for a collection of SPLASH-2 programs).
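
    A toy software model may help fix the terms, though it is in no way the paper's hardware protocol: writes are buffered privately (lazy versioning), conflicts are checked only at commit time against versions committed since the transaction began (lazy conflict detection), and all commits funnel through one lock, which is precisely the kind of serialization bottleneck the paper attacks. The Transaction class, _commit_lock, and the version map are illustrative.

        # Toy model of lazy versioning and lazy conflict detection; commits are
        # serialized by a single lock (the bottleneck the paper addresses).
        import threading

        _commit_lock = threading.Lock()
        _global_version = 0
        _store = {}          # committed values
        _versions = {}       # key -> version at which it was last committed

        class Transaction:
            def __init__(self) -> None:
                self.start_version = _global_version
                self.reads, self.writes = set(), {}

            def read(self, key):
                if key in self.writes:
                    return self.writes[key]
                self.reads.add(key)
                return _store.get(key)

            def write(self, key, value):
                self.writes[key] = value        # buffered privately until commit

            def commit(self) -> bool:
                global _global_version
                with _commit_lock:              # single serialization point
                    for key in self.reads:      # lazy conflict detection
                        if _versions.get(key, 0) > self.start_version:
                            return False        # someone committed under us: abort
                    _global_version += 1
                    for key, value in self.writes.items():
                        _store[key] = value
                        _versions[key] = _global_version
                    return True

        t = Transaction()
        t.write("counter", (t.read("counter") or 0) + 1)
        print(t.commit())    # True unless a conflicting commit intervened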

    The Problem of Mutual Exclusion: A New Distributed Solution

    In both centralized and distributed systems, processes cooperate and compete with each other to access the system resources. Some of these resources must be used exclusively. It is then required that only one process access the shared resource at a given time. This is referred to as the problem of mutual exclusion. Several synchronization mechanisms have been proposed to solve this problem. In this thesis, an effort has been made to compile most of the existing mutual exclusion solutions for both shared-memory and message-passing based systems. A new distributed algorithm, which uses a dynamic information structure, is presented to solve the problem of mutual exclusion. It is proved to be free from both deadlock and starvation. This solution is shown to be economical in terms of the number of message exchanges required per critical section execution. Procedures for recovery from both site and link failures are also given.
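
    For orientation, here is a sketch of the classic permission-based scheme in the Ricart-Agrawala style, not the thesis's dynamic-information-structure algorithm: a site broadcasts a timestamped REQUEST, enters its critical section after receiving a REPLY from every other site, and defers its own REPLY to lower-priority requests while it is competing, for a cost of 2*(N-1) messages per entry. The Site class and message tuples are illustrative.

        # Sketch of a permission-based scheme in the Ricart-Agrawala style
        # (not the thesis's algorithm): 2*(N-1) messages per critical-section entry.
        from typing import List, Tuple

        class Site:
            def __init__(self, site_id: int, n_sites: int) -> None:
                self.id, self.n = site_id, n_sites
                self.clock = 0                    # Lamport clock
                self.requesting = False
                self.request_ts = None            # (clock, id) of our outstanding request
                self.replies = 0
                self.deferred: List[int] = []     # sites whose REPLY we hold back

            def want_cs(self) -> List[Tuple]:
                # broadcast a timestamped REQUEST to the other N-1 sites
                self.clock += 1
                self.requesting, self.request_ts, self.replies = True, (self.clock, self.id), 0
                return [("REQUEST", self.id, self.request_ts, j)
                        for j in range(self.n) if j != self.id]

            def on_request(self, from_id: int, ts: Tuple[int, int]) -> List[Tuple]:
                self.clock = max(self.clock, ts[0]) + 1
                if self.requesting and self.request_ts < ts:
                    self.deferred.append(from_id)   # our request has priority: defer
                    return []
                return [("REPLY", self.id, from_id)]

            def on_reply(self) -> bool:
                self.replies += 1
                return self.replies == self.n - 1   # True: may enter the critical section

            def release_cs(self) -> List[Tuple]:
                self.requesting = False
                out, self.deferred = [("REPLY", self.id, j) for j in self.deferred], []
                return out

        sites = [Site(i, 3) for i in range(3)]
        print(len(sites[0].want_cs()))              # 2 REQUEST messages when N = 3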

    Implementation of an activity coordinator for an activity-based distributed system

    Distributed computing systems offer a number of potential benefits, including:
    - improved fault tolerance and reliability
    - increased processor availability
    - faster response time
    - flexibility of system configuration
    - effective management of geographically distributed resources
    - integration of special-purpose machines into applications
    In order to realize this potential, support systems that aid in the development of distributed programs are needed. An Activity System facilitates the design and implementation of distributed programs: (1) by allowing the programmer to group functionally related objects into an activity (or job) which is recorded within the system; the information stored about relationships between objects may then be used to control their interactions and thus to manage distributed resources; and (2) by effectively eliminating the need for the programmer to deal with the underlying details of inter-process communication; the system handles the establishment of communication links between objects in an activity and controls the routing of messages to activity members. To evaluate the uses of activities in developing distributed programs, I have implemented a portion of such a system, namely an Activity Coordinator, together with the Activity System components and test tools required to verify its functionality. Within the context of an Activity System, the Activity Coordinator provides two key functions, sketched below: (1) it maintains a database of information pertaining to objects and activities, and (2) it handles the routing of activity-related messages. In future versions of the Activity System the Activity Coordinator may also play a more active role in fault recovery. These possibilities will also be discussed.
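
    The two key functions can be sketched in a few lines of Python (ActivityCoordinator, register, and route are illustrative names, not the thesis's interface): a registry mapping each activity to its member objects, and a routing method that delivers an activity-addressed message to every registered member, so senders never manage communication links themselves.

        # Illustrative sketch of the two key functions: a registry of activity
        # members and routing of activity-addressed messages to all of them.
        from collections import defaultdict

        class ActivityCoordinator:
            def __init__(self) -> None:
                self._members = defaultdict(dict)   # activity -> {object name: handler}

            def register(self, activity: str, name: str, handler) -> None:
                self._members[activity][name] = handler

            def route(self, activity: str, message, sender: str = None) -> None:
                # deliver to every member of the activity except the sender
                for name, handler in self._members[activity].items():
                    if name != sender:
                        handler(message)

        coord = ActivityCoordinator()
        coord.register("print-job", "spooler", lambda m: print("spooler got:", m))
        coord.register("print-job", "driver",  lambda m: print("driver got:", m))
        coord.route("print-job", {"type": "start"}, sender="spooler")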

    Compositional verification and specification of refinement for reactive systems in a dense time temporal logic

    Dissertation submitted for the doctoral degree at the Technische Fakultät of Christian-Albrechts-Universität zu Kiel; originally available in German. This thesis introduces a compositional dense-time temporal logic for the composition and refinement of reactive systems. A reactive system is specified by a pair consisting of a machine and a condition on the computations of this machine. In order to compose reactive systems, each step in a computation additionally carries composition information such as “this is a system step”, “this is an environment step”, or “this is a communication step”. Compositionality is achieved by defining a merge operator that merges two steps into one step. Because a dense-time temporal logic is used, refinement can be expressed easily in this logic. Existing proof rules for refinement are reformulated in our formalism. The notion of relative refinement is introduced to handle refinement of systems that are considered correct refinements only under certain conditions. The proof rules for “normal” refinement are extended to handle relative refinement of systems. Relative refinement is used to formalize Dijkstra’s development strategy for the solution of the readers/writers problem and to formalize a development strategy for certain fault-tolerant systems. This development strategy is applied to the development of a fault-tolerant storage system.
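
    As a rough point of reference only (not the thesis's dense-time formulation; Comp, A, and the symbols below are illustrative), refinement and relative refinement are often phrased as computation inclusion, with relative refinement restricting attention to the computations permitted by a condition A:

        \[ \mathit{Impl} \sqsubseteq \mathit{Spec} \;\iff\; \mathrm{Comp}(\mathit{Impl}) \subseteq \mathrm{Comp}(\mathit{Spec}) \]
        \[ \mathit{Impl} \sqsubseteq_{A} \mathit{Spec} \;\iff\; \mathrm{Comp}(\mathit{Impl}) \cap A \subseteq \mathrm{Comp}(\mathit{Spec}) \]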

    Some aspects of the efficient use of multiprocessor control systems

    Computer technology, particularly at the circuit level, is fast approaching its physical limitations. As future needs for greater power from computing systems grow, increases in circuit switching speed (and thus instruction speed) will be unable to match these requirements. Greater power can also be obtained by incorporating several processing units into a single system. This ability to increase the performance of a system by the addition of processing units is one of the major advantages of multiprocessor systems. Four major characteristics of multiprocessor systems have been identified (28) that demonstrate their advantage: throughput, flexibility, availability, and reliability. The additional throughput obtained from a multiprocessor has been mentioned above. This increase in the power of the system can be obtained in a modular fashion, with extra processors being added as greater processing needs arise. The addition of extra processors also has (in general) the desirable advantage of giving a smoother cost-performance curve (63). Flexibility is obtained from the increased ability to construct a system matching the user's requirements at a given time without placing restrictions upon future expansion. With multiprocessor systems, the potential also exists of making greater use of the resources within the system. Availability and reliability are inter-related. Increased availability is achieved, in a well-designed system, by ensuring that processing capabilities can be provided to the user even if one (or more) of the processing units has failed. The service provided, however, will probably be degraded due to the reduction in processing capacity. Increased reliability is obtained by the ability of the processing units to compensate for the failure of one of their number. This recovery may involve complex software checks and a consequent decrease in available power even when all the units are functioning.

    Branching transactions: a transaction model for parallel database systems
