    A fine-grain time-sharing Time Warp system

    Although Parallel Discrete Event Simulation (PDES) platforms relying on the Time Warp (optimistic) synchronization protocol already allow for exploiting parallelism, several techniques have been proposed to further improve performance. These include optimized approaches to state restore, as well as techniques for load balancing or for (dynamically) controlling the degree of speculation, the latter aimed specifically at reducing the incidence of causality errors and the consequent waste of computation. However, in state-of-the-art Time Warp systems event processing is not preemptable, which can prevent the platform from reacting promptly to the injection of higher-priority (i.e., lower-timestamp) events. Delaying the processing of such events may, in turn, increase the incidence of incorrect speculation. In this article we present the design and realization of a fine-grain time-sharing Time Warp system, to be run on multi-core Linux machines, which makes systematic use of event preemption in order to dynamically reassign the CPU to higher-priority events/tasks. Our proposal is based on a truly dual-mode execution, application vs. platform, including timer-interrupt-based support for bringing control back to platform mode, at very fine-grain periods, for possible CPU reassignment. The latter facility is offered by an ad-hoc timer-interrupt management module for Linux, which we release, together with the overall time-sharing support, within the open-source ROOT-Sim platform. An experimental assessment based on the classical PHOLD benchmark and two real-world models shows that our proposal effectively reduces the incidence of causality errors, compared to traditional Time Warp, especially when running at higher degrees of parallelism.
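
    The core mechanism is easiest to picture as a periodic timer tick that asks the running event handler to yield whenever a lower-timestamp event has appeared. The paper implements this with an ad-hoc kernel timer-interrupt module and a true dual application/platform execution mode; the sketch below is only a user-space analogue built on a POSIX interval timer, and names such as maybe_preempt() are illustrative, not ROOT-Sim API.

        #include <signal.h>
        #include <string.h>
        #include <setjmp.h>
        #include <sys/time.h>

        static volatile sig_atomic_t preempt_requested;  /* set by the timer tick */
        static sigjmp_buf platform_ctx;                  /* "platform mode" context */

        static void timer_tick(int sig)
        {
            (void)sig;
            preempt_requested = 1;              /* request a control-flow check */
        }

        /* Called at safe points while an event runs in "application mode": if
         * the tick fired and a lower-timestamp (higher-priority) event is now
         * pending, jump back to platform mode so the CPU can be reassigned. */
        void maybe_preempt(double current_ts, double next_pending_ts)
        {
            if (preempt_requested) {
                preempt_requested = 0;
                if (next_pending_ts < current_ts)
                    siglongjmp(platform_ctx, 1);  /* back to the dispatcher */
            }
        }

        /* Arm the fine-grain tick; the platform dispatcher is assumed to have
         * saved its context beforehand with sigsetjmp(platform_ctx, 1). */
        void start_fine_grain_timer(long usec_period)
        {
            struct sigaction sa;
            struct itimerval it;

            memset(&sa, 0, sizeof sa);
            sa.sa_handler = timer_tick;
            sigaction(SIGALRM, &sa, NULL);

            it.it_interval.tv_sec = 0;
            it.it_interval.tv_usec = usec_period;
            it.it_value = it.it_interval;
            setitimer(ITIMER_REAL, &it, NULL);
        }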

    Process Management in Distributed Operating Systems

    As part of designing and building the Amoeba distributed operating system, we have come up with a simple set of mechanisms for process management that allows downloading, process migration, checkpointing, remote debugging, and emulation of alien operating system interfaces.
    The basic process management facilities are realized by the Amoeba Kernel and can be augmented by user-space services: Debug Service, Load-Balancing Service, Unix-Emulation Service, Checkpoint Service, etc.
    The Amoeba Kernel can produce a representation of the state of a process which can be given to another Kernel, where it is accepted for continued execution. This state consists of the memory contents, in the form of a collection of segments, and a Process Descriptor containing the additional state: program counters, stack pointers, system-call state, etc.
    Careful separation of mechanism and policy has resulted in a compact set of Kernel operations for process creation and management. A collection of user-space services provides process management policies and a simple interface for application programs.
    In this paper we describe these mechanisms as they are being implemented in the Amoeba Distributed System at the Centre for Mathematics and Computer Science in Amsterdam. We believe that the mechanisms described here can also apply to other distributed systems.
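
    The migratable state described above, memory as a collection of segments plus a Process Descriptor holding registers and system-call state, maps naturally onto a pair of structures. The following is a minimal sketch of that layout; the field names are assumptions for illustration, not the actual Amoeba Kernel definitions.

        #include <stddef.h>
        #include <stdint.h>

        struct segment {                 /* one contiguous piece of memory */
            uintptr_t   base;            /* virtual address in the process */
            size_t      length;
            const void *contents;        /* bytes shipped to the target kernel */
        };

        struct thread_state {
            uintptr_t pc;                /* program counter */
            uintptr_t sp;                /* stack pointer */
            int       in_syscall;        /* pending system-call state, if any */
        };

        struct process_descriptor {
            int                  nsegments;
            struct segment      *segments;
            int                  nthreads;
            struct thread_state *threads;
        };

        /* A target kernel accepting this descriptor would map each segment at
         * its recorded base address and resume each thread at its saved pc/sp. */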

    A support for remote process execution in a load-balanced distributed system

    Load distribution and balancing in a workstation-based network involves a number of intricate tasks. Among them, transparent remote process execution is an essential one. This work describes the main problems to be considered when implementing remote process execution and proposes a design for an alternative system that attempts to solve them.
    Eje: Sistemas distribuidos (Distributed Systems track). Red de Universidades con Carreras en Informática (RedUNCI).
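
    As a rough illustration of what transparent remote process execution entails, the sketch below picks the least-loaded host and hands the program to a remote-spawn stub. pick_least_loaded() and remote_spawn() are invented names, and the forwarding of location-dependent system calls back to the home node is only noted in a comment.

        #include <stddef.h>

        struct host { const char *name; double load; };

        /* Assumed RPC stub: ships path/argv to an execution daemon on
         * `hostname` and arranges for location-dependent calls (I/O,
         * signals, etc.) to be serviced transparently by the home node. */
        int remote_spawn(const char *hostname, const char *path,
                         char *const argv[]);

        /* choose the target with the lowest reported load average */
        static const struct host *pick_least_loaded(const struct host *h, int n)
        {
            const struct host *best = &h[0];
            for (int i = 1; i < n; i++)
                if (h[i].load < best->load)
                    best = &h[i];
            return best;
        }

        int run_transparently(const char *path, char *const argv[],
                              const struct host *hosts, int nhosts)
        {
            const struct host *target = pick_least_loaded(hosts, nhosts);
            return remote_spawn(target->name, path, argv);
        }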

    Mentat: An object-oriented macro data flow system

    Mentat, an object-oriented macro data flow system designed to facilitate parallelism in distributed systems, is presented. The macro data flow model is a model of computation similar to the data flow model, with two principal differences: the computational complexity of the actors is much greater than in traditional data flow systems, and there are persistent actors that maintain state information between executions. Mentat combines the object-oriented programming paradigm with the macro data flow model of computation. Mentat programs use a dynamic structure, called a future list, to represent the future of computations.
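
    A future list can be pictured as a chain of placeholders for results that macro data flow actors have not yet produced; consumers block on a placeholder until the producing actor fires. The sketch below shows one plausible shape for such a structure, assuming a threaded runtime; it is illustrative and not Mentat's actual implementation.

        #include <pthread.h>

        struct future {
            pthread_mutex_t lock;
            pthread_cond_t  ready;
            int             resolved;
            void           *value;       /* filled in when the actor fires */
            struct future  *next;        /* futures chain into the future list */
        };

        /* block until the actor backing this future has produced its result */
        void *future_get(struct future *f)
        {
            pthread_mutex_lock(&f->lock);
            while (!f->resolved)
                pthread_cond_wait(&f->ready, &f->lock);
            void *v = f->value;
            pthread_mutex_unlock(&f->lock);
            return v;
        }

        /* called by the runtime when the producing actor completes */
        void future_set(struct future *f, void *v)
        {
            pthread_mutex_lock(&f->lock);
            f->value = v;
            f->resolved = 1;
            pthread_cond_broadcast(&f->ready);
            pthread_mutex_unlock(&f->lock);
        }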

    Pre-run-time scheduling of distributed real-time systems: models and algorithms

    Process migration in UNIX environments

    To support process migration in UNIX environments, the main problem is how to encapsulate the location-dependent features of the system so that a host-independent virtual environment is maintained by the migration handlers on behalf of each migrated process. An object-oriented approach is used to describe the interaction between a process and its environment. More specifically, environmental objects are introduced into UNIX systems to carry out the user-environment interaction. The implementation of the migration handlers is based on both the state consistency criterion and the property consistency criterion.
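
    One way to read the environmental-object idea is as a small operations table that stands between the process and each location-dependent resource, which the migration handlers can rebind after a move so the virtual environment stays host-independent. The vtable layout below is an assumption for illustration, not the paper's actual interface.

        #include <stddef.h>

        struct env_object;

        struct env_ops {                  /* location-dependent operations */
            long (*read)(struct env_object *, void *buf, size_t len);
            long (*write)(struct env_object *, const void *buf, size_t len);
            int  (*rebind)(struct env_object *, const char *new_host);
        };

        struct env_object {
            const struct env_ops *ops;    /* swapped by the migration handler */
            void *state;                  /* e.g. a descriptor on the current host */
        };

        /* After migration, the handler re-establishes each object on the new
         * host so the process keeps the same virtual environment. */
        static int migrate_object(struct env_object *o, const char *new_host)
        {
            return o->ops->rebind(o, new_host);
        }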

    A survey of process migration mechanisms
