8 research outputs found

    Protocolos utilizando Memoria Compartida

    A shared memory system consists of a collection of communicating processes, as in a network system. But instead of sending and receiving messages over a communication channel, the processes perform instantaneous operations on shared data. Models for a shared memory system can be synchronous or asynchronous. In the synchronous model the components are assumed to take their steps simultaneously, that is, execution proceeds in complete synchrony. This model is used to run simulations of real distributed systems. Track: Software Engineering and Databases. Red de Universidades con Carreras en Informática (RedUNCI)
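    The round-based synchrony described above can be illustrated with a small simulation. This is a sketch of the model, not code from the paper; the function and parameter names are ours.

```python
# Minimal sketch of a synchronous shared-memory model: in each round every
# process reads the same snapshot of the shared registers, then all writes
# take effect simultaneously ("complete synchrony").

def run_synchronous(initial, rounds, step):
    """initial: per-process register values.
    step(pid, snapshot) -> new value for process pid's register."""
    regs = list(initial)
    for _ in range(rounds):
        snapshot = list(regs)  # everyone reads the same global state
        regs = [step(pid, snapshot) for pid in range(len(regs))]  # simultaneous writes
    return regs

# Example: each process adopts the maximum value it can see, so after one
# round every register holds the global maximum.
result = run_synchronous([3, 7, 1, 5], rounds=1,
                         step=lambda pid, snap: max(snap))
```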

    Sobre algoritmos distribuidos de exclusión mutua para n procesos

    Distributed applications need to share system resources, and in some cases a process must obtain permission before it can access and use them. Controlling access to resources that may be used by only one process at a time requires a coordination protocol that guarantees this property. Mutual exclusion algorithms are the mechanisms used to control entry to the region that uses the system's resources exclusively. Taking into account the conditions that such an algorithm must satisfy, different protocols are analyzed and a variant of the "Bakery" algorithm that satisfies mutual exclusion is presented. The algorithms are based on the asynchronous shared memory model and use single-writer, multi-reader variables. I Workshop de Procesamiento Distribuido y Paralelo (WPDP). Red de Universidades con Carreras en Informática (RedUNCI)
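    The setting described here, asynchronous shared memory with single-writer multi-reader variables, is exactly that of Lamport's classic Bakery algorithm. The sketch below shows the classic algorithm, not the paper's specific variant; the switch interval is lowered only so the busy-wait demo finishes quickly under CPython's GIL.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # shorten GIL slices so busy-waiting threads make progress fast

N = 3
choosing = [False] * N   # choosing[i] is written only by process i (single-writer)
number = [0] * N         # number[i] is written only by process i, read by all
counter = 0              # the shared resource protected by the lock

def lock(i):
    choosing[i] = True
    number[i] = 1 + max(number)              # take the next "bakery ticket"
    choosing[i] = False
    for j in range(N):
        while choosing[j]:                   # wait until j finishes taking its ticket
            pass
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass                             # lower ticket (ties broken by id) goes first

def unlock(i):
    number[i] = 0

def worker(i, iters=100):
    global counter
    for _ in range(iters):
        lock(i)
        counter += 1                         # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    With mutual exclusion holding, the non-atomic `counter += 1` in the critical section never loses an update, so the final count equals the total number of entries.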

    Sharing memory in distributed systems

    We propose an algorithm for simulating atomic registers, test-and-set, fetch-and-add, and read-modify-write registers in a message passing system. The algorithm is fault-tolerant and works correctly in the presence of up to ⌈N/2⌉ − 1 node failures, where N is the number of processors in the system. The high resilience of the algorithm is obtained by using randomized consensus algorithms and a robust communication primitive. This primitive allows a processor to exchange local information with a majority of processors in a consistent way, and therefore to take decisions safely. The simulator makes it possible to translate algorithms written for the shared memory model into algorithms for the message passing model. With some minor modifications the algorithm can be used to robustly simulate shared queues, shared stacks, etc. (Abstract shortened with permission of author.)
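    The majority-exchange idea can be illustrated with a single-threaded quorum simulation: because any two majorities of N replicas intersect, a read always meets at least one replica touched by the latest write. All names below are ours, and replicas are plain dictionaries rather than message-passing nodes; this is not the paper's algorithm.

```python
import random

N = 5
replicas = [{"ts": 0, "val": None} for _ in range(N)]

def majority():
    # contact an arbitrary majority; a minority (up to ceil(N/2) - 1 nodes) may be down
    return random.sample(range(N), N // 2 + 1)

def write(value):
    quorum = majority()
    ts = max(replicas[i]["ts"] for i in quorum) + 1  # strictly newer than anything a quorum saw
    for i in quorum:
        if ts > replicas[i]["ts"]:
            replicas[i] = {"ts": ts, "val": value}

def read():
    quorum = majority()
    latest = max((replicas[i] for i in quorum), key=lambda r: r["ts"])
    for i in quorum:                                  # write-back phase: helps later readers
        if latest["ts"] > replicas[i]["ts"]:
            replicas[i] = dict(latest)
    return latest["val"]

write("a")
write("b")
```

    Because the quorum of `write("b")` intersects that of `write("a")`, the second write picks a strictly larger timestamp, and any subsequent `read()` returns `"b"` regardless of which majority it samples.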

    On the Complexity of Implementing Certain Classes of Shared Objects

    We consider shared memory systems in which asynchronous processes cooperate with each other by communicating via shared data objects, such as counters, queues, stacks, and priority queues. The common approach to implementing such shared objects is based on locking: To perform an operation on a shared object, a process obtains a lock, accesses the object, and then releases the lock. Locking, however, has several drawbacks, including convoying, priority inversion, and deadlocks. Furthermore, lock-based implementations are not fault-tolerant: if a process crashes while holding a lock, other processes can end up waiting forever for the lock. Wait-free linearizable implementations were conceived to overcome most of the above drawbacks of locking. A wait-free implementation guarantees that if a process repeatedly takes steps, then its operation on the implemented data object will eventually complete, regardless of whether other processes are slow, or fast, or have crashed. In this thesis, we first present an efficient wait-free linearizable implementation of a class of object types, called closed and closable types, and then prove time and space lower bounds on wait-free linearizable implementations of another class of object types, called perturbable types. (1) We present a wait-free linearizable implementation of n-process closed and closable types (such as swap, fetch&add, fetch&multiply, and fetch&L, where L is any of the boolean operations and, or, or complement) using registers that support load-link (LL) and store-conditional (SC) as base objects. The time complexity of the implementation grows linearly with contention, but is never more than O(log^2 n). We believe that this is the first implementation of a class of types (as opposed to a specific type) to achieve a sub-linear time complexity.
(2) We prove linear time and space lower bounds on the wait-free linearizable implementations of n-process perturbable types (such as increment, fetch&add, modulo k counter, LL/SC bit, k-valued compare&swap (for any k ≥ n), single-writer snapshot) that use resettable consensus and historyless objects (such as registers that support read and write) as base objects. This improves on some previously known Ω(√n) space complexity lower bounds. It also shows the near space optimality of some known wait-free linearizable implementations.
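    A minimal sketch of the LL/SC pattern that such implementations build on: fetch&add written as a load-link/store-conditional retry loop. The LL/SC register here is a single-threaded simulation with names of our choosing; it is not the thesis's O(log^2 n) construction.

```python
# Simulated LL/SC register: store_conditional succeeds only if no write has
# occurred since the caller's load_link, and every successful write
# invalidates all outstanding links.

class LLSCRegister:
    def __init__(self, value=0):
        self.value = value
        self._links = set()       # which "processes" currently hold a valid link

    def load_link(self, pid):
        self._links.add(pid)
        return self.value

    def store_conditional(self, pid, new):
        if pid in self._links:    # no intervening write since pid's load_link
            self.value = new
            self._links.clear()   # invalidate everyone else's links
            return True
        return False

def fetch_and_add(reg, pid, delta):
    while True:                   # classic LL/SC retry loop
        old = reg.load_link(pid)
        if reg.store_conditional(pid, old + delta):
            return old

r = LLSCRegister(10)
old = fetch_and_add(r, pid=0, delta=5)
```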

    The Problem of Mutual Exclusion: A New Distributed Solution

    In both centralized and distributed systems, processes cooperate and compete with each other to access the system resources. Some of these resources must be used exclusively. It is then required that only one process access the shared resource at a given time. This is referred to as the problem of mutual exclusion. Several synchronization mechanisms have been proposed to solve this problem. In this thesis, an effort has been made to compile most of the existing mutual exclusion solutions for both shared memory and message-passing based systems. A new distributed algorithm, which uses a dynamic information structure, is presented to solve the problem of mutual exclusion. It is proved to be free from both deadlock and starvation. This solution is shown to be economical in terms of the number of message exchanges required per critical section execution. Procedures for recovery from both site and link failures are also given.
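    The thesis's dynamic-information-structure algorithm is not reproduced here; as a point of comparison for the message-count metric it mentions, the classic permission-based scheme (Ricart-Agrawala style) needs 2(N − 1) message exchanges per critical-section entry in an N-process system.

```python
# Illustrative baseline for "message exchanges per critical section":
# the requester sends a timestamped REQUEST to each of the other N - 1
# processes, and each of them eventually sends back one REPLY.

def messages_per_entry(n):
    requests = n - 1    # REQUEST broadcast to every other process
    replies = n - 1     # one REPLY (possibly deferred) from each peer
    return requests + replies

count = messages_per_entry(5)
```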

    Efficient Synchronization on Multiprocessors with Shared Memory

    A new formalism is given for read-modify-write (RMW) synchronization operations. This formalism is used to extend the memory reference combining mechanism, introduced in the NYU Ultracomputer, to arbitrary RMW operations. A formal correctness proof of this combining mechanism is given. General requirements for the practicality of combining are discussed. Combining is shown to be practical for many useful memory access operations. This includes memory updates of the form mem_val := mem_val op val, where op need not be associative, and a variety of synchronization primitives. The computation involved is shown to be closely related to parallel prefix evaluation.

    1. INTRODUCTION
    Shared memory provides convenient communication between processes in a tightly coupled multiprocessing system. Shared variables can be used for data sharing, information transfer between processes, and, in particular, for coordination and synchronization. Constructs such as the semaphore introduced by Dijkstra in ...
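    The combining idea, and its connection to parallel prefix, can be sketched for the fetch&add case: two requests that meet at a network switch are merged into one memory access, and the single reply is split back into the two original answers. This is an illustrative sketch with our own names, not the paper's formalism.

```python
# One uncombined RMW: return the old value, apply the update.
def fetch_and_add(memory, addr, delta):
    old = memory[addr]
    memory[addr] += delta
    return old

# Combined form: the switch merges (add d1) and (add d2) into (add d1 + d2),
# issues ONE memory access, then "de-combines" the reply. The first request
# sees the old value; the second sees the old value plus the first's delta,
# i.e. a prefix of the applied updates.
def combined_fetch_and_add(memory, addr, d1, d2):
    old = fetch_and_add(memory, addr, d1 + d2)
    return old, old + d1

mem = {0: 100}
r1, r2 = combined_fetch_and_add(mem, 0, 3, 4)
```

    The final memory contents and both return values match what two separate, serialized fetch&add operations would have produced, which is exactly the correctness condition combining must satisfy.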