51 research outputs found

    Lazy release consistency for software distributed shared memory

    This paper considers release consistency, a relaxed memory consistency model that reduces the impact of remote memory access latency in both software and hardware distributed shared memory. To reduce the number of messages and the amount of data exchanged for remote memory access, a lazy release consistency algorithm is introduced: it pulls modifications across the interconnect only when necessary. Trace-driven simulation using the SPLASH benchmarks indicates that lazy release consistency reduces both the number of messages and the amount of data transferred between processors. These reductions are especially significant for programs that exhibit false sharing and make extensive use of locks.
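
    As a rough, hypothetical illustration of the lazy scheme described above (not the paper's implementation), the Python sketch below contrasts eager propagation at every lock release with lazy propagation at the next acquire; the Node class, the round-robin lock hand-off, and the message counting are all illustrative assumptions.

        # Illustrative sketch: eager vs. lazy propagation of updates made
        # inside a critical section. All names here are hypothetical.

        class Node:
            def __init__(self, node_id):
                self.id = node_id
                self.pages = {}      # local view: page -> value
                self.pending = {}    # dirty pages not yet propagated

            def write(self, page, value):
                self.pages[page] = value
                self.pending[page] = value

            # Eager release consistency: push all modifications to every
            # other node at lock release (one message per remote node).
            def release_eager(self, others, stats):
                for other in others:
                    other.pages.update(self.pending)
                    stats["messages"] += 1
                self.pending.clear()

            # Lazy release consistency: nothing happens at release; the next
            # acquirer pulls the modifications from the last releaser, so
            # data crosses the interconnect only when actually needed.
            def acquire_lazy(self, last_releaser, stats):
                if last_releaser is not None and last_releaser.pending:
                    self.pages.update(last_releaser.pending)
                    # inherit not-yet-global updates so later acquirers still
                    # see them (a crude stand-in for LRC's write notices)
                    self.pending.update(last_releaser.pending)
                    stats["messages"] += 1

        def run(mode, n_nodes=4, n_sections=8):
            nodes = [Node(i) for i in range(n_nodes)]
            stats = {"messages": 0}
            last_releaser = None
            for step in range(n_sections):
                holder = nodes[step % n_nodes]        # lock passes round-robin
                if mode == "lazy":
                    holder.acquire_lazy(last_releaser, stats)
                holder.write(page=step % 2, value=step)
                if mode == "eager":
                    holder.release_eager([n for n in nodes if n is not holder], stats)
                last_releaser = holder
            return stats["messages"]

        if __name__ == "__main__":
            print("eager messages:", run("eager"))    # 24 with the defaults
            print("lazy messages: ", run("lazy"))     # 7 with the defaults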

    Deterministic Consistency: A Programming Model for Shared Memory Parallelism

    The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "deterministic consistency" (DC), a parallel programming model as easy to understand as the "parallel assignment" construct in sequential languages such as Perl and JavaScript, where concurrent threads always read their inputs before writing shared outputs. DC supports common data- and task-parallel synchronization abstractions such as fork/join and barriers, as well as non-hierarchical structures such as producer/consumer pipelines and futures. A preliminary prototype suggests that software-only implementations of DC can run applications written for popular parallel environments such as OpenMP with low (<10%) overhead for some applications.
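
    As a toy sketch of the "read inputs before writing shared outputs" idea (not the paper's runtime; the names below are made up), each forked task in the following Python example works on a snapshot of the shared state, and its writes become visible only at the join, merged in a fixed task order, so the result is independent of thread scheduling.

        # Forked tasks see a snapshot of shared state; writes are merged only
        # at the join, in task order, so the outcome is deterministic even
        # though execution is concurrent. All names are illustrative.

        import copy
        from concurrent.futures import ThreadPoolExecutor

        def det_fork_join(shared, tasks):
            snapshot = copy.deepcopy(shared)
            with ThreadPoolExecutor() as pool:
                futures = [pool.submit(task, copy.deepcopy(snapshot))
                           for task in tasks]
                updates = [f.result() for f in futures]   # join point
            for update in updates:        # fixed, scheduler-independent order
                shared.update(update)
            return shared

        if __name__ == "__main__":
            state = {"x": 1, "y": 2, "sum": 0}
            # Parallel-assignment flavour: every task reads the pre-fork
            # values, so x and y can be swapped without an explicit temporary.
            tasks = [
                lambda s: {"x": s["y"]},
                lambda s: {"y": s["x"]},
                lambda s: {"sum": s["x"] + s["y"]},
            ]
            print(det_fork_join(state, tasks))
            # Always {'x': 2, 'y': 1, 'sum': 3}, regardless of interleaving.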

    Towards compliant distributed shared memory

    Copyright © 2002 IEEE. There exists a wide spectrum of coherency models for use in distributed shared memory (DSM) systems. The choice of model for an application should ideally be based on the application's data access patterns and phase changes. However, in current systems, most, if not all, of the parameters of the coherency model are fixed in the underlying DSM system. This forces the application either to structure its computations to suit the underlying model or to endure an inefficient coherency model. This paper introduces a unique approach to the provision of DSM based on the idea of compliance. Compliance allows an application to specify how the system should most effectively operate through a separation between mechanism, provided by the underlying system, and policy, provided by the application. This is in direct contrast with the traditional view that an application must mold itself to the hard-wired choices that its operating platform has made. The contribution of this work is the definition and implementation of an architecture for compliant distributed coherency management. The efficacy of this architecture is illustrated through a worked example.
    Falkner, K. E.; Detmold, H.; Munro, D. S.; Olds, T
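
    To make the mechanism/policy split concrete, here is a small hypothetical sketch in Python (not the architecture defined in the paper): a CompliantDSM "mechanism" layer stores data and ships messages, while the application supplies a per-region policy object that decides between write-update and write-invalidate behaviour. All class and method names are invented for illustration.

        # Mechanism layer stores data and sends messages; the application's
        # policy object decides how writes are propagated. Names are made up.

        class WriteUpdatePolicy:
            def on_write(self, dsm, region, value):
                dsm.push_update(region, value)    # eagerly propagate the value

        class WriteInvalidatePolicy:
            def on_write(self, dsm, region, value):
                dsm.push_invalidate(region)       # peers refetch on next read

        class CompliantDSM:
            def __init__(self):
                self.store = {}
                self.peers = []                   # other CompliantDSM instances
                self.policies = {}                # region -> policy object

            def set_policy(self, region, policy): # the "compliance" hook
                self.policies[region] = policy

            def write(self, region, value):
                self.store[region] = value
                self.policies[region].on_write(self, region, value)

            def read(self, region):
                if self.store.get(region) is None:  # invalidated: refetch
                    self.store[region] = self.peers[0].store.get(region)
                return self.store[region]

            # --- mechanism primitives used by the policies ---
            def push_update(self, region, value):
                for p in self.peers:
                    p.store[region] = value

            def push_invalidate(self, region):
                for p in self.peers:
                    p.store[region] = None

        if __name__ == "__main__":
            a, b = CompliantDSM(), CompliantDSM()
            a.peers, b.peers = [b], [a]
            # The application, not the DSM system, picks the policy per region.
            a.set_policy("hot_counter", WriteUpdatePolicy())
            a.set_policy("big_buffer", WriteInvalidatePolicy())
            a.write("hot_counter", 42)
            a.write("big_buffer", "payload")
            print(b.read("hot_counter"))   # 42, pushed eagerly
            print(b.read("big_buffer"))    # "payload", fetched after invalidation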

    Modelos de consistencia y protocolos de coherencia en DVSM

    Distributed Shared Memory (DSM) systems are an ideal vehicle for parallel programming because of the programming convenience that shared memory offers and the scalability of distributed systems. The challenge in building a DSM is to achieve good performance over a wide range of parallel programs without requiring programmers to restructure their shared-memory code. In software implementations of these systems (DVSM), there is a tendency toward a large amount of inter-processor communication to keep memory consistent. Since the first DVSM systems, various alternatives have been applied to relieve this performance bottleneck. Most of them concentrate on memory consistency models, i.e., they define how shared memory appears to the programmer and determine the interface between the programmer and the system [11]. One trend among these alternatives is the use of relaxed models, which increase protocol complexity but reduce network traffic while still keeping memory consistent. Examples are lazy release consistency (LRC) [1] in TreadMarks [7] and scope consistency (ScC) [2] in JIAJIA v1.1 [5]. Other implementations try to reduce traffic by refining memory coherence protocols, such as the home-migration protocol in JIAJIA v2.1 [8] and the migrating-home protocol in JUMP [4].
    Track: Networks, Architecture, Distributed Systems and Real Time. Red de Universidades con Carreras en Informática (RedUNCI)
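
    As a rough sketch of the home-migration idea mentioned above (this is not JIAJIA's or JUMP's actual protocol; the class name and migration threshold below are invented), each page has a home node holding its master copy, remote writers ship their updates to that home, and migrating the home to the page's most active writer removes that traffic.

        # Home-based coherence with home migration. All names and thresholds
        # here are illustrative, not taken from JIAJIA or JUMP.

        from collections import Counter

        class HomeBasedPage:
            def __init__(self, home, migrate_after=3):
                self.home = home             # node id of the master copy
                self.writes = Counter()      # writer id -> remote write count
                self.remote_messages = 0
                self.migrate_after = migrate_after

            def write(self, writer):
                if writer == self.home:
                    return                   # local write: no coherence traffic
                self.remote_messages += 1    # update/diff sent to the home node
                self.writes[writer] += 1
                # Move the home to a node that keeps writing the page remotely.
                if self.writes[writer] >= self.migrate_after:
                    self.home = writer
                    self.writes.clear()

        if __name__ == "__main__":
            page = HomeBasedPage(home=0)
            for _ in range(10):              # node 1 repeatedly writes the page
                page.write(writer=1)
            # Only the first few writes cross the network; after migration the
            # page is homed at node 1 and its writes become local.
            print("home:", page.home, "remote messages:", page.remote_messages)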

    Fisheye Consistency: Keeping Data in Synch in a Georeplicated World

    Over the last thirty years, numerous consistency conditions for replicated data have been proposed and implemented. Popular examples of such conditions include linearizability (or atomicity), sequential consistency, causal consistency, and eventual consistency. These consistency conditions are usually defined independently from the computing entities (nodes) that manipulate the replicated data; i.e., they do not take into account how computing entities might be linked to one another, or geographically distributed. To address this gap, as a first contribution, this paper introduces the notion of a proximity graph between computing nodes. If two nodes are connected in this graph, their operations must satisfy a strong consistency condition, while the operations invoked by other nodes are allowed to satisfy a weaker condition. The second contribution is the use of such a graph to provide a generic approach to the hybridization of data consistency conditions within the same system. We illustrate this approach on sequential consistency and causal consistency, and present a model in which all data operations are causally consistent, while operations by neighboring processes in the proximity graph are sequentially consistent. The third contribution of the paper is the design and proof of a distributed algorithm based on this proximity graph, which combines sequential consistency and causal consistency (the resulting condition is called fisheye consistency). In doing so, the paper not only extends the domain of consistency conditions, but also provides a generic, provably correct solution of direct relevance to modern georeplicated systems.
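
    As a minimal sketch of the proximity-graph idea (not the paper's algorithm; the node names and edge set below are invented), operations by nodes that are neighbours in the graph are required to satisfy the strong condition, here sequential consistency, while all other pairs only need the weaker, causal condition.

        # Proximity graph deciding which consistency condition applies to a
        # pair of nodes. Node names and edges are purely illustrative.

        PROXIMITY_EDGES = {                  # e.g. replicas in the same region
            frozenset({"paris", "london"}),
            frozenset({"tokyo", "seoul"}),
        }

        def required_consistency(node_a, node_b):
            """Strong condition for neighbours, weaker one for everyone else."""
            if node_a == node_b or frozenset({node_a, node_b}) in PROXIMITY_EDGES:
                return "sequential"          # totally ordered w.r.t. each other
            return "causal"                  # only causality must be preserved

        if __name__ == "__main__":
            for pair in [("paris", "london"), ("paris", "tokyo"), ("tokyo", "seoul")]:
                print(pair, "->", required_consistency(*pair))
            # ('paris', 'london') -> sequential   (neighbours: strong ordering)
            # ('paris', 'tokyo') -> causal        (distant: weaker suffices)
            # ('tokyo', 'seoul') -> sequential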