    Simplified Distributed Programming with Micro Objects

    Developing large-scale distributed applications can be a daunting task. Object-based environments have attempted to alleviate problems by providing distributed objects that look like local objects. We advocate that this approach has actually only made matters worse, as the developer needs to be aware of many intricate internal details in order to adequately handle partial failures. The result is an increase in application complexity. We present an alternative in which distribution transparency is lessened in favor of clearer semantics. In particular, we argue that a developer should always be offered the unambiguous semantics of local objects, and that distribution comes from copying those objects to where they are needed. We claim that it is often sufficient to provide only small, immutable objects, along with facilities to group objects into clusters. Comment: In Proceedings FOCLASA 2010, arXiv:1007.499
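
    A rough sense of the micro-object idea, where distribution comes from copying small immutable objects instead of referencing them remotely, is sketched below in Python. The names (MicroObject, Cluster, ship_to) and the use of pickle as a transport format are illustrative assumptions, not taken from the paper.

        # Sketch of "copy, don't reference": objects are small, immutable values that
        # are serialized and shipped to wherever they are needed, so the receiver
        # always works on a plain local object. All names here are illustrative.
        import pickle
        from dataclasses import dataclass

        @dataclass(frozen=True)              # frozen => immutable, copies never diverge
        class MicroObject:
            key: str
            payload: bytes

        @dataclass(frozen=True)
        class Cluster:                       # a named group of micro objects shipped as one unit
            name: str
            members: tuple

        def ship_to(send, obj):
            """Copy an object to another node by value; no remote reference is left behind."""
            send(pickle.dumps(obj))          # 'send' is any transport callable (socket, queue, ...)

        # The receiving side simply deserializes and uses its own local copy:
        #     local_copy = pickle.loads(raw_bytes)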

    A distributed programming environment for Ada

    Despite considerable commercial exploitation of fault tolerance systems, significant and difficult research problems remain in such areas as fault detection and correction. A research project is described which constructs a distributed computing test bed for loosely coupled computers. The project is constructing a tool kit to support research into distributed control algorithms, including a distributed Ada compiler, a distributed debugger, test harnesses, and environment monitors. The Ada compiler is being written in Ada and will implement distributed computing at the subsystem level. The design goal is to provide a variety of control mechanisms for distributed programming while retaining total transparency at the code level.

    The ISIS Project: Real Experience with a Fault Tolerant Programming System

    The ISIS project has developed a distributed programming toolkit and a collection of higher-level applications based on these tools. ISIS is now in use at more than 300 locations worldwide. The lessons (and surprises) gained from this experience with the real world are discussed.

    Distributed Programming with Shared Data

    Until recently, at least one thing was clear about parallel programming: tightly coupled (shared memory) machines were programmed in a language based on shared variables, and loosely coupled (distributed) systems were programmed using message passing. The explosive growth of research on distributed systems and their languages, however, has led to several new methodologies that blur this simple distinction. Operating system primitives (e.g., problem-oriented shared memory, Shared Virtual Memory, the Agora shared memory) and languages (e.g., Concurrent Prolog, Linda, Emerald) for programming distributed systems have been proposed that support the shared variable paradigm without the presence of physical shared memory. In this paper we will look at the reasons for this evolution, the resemblances and differences among these new proposals, and the key issues in their design and implementation. It turns out that many implementations are based on replication of data. We take this idea one step further, and discuss how automatic replication (initiated by the run-time system) can be used as a basis for a new model, called the shared data-object model, whose semantics are similar to the shared variable model. Finally, we discuss the design of a new language for distributed programming, Orca, based on the shared data-object model.
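
    To make the shared data-object idea concrete, the following minimal Python sketch shows an object whose read operations are served from a local replica and whose write operations are applied to every replica atomically. The SharedObject class, the lock, and the in-process list of replicas are assumptions for illustration; they are not Orca's actual run-time mechanism.

        # Minimal sketch of a replicated shared data-object: reads use a local copy,
        # writes are applied to every replica under a lock so each operation is atomic.
        # This illustrates the idea only, not Orca's real run-time system.
        import copy
        import threading

        class SharedObject:
            def __init__(self, initial_state, n_replicas=3):
                self._replicas = [copy.deepcopy(initial_state) for _ in range(n_replicas)]
                self._lock = threading.Lock()      # stands in for the atomicity guarantee

            def read(self, replica_id, op):
                """Read operation: served from a local replica, no communication needed."""
                with self._lock:
                    return op(self._replicas[replica_id])

            def write(self, op):
                """Write operation: 'broadcast' by applying op to all replicas."""
                with self._lock:
                    for state in self._replicas:
                        op(state)

        # Example: a shared counter object with an atomic increment.
        counter = SharedObject({"value": 0})
        counter.write(lambda s: s.__setitem__("value", s["value"] + 1))
        print(counter.read(0, lambda s: s["value"]))   # -> 1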

    Parallel and Distributed Programming in .NET

    This bachelor thesis deals with different ways of creating parallel or distributed applications, with a focus on MPI. It also describes Kaira, a tool under development for modeling parallel and distributed applications. The second part of the thesis examines cloud computing, in particular a possible extension of Kaira to generate code for cloud solutions on the .NET platform.
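
    For readers unfamiliar with the MPI model the thesis builds on, the following minimal point-to-point example uses the mpi4py Python bindings; the choice of Python/mpi4py is illustrative only, since the thesis itself works with MPI and a .NET-based cloud extension of Kaira.

        # Minimal MPI point-to-point example (mpi4py): rank 0 sends a message,
        # rank 1 receives it. Run with, e.g.: mpiexec -n 2 python mpi_demo.py
        # Python/mpi4py is used here purely for illustration.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        if rank == 0:
            data = {"payload": [1, 2, 3]}
            comm.send(data, dest=1, tag=11)        # pickled and shipped to rank 1
        elif rank == 1:
            data = comm.recv(source=0, tag=11)     # blocks until the message arrives
            print(f"rank 1 received: {data}")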

    Experience with Distributed Programming in Orca

    Orca is a language for programming parallel applications on distributed computing systems. Although processors in such systems communicate only through message passing and not through shared memory, Orca lets programmers define data types and create instances (objects) of these types, which may be shared among processes. All operations on shared objects are executed atomically. Orca's shared objects are implemented by replicating them in the local memories of the processors. Read operations use the local copies of the object, without doing any interprocess communication. Write operations update all copies using an efficient reliable broadcast protocol. In this paper, we briefly describe the language and its implementation and then report on our experiences in using Orca for three parallel applications: the Traveling Salesman Problem, the All-pairs Shortest Paths problem, and Successive Overrelaxation. These applications have different needs for shared data: TSP greatly benefits from the support for shared data; ASP benefits from the use of broadcast communication, even though it is hidden in the implementation; SOR merely requires point-to-point communication, but can still be implemented in the language by simulating message passing.
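
    The remark that TSP greatly benefits from shared data can be illustrated with a branch-and-bound search that prunes against a globally shared best-tour bound: reads of the bound are cheap (served from a local replica in Orca), while updates are atomic. The SharedBound class and the sequential driver below are illustrative assumptions written in Python, not Orca code.

        # Sketch of why TSP benefits from a shared data-object: every worker prunes its
        # branch-and-bound search against one shared "best tour length so far" object.
        import threading

        class SharedBound:
            def __init__(self):
                self._best = float("inf")
                self._lock = threading.Lock()

            def value(self):                   # read: in Orca this would use a local replica
                return self._best

            def update(self, candidate):       # write: atomic; in Orca, broadcast to all replicas
                with self._lock:
                    self._best = min(self._best, candidate)

        def branch_and_bound(dist, current, visited, cost, bound, start=0):
            if cost >= bound.value():          # prune whole subtrees using the shared bound
                return
            if len(visited) == len(dist):
                bound.update(cost + dist[current][start])   # close the tour, publish the result
                return
            for city in range(len(dist)):
                if city not in visited:
                    branch_and_bound(dist, city, visited | {city},
                                     cost + dist[current][city], bound, start)

        # Tiny 4-city instance; with several workers, all would share the same bound object.
        dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
        bound = SharedBound()
        branch_and_bound(dist, 0, {0}, 0, bound)
        print(bound.value())                   # length of the best tour found (18 here)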