
    Design Considerations for Distributed Scientific Software Systems


    Secure synthesis and activation of protocol translation agents

    Protocol heterogeneity is pervasive and is a major obstacle to effective integration of services in large systems. However, standardization is not a complete answer. Standardized protocols must be general to prevent a proliferation of standards, and can therefore become complex and inefficient. Specialized protocols can be simple and efficient, since they can ignore situations that are precluded by application characteristics. One solution is to maintain agents for translating between protocols. However, n protocol types would require on the order of n² agents, since an agent must exist for each source-destination pair. A better solution is to create agents as needed. This paper examines the issues in the creation and management of protocol translation agents. We focus on the design of Nestor, an environment for synthesizing and managing RPC protocol translation agents. We provide rationale for the translation mechanism and the synthesis environment, with specific emphasis on the security issues arising in Nestor. Nestor has been implemented and manages heterogeneous RPC agents generated using the Cicero protocol construction language and the URPC toolkit.
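
    The combinatorics above are the heart of the argument: with n protocol types, maintaining pairwise agents grows as n², while create-on-demand builds only the pairs actually exercised. Below is a minimal sketch of that create-on-demand idea; all names are invented, and the trivial "synthesis" step stands in for Nestor's real generation of agents from Cicero specifications.

    ```cpp
    // Hypothetical sketch: lazily synthesized protocol translation agents.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    // A "message" is reduced to a tagged string for illustration.
    struct Message {
        std::string protocol;  // e.g. "sunrpc", "xdr-lite"
        std::string payload;
    };

    using Agent = std::function<Message(const Message&)>;

    class AgentCache {
        std::map<std::pair<std::string, std::string>, Agent> cache_;
    public:
        // Return a translator for (src, dst), synthesizing it on first request.
        // A real system would generate actual marshalling code; here "synthesis"
        // just builds a closure that rewrites the protocol tag.
        Agent get(const std::string& src, const std::string& dst) {
            auto key = std::make_pair(src, dst);
            auto it = cache_.find(key);
            if (it == cache_.end()) {
                std::cout << "synthesizing agent " << src << " -> " << dst << "\n";
                Agent a = [dst](const Message& m) {
                    return Message{dst, m.payload};  // placeholder re-encoding
                };
                it = cache_.emplace(key, std::move(a)).first;
            }
            return it->second;
        }
    };

    int main() {
        AgentCache agents;
        Message m{"sunrpc", "call(add, 1, 2)"};
        // Only the pairs actually used are ever built, instead of all n*(n-1).
        Message out = agents.get("sunrpc", "xdr-lite")(m);
        std::cout << out.protocol << ": " << out.payload << "\n";
        agents.get("sunrpc", "xdr-lite")(m);  // second call hits the cache
    }
    ```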

    Distributed C++: Design and implementation

    Distributed C++ is a learning tool developed to investigate distributed programming using the object paradigm. An extension to C++ is designed to enable its use in programming distributed applications. A user-transparent interface is designed and implemented to create and manipulate remote objects on a network of workstations running the Unix operating system. The concept of remote classes is introduced, and remote object invocation is implemented over a remote procedure call mechanism.
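
    A toy illustration of the remote-class idea: a local proxy whose methods marshal the call through an RPC layer, so invocation looks local to the caller. Everything here (RemoteCounter, rpc_call, the wire format) is invented for the sketch; the thesis's actual interface and transport differ.

    ```cpp
    // Hypothetical sketch of a remote class as a local proxy over RPC.
    #include <iostream>
    #include <sstream>
    #include <string>

    // Stand-in for the RPC transport: ship "objectId.method(args)" to a server
    // and return its reply. Here it just echoes, to keep the sketch self-contained.
    std::string rpc_call(const std::string& request) {
        std::cout << "[rpc] " << request << "\n";
        return "42";  // pretend the server computed something
    }

    // Local proxy for a remote Counter object. User code calls add() as if the
    // object were local; the proxy marshals the call instead.
    class RemoteCounter {
        int object_id_;  // names the object in the server's address space
    public:
        explicit RemoteCounter(int id) : object_id_(id) {}
        int add(int amount) {
            std::ostringstream req;
            req << "Counter#" << object_id_ << ".add(" << amount << ")";
            return std::stoi(rpc_call(req.str()));
        }
    };

    int main() {
        RemoteCounter c(7);             // binds to remote object 7
        std::cout << c.add(5) << "\n";  // looks like a local call
    }
    ```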

    Software Packaging

    Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This thesis describes a process called software packaging that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Whereas previous efforts focused solely on integration mechanisms, software packaging provides a context that relates such mechanisms to software integration processes. We demonstrate the value of this approach by reducing the cost of configuring applications whose components are distributed and implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX MAKE by providing a rule-based approach to software integration that is independent of execution environments. (Also cross-referenced as UMIACS-TR-93-56.)
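
    The packaging decision can be read as a search problem: given the types of the components and the translators and adapters available in the environment, find a chain of tools that connects them. A small sketch of that reading follows, with hypothetical type and tool names; it illustrates the flavor of the approach, not the thesis's actual rule engine.

    ```cpp
    // Hypothetical sketch: find a conversion path between component types
    // using the available translators/adapters as graph edges.
    #include <iostream>
    #include <map>
    #include <queue>
    #include <string>
    #include <vector>

    // edge: from-type -> (to-type, tool that performs the step)
    using Graph = std::multimap<std::string, std::pair<std::string, std::string>>;

    std::vector<std::string> plan(const Graph& g, const std::string& from,
                                  const std::string& to) {
        // Breadth-first search yields the shortest tool chain, if one exists.
        std::map<std::string, std::pair<std::string, std::string>> parent;
        std::queue<std::string> q;
        q.push(from);
        parent[from] = {"", ""};
        while (!q.empty()) {
            std::string t = q.front(); q.pop();
            if (t == to) break;
            auto range = g.equal_range(t);
            for (auto it = range.first; it != range.second; ++it) {
                if (!parent.count(it->second.first)) {
                    parent[it->second.first] = {t, it->second.second};
                    q.push(it->second.first);
                }
            }
        }
        std::vector<std::string> chain;
        if (!parent.count(to)) return chain;  // no integration possible
        for (std::string t = to; t != from; t = parent[t].first)
            chain.insert(chain.begin(), parent[t].second);
        return chain;
    }

    int main() {
        Graph g;
        g.emplace("ada-module", std::make_pair("c-stub", "ada2c-adapter"));
        g.emplace("c-stub",     std::make_pair("rpc-service", "stub-generator"));
        for (const auto& tool : plan(g, "ada-module", "rpc-service"))
            std::cout << tool << "\n";  // prints the tool chain to apply
    }
    ```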

    Models of higher-order, type-safe, distributed computation over autonomous persistent object stores

    A remote procedure call (RPC) mechanism permits the calling of procedures in another address space. RPC is a simple but highly effective mechanism for interprocess communication and nowadays enjoys great popularity as a tool for building distributed applications. This popularity is partly a result of its overall simplicity but also partly a consequence of more than 20 years of research in transparent distribution that has failed to deliver systems meeting the expectations of real-world application programmers. During the same 20 years, persistent systems have proved their suitability for building complex database applications by seamlessly integrating features traditionally found in database management systems into the programming language itself. Some research effort has been invested in distributed persistent systems, but the outcomes commonly suffer from the same problems found with transparent distribution. In this thesis I claim that a higher-order persistent RPC is useful for building distributed persistent applications. The proposed mechanism is: realistic in the sense that it uses current technology and tolerates partial failures; understandable by application programmers; and general enough to support the development of many classes of distributed persistent applications. In order to demonstrate the validity of these claims, I propose and have implemented three models for distributed higher-order computation over autonomous persistent stores. Each model has successively exposed new problems, which have then been overcome by the next model. Together, the three models provide a general yet simple higher-order persistent RPC that is able to operate in realistic environments with partial failures. The real strength of this thesis is the demonstration of realism and simplicity. A higher-order persistent RPC was not only implemented but also used by programmers without experience of programming distributed applications. Furthermore, a distributed persistent application has been built using these models which would not have been feasible with a traditional (non-persistent) programming language.
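
    One common way to realize a higher-order RPC in a language without mobile code is to ship a procedure as a registered name plus its captured environment and let the receiver rebuild the closure from its own code base. The sketch below illustrates that general idea only; it is not the thesis's mechanism, and all names are invented.

    ```cpp
    // Hypothetical sketch: marshalling a procedure value as name + environment.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    using Proc = std::function<int(int)>;

    // Registry shared (by construction) between caller and callee: both sides
    // must know the code; only the name and environment cross the wire.
    std::map<std::string, std::function<Proc(int)>> registry = {
        {"add_k",   [](int k) { return Proc([k](int x) { return x + k; }); }},
        {"scale_k", [](int k) { return Proc([k](int x) { return x * k; }); }},
    };

    // Wire form of a procedure value: code name + captured environment.
    struct MarshalledProc {
        std::string name;
        int captured;
    };

    // "Remote" side: reconstruct the closure and apply it.
    int remote_apply(const MarshalledProc& p, int arg) {
        return registry.at(p.name)(p.captured)(arg);
    }

    int main() {
        // Caller passes the procedure (add 10) to a remote service.
        MarshalledProc p{"add_k", 10};
        std::cout << remote_apply(p, 32) << "\n";  // prints 42
    }
    ```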

    The Polylith Software Bus

    We describe a system called POLYLITH that helps programmers prepare and interconnect mixed-language software components for execution in heterogeneous environments. POLYLITH's principal benefit is that programmers are free to implement functional requirements separately from their treatment of interfacing requirements; this means that once an application has been developed for use in one execution environment (such as a distributed network) it can be adapted for reuse in other environments (such as a shared-memory multiprocessor) by automatic techniques. This flexibility is provided without loss of performance. We accomplish this by creating a new run-time organization for software. An abstract decoupling agent, called the software toolbus, is introduced between the system components. Heterogeneity in language and architecture is accommodated since program units are prepared to interface directly to the toolbus, not to other program units. Programmers specify application structure in terms of a module interconnection language (MIL); POLYLITH uses this specification to guide packaging (static interfacing activities such as stub generation, source program adaptation, compilation and linking). At run time, an implementation of the toolbus abstraction may assist in message delivery, name service or system reconfiguration. (Also cross-referenced as UMIACS-TR-90-65.)
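
    A minimal sketch of the decoupling the abstract describes: modules bind handlers to logical interface names on a bus and exchange messages only through it, so no module holds a direct reference to another. The Toolbus class and names below are invented stand-ins for POLYLITH's MIL-driven machinery.

    ```cpp
    // Hypothetical sketch of a software bus decoupling modules by name.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    class Toolbus {
        std::map<std::string, std::function<std::string(const std::string&)>> ports_;
    public:
        // Name service: a module publishes an interface under a logical name.
        void bind(const std::string& name,
                  std::function<std::string(const std::string&)> handler) {
            ports_[name] = std::move(handler);
        }
        // Message delivery: callers address interfaces, never modules.
        std::string send(const std::string& name, const std::string& msg) {
            return ports_.at(name)(msg);
        }
    };

    int main() {
        Toolbus bus;
        // A "module" written in any language could sit behind this binding;
        // here it's a local lambda to keep the sketch self-contained.
        bus.bind("spellcheck", [](const std::string& text) {
            return "checked: " + text;
        });
        // Another module uses the service by logical name only. Re-targeting
        // the application means rebinding names, not editing callers.
        std::cout << bus.send("spellcheck", "distributed systems") << "\n";
    }
    ```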

    Toward Optimizing Distributed Programs Directed by Configurations

    Networks of workstations are now viable environments for running distributed and parallel applications. Recent advances in software interconnection technology enable programmers to prepare applications to run in dynamically changing environments, because module interconnection is treated as an intellectual activity distinct from, and isolated from, the implementation of individual modules. But the question remains of how to optimize the performance of those applications for a given execution environment: how can developers realize performance gains without paying a high programming cost to specialize their application for the target environment? Interconnection technology has allowed programmers to tailor and tune their applications for distributed environments, but the traditional approach has favored graceful, seamless integration of software components over performance.
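
    A tiny sketch of what configuration-directed optimization can mean in practice: the same call site binds to a cheap local call when the configuration co-locates two modules, and to an RPC path otherwise. All names and the configuration format are invented for illustration.

    ```cpp
    // Hypothetical sketch: binding a call site according to the configuration.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    int local_square(int x) { return x * x; }

    int rpc_square(int x) {
        std::cout << "[rpc to remote host]\n";  // stand-in for real marshalling
        return x * x;
    }

    int main() {
        // The configuration maps modules to hosts; it drives binding decisions.
        std::map<std::string, std::string> config = {
            {"caller", "hostA"}, {"square", "hostA"}  // co-located here
        };
        std::function<int(int)> square =
            (config["caller"] == config["square"]) ? local_square : rpc_square;
        std::cout << square(9) << "\n";  // no RPC overhead under this config
    }
    ```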

    A Tiger Compiler for the Cell Broadband Engine Architecture

    The modern computing industry tends to build integrated circuits with multiple energy-efficient cores instead of ramping up the clock speed of a single processing unit. While each core may not run as fast as in the single-core model, such an architecture allows more jobs to be handled in parallel and also provides better overall performance. Asymmetric multiprocessing, also known as heterogeneous multiprocessing, involves multiple processors that differ architecturally from one another, especially where each processor has its own memory space. Under power limitations, this design can provide better performance than that attained through symmetric multiprocessing. However, the heterogeneous nature adds difficulty to programming: each architecture requires its own program code, and programmers also need to explicitly transfer code and data between processors. This study describes the implementation of TigC, a compiler for the pedagogic Tiger language targeting the Cell Broadband Engine, an asymmetric multiprocessing platform jointly developed by Sony, Toshiba and IBM. The problem above is solved by introducing multiple backends for the Tiger language, along with a remote call stub (RCS) generator. Functions are compiled for different architectures, and calls across architectures are linked automatically through the stubs. The RCS takes care of the execution context switch and hides the details of argument and return value transfer. TigC simplifies the programming and building procedures. It also provides a high-level view of the whole program's execution for future optimization, because all of the source files are processed by a single compiler. As an example, the possible optimization of data transfer during remote calls is investigated here.
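
    A rough sketch of what a generated remote call stub does: marshal arguments into a transfer area, hand control to the other processor's dispatcher, and unmarshal the result. The real Cell implementation uses DMA and mailboxes; here the cross-core step is a plain function call, and all names are invented.

    ```cpp
    // Hypothetical sketch of a caller-side remote call stub (RCS).
    #include <cstring>
    #include <iostream>

    // Transfer buffer standing in for the PPE<->SPE argument/return area.
    struct TransferArea {
        int func_id;
        unsigned char args[32];
        unsigned char ret[8];
    };

    // "SPE-side" implementation, compiled for that architecture in a real system.
    static int spe_dot3(const int* a, const int* b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // "SPE-side" dispatcher: unpacks arguments and runs the requested function.
    static void spe_dispatch(TransferArea* t) {
        if (t->func_id == 1) {
            int a[3], b[3];
            std::memcpy(a, t->args, sizeof a);
            std::memcpy(b, t->args + sizeof a, sizeof b);
            int r = spe_dot3(a, b);
            std::memcpy(t->ret, &r, sizeof r);
        }
    }

    // Caller-side stub, as an RCS generator might emit it: same signature as a
    // local call, with marshalling and the cross-core transfer hidden inside.
    int dot3(const int a[3], const int b[3]) {
        TransferArea t{};
        t.func_id = 1;
        std::memcpy(t.args, a, 3 * sizeof(int));
        std::memcpy(t.args + 12, b, 3 * sizeof(int));
        spe_dispatch(&t);  // a real stub would start/resume the SPE here
        int r;
        std::memcpy(&r, t.ret, sizeof r);
        return r;
    }

    int main() {
        int a[3] = {1, 2, 3}, b[3] = {4, 5, 6};
        std::cout << dot3(a, b) << "\n";  // prints 32, via the "remote" path
    }
    ```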