
    DYNAMIC FILE MIGRATION IN DISTRIBUTED COMPUTER SYSTEMS

    In a distributed computer system, files are shared by both local and remote users for query and update purposes. A user performing data-processing activities tends to reference the same file for some time, and when the referenced file is stored remotely, large amounts of communication traffic are generated. For example, when a customer is making a travel plan, an airline reservation database might be accessed repeatedly from a remote operation site, with the inquiries probably all made within the time of an ordinary telephone conversation. In many recent distributed computer systems, file migration operations are incorporated into the procedures for processing remote file-access requests: a file may be duplicated or moved to the requesting site to reduce communication traffic. As a result, the system faces dynamic file placement decisions governed by a file migration policy. In particular, a file migration policy is expressed as IF-THEN rules that specify the file migration operations to be performed in each viable system state. Under such a policy, file migration operations are triggered when the specified conditions are satisfied, and thus respond dynamically to system needs. Because of the dynamic behavior of such systems, deriving effective file migration policies is extremely complex and requires elaborate analysis. This paper studies the impact of file migration operations on system performance and develops automatic mechanisms for incorporating file migrations into system operations. The mechanisms include optimization models, formulated as Markov decision models, for deriving optimal file migration policies at system design or redesign points, and heuristic rules that generate adaptive file migration decisions for individual file-access requests. The trade-off between these two types of mechanisms is that of performance versus implementation complexity: the optimization analysis not only generates the best possible solutions but also provides insight into the problem structure, whereas the rationale for the heuristics is their simplicity of implementation and acceptable performance.
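    To make the Markov-decision formulation concrete, here is a minimal sketch that derives a migration policy by value iteration over a toy two-site model. The request probabilities, costs, and discount factor below are illustrative assumptions, not the paper's parameters.

    ```python
    # A toy two-site Markov decision model for file migration, solved by value
    # iteration. All numbers (costs, request mix, discount) are assumptions
    # made up for illustration; they are not taken from the paper.

    REMOTE_COST = 5.0    # assumed cost of serving one request remotely
    MIGRATE_COST = 20.0  # assumed one-time cost of moving the file
    GAMMA = 0.9          # discount factor

    SITES = ("A", "B")
    P_REQUEST = {"A": 0.8, "B": 0.2}  # assumed request mix: site A dominates

    def value_iteration(iters=500):
        """Return expected discounted cost and best action for each state."""
        v = {s: 0.0 for s in SITES}  # state = site currently holding the file
        policy = {}
        for _ in range(iters):
            new_v = {}
            for holder in SITES:
                # Action 1: always serve remote requests over the network.
                stay = sum(p * ((0.0 if req == holder else REMOTE_COST)
                                + GAMMA * v[holder])
                           for req, p in P_REQUEST.items())
                # Action 2: migrate the file to whichever site requests it.
                move = sum(p * ((0.0 if req == holder else MIGRATE_COST)
                                + GAMMA * v[req])
                           for req, p in P_REQUEST.items())
                new_v[holder], policy[holder] = min((stay, "serve remotely"),
                                                    (move, "migrate"))
            v = new_v
        return v, policy

    v, policy = value_iteration()
    for s in SITES:
        print(f"file at {s}: cost {v[s]:5.1f}, on remote request -> {policy[s]}")
    ```

    With these assumed numbers the iteration converges to an intuitive IF-THEN rule of the kind the abstract describes: keep the file at the busy site A and serve B's requests remotely, but migrate the file whenever it finds itself at B.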

    An Infrastructure for the Dynamic Distribution of Web Applications and Services

    This paper presents the design and implementation of an infrastructure that enables any Web application, regardless of its current state, to be stopped and uninstalled from a particular server, transferred to a new server, then installed, loaded, and resumed, with all of these events occurring "on the fly" and remaining totally transparent to clients. Such functionality allows entire applications to move fluidly from server to server, reducing the overhead required to administer the system and increasing its performance in a number of ways: (1) dynamically replicating new instances of an application to several servers to raise throughput for scalability; (2) moving applications between servers to achieve load balancing or other resource-management goals; and (3) caching entire applications on servers located closer to clients.
    National Science Foundation (9986397)
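    The stop / transfer / install / resume cycle can be sketched in a few lines. This sketch uses Python pickling as a stand-in for the paper's state-capture mechanism; the CounterApp "application" and both helper functions are hypothetical.

    ```python
    # A minimal sketch of the stop -> transfer -> install -> resume cycle,
    # using pickling as a stand-in for the paper's state-capture mechanism.
    # CounterApp and both helpers are hypothetical, not the paper's API.

    import pickle

    class CounterApp:
        """Toy stateful 'web application': counts requests per client."""
        def __init__(self):
            self.hits = {}

        def handle(self, client):
            self.hits[client] = self.hits.get(client, 0) + 1
            return self.hits[client]

    def stop_and_capture(app):
        # Quiesce the application and capture its entire state as bytes.
        return pickle.dumps(app)

    def install_and_resume(blob):
        # On the destination server: restore the state and resume serving.
        return pickle.loads(blob)

    # --- on the old server ---
    app = CounterApp()
    app.handle("alice"); app.handle("alice"); app.handle("bob")
    blob = stop_and_capture(app)       # state leaves the old server

    # --- on the new server ---
    app2 = install_and_resume(blob)    # application resumes mid-session
    assert app2.handle("alice") == 3   # clients see uninterrupted state
    ```

    A real infrastructure must also drain in-flight requests and redirect clients to the new server so the move stays invisible to them; this sketch leaves that part out.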

    The Living Application: a Self-Organising System for Complex Grid Tasks

    We present the living application, a method for autonomously managing applications on the grid. During its execution on the grid, a living application makes choices about which resources to use in order to complete its tasks. These choices can be based on its internal state or on autonomously acquired knowledge from external sensors. By giving a living application limited user capabilities, it can port itself from one resource topology to another. The application performs these actions at run time without depending on users or external workflow tools. We demonstrate this new concept in a special case of a living application: the living simulation. Today, many simulations require a wide range of numerical solvers and run most efficiently if specialized nodes are matched to the solvers. The idea of the living simulation is that it decides for itself which grid machines to use based on the numerical solver currently in use. In this paper we apply the living simulation to modelling the collision between two galaxies in a test setup with two specialized computers. The simulation switches at run time between a GPU-enabled computer in the Netherlands and a GRAPE-enabled machine in the United States, using an oct-tree N-body code whenever it runs in the Netherlands and a direct N-body solver in the United States.
    Comment: 26 pages, 3 figures, accepted by IJHPC
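    The core of the living simulation is a run-time rule mapping the active solver to the machine best suited to it. Here is a minimal sketch of that decision loop; the host names, the solver-to-machine mapping, and the migrate() stub are illustrative assumptions, not the paper's implementation.

    ```python
    # A minimal sketch of the living simulation's solver-driven machine
    # choice. Host names, the mapping, and migrate() are hypothetical.

    SOLVER_TO_MACHINE = {
        "oct-tree": "gpu-host.nl",    # hypothetical GPU host (Netherlands)
        "direct":   "grape-host.us",  # hypothetical GRAPE host (USA)
    }

    def migrate(state, machine):
        # Stand-in for checkpointing the run and restarting it elsewhere.
        print(f"  porting simulation to {machine}")
        state["machine"] = machine

    def step(state, solver):
        # The application itself decides, at run time, where it should run.
        wanted = SOLVER_TO_MACHINE[solver]
        if state["machine"] != wanted:
            migrate(state, wanted)
        print(f"running {solver} solver on {state['machine']}")

    state = {"machine": "gpu-host.nl"}
    step(state, "oct-tree")  # tree code stays on the GPU machine
    step(state, "direct")    # direct-summation phase triggers a move
    step(state, "oct-tree")  # and back again as the dynamics allow
    ```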