6 research outputs found

    Fault tolerance in mobile grid computing

    A practical approach to dynamic load balancing

    Dynamic load balancing via thread migration

    Light-weight threads are becoming increasingly useful for parallel processing. This is particularly true for threads running in a distributed memory environment. Light-weight threads can be used to support latency hiding techniques, communication and computation overlap, and functional parallelism. Additionally, dynamic migration of light-weight threads supports both data locality and load balancing. Designing a thread migration mechanism presents some unique and interesting challenges. One such challenge is maintaining communication between mobile threads. A potentially more difficult challenge involves maintaining the correctness of pointers within mobile threads. Since traditional pointers have no concept of address space, moving threads from processor to processor has a strong impact on the use of pointers. Options for dealing with pointers include restricting their use, adding a layer of software to support pointers referencing non-local data, and binding data to threads such that referenced data is always local to the thread. This dissertation presents the design and implementation of Chant, an efficient light-weight threads package which runs in a distributed memory environment. Chant was designed and implemented as a runtime system using MPI-like and Pthreads-like calls. Chant supports point-to-point message passing between threads executing in distributed address spaces. We focus on the use of Chant as a framework to support dynamic load balancing based on thread migration. We explore many of the issues which arise when designing and implementing a thread migration mechanism, as well as the issues which arise when considering the use of thread migration as a means of performing dynamic load balancing. This load balancing framework uses both system state information, including communication history, and user input. One of its basic functionalities is the ability of the user to customize the load balancing to fit particular classes of problems. This dissertation provides implementation details as well as discussion and justification of design choices. We go on to show that the overhead associated with our approach is within an acceptable range, and that significant performance gains can be achieved through the use of thread migration as a means of performing dynamic load balancing.
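
    The pointer problem described in this abstract is easy to make concrete. The sketch below is not taken from Chant; it assumes a hypothetical, contiguous thread state block in which internal references are stored as byte offsets rather than raw pointers, so that copying the block to a new address (standing in for migration to another address space) leaves the references valid. This corresponds to the third option mentioned above, binding data to threads so that referenced data is always local.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical migratable thread state: all data the thread references
 * lives inside one contiguous block, and internal references are stored
 * as byte offsets from the start of the block, not as raw pointers.
 * Copying the block (e.g. sending it to another node) keeps them valid. */
typedef struct {
    size_t result_off;   /* offset of the thread's result buffer */
    size_t scratch_off;  /* offset of the thread's scratch area  */
    char   payload[256]; /* thread-local data bound to the thread */
} thread_block;

/* Resolve an offset against wherever the block currently resides. */
static void *resolve(thread_block *tb, size_t off)
{
    return tb->payload + off;
}

int main(void)
{
    thread_block *home = malloc(sizeof *home);
    home->result_off  = 0;
    home->scratch_off = 64;
    strcpy(resolve(home, home->result_off), "partial result");

    /* Simulate migration: the block is copied byte-for-byte to a new
     * address (in a real system, marshalled and sent to another node). */
    thread_block *away = malloc(sizeof *away);
    memcpy(away, home, sizeof *home);
    free(home);

    /* Offsets still resolve correctly at the new location. */
    printf("after 'migration': %s\n",
           (char *)resolve(away, away->result_off));
    free(away);
    return 0;
}
```

    A real migration mechanism would also marshal the block with point-to-point message passing and restart the thread on the destination node; the sketch deliberately omits that machinery.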

    Monitorable network and CPU load statistics and their application to scheduling

    Recent trends in high-speed computing have moved towards the use of networks of workstations as a cost-effective approach to parallel computing. One recently proposed solution involves the use of an existing network of workstation-class computers as a single multiprocessor, and much research is ongoing in this area. This dissertation describes work in the area of process scheduling on networks of workstations, specifically in the area of load analysis. After presenting extensive background in the field, measures of CPU and network load are defined, and a test parallel application program is presented, written for a network-multiprocessing software package called PVM. A series of experiments is then detailed, whose goal was to discover the relationship between the run time of the test application and the loads on the participating workstations and networks. The experiments include measurement of CPU loading and network loading during test application runs, under artificially elevated loads, and under quiet conditions. Results of the experiments are presented, and their application to the problem of task scheduling is examined. It is then claimed that several easily measured load statistics are useful for task scheduling, because they allow run time to be predicted within a margin of error and allow limiting network segments to be detected and avoided.
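
    As a rough illustration of using an easily monitored load statistic to predict run time, the sketch below reads the 1-minute load average from /proc/loadavg (a Linux-specific source chosen here for convenience, not the dissertation's instrumentation or PVM setup) and applies a linear model whose coefficients are made up for the example.

```c
#include <stdio.h>

/* Read the 1-minute load average from /proc/loadavg (Linux-specific). */
static int read_loadavg(double *load1)
{
    FILE *f = fopen("/proc/loadavg", "r");
    if (!f)
        return -1;
    int ok = fscanf(f, "%lf", load1);
    fclose(f);
    return ok == 1 ? 0 : -1;
}

int main(void)
{
    /* Hypothetical coefficients of a fitted linear model:
     * predicted run time = base time * (1 + slope * CPU load). */
    const double base_seconds = 120.0;
    const double slope        = 0.8;

    double load;
    if (read_loadavg(&load) != 0) {
        fprintf(stderr, "could not read CPU load\n");
        return 1;
    }

    double predicted = base_seconds * (1.0 + slope * load);
    printf("1-min load %.2f -> predicted run time %.1f s\n",
           load, predicted);
    return 0;
}
```

    The dissertation's point is that even such simple, cheaply monitored statistics, combined with measured network load, are enough to bound the prediction error and to steer tasks away from saturated network segments.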

    Transparent and adaptive application partitioning using mobile objects

    The dynamic nature and heterogeneity of modern execution environments, such as mobile, ubiquitous, and grid computing, present major challenges for the development and efficient execution of the applications targeted for these environments. In particular, an application tailored to run in a specific environment will show different, and most likely sub-optimal, behaviour when executed in a different and/or dynamic environment. Consequently, there has been growing interest in the area of application adaptation, which aims to enable applications to cope with varying execution environments. Adaptive application partitioning, a specific form of non-functional adaptation involving the distribution of mobile objects across multiple host machines, is of particular interest to this thesis due to the diversity of its uses. In this approach, certain runtime information (known as context) is used to allow an object-oriented application to adaptively (re)adjust the placement of its objects during execution, for purposes such as improving application performance and reliability and balancing resource utilisation across machines. Promoting the adoption of such adaptation requires a process that demands minimal human involvement in both the development and the execution of the relevant application. These challenges establish the main goals and contributions of this work, which include: 1) proposing an effective application partitioning solution via the adoption of a decentralised adaptation strategy known as local adaptation; 2) enabling adaptive application partitioning that does not require human intervention, through automatic collection of the required information/context; and 3) proposing a solution for transparently injecting the required adaptation functionality into regular object-oriented applications, allowing a significant reduction of the associated development cost/effort. The proposed solutions have been implemented in a Java-based adaptation framework called MobJeX. This implementation, which was used as a test bed for the empirical experiments undertaken in this study, can be used to facilitate future research relevant to this particular study.
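
    The placement decision at the core of adaptive partitioning can be sketched independently of any framework. MobJeX itself is Java-based; the C fragment below is kept in the same language as the other sketches in this listing, and its context values, object costs, and greedy least-loaded policy are illustrative assumptions rather than the framework's actual adaptation algorithm.

```c
#include <stdio.h>

#define NHOSTS   3
#define NOBJECTS 6

int main(void)
{
    /* Hypothetical context: current relative load of each candidate host
     * and the estimated CPU demand of each mobile object. */
    double host_load[NHOSTS]  = { 0.20, 0.50, 0.10 };
    double obj_cost[NOBJECTS] = { 0.30, 0.10, 0.25, 0.05, 0.40, 0.15 };
    int placement[NOBJECTS];

    /* Greedy adaptation step: place each object on whichever host is
     * currently least loaded, then account for the added demand. */
    for (int o = 0; o < NOBJECTS; o++) {
        int best = 0;
        for (int h = 1; h < NHOSTS; h++)
            if (host_load[h] < host_load[best])
                best = h;
        placement[o] = best;
        host_load[best] += obj_cost[o];
    }

    for (int o = 0; o < NOBJECTS; o++)
        printf("object %d -> host %d\n", o, placement[o]);
    for (int h = 0; h < NHOSTS; h++)
        printf("host %d projected load %.2f\n", h, host_load[h]);
    return 0;
}
```

    In the local adaptation strategy described above, such a decision would be taken by each host from its own view of the context rather than by a central controller.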

    Adaptive Load Migration Systems for PVM

    Adaptive load distribution is necessary for parallel applications to co-exist effectively with other jobs in a network of shared heterogeneous workstations. We present three methods that provide such support for PVM applications. Two of these methods, MPVM and UPVM, adapt to changes in the workstation environment by transparently migrating the virtual processors (VPs) of the parallel application. A VP in MPVM is a Unix process, while UPVM defines light-weight, process-like VPs. The third method, ADM, is a programming methodology for writing programs that perform adaptive load distribution through data movement. These methods are discussed and compared in terms of effectiveness, usability, and performance.
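
    Of the three methods, ADM is the easiest to caricature in a few lines: load is redistributed by moving data rather than processes or threads. The rebalancing rule below (repeatedly shift work units from the most loaded worker to the least loaded one until the imbalance falls under a threshold) is an illustrative assumption, not the published ADM methodology; a real PVM program would also ship the corresponding data between tasks with PVM message passing.

```c
#include <stdio.h>

#define NWORKERS 4

/* Move work units (e.g. rows of a data-parallel problem) from the most
 * loaded worker to the least loaded one until the imbalance is small.
 * This stands in for adaptive load distribution through data movement. */
static void rebalance(int work[NWORKERS], int threshold)
{
    for (;;) {
        int lo = 0, hi = 0;
        for (int w = 1; w < NWORKERS; w++) {
            if (work[w] < work[lo]) lo = w;
            if (work[w] > work[hi]) hi = w;
        }
        if (work[hi] - work[lo] <= threshold)
            break;
        int moved = (work[hi] - work[lo]) / 2;
        work[hi] -= moved;
        work[lo] += moved;
        printf("move %d units: worker %d -> worker %d\n", moved, hi, lo);
    }
}

int main(void)
{
    /* Hypothetical per-worker work counts after an external job slows
     * one workstation, so its assigned share must shrink. */
    int work[NWORKERS] = { 100, 100, 100, 20 };
    rebalance(work, 10);
    for (int w = 0; w < NWORKERS; w++)
        printf("worker %d now holds %d units\n", w, work[w]);
    return 0;
}
```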