
    Monitorable network and CPU load statistics and their application to scheduling

    Recent trends in high-speed computing have moved towards the use of networks of workstations as a cost-effective approach to parallel computing. One recently proposed solution uses an existing network of workstation-class computers as a single multiprocessor, and much research is ongoing in this area. This dissertation describes work on process scheduling for networks of workstations, specifically in the area of load analysis. After presenting extensive background in the field, measures of CPU and network load are defined, and a test parallel application is presented, written for a network-multiprocessing software package called PVM. A series of experiments is then detailed, whose goal was to discover the relationship between the run time of the test application and the loads on the participating workstations and networks. The experiments include measurements of CPU loading and network loading during test application runs, during artificially elevated loads, and during quiet conditions. Results of the experiments are presented, and their application to the problem of task scheduling is examined. It is then argued that several easily measured load metrics are useful for task scheduling, since they allow run time to be predicted within a margin of error and limiting network segments to be detected and avoided.
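
    As a purely hypothetical illustration of how such load measures might feed a scheduler, the sketch below ranks candidate workstations by a combined CPU and network load score and picks the least-loaded one. The host names, sample values, and the weighting of network against CPU load are assumptions for illustration, not the dissertation's actual measures; on a real system the CPU samples could come from something like getloadavg(3).

        /*
         * Hypothetical sketch: rank candidate hosts by sampled load and pick
         * the least-loaded one for the next task.  The load measures and the
         * weighting below are illustrative assumptions, not the dissertation's.
         */
        #include <stdio.h>

        struct host_load {
            const char *name;   /* workstation hostname              */
            double      cpu;    /* sampled 1-minute CPU load average */
            double      net;    /* sampled network utilisation, 0..1 */
        };

        /* Return the index of the host with the lowest combined load score. */
        static int pick_least_loaded(const struct host_load *h, int n)
        {
            int best = 0;
            for (int i = 1; i < n; i++) {
                double s_i    = h[i].cpu    + 2.0 * h[i].net;    /* weight: assumption */
                double s_best = h[best].cpu + 2.0 * h[best].net;
                if (s_i < s_best)
                    best = i;
            }
            return best;
        }

        int main(void)
        {
            struct host_load hosts[] = {
                { "wks01", 0.35, 0.10 },
                { "wks02", 1.80, 0.05 },
                { "wks03", 0.20, 0.60 },
            };
            int n = sizeof hosts / sizeof hosts[0];
            printf("schedule next task on %s\n", hosts[pick_least_loaded(hosts, n)].name);
            return 0;
        }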

    A Migratable User-Level Process Package for PVM

    Shared, multi-user workstation networks are characterized by unpredictable variability in system load, and the concept of workstation ownership is typically present. For efficient and unobtrusive computing in such environments, applications must not only overlap their computation with communication but also redistribute their computations adaptively based on changes in workstation availability and load. Managing these issues at the application level leads to programs that are difficult to write and debug. In this paper, we present a system that manages this dynamic multi-processor environment while exporting to applications a simple message-based programming model of a dedicated, distributed-memory multiprocessor. Programmers are thus insulated from the many complexities of the dynamic environment while still obtaining the benefits of multi-threading, adaptive load distribution and unobtrusive computing. To support the dedicated multiprocessor model efficiently, the system defines a new kind of virtual processor called the User-Level Process (ULP), which can be used to implement efficient multi-threading and application-transparent migration. The viability of ULPs is demonstrated through UPVM, a prototype implementation of the PVM message-passing interface using ULPs. Typically, existing PVM programs written in Single Program Multiple Data (SPMD) style need only be re-compiled to use this package. The design of the package is presented and its performance analyzed with respect to both micro-benchmarks and some complete PVM applications. Finally, we discuss aspects of the ULP package that affect its portability and its support for heterogeneity, application transparency, and application debugging.
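
    To make the programming model concrete, the fragment below is a minimal SPMD-style PVM 3 program of the kind the abstract says UPVM supports with a simple recompile. It uses only standard PVM 3 C calls (pvm_mytid, pvm_spawn, pvm_initsend, pvm_pkint, pvm_send, pvm_recv, pvm_upkint, pvm_exit); the executable name, message tag and worker count are illustrative assumptions, and nothing here is UPVM-specific.

        /*
         * Minimal SPMD-style PVM 3 example: the same binary acts as parent
         * (spawning workers and collecting results) or as a worker.  The
         * executable name "spmd_demo", the tag and NWORK are assumptions.
         */
        #include <stdio.h>
        #include <pvm3.h>

        #define NWORK      4
        #define TAG_RESULT 1

        int main(void)
        {
            int mytid  = pvm_mytid();           /* enroll this process in PVM */
            int parent = pvm_parent();

            if (parent == PvmNoParent) {        /* parent: spawn and collect  */
                int tids[NWORK];
                pvm_spawn("spmd_demo", NULL, PvmTaskDefault, "", NWORK, tids);
                for (int i = 0; i < NWORK; i++) {
                    int val;
                    pvm_recv(-1, TAG_RESULT);   /* receive from any worker    */
                    pvm_upkint(&val, 1, 1);
                    printf("got %d\n", val);
                }
            } else {                            /* worker: compute, report    */
                int val = mytid & 0xff;         /* stand-in for real work     */
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&val, 1, 1);
                pvm_send(parent, TAG_RESULT);
            }
            pvm_exit();
            return 0;
        }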

    Illinois Cluster Manager

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory.

    Generalized plasticity for geomaterials with double structure

    The paper presents a double structure constitutive model based on a generalized plasticity formalism. The behaviour of the macrostructure, the microstructure and their interactions is described. A coupled hydromechanical formulation is then presented that assumes no hydraulic equilibrium between structural levels. The constitutive law and formulation are applied to the simulation of the behaviour during hydration of a heterogeneous mixture of bentonite powder and bentonite pellets. A satisfactory reproduction of the observed behaviour is achieved.
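
    For readers unfamiliar with the formalism, the relation below is the generic incremental form used in generalized plasticity, in which loading/unloading directions and plastic moduli are prescribed directly and no explicit yield surface or consistency condition is required. It is only the generic single-structure relation, not the paper's specific double-structure law or its hydromechanical coupling.

        % Generic generalized-plasticity incremental relation (illustrative only):
        \[
          \mathrm{d}\boldsymbol{\varepsilon}
            = \left( \mathbf{C}^{e}
              + \frac{1}{H_{L/U}}\,\mathbf{n}_{g\,L/U} \otimes \mathbf{n} \right)
              \mathrm{d}\boldsymbol{\sigma}
        \]

    Here C^e is the elastic compliance, n the loading direction, n_g,L/U the plastic flow direction, and H_L/U the plastic modulus for loading (L) or unloading (U).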

    Data Parallel Programming in an Adaptive Environment

    For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at runtime. In this paper, we discuss runtime support for data parallel programming in such an adaptive environment. Executing data parallel programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a runtime library to provide this support. We discuss how the runtime library can be used by compilers to generate code for an adaptive environment. We also present performance results for a multiblock Navier-Stokes solver run on a network of workstations using PVM for message passing. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computations. (Also cross-referenced as UMIACS-TR-94-109.)
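
    As a purely illustrative sketch of the bookkeeping such a runtime library must do, the code below recomputes local loop bounds for a simple 1-D block distribution when the number of processors changes; the data owned by each processor would then have to be moved to match the new bounds. The function name and the block-distribution choice are assumptions, not the paper's library interface.

        /*
         * Hypothetical sketch: local loop bounds for a 1-D block distribution.
         * When the processor count changes, each rank recomputes its [lo, hi)
         * range and data are redistributed to match (redistribution not shown).
         */
        #include <stdio.h>

        /* Block-distribute n iterations over nprocs; return [lo, hi) for rank p. */
        static void block_bounds(int n, int nprocs, int p, int *lo, int *hi)
        {
            int base = n / nprocs, rem = n % nprocs;
            *lo = p * base + (p < rem ? p : rem);
            *hi = *lo + base + (p < rem ? 1 : 0);
        }

        int main(void)
        {
            int n = 1000, lo, hi;

            block_bounds(n, 8, 3, &lo, &hi);   /* bounds while 8 processors are available */
            printf("8 procs, rank 3: [%d, %d)\n", lo, hi);

            block_bounds(n, 5, 3, &lo, &hi);   /* same rank after shrinking to 5 */
            printf("5 procs, rank 3: [%d, %d)\n", lo, hi);
            return 0;
        }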

    A parallel iterative linear system solver with dynamic load balancing


    Dynamic load balancing algorithms and their implementations

    This paper considers computational load balancing algorithms for problems involving adaptive mesh refinement and local increases in the order of the approximating functions, executed on multiprocessor computing systems, including heterogeneous/hybrid ones. Load balancing is considered at the level of the operating system, the middleware, and the user application.

    Compiler and Runtime Support for Programming in Adaptive Parallel Environments

    For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at runtime. In this paper, we discuss runtime support for data parallel programming in such an adaptive environment. Executing programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a runtime library to provide this support. We discuss how the runtime library can be used by compilers of HPF-like languages to generate code for an adaptive environment. We present performance results for a Navier-Stokes solver and a multigrid template run on a network of workstations and an IBM SP-2. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computation. Overall, our work establishes the feasibility of compiling HPF for a network of non-dedicated workstations, which are likely to be an important resource for parallel programming in the future. (Also cross-referenced as UMIACS-TR-95-83.)
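
    The abstract's claim that infrequent redistribution is cheap relative to computation can be illustrated with a simple amortization check: remapping onto the new processor set is worthwhile only if the one-time redistribution cost is outweighed by the per-step savings accumulated before the next change. The function name, cost model and numbers below are illustrative assumptions, not the paper's measurements.

        /*
         * Hypothetical amortization check: redistribute only when the one-time
         * cost is recovered by faster steps before the next availability change.
         * All names and numbers are illustrative assumptions.
         */
        #include <stdio.h>

        static int worth_redistributing(double redist_cost_s,
                                        double step_time_new_s,
                                        double step_time_old_s,
                                        int    steps_until_next_change)
        {
            /* Time saved per step on the new processor set ...              */
            double saved = (step_time_old_s - step_time_new_s)
                           * steps_until_next_change;
            /* ... must outweigh the one-time cost of moving the data.       */
            return saved > redist_cost_s;
        }

        int main(void)
        {
            /* E.g. redistribution costs 4 s, each step drops from 2.0 s to 1.2 s,
               and about 50 steps will run before availability changes again.   */
            printf("redistribute? %s\n",
                   worth_redistributing(4.0, 1.2, 2.0, 50) ? "yes" : "no");
            return 0;
        }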