
    Single system image: A survey

    Single system image is a computing paradigm in which a number of distributed computing resources are aggregated and presented via an interface that maintains the illusion of interaction with a single system. This approach encompasses decades of research using a broad variety of techniques at varying levels of abstraction, from custom hardware and distributed hypervisors to specialized operating system kernels and user-level tools. Existing classification schemes for SSI technologies are reviewed, and an updated classification scheme is proposed. A survey of implementation techniques is provided along with relevant examples. Notable deployments are examined and insights gained from hands-on experience are summarized. Issues affecting the adoption of kernel-level SSI are identified and discussed in the context of the technology adoption literature.

    Overlapping of Communication and Computation and Early Binding: Fundamental Mechanisms for Improving Parallel Performance on Clusters of Workstations

    This study considers software techniques for improving performance on clusters of workstations and approaches for designing message-passing middleware that facilitate scalable parallel processing. Early binding and overlapping of communication and computation are identified as fundamental approaches for improving parallel performance and scalability on clusters. Currently, cluster computers using the Message-Passing Interface for interprocess communication are the predominant choice for building high-performance computing facilities, which makes the findings of this work relevant to a wide audience from the areas of high-performance computing and parallel processing. The performance-enhancing techniques studied in this work are presently underutilized in practice because of the lack of adequate support by existing message-passing libraries and are also rarely considered by parallel algorithm designers. Furthermore, commonly accepted methods for performance analysis and evaluation of parallel systems omit these techniques and focus primarily on more obvious communication characteristics such as latency and bandwidth. This study provides a theoretical framework for describing early binding and overlapping of communication and computation in models for parallel programming. This framework defines four new performance metrics that facilitate new approaches for performance analysis of parallel systems and algorithms. This dissertation provides experimental data that validate the correctness and accuracy of the performance analysis based on the new framework. The theoretical results of this performance analysis can be used by designers of parallel system and application software for assessing the quality of their implementations and for predicting the effective performance benefits of early binding and overlapping. This work presents MPI/Pro, a new MPI implementation that is specifically optimized for clusters of workstations interconnected with high-speed networks. This MPI implementation emphasizes features such as persistent communication, asynchronous processing, low processor overhead, and independent message progress. These features are identified as critical for delivering maximum performance to applications. The experimental section of this dissertation demonstrates the capability of MPI/Pro to facilitate software techniques that result in significant application performance improvements. Specific demonstrations with the Virtual Interface Architecture and TCP/IP over Ethernet are offered.
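
    Both techniques map directly onto the standard MPI API: persistent requests bind the message parameters once before a loop (early binding), and nonblocking start/wait calls let useful computation proceed while messages are in flight. A minimal sketch in C using standard MPI calls follows; it is not the MPI/Pro-specific API described in the dissertation, and the buffer size, iteration count, and two-rank exchange pattern are illustrative.

```c
/* Sketch: early binding via persistent requests, plus
 * communication/computation overlap. Standard MPI calls;
 * assumes exactly two ranks exchanging a buffer each iteration. */
#include <mpi.h>

#define N     4096   /* illustrative message size */
#define STEPS 100    /* illustrative iteration count */

int main(int argc, char **argv)
{
    double sendbuf[N], recvbuf[N], local[N];
    MPI_Request reqs[2];
    int rank, peer;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;                     /* assumes two ranks: 0 and 1 */

    for (int i = 0; i < N; i++) {
        sendbuf[i] = (double)rank;
        local[i]   = 0.0;
    }

    /* Early binding: peer, tag, buffer, and datatype are bound
     * to the requests once, outside the communication loop. */
    MPI_Send_init(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Recv_init(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    for (int step = 0; step < STEPS; step++) {
        MPI_Startall(2, reqs);           /* start the pre-bound transfers */

        /* Overlap: compute on data that does not depend on the
         * in-flight messages while they make progress. */
        for (int i = 0; i < N; i++)
            local[i] = local[i] * 0.5 + (double)step;

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        /* recvbuf is now valid; sendbuf may be refilled. */
    }

    MPI_Request_free(&reqs[0]);
    MPI_Request_free(&reqs[1]);
    MPI_Finalize();
    return 0;
}
```

    Whether the overlap actually hides communication time depends on the library making independent message progress, which is exactly one of the implementation properties the abstract identifies as critical.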

    Remote Memory Access: A Case for Portable, Efficient and Library Independent Parallel Programming


    Parallel Computing in Java

    The Java programming language and environment are inspiring new research activities in many areas of computing, of which parallel computing is one of the major interests. Parallel techniques are themselves finding new uses in cluster computing systems. Although there are excellent software tools for scheduling, monitoring, and message-based programming on parallel clusters, these systems are not yet well integrated and do not provide very high-level parallel programming support. This research presents a number of issues considered key to the suitability of Java for HPC (High Performance Computing) applications and then explores the support for concurrency in the current Java 1.8 specification. We further present various relatively recent parallel Java models that support HPC for both shared- and distributed-memory programming paradigms. Finally, we evaluate the performance of the discussed Java HPC models by comparing them with equivalent traditional native C implementations, where appropriate. The analysis of the results suggests that Java can achieve performance close to that of natively compiled languages, both for sequential and parallel applications, making it a viable alternative for HPC programming.

    Simulation of the electromagnetic vector field in the high-frequency components with dispersive materials

    This paper deals with the parallel analysis of an electromagnetic problem involving frequency-dependent materials. The solution of the linear wave equation is obtained using the finite element time domain method. A multi-pole Debye model approximates the memory of the dispersive material. The time-domain convolution is calculated explicitly using integration by parts, and a linear recursive technique estimates the convolution integral at each time step. A comparative study of two parallel implementations of the algorithm is presented: one based on the domain decomposition paradigm and one on a task decomposition scheme, where the task decomposition is developed using a heuristic scheme for coupling parallel tasks. The presented algorithms are executed on a distributed, homogeneous testbed. Some aspects of simulating a broadband electromagnetic field in a dispersive, isotropic material are also discussed.
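
    The recursive estimate of the convolution integral is what makes dispersive time-domain simulation tractable: for exponential kernels such as Debye poles, the convolution at step n can be computed from its value at step n-1 in O(1) work, instead of re-summing the whole field history. A minimal sketch in C for a single Debye pole is shown below; the update coefficients follow the standard recursive-convolution form for an exponential relaxation kernel, and the material constants are illustrative, not taken from the paper.

```c
/* Sketch: recursive evaluation of the Debye convolution integral
 *   psi(t) = integral of chi(t - s) * E(s) ds, for a single pole
 *   chi(t) = (d_eps / tau) * exp(-t / tau), t >= 0.
 * With piecewise-constant E, the discrete accumulator obeys
 *   psi[n] = k * psi[n-1] + chi0 * E[n],  k = exp(-dt / tau),
 * so each time step costs O(1) per field sample instead of O(n). */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Illustrative material and discretization constants. */
    const double d_eps = 2.0;     /* dielectric strength (eps_s - eps_inf) */
    const double tau   = 1.0e-11; /* Debye relaxation time [s] */
    const double dt    = 1.0e-13; /* time step [s] */
    const int    steps = 1000;

    const double k    = exp(-dt / tau);    /* per-step decay factor */
    const double chi0 = d_eps * (1.0 - k); /* kernel integrated over one step */

    double psi = 0.0;                      /* convolution accumulator */
    for (int n = 0; n < steps; n++) {
        double E = sin(2.0e12 * n * dt);   /* stand-in field sample */
        psi = k * psi + chi0 * E;          /* recursive convolution update */
        /* psi would feed the polarization term of the FETD update here. */
    }
    printf("final accumulator: %g\n", psi);
    return 0;
}
```

    In a full solver one such accumulator is kept per pole at every degree of freedom, which keeps the dispersive update local and therefore straightforward to partition under both the domain- and task-decomposition schemes.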

    Asymmetric Load Balancing on a Heterogeneous Cluster of PCs

    In recent years, high-performance computing with commodity clusters of personal computers has become an active area of research. Many organizations build them because they need the computational speedup provided by parallel processing but cannot afford to purchase a supercomputer. With commercial supercomputers and homogeneous clusters of PCs, applications that can be statically load balanced are balanced by assigning equal tasks to each processor. With heterogeneous clusters, the system designers have the option of quickly adding newer hardware that is more powerful than the existing hardware. When this is done, assigning equal tasks to each processor results in suboptimal performance. This research addresses techniques for sizing the tasks assigned to processors to suitably match the processors themselves, so that the more powerful processors do more work and the less powerful processors do less. We find that when the range of processing power is narrow, some benefit can be achieved with asymmetric load balancing. When the range of processing power is broad, dramatic improvements in performance are realized; our experiments have shown up to a 92% improvement when asymmetrically load balancing a modified version of the NAS Parallel Benchmarks' LU application.
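
    The core of asymmetric static load balancing is a weighted partition: each processor receives a share of the work proportional to its measured processing power. A minimal sketch in C is given below; the per-processor weights would come from a calibration or benchmark run, and the values here are illustrative. The remainder handling (round-robin distribution of leftover units) is one common choice, not necessarily the one used in this research.

```c
/* Sketch: weighted static partitioning of n_units work units across
 * P heterogeneous processors, proportional to relative speed. */
#include <stdio.h>

#define P 4  /* number of processors (illustrative) */

int main(void)
{
    const int    n_units   = 1000;                  /* total work units */
    const double weight[P] = {1.0, 1.0, 2.5, 3.0};  /* measured relative speeds (illustrative) */
    int share[P];

    double total = 0.0;
    for (int p = 0; p < P; p++)
        total += weight[p];

    /* Proportional share for each processor, rounded down. */
    int assigned = 0;
    for (int p = 0; p < P; p++) {
        share[p] = (int)((double)n_units * weight[p] / total);
        assigned += share[p];
    }

    /* Distribute the few leftover units (at most P-1) round-robin;
     * a largest-remainder rule would be a finer-grained choice. */
    for (int left = n_units - assigned, p = 0; left > 0; left--, p = (p + 1) % P)
        share[p]++;

    for (int p = 0; p < P; p++)
        printf("processor %d (weight %.1f): %d units\n", p, weight[p], share[p]);
    return 0;
}
```

    With equal weights this degenerates to the equal-task assignment used on homogeneous clusters, which is consistent with the observation that the benefit grows with the spread in processing power.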

    Infrastructure Plan for ASC Petascale Environments
