18 research outputs found

    GLB: Lifeline-based Global Load Balancing library in X10

    We present GLB, a programming model and an associated implementation that can handle a wide range of irregular parallel programming problems running over large-scale distributed systems. GLB is applicable both to problems that are easily load-balanced via static scheduling and to problems that are hard to statically load balance. GLB hides the intricate synchronizations (e.g., inter-node communication, initialization and startup, load balancing, termination and result collection) from the users. GLB internally uses a version of the lifeline-graph-based work-stealing algorithm proposed by Saraswat et al. Users of GLB are simply required to write several pieces of sequential code that comply with the GLB interface. GLB then schedules and orchestrates the parallel execution of the code correctly and efficiently at scale. We have applied GLB to two representative benchmarks: Betweenness Centrality (BC) and Unbalanced Tree Search (UTS). Of the two, BC can be statically load-balanced whereas UTS cannot. In either case, GLB scales well, achieving nearly linear speedup on different computer architectures (Power, Blue Gene/Q, and K) up to 16K cores.
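To make the "several pieces of sequential code" concrete, here is a minimal sketch in Java of the kind of task-bag operations such a user would supply. All names here are illustrative assumptions, not the actual GLB API; the real library would distribute many such bags over places, steal between them along lifeline edges, and reduce the partial results at termination.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch, NOT the actual GLB API: the sequential pieces a
// user writes (process / split / merge) versus the orchestration the
// runtime would own (stealing, lifelines, termination, reduction).
public class TaskBagDemo {

    // A bag whose work items are [lo, hi) integer ranges to be summed.
    static class SumBag {
        final Deque<long[]> ranges = new ArrayDeque<>();
        long partial = 0;  // local partial result, reduced globally by the runtime

        // Process up to n work items; returns false once the bag is drained.
        boolean process(int n) {
            while (n-- > 0) {
                long[] r = ranges.poll();
                if (r == null) return false;
                for (long i = r[0]; i < r[1]; i++) partial += i;
            }
            return !ranges.isEmpty();
        }

        // Give away half the remaining work to a thief; null if nothing to give.
        SumBag split() {
            int half = ranges.size() / 2;
            if (half == 0) return null;
            SumBag loot = new SumBag();
            for (int i = 0; i < half; i++) loot.ranges.add(ranges.poll());
            return loot;
        }

        // Absorb work handed back by the runtime (e.g. a lifeline delivery).
        void merge(SumBag other) { ranges.addAll(other.ranges); }
    }

    public static void main(String[] args) {
        SumBag bag = new SumBag();
        for (long lo = 0; lo < 1000; lo += 100) bag.ranges.add(new long[]{lo, lo + 100});

        SumBag stolen = bag.split();      // what a thief would carry off
        while (bag.process(4)) { }        // victim drains its remaining work
        bag.merge(stolen);                // here: hand the loot straight back
        while (bag.process(4)) { }

        System.out.println(bag.partial);  // 0 + 1 + ... + 999 = 499500
    }
}
```

Because all coordination lives in the runtime, the same three user-written methods serve both the statically balanced case (BC) and the dynamically unfolding one (UTS).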

    Supercharging the APGAS Programming Model with Relocatable Distributed Collections

    In this article we present our relocatable distributed collections library. Building on top of the APGAS for Java library, we provide a number of useful intra-node parallel patterns as well as the features necessary to support the distributed nature of the computation through clearly identified methods. In particular, the transfer of distributed collections' entries between processes is supported via an integrated relocation system. This enables dynamic load-balancing capabilities, making it possible for programs to adapt to uneven or evolving cluster performance. The system we developed makes it possible to dynamically control the distribution and the data flow of distributed programs through high-level abstractions. Programmers using our library can therefore write complex distributed programs combining computation and communication phases through a consistent API. We evaluate the performance of our library against two programs taken from well-known Java benchmark suites, demonstrating superior programmability, obtaining better performance on one benchmark and reasonable overhead on the other. Finally, we demonstrate the ease and benefits of load-balancing on a more complex application which uses the various features of our library extensively. Comment: 23 pages, 8 figures. Consult the source code in the GitHub repository at https://github.com/handist/collection
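One way to picture the relocation step is as a plan computed from per-process loads. The following Java sketch uses assumed names (it is not the library's actual API): it only decides which entries should move where; the described relocation system would additionally transfer the entries and update ownership bookkeeping on every process.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch under assumed names (NOT the library's API):
// given how many entries each process holds, compute the transfers
// that even the load out across processes.
public class RelocationPlan {

    // One planned move: `count` entries from process `from` to process `to`.
    record Move(int from, int to, int count) {}

    static List<Move> plan(int[] load) {
        int total = 0;
        for (int l : load) total += l;
        int target = total / load.length;          // remainder ignored for brevity
        int[] surplus = new int[load.length];
        for (int p = 0; p < load.length; p++) surplus[p] = load[p] - target;

        // Pair overloaded processes with underloaded ones, left to right.
        List<Move> moves = new ArrayList<>();
        int from = 0, to = 0;
        while (true) {
            while (from < load.length && surplus[from] <= 0) from++;
            while (to < load.length && surplus[to] >= 0) to++;
            if (from == load.length || to == load.length) break;
            int n = Math.min(surplus[from], -surplus[to]);
            moves.add(new Move(from, to, n));
            surplus[from] -= n;
            surplus[to] += n;
        }
        return moves;
    }

    public static void main(String[] args) {
        // Process 0 ended an uneven computation phase with most of the data.
        System.out.println(plan(new int[]{8, 2, 2}));
    }
}
```

Exposing only this kind of high-level "who sends what to whom" decision, while the library performs the actual entry transfers, is what lets programs adapt to uneven or evolving cluster performance without hand-written communication code.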

    Application Framework with Demand-Driven Mashup for Selective Browsing

    We are developing a new mashup framework for creating flexible applications in which users can selectively browse through mashup items. The framework provides GUI components called widgets through which users can browse mashed-up data selectively, and the system performs demand-driven creation of mashed-up data upon receiving access requests through widgets. The application developer has only to prepare a configuration file that specifies how to combine web services and how to display the mashed-up data. This paper proposes a revised widget model for effective data display and introduces practical applications that allow selective browsing. The revised widget model accepts various GUI components, processes user interactions, and provides cooperative widgets. To avoid conflict with lazy data creation, we introduce into widgets properties that are automatically maintained by the system and can be monitored by other widgets. The case study through the applications shows situations where the initially browsed data helps users terminate redundant searches, set effective filter settings, or change the importance of the criteria. Some applications display synoptic information through columns, maps, or distribution charts; such information is useful for selective browsing.
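The interplay between lazy data creation and system-maintained properties can be sketched as follows. All names are hypothetical illustrations, not the framework's API: a widget's mashed-up data is built only when an access request arrives, and at that moment a property is updated so that cooperating widgets (e.g. a filter widget) can react without having forced the data creation themselves.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch (illustrative names, not the framework's API):
// demand-driven mashup creation plus a system-maintained property
// that other widgets can monitor.
public class DemandDrivenDemo {

    // A value other widgets can observe without forcing data creation.
    static class Property<T> {
        private T value;
        private final List<Runnable> observers = new ArrayList<>();

        void onChange(Runnable r) { observers.add(r); }
        void set(T v) { value = v; observers.forEach(Runnable::run); }
        T get() { return value; }
    }

    // A widget whose mashed-up data is built on first access only.
    static class LazyWidget {
        final Property<Integer> itemCount = new Property<>();
        private final Supplier<List<String>> fetch;  // e.g. combine web services
        private List<String> items;                  // null until demanded

        LazyWidget(Supplier<List<String>> fetch) { this.fetch = fetch; }

        // Called when the user actually browses into this widget.
        List<String> demand() {
            if (items == null) {
                items = fetch.get();                 // lazy mashup creation
                itemCount.set(items.size());         // cooperating widgets react here
            }
            return items;
        }
    }

    public static void main(String[] args) {
        LazyWidget w = new LazyWidget(() -> List.of("hotel A", "hotel B"));
        w.itemCount.onChange(
            () -> System.out.println("filter widget sees " + w.itemCount.get() + " items"));
        // No web service is invoked until the user opens the widget:
        w.demand();
    }
}
```

The property acts as the agreed meeting point: the system updates it whenever lazy creation finally runs, so monitoring widgets never need to trigger, or even know about, the underlying web-service calls.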

    Concurrent Object-Oriented Description Frameworks for Massively Parallel Computing

    Introduction: We studied the design, implementation and application of software systems for the massively parallel computer that is being designed and built at the RWCP. Our approach is based on the concurrent object-oriented paradigm. Last year we designed a concurrent object-oriented programming (COOP) language called ABCL/ST, and this year we developed its language systems and environments, including the optimizing compilers[6, 7, 4, 5], runtime systems[2, 3] and debuggers[1]. 2 Language Systems: Improvements of the ABCL/ST compiler for EM-4. The main target machine of the ABCL/ST compiler is a fine-grained data-driven parallel computer, EM-4, developed at the Electrotechnical Laboratory, which is regarded as an archi-prototype of the RWC-1 machine. We improved the previously developed compiler in terms of functionality, usability, modularity and optimization. A novel technique called the Plan-Do style compilation technique for eager data transfer[6, 7] ha