29 research outputs found

    Dynamic Load Balancing Based on Applications Global States Monitoring

    8 pages, to appear, international audience. The paper presents how a novel distributed program design framework with global control mechanisms can be used to ensure processor load balancing during the execution of application programs. The framework supports the programmer with an API and a GUI for automated graphical design of program execution control based on monitoring of global application states. It provides high-level distributed control primitives at the process level and a special control infrastructure for global asynchronous execution control at the thread level. Both kinds of control rely on observation of current multicore processor performance and communication throughput in the executive distributed system. Methods for designing processor load-balancing control, based on a system of program and system property metrics and on computational data migration between application executive processes, are presented and assessed by experiments with the execution of graph representations of distributed programs.
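    The abstract does not give the framework's API, so the following is only a minimal Python sketch of a threshold-based migration decision driven by monitored per-processor load metrics; the names ProcessorLoad and plan_migrations and the 0.2 threshold are hypothetical and not taken from the paper.

# Minimal sketch of a threshold-based load-balancing decision, assuming
# per-processor load metrics have already been collected by a global-state
# monitor. All names and the threshold value are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ProcessorLoad:
    proc_id: int
    load: float          # e.g. normalized CPU utilization reported by the monitor

def plan_migrations(loads: List[ProcessorLoad], threshold: float = 0.2) -> List[Tuple[int, int]]:
    """Return (source, target) processor pairs whose load gap exceeds the threshold."""
    migrations = []
    ordered = sorted(loads, key=lambda p: p.load)
    lo, hi = 0, len(ordered) - 1
    while lo < hi and ordered[hi].load - ordered[lo].load > threshold:
        # move one unit of work from the most loaded to the least loaded processor
        migrations.append((ordered[hi].proc_id, ordered[lo].proc_id))
        lo += 1
        hi -= 1
    return migrations

if __name__ == "__main__":
    sample = [ProcessorLoad(0, 0.9), ProcessorLoad(1, 0.3), ProcessorLoad(2, 0.5)]
    print(plan_migrations(sample))   # [(0, 1)]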

    Scheduling Moldable Tasks for Dynamic SMP Clusters in SoC Technology

    The paper presents an algorithm for scheduling parallel programs for execution on a parallel architecture based on dynamic SMP processor clusters with data transfers on the fly. The algorithm is based on the concept of moldable computational tasks. First, an initial program graph is decomposed into subgraphs, which are then treated as moldable tasks. The moldable tasks identified in this way are then scheduled using an algorithm with a warranted schedule length.
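    As a rough illustration of the two-phase moldable-task idea, the Python sketch below allots processors to tasks by a simple efficiency rule and then greedily schedules the resulting rigid tasks; the allotment rule, the Amdahl-style runtime functions and all names are assumptions for illustration, not the paper's warranted-schedule-length algorithm, and precedence constraints between tasks are ignored.

# Minimal sketch of two-phase moldable-task scheduling, assuming each program
# subgraph is already a moldable task with a known execution-time function t(p).
from typing import Callable, Dict

def choose_allotment(t: Callable[[int], float], max_procs: int) -> int:
    """Pick the largest processor count that still keeps parallel efficiency >= 0.5."""
    best = 1
    for p in range(1, max_procs + 1):
        if t(1) / (p * t(p)) >= 0.5:      # efficiency = sequential time / (p * parallel time)
            best = p
    return best

def schedule(tasks: Dict[str, Callable[[int], float]], num_procs: int) -> float:
    """Greedily schedule the allotted (now rigid) tasks; returns the makespan.
    Precedence constraints are ignored in this sketch."""
    free_at = [0.0] * num_procs            # per-processor earliest free time
    for name, t in tasks.items():
        p = choose_allotment(t, num_procs)
        chosen = sorted(range(num_procs), key=lambda i: free_at[i])[:p]
        start = max(free_at[i] for i in chosen)
        finish = start + t(p)
        for i in chosen:
            free_at[i] = finish
    return max(free_at)

if __name__ == "__main__":
    # two hypothetical moldable tasks with Amdahl-style runtimes
    tasks = {"T1": lambda p: 8.0 / p + 1.0, "T2": lambda p: 4.0 / p + 0.5}
    print(schedule(tasks, num_procs=4))    # 4.5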

    Comparison of Program Task Scheduling Algorithms for Dynamic SMP Clusters with Communication on the Fly

    International audience. The paper presents a comparison of two scheduling algorithms developed to structure programs for execution on dynamic SMP clusters implemented in System on Chip (SoC) technology. SoC modules are built of a set of processors, memory modules and a multi-bus interconnection network. A set of such SoCs is interconnected by a global communication network. Inter-processor communication inside SoC modules uses a novel technique of data transfers on the fly. The algorithms represent two different scheduling approaches. The first uses an ETF-based, genetically supported list-scheduling heuristic to map program nodes to processors. The second is a clustering-based algorithm that uses moldable tasks (MT) to structure the graph. Both algorithms structure computations and local data transfers to introduce processor switching and data transfers on the fly. The algorithms were tested on a set of automatically generated, parameterized program graphs, and the results were compared to those obtained with classic ETF-based list scheduling without data transfers on the fly.
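    For readers unfamiliar with the baseline, the Python sketch below shows plain ETF (Earliest Task First) list scheduling with communication costs, the classic heuristic the two algorithms are compared against; it omits processor switching and data transfers on the fly, and the graph structure and cost values are illustrative assumptions.

# Minimal sketch of ETF list scheduling with communication delays on a DAG.
from typing import Dict, List, Tuple

def etf_schedule(
    exec_time: Dict[str, float],                 # node -> computation time
    preds: Dict[str, List[Tuple[str, float]]],   # node -> [(predecessor, comm cost)]
    num_procs: int,
) -> Dict[str, Tuple[int, float, float]]:
    """Return node -> (processor, start, finish), chosen greedily by earliest start time."""
    proc_free = [0.0] * num_procs
    placed: Dict[str, Tuple[int, float, float]] = {}
    remaining = set(exec_time)
    while remaining:
        # tasks whose predecessors are all scheduled
        ready = [n for n in remaining if all(p in placed for p, _ in preds.get(n, []))]
        best = None                              # (earliest start, node, processor)
        for n in ready:
            for proc in range(num_procs):
                # data are ready when each predecessor has finished, plus a
                # communication delay if it ran on a different processor
                data_ready = max(
                    (placed[p][2] + (0.0 if placed[p][0] == proc else c)
                     for p, c in preds.get(n, [])),
                    default=0.0,
                )
                est = max(proc_free[proc], data_ready)
                if best is None or est < best[0]:
                    best = (est, n, proc)
        est, n, proc = best
        placed[n] = (proc, est, est + exec_time[n])
        proc_free[proc] = est + exec_time[n]
        remaining.remove(n)
    return placed

if __name__ == "__main__":
    exec_time = {"A": 2.0, "B": 3.0, "C": 1.0}
    preds = {"B": [("A", 1.0)], "C": [("A", 1.0)]}
    print(etf_schedule(exec_time, preds, num_procs=2))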