30 research outputs found

    Novel neighborhood search for multiprocessor scheduling with pipelining

    Presents a neighborhood search algorithm for heterogeneous multiprocessor scheduling in which loop pipelining is used to exploit parallelism between iterations. The method adopts a realistic model for interprocessor communication where resource contention is taken into consideration. The schedule representation scheme is flexible so that communication scheduling can be performed in a generic manner. Based on a general time formulation of the schedule performance, the algorithm improves an initial schedule in an efficient way. Experimental results show that significant improvement over existing methods can be obtained. Using the scheduling results, a parallel software video encoder was implemented and real-time performance was achieved.
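
    A minimal sketch of the neighborhood-search idea, not the paper's algorithm: the abstract does not give the neighborhood moves or the timing model, so the two-processor toy model, the exec_time table, the single "reassign one task" move, and all names below are hypothetical, and pipelining and communication contention are deliberately ignored.

        import random

        # Hypothetical toy model: exec_time[task][proc] is the running time of
        # each task on the two heterogeneous processors; a schedule maps
        # task -> processor index.
        exec_time = {"t0": [4, 9], "t1": [7, 3], "t2": [5, 5], "t3": [8, 2]}

        def makespan(schedule):
            # Finish time of the most loaded processor (no communication costs).
            loads = [0.0, 0.0]
            for task, proc in schedule.items():
                loads[proc] += exec_time[task][proc]
            return max(loads)

        def neighbors(schedule):
            # One neighborhood move: reassign a single task to the other processor.
            moves = []
            for task, proc in schedule.items():
                moved = dict(schedule)
                moved[task] = 1 - proc
                moves.append(moved)
            return moves

        def local_search(schedule, max_no_improve=50):
            # Keep sampling neighbors; accept a candidate only if it shortens
            # the makespan, and stop after a run of non-improving samples.
            best, best_cost = schedule, makespan(schedule)
            stall = 0
            while stall < max_no_improve:
                cand = random.choice(neighbors(best))
                cost = makespan(cand)
                if cost < best_cost:
                    best, best_cost, stall = cand, cost, 0
                else:
                    stall += 1
            return best, best_cost

        print(local_search({"t0": 0, "t1": 0, "t2": 1, "t3": 1}))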

    Scheduling of computations in heterogeneous cluster systems

    The article proposes HETS, a universal task-scheduling approach for both homogeneous and heterogeneous cluster systems. The approach combines the list-scheduling method with task duplication. Software implementing the proposed approach and the best-known scheduling methods for heterogeneous computing cluster systems was developed. The results presented in the article confirm the higher efficiency of the HETS approach in comparison with the known methods.
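
    HETS itself is not detailed in the abstract, so the sketch below shows only the list-scheduling half of a list-plus-duplication scheme: the task graph (preds), the cost table, and the average-execution-time priority rule are hypothetical stand-ins, and duplication and communication costs are omitted.

        # DAG: task -> predecessors; cost[task][machine] = execution time on
        # each of two heterogeneous machines (all values invented).
        preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
        cost = {"a": [2, 3], "b": [4, 2], "c": [3, 3], "d": [5, 2]}
        machines = [0, 1]

        def list_schedule():
            finish = {}                            # task -> (machine, finish time)
            ready_at = {m: 0.0 for m in machines}  # machine availability
            unscheduled = set(preds)
            while unscheduled:
                ready = [t for t in unscheduled
                         if all(p in finish for p in preds[t])]
                # Priority rule: largest average execution time first (a stand-in
                # for the upward-rank priorities common in list scheduling).
                task = max(ready, key=lambda t: sum(cost[t]) / len(cost[t]))
                best = None
                for m in machines:
                    start = max([ready_at[m]] + [finish[p][1] for p in preds[task]])
                    end = start + cost[task][m]
                    if best is None or end < best[1]:
                        best = (m, end)            # earliest-finish machine wins
                finish[task] = best
                ready_at[best[0]] = best[1]
                unscheduled.remove(task)
            return finish

        print(list_schedule())                     # task -> (machine, finish time)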

    A Survey on Parallel Architecture and Parallel Programming Languages and Tools

    In this paper, we present a brief review of the evolution of parallel computing towards multi-core architecture. The survey covers more than 45 languages, libraries, and tools used to date to increase performance through parallel programming. We have placed particular emphasis on the architecture of parallel systems in the survey.

    Concurrent use of two programming tools for heterogeneous supercomputers

    In this thesis, a demonstration of the concurrent use of two programming paradigms for heterogeneous computing, Cluster-M and HAsC, is presented. Both paradigms can efficiently support heterogeneous networks by preserving a level of abstraction which does not include any architecture mapping details. Furthermore, they are both machine independent and hence scalable. Unlike almost all existing heterogeneous orchestration tools, which are MIMD based, HAsC is based on the fundamental concepts of SIMD associative computing. HAsC models a heterogeneous network as a coarse-grained associative computer and is designed to optimize the execution of problems with large ratios of computations to instructions. Ease of programming and execution speed, not the utilization of idle resources, are the primary goals of HAsC. On the other hand, Cluster-M is a generic technique that can be applied to both coarse-grained and fine-grained networks. Cluster-M provides an environment for porting various tasks onto the machines in a heterogeneous suite such that resource utilization is maximized and the overall execution time is minimized. An illustration of how these two paradigms can be used together to provide an efficient medium for heterogeneous programming is included. Finally, their scalability is discussed.

    Implementation of an automatic mapping tool for massively parallel computing

    In this thesis, an implementation of a generic technique for fine-grain mapping of portable parallel algorithms onto multiprocessor architectures is presented. The implemented mapping algorithm is a component of Cluster-M. Cluster-M is a novel parallel programming tool which facilitates the design and mapping of portable software onto various parallel systems. The other components of Cluster-M are the Specifications and the Representations. Using the Specifications, machine-independent parallel algorithms are presented in a clustered fashion, specifying the concurrent computations and communications at every step of the overall execution. The Representations, on the other hand, are a form of clustering the underlying architecture to simplify the mapping process. The mapping algorithm implemented and tested in this thesis is an efficient method for matching the Specification clusters to the Representation clusters.
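
    The actual Cluster-M matching procedure is more elaborate than can be reconstructed from this abstract; the following is only an illustrative greedy pairing of Specification clusters with Representation clusters, and the cluster lists, the work and speed fields, and the bottleneck measure are all invented for the example.

        # Hypothetical clusters: Specification clusters group concurrent
        # computations (total "work"), Representation clusters group machines
        # (aggregate "speed").
        spec_clusters = [{"name": "S1", "work": 120}, {"name": "S2", "work": 40},
                         {"name": "S3", "work": 80}]
        repr_clusters = [{"name": "R1", "speed": 10}, {"name": "R2", "speed": 4},
                         {"name": "R3", "speed": 7}]

        def match_clusters(spec, machines):
            # Greedy pairing: the heaviest computation cluster goes to the fastest
            # machine cluster, keeping the slowest pairing (the bottleneck) small.
            spec = sorted(spec, key=lambda c: c["work"], reverse=True)
            machines = sorted(machines, key=lambda c: c["speed"], reverse=True)
            mapping = {s["name"]: m["name"] for s, m in zip(spec, machines)}
            bottleneck = max(s["work"] / m["speed"] for s, m in zip(spec, machines))
            return mapping, bottleneck

        print(match_clusters(spec_clusters, repr_clusters))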

    A Comparison Study of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems

    Mixed-machine heterogeneous computing (HC) environments utilize a distributed suite of different high-performance machines, interconnected with high-speed links, to perform different computationally intensive applications that have diverse computational requirements. HC environments are well suited to meet the computational demands of large, diverse groups of tasks. The problem of mapping (defined as matching and scheduling) these tasks onto the machines of a distributed HC environment has been shown, in general, to be NP-complete, requiring the development of heuristic techniques. Selecting the best heuristic to use in a given environment, however, remains a difficult problem, because comparisons are often clouded by different underlying assumptions in the original studies of each heuristic. Therefore, a collection of eleven heuristics from the literature has been selected, adapted, implemented, and analyzed under one set of common assumptions. It is assumed that the heuristics derive a mapping statically (i.e., off-line). It is also assumed that a meta-task (i.e., a set of independent, non-communicating tasks) is being mapped, and that the goal is to minimize the total execution time of the meta-task. The eleven heuristics examined are Opportunistic Load Balancing, Minimum Execution Time, Minimum Completion Time, Min-min, Max-min, Duplex, Genetic Algorithm, Simulated Annealing, Genetic Simulated Annealing, Tabu, and A*. This study provides one even basis for comparison and insights into circumstances where one technique will outperform another. The evaluation procedure is specified, the heuristics are defined, and then comparison results are discussed. It is shown that for the cases studied here, the relatively simple Min-min heuristic performs well in comparison to the other techniques.
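
    Min-min, the heuristic the study singles out, is simple enough to sketch from its standard description: repeatedly take, among all unmapped tasks, the one whose best achievable completion time is smallest and assign it to that machine. The ETC (expected time to compute) matrix below is a made-up example, and the code follows the textbook formulation rather than the exact implementation evaluated in the study.

        # Hypothetical ETC matrix: etc[task][machine] = expected execution time.
        etc = [
            [3.0, 7.0, 2.0],   # task 0 on machines 0..2
            [5.0, 4.0, 9.0],   # task 1
            [6.0, 1.0, 8.0],   # task 2
            [2.0, 6.0, 4.0],   # task 3
        ]

        def min_min(etc):
            n_tasks, n_machines = len(etc), len(etc[0])
            ready = [0.0] * n_machines          # machine availability times
            mapping = {}
            unmapped = set(range(n_tasks))
            while unmapped:
                # For every unmapped task, find its best completion time over all
                # machines, then map the task whose best completion time is smallest.
                best = None
                for t in unmapped:
                    m = min(range(n_machines), key=lambda j: ready[j] + etc[t][j])
                    ct = ready[m] + etc[t][m]
                    if best is None or ct < best[0]:
                        best = (ct, t, m)
                ct, t, m = best
                mapping[t] = m
                ready[m] = ct
                unmapped.remove(t)
            return mapping, max(ready)          # assignment and makespan

        print(min_min(etc))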