
    Exploitation of Task Level Parallelism

    Many existing systems support task-level parallelism, usually through explicit task creation and synchronization. Synchronizing tasks requires a clear description of the dependencies in a program: the data-flow constraints among functions (tasks), or information about the data each task uses. This thesis describes a Symbol-Table method that detects and exploits task-level parallelism at the inner level of sequential C programs. The method is built around two levels: a normal symbol table and an extended symbol table. A sequential C program, whose procedures are defined as functions (tasks), is given to the standard gcc compiler, and a normal symbol table is generated as output with a specific gcc command on Linux. The information in that symbol table is then used to build an extended symbol table that adds information about variables' extended scopes and inner-level function dependencies; it is derived from the normal symbol table on the basis of each variable's scope and its L-value/R-value attributes. From the extended table we can identify the functions that share common variables and the variables that are accessed by different functions through their extended scopes. A program dependency graph is then generated from the extended symbol table's information by a specific Java program. A simple program using this method has been implemented on a 64-bit Linux-based multiprocessor. Finally, a function graph is generated for every variable in the program from the table's information and the dependency graph's states. That graph shows the extended scoping of variables, which is used to identify and exploit the task-level parallelism in the program; the parallelism can then be realized with MPI or another parallel platform to obtain optimized and error-free parallel execution.
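    The abstract says the program dependency graph is produced from the extended symbol table's information by a specific Java program, but does not show how. The sketch below is a minimal illustration, not the thesis's actual tool: it assumes a simplified extended-symbol-table row holding a variable name, the function that accesses it, and an L-value/R-value flag, and it links two functions whenever they touch the same variable and at least one access is a write. All class, record and function names (DependencyGraphSketch, Access, produce, consume, report) are invented for the example.

        // Minimal sketch of deriving function dependencies from a simplified
        // extended symbol table; read-only sharing of a variable creates no
        // edge, so the functions involved stay candidates for parallel execution.
        import java.util.*;

        public class DependencyGraphSketch {

            // One row of a hypothetical extended symbol table.
            record Access(String variable, String function, boolean isLValue) {}

            public static void main(String[] args) {
                List<Access> table = List.of(
                        new Access("total", "produce", true),   // produce writes total
                        new Access("total", "consume", false),  // consume reads total
                        new Access("count", "produce", false),  // two read-only uses of
                        new Access("count", "report", false));  // count: no dependency

                // Group accesses by variable, i.e. follow each variable's
                // extended scope across the functions that use it.
                Map<String, List<Access>> byVariable = new HashMap<>();
                for (Access a : table) {
                    byVariable.computeIfAbsent(a.variable(), v -> new ArrayList<>()).add(a);
                }

                // Connect two functions on a variable only if at least one of
                // the two accesses is an L-value (a write).
                Map<String, Set<String>> graph = new TreeMap<>();
                for (List<Access> uses : byVariable.values()) {
                    for (Access x : uses) {
                        for (Access y : uses) {
                            if (!x.function().equals(y.function())
                                    && (x.isLValue() || y.isLValue())) {
                                graph.computeIfAbsent(x.function(), f -> new TreeSet<>())
                                     .add(y.function());
                            }
                        }
                    }
                }

                // Functions that never appear in the graph have no write-sharing
                // conflicts and could be mapped to separate MPI ranks or threads.
                graph.forEach((f, deps) -> System.out.println(f + " -> " + deps));
            }
        }

    Run on the sample rows, this prints an edge between produce and consume (they share a written variable), while report remains unconstrained, mirroring the idea of reading parallelizable tasks off the dependency graph.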

    Dataflow development of medium-grained parallel software

    PhD Thesis. In the 1980s, multiple-processor computers (multiprocessors) based on conventional processing elements emerged as a popular solution to the continuing demand for ever-greater computing power. These machines offer a general-purpose parallel processing platform on which the size of program units that can be efficiently executed in parallel - the "grain size" - is smaller than that offered by distributed computing environments, though greater than that of some more specialised architectures. However, programming to exploit this medium-grained parallelism remains difficult. Concurrent execution is inherently complex, yet there is a lack of programming tools to support parallel programming activities such as program design, implementation, debugging and performance tuning. In helping to manage complexity in sequential programming, visual tools have often been used to great effect, which suggests one approach towards the goal of making parallel programming less difficult. This thesis examines the possibilities which the dataflow paradigm has to offer as the basis for a set of visual parallel programming tools, and presents a dataflow notation designed as a framework for medium-grained parallel programming. The implementation of this notation as a programming language is discussed, and its suitability for the medium-grained level is examined. Sponsors: Science and Engineering Research Council of Great Britain; EC ERASMUS scheme.
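    The thesis presents a visual dataflow notation rather than a textual API, so the fragment below is only an illustrative sketch of the underlying idea: medium-grained program units wired as dataflow nodes that fire when their inputs arrive, written here with standard Java CompletableFuture composition. The node names (parse, analyseA, analyseB, merge) are invented for the example and are not taken from the thesis.

        // Illustrative dataflow-style wiring of medium-grained tasks: the two
        // analysis nodes share an input arc and run concurrently, and the merge
        // node fires only once both of its input tokens are available.
        import java.util.concurrent.CompletableFuture;

        public class DataflowSketch {
            public static void main(String[] args) {
                // Source node: produces the initial token.
                CompletableFuture<String> parse =
                        CompletableFuture.supplyAsync(() -> "tokenised input text");

                // Two independent nodes fed from the same arc.
                CompletableFuture<Integer> analyseA = parse.thenApplyAsync(String::length);
                CompletableFuture<Integer> analyseB =
                        parse.thenApplyAsync(s -> s.split(" ").length);

                // Sink node: combines both results when they arrive.
                CompletableFuture<String> merge = analyseA.thenCombine(analyseB,
                        (chars, words) -> chars + " characters, " + words + " words");

                System.out.println(merge.join());
            }
        }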