Exploitation of Task Level Parallelism
Many existing systems support task-level parallelism, usually through explicit task creation and synchronization. Synchronizing tasks requires a clear definition of the dependencies in a program, the data-flow constraints among functions (tasks), or information about which data each task uses. This thesis describes a Symbol-Table method for detecting and exploiting task-level parallelism within sequential C programs. The method operates on two levels: a normal symbol table and an extended symbol table. A sequential C program, in which the procedures are defined as functions (tasks), is given to the standard gcc compiler, and a normal symbol table is produced as output using a specific gcc command under Linux. The information in that symbol table is then used to generate the extended symbol table, which adds information about the extended scopes of variables and the dependencies among functions, derived from each variable's scope and its L-value/R-value attributes. From this extended table we can identify the functions that share common variables and the variables that are accessed by different functions through their extended scopes. A program dependency graph is then generated from the extended symbol table's information by a dedicated Java program. A simple program using this method has been implemented on a 64-bit Linux-based multiprocessor. Finally, a function graph is generated for every variable in the program from the table's information and the states of the dependency graph. This graph reveals the extended scoping of variables, which is used to identify and exploit the task-level parallelism in the program; that parallelism can then be realized with MPI or other parallel platforms to obtain optimized and error-free parallel execution.
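To make the analysis concrete, the following minimal C program (illustrative only; the function and variable names are not taken from the thesis) shows the two situations the extended symbol table is meant to distinguish: functions coupled through a variable whose extended scope spans both of them, and a function that shares no data and is therefore a candidate for execution as an independent parallel task.

/* Hypothetical sequential C program of the kind the Symbol-Table method
 * analyses. Names are illustrative only. */
#include <stdio.h>

int shared_total = 0;      /* extended scope: spans produce() and report() */

void produce(void)         /* writes shared_total (L-value use) */
{
    for (int i = 1; i <= 100; i++)
        shared_total += i;
}

void report(void)          /* reads shared_total (R-value use) */
{
    printf("total = %d\n", shared_total);
}

void independent(void)     /* touches no shared variable */
{
    int local = 42;
    printf("independent task: %d\n", local);
}

int main(void)
{
    produce();       /* must precede report(): dependency through shared_total */
    independent();   /* shares nothing, so it could run as a parallel task     */
    report();
    return 0;
}

In a dependency graph built from the extended symbol table, produce() and report() would be linked through shared_total, while independent() would appear as an isolated node that could safely be offloaded, for example to a separate MPI process.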
Architecture of the parallel programming support environment
The Parallel Programming Support Environment (PPSE) is an experimental integrated set of tools for the design and construction of large software systems to run on parallel computers. The tools include a graphical design editor, a graphical target machine description system, a task mapper/scheduler tool, a parallel code generator, and graphical aids for performance analysis. The objective is, to the extent possible, to design and develop parallel software with little regard for the details of the architecture of the target machine, programming language, or parallel computing paradigm that the program is to use.
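As a rough illustration of what a mapper/scheduler in such an environment consumes and produces (this is not PPSE source code; the record layout and the placeholder policy below are assumptions), a task can be represented as a graph node annotated with the processor chosen for it:

/* Illustrative task record and a trivial placeholder mapping policy.
 * A real mapper/scheduler would use the target machine description and
 * cost estimates rather than a round-robin assignment. */
#include <stdio.h>
#include <stddef.h>

struct task {
    const char *name;        /* node in the design graph        */
    int         cost;        /* estimated execution cost        */
    int         processor;   /* assignment chosen by the mapper */
};

static void map_round_robin(struct task *tasks, size_t ntasks, int nprocs)
{
    for (size_t i = 0; i < ntasks; i++)
        tasks[i].processor = (int)(i % nprocs);
}

int main(void)
{
    struct task graph[] = { {"read", 3, -1}, {"filter", 7, -1}, {"write", 2, -1} };
    map_round_robin(graph, 3, 2);
    for (size_t i = 0; i < 3; i++)
        printf("%s -> processor %d\n", graph[i].name, graph[i].processor);
    return 0;
}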
Parallel programming and designing in object oriented environment SS/1
Parallel software development requires the flexibility to describe algorithms regardless of hardware specification, the ability to accommodate existing applications, and maintainability throughout the software life cycle. We propose the following model to address these issues. Our model incorporates aspects of the object-oriented and large-grain data flow programming paradigms, and introduces a concept called a "Server". Servers are objects as well as self-contained processes which communicate with each other by sending messages. The server paradigm considers all components of a program as servers. This concept helps in designing flexible and dynamically reconfigurable software. The major goals of the server model are reusability, maintainability, and productivity. These are realized through the encapsulation, instantiation, and inheritance features of the server model, as well as a graphical design environment with the capability of tracing and debugging the user's design based on the data-flow information.
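A minimal sketch of the server idea, assuming a single-threaded dispatch in which "sending a message" is modelled as invoking a handler (the names and structure are illustrative and are not the SS/1 interface):

/* Each server encapsulates its own state and reacts only to messages. */
#include <stdio.h>

struct message {
    int tag;        /* what the sender requests      */
    int payload;    /* data accompanying the request */
};

struct server {
    const char *name;
    int         state;                                     /* encapsulated state */
    void      (*handle)(struct server *, struct message);  /* behaviour          */
};

static void adder_handle(struct server *self, struct message m)
{
    self->state += m.payload;
    printf("%s: state is now %d\n", self->name, self->state);
}

int main(void)
{
    struct server adder = { "adder", 0, adder_handle };   /* instantiation */
    struct message m = { 1, 5 };

    adder.handle(&adder, m);   /* message send modelled as a handler call */
    adder.handle(&adder, m);
    return 0;
}

In the model described above, servers would be self-contained processes exchanging real messages; the sketch only shows the encapsulation and instantiation aspects in a single process.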
Dataflow development of medium-grained parallel software
PhD Thesis

In the 1980s, multiple-processor computers (multiprocessors) based on conventional processing elements emerged as a popular solution to the continuing demand for ever-greater computing power. These machines offer a general-purpose parallel processing platform on which the size of program units which can be efficiently executed in parallel - the "grain size" - is smaller than that offered by distributed computing environments, though greater than that of some more specialised architectures. However, programming to exploit this medium-grained parallelism remains difficult. Concurrent execution is inherently complex, yet there is a lack of programming tools to support parallel programming activities such as program design, implementation, debugging, performance tuning and so on.

In helping to manage complexity in sequential programming, visual tools have often been used to great effect, which suggests one approach towards the goal of making parallel programming less difficult.

This thesis examines the possibilities which the dataflow paradigm has to offer as the basis for a set of visual parallel programming tools, and presents a dataflow notation designed as a framework for medium-grained parallel programming. The implementation of this notation as a programming language is discussed, and its suitability for the medium-grained level is examined.

Science and Engineering Research Council of Great Britain; EC ERASMUS scheme
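The following small C program (a hand-written sketch, not taken from the thesis or its notation) illustrates a three-node dataflow graph at roughly this grain size: nodes A and B have no mutual dependencies and run as separate threads, while node C fires only once both of its input tokens have arrived.

/* Build with: cc -pthread example.c */
#include <pthread.h>
#include <stdio.h>

static int out_a, out_b;    /* tokens produced by nodes A and B */

static void *node_a(void *arg) { (void)arg; out_a = 21; return NULL; }
static void *node_b(void *arg) { (void)arg; out_b = 21; return NULL; }

int main(void)
{
    pthread_t ta, tb;

    pthread_create(&ta, NULL, node_a, NULL);   /* A and B may execute in parallel */
    pthread_create(&tb, NULL, node_b, NULL);

    pthread_join(ta, NULL);                    /* wait until C's inputs exist ... */
    pthread_join(tb, NULL);

    printf("node C: %d\n", out_a + out_b);     /* ... then node C can fire        */
    return 0;
}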