10 research outputs found

    A Survey on Parallel Architecture and Parallel Programming Languages and Tools

    Get PDF
    In this paper, we present a brief review of the evolution of parallel computing towards multi-core architectures. The survey covers more than 45 languages, libraries and tools used to date to increase performance through parallel programming, with particular emphasis on the architecture of parallel systems

    Generation of distributed supervisors for parallel compilers

    Get PDF
    This paper presents a new approach to solving the combination and communication problems between different compiler tasks. As optimizations may generate as well as destroy application conditions of other tasks, a carefully chosen application order is important for the effectiveness of the compiler system. Each task is solved by exactly one implementation (engine) and is characterized by its input-output behaviour and an optional heuristic. Specifying all tasks in this manner allows the generation of distributed supervisors for the whole compilation system. The result is a clear semantics of the compiler behaviour during compilation and the separation of algorithm and communication. Software engineering advantages are the easy integration of independently developed parts and the reusability of code. The flexibility of such a compiler system results in high portability, even across hardware architectures and topologies
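    The C sketch below only illustrates the general idea of deriving a task order from declarative specifications of the conditions each task requires, generates and destroys; the structure, field names and example tasks are hypothetical and are not taken from the paper.

        #include <stdio.h>

        typedef unsigned Cond;                 /* bitmask of application conditions */

        typedef struct {
            const char *name;
            Cond needs;                        /* conditions that must hold before the task runs */
            Cond generates;                    /* conditions established by the task */
            Cond destroys;                     /* conditions invalidated by the task */
            int  done;
        } TaskSpec;

        /* A supervisor can pick a valid order from the declarative specs alone,
         * without knowing anything about the engines that implement the tasks. */
        static int next_task(TaskSpec *t, int n, Cond state) {
            for (int i = 0; i < n; i++)
                if (!t[i].done && (t[i].needs & ~state) == 0)
                    return i;
            return -1;
        }

        int main(void) {
            enum { PARSED = 1, SSA = 2, OPTIMIZED = 4 };
            TaskSpec tasks[] = {
                { "codegen",   OPTIMIZED, 0,         0, 0 },
                { "optimize",  SSA,       OPTIMIZED, 0, 0 },
                { "build-ssa", PARSED,    SSA,       0, 0 },
                { "parse",     0,         PARSED,    0, 0 },
            };
            Cond state = 0;
            int i;
            while ((i = next_task(tasks, 4, state)) >= 0) {
                printf("run %s\n", tasks[i].name);   /* parse, build-ssa, optimize, codegen */
                state = (state | tasks[i].generates) & ~tasks[i].destroys;
                tasks[i].done = 1;
            }
            return 0;
        }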

    An extensible language for the generation of parallel data manipulation and control packages

    Get PDF
    The design and implementation of the language fSDL (full Structure Definition Language) is discussed. In fSDL, complex user-defined data types such as lists, tables, trees, and graphs can be constructed from a tiny set of primitives. Beyond mere structure definitions (also offered by previously existing tools), high-level functionality on these data types can be specified. In the COMPARE (ESPRIT) project, the C code generated from an fSDL specification will be used by compiler components running in parallel on a common data pool. fSDL is first translated into a sublanguage, flat fSDL, from which the actual C code is produced. Flat fSDL is a convenient interface for cooperation with other compiler generation tools. There is a formal relation between the input fSDL and the resulting flat form
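    As a purely hypothetical illustration (this is not actual fSDL or COMPARE output), a structure-definition generator of this kind might emit C code along the following lines for a user-defined list type, pairing the data layout with generated high-level operations.

        #include <stdlib.h>

        /* Generated representation of a user-defined list of integers. */
        typedef struct IntList IntList;
        struct IntList {
            int      head;
            IntList *tail;
        };

        /* Generated constructor: prepend an element to an existing list. */
        IntList *IntList_cons(int head, IntList *tail) {
            IntList *l = malloc(sizeof *l);
            if (l != NULL) {
                l->head = head;
                l->tail = tail;
            }
            return l;
        }

        /* Generated high-level operation: number of elements in the list. */
        int IntList_length(const IntList *l) {
            int n = 0;
            for (; l != NULL; l = l->tail)
                n++;
            return n;
        }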

    Exploitation of Task Level Parallelism

    Get PDF
    Many existing systems support task-level parallelism, usually involving task creation and synchronization. Synchronizing tasks requires a clear definition of the dependencies in a program, of the data-flow constraints among functions (tasks), or of the data-usage information of the tasks. This thesis describes a method, called the Symbol-Table method, for detecting and exploiting task-level parallelism at the inner level of sequential C programs. The method is built on two levels: a normal symbol table and an extended symbol table. A sequential C program is given as input to the normal gcc compiler, in which the procedures are defined as functions (tasks). A normal symbol table is then generated as output by gcc under Linux using a specific command. The information in that symbol table is used to generate the extended symbol table, which adds information about the variables' extended scopes and inner-level function dependencies. The extended symbol table is derived from the previously generated normal symbol table on the basis of the variables' scopes and L-value/R-value attributes. From that table we can identify the functions that share common variables and the variables that are accessed by different functions through extended scopes. A program dependency graph is then generated from the extended symbol table's information with a specific Java program. A simple program using this method has been implemented on a 64-bit Linux-based multiprocessor. Finally, a function graph is generated for every variable in the program from the table's information and the dependency graph's states. From that graph we obtain information about the extended scoping of variables, which identifies the task-level parallelism to exploit in the program. The parallelism can then be realized with MPI or other parallel platforms to obtain optimized and error-free parallelism
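    The sketch below only illustrates the kind of record an extended symbol table might hold and how a shared-variable conflict could be detected from L-value/R-value attributes; the field names and the example entries are hypothetical and are not taken from the thesis.

        #include <stdio.h>
        #include <string.h>

        typedef struct {
            const char *variable;   /* symbol name from the normal symbol table */
            const char *function;   /* function (task) that accesses the symbol */
            int         writes;     /* 1 if used as an L-value, 0 if only as an R-value */
        } ExtEntry;

        /* Two tasks must be ordered (an edge in the dependency graph) if they
         * touch the same variable and at least one of the accesses is a write. */
        static int conflict(const ExtEntry *a, const ExtEntry *b) {
            return strcmp(a->variable, b->variable) == 0 &&
                   strcmp(a->function, b->function) != 0 &&
                   (a->writes || b->writes);
        }

        int main(void) {
            ExtEntry table[] = {
                { "sum", "producer", 1 },
                { "sum", "consumer", 0 },
                { "tmp", "consumer", 1 },
            };
            for (int i = 0; i < 3; i++)
                for (int j = i + 1; j < 3; j++)
                    if (conflict(&table[i], &table[j]))
                        printf("dependency: %s -> %s (via %s)\n",
                               table[i].function, table[j].function, table[i].variable);
            return 0;
        }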

    DRAFT: Task System and Item Architecture (TSIA)

    Full text link
    During its execution, a task is independent of all other tasks. For an application which executes in terms of tasks, the application definition can be free of the details of the execution. Many projects have demonstrated that a task system (TS) can provide such an application with a parallel, distributed, heterogeneous, adaptive, dynamic, real-time, interactive, reliable, secure or other execution. A task consists of items and thus the application is defined in terms of items. An item architecture (IA) can support arrays, routines and other structures of items, thus allowing for a structured application definition. Taking properties from many projects, the support can extend through to currying, application-defined types, conditional items, streams and other definition elements. A task system and item architecture (TSIA) thus promises unprecedented levels of support for application execution and definition. Comment: vii+244 pages, including 126 figures of diagrams and code examples. Submitted to Springer Verlag. For further information see http://www.tsia.or

    Hardware design of task superscalar architecture

    Get PDF
    Exploiting concurrency to achieve greater performance is a difficult and important challenge for current high performance systems. Although the theory is plain, the complexity of traditional parallel programming models in most cases prevents the programmer from harvesting that performance. Several partitioning granularities have been proposed to better exploit concurrency at the task level. In this sense, different dynamic software task management systems, such as task-based dataflow programming models, leverage dataflow principles to improve task-level parallelism and overcome the limitations of static task management systems. These models implicitly schedule computation and data and use tasks instead of instructions as the basic work unit, thereby relieving the programmer of explicitly managing parallelism. While these programming models share conceptual similarities with the well-known Out-of-Order superscalar pipelines (e.g., dynamic data dependency analysis and dataflow scheduling), they rely on software-based dependency analysis, which is inherently slow and limits their scalability when the task granularity is fine and the number of tasks is large. The problem grows with the number of available cores: to keep all the cores busy and accelerate the overall application performance, the application must be partitioned into more and smaller tasks. Task scheduling (i.e., the creation and management of the execution of tasks) in software introduces overheads and so becomes increasingly inefficient as the number of cores grows. In contrast, a hardware scheduling solution can achieve greater speed-ups, as a hardware task scheduler requires fewer cycles than the software version to dispatch a task. The Task Superscalar is a hybrid dataflow/von Neumann architecture that exploits the task-level parallelism of the program. The Task Superscalar combines the effectiveness of Out-of-Order processors with the task abstraction, and thereby provides a unified management layer for CMPs which effectively employs processors as functional units. The Task Superscalar has previously been implemented in software, with limited parallelism and high memory consumption due to the nature of the software implementation. In this thesis, a Hardware Task Superscalar architecture is designed to be integrated in a future high performance computer with the ability to exploit fine-grained task parallelism. The main contributions of this thesis are: (1) a design of the operational flow of the Task Superscalar architecture, adapted and improved for hardware implementation; (2) an HDL prototype for latency exploration; (3) a full cycle-accurate simulator of the Hardware Task Superscalar (based on the previously obtained latencies); (4) a full design space exploration of the Task Superscalar component configuration (number and size) for systems with different numbers of processing elements (cores); (5) a comparison with a software implementation of a real task-based programming model runtime using real benchmarks; and (6) a hardware resource usage exploration of the selected configurations
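    As a minimal sketch of the dataflow-style dispatch idea that the thesis moves from software into hardware, the C fragment below counts satisfied input dependences and dispatches a task once all of its producers have completed; the structure and function names are illustrative and do not describe the actual Task Superscalar pipeline.

        #include <stdio.h>

        typedef struct {
            const char *name;
            int deps_total;     /* number of input operands the task waits on */
            int deps_ready;     /* how many producers have completed so far */
            int dispatched;
        } Task;

        /* Called when one producer of the task finishes. */
        static void satisfy_dep(Task *t) {
            if (++t->deps_ready == t->deps_total && !t->dispatched) {
                t->dispatched = 1;
                printf("dispatch %s\n", t->name);
            }
        }

        int main(void) {
            Task consumer = { "consumer", 2, 0, 0 };
            satisfy_dep(&consumer);   /* first input ready: not dispatched yet */
            satisfy_dep(&consumer);   /* second input ready: consumer dispatches */
            return 0;
        }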

    Coarse-Grain Parallel Programming in Jade

    No full text
    This paper presents Jade, a language which allows a programmer to easily express dynamic coarse-grain parallelism. Starting with a sequential program, a programmer augments those sections of code to be parallelized with abstract data usage information. The compiler and run-time system use this information to concurrently execute the program while respecting the program's data dependence constraints. Using Jade can significantly reduce the time and effort required to develop and maintain a parallel version of an imperative application with serial semantics. The paper introduces the basic principles of the language, compares Jade with other existing languages, and presents the performance of a sparse matrix Cholesky factorization algorithm implemented in Jade
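    Jade's actual syntax and runtime interface are not reproduced here; the C sketch below, built around a hypothetical spawn_task API, only mimics the core idea of attaching declared data-usage information to a section of code so that a runtime could order sections by their declared accesses.

        #include <stdio.h>

        typedef enum { READ, WRITE } Access;

        typedef struct {
            void  *object;      /* shared data the task will touch */
            Access mode;        /* declared usage, consulted by the runtime */
        } Usage;

        typedef void (*TaskBody)(void *arg);

        /* A real runtime would queue the task and run it concurrently once its
         * declared accesses do not conflict with earlier tasks; this sketch
         * simply runs tasks in program order. */
        static void spawn_task(TaskBody body, void *arg, Usage *uses, int n_uses) {
            (void)uses; (void)n_uses;   /* dependence analysis omitted here */
            body(arg);
        }

        static double column[8];

        static void scale_column(void *arg) {
            double factor = *(double *)arg;
            for (int i = 0; i < 8; i++)
                column[i] *= factor;
        }

        int main(void) {
            double f = 2.0;
            Usage uses[] = { { column, WRITE } };   /* declared data usage of the task */
            spawn_task(scale_column, &f, uses, 1);
            printf("column[0] = %f\n", column[0]);
            return 0;
        }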