2 research outputs found

    Running stream-like programs on heterogeneous multi-core systems

    All major semiconductor companies are now shipping multi-cores. Phones, PCs, laptops, and mobile internet devices will all require software that can make effective use of these cores. Writing high-performance parallel software is difficult, time-consuming and error-prone, increasing both time-to-market and cost. Software outlives hardware; it typically takes longer to develop new software than hardware, and legacy software tends to survive for a long time, during which the number of cores per system will increase. Development and maintenance productivity will be improved if parallelism and technical details are managed by the machine, while the programmer reasons about the application as a whole.

    Parallel software should be written using domain-specific high-level languages or extensions. These languages reveal implicit parallelism, which would be obscured by a sequential language such as C. When memory allocation and program control are managed by the compiler, the program's structure and data layout can be safely and reliably modified by high-level compiler transformations.

    One important application domain contains so-called stream programs, which are structured as independent kernels interacting only through one-way channels, called streams. Stream programming is not applicable to all programs, but it arises naturally in audio and video encode and decode, 3D graphics, and digital signal processing. This representation enables high-level transformations, including kernel unrolling and kernel fusion.

    This thesis develops new compiler and run-time techniques for stream programming. The first part of the thesis is concerned with a statically scheduled stream compiler. It introduces a new static partitioning algorithm, which determines which kernels should be fused in order to balance the loads on the processors and interconnects. A good partitioning algorithm is crucial if the compiler is to produce efficient code. The algorithm also takes account of downstream compiler passes---specifically software pipelining and buffer allocation---and it models the compiler's ability to fuse kernels. The latter is important because the compiler may not be able to fuse arbitrary collections of kernels.

    This thesis also introduces a static queue sizing algorithm. This algorithm is important when memory is distributed, especially when local stores are small. The algorithm takes account of latencies and variations in computation time, and is constrained by the sizes of the local memories.

    The second part of this thesis is concerned with dynamic scheduling of stream programs. First, it investigates the performance of known online, non-preemptive, non-clairvoyant dynamic schedulers. Second, it proposes two dynamic schedulers for stream programs. The first is specifically for one-dimensional stream programs. The second is more general: it does not need to be told the stream graph, but it has slightly higher overhead.

    This thesis also introduces some support tools related to stream programming. StarssCheck is a debugging tool, based on Valgrind, for the StarSs task-parallel programming language. It generates a warning whenever the program's behaviour contradicts a pragma annotation. Such behaviour could otherwise lead to exceptions or race conditions. StreamIt to OmpSs is a tool to convert a streaming program in the StreamIt language into a dynamically scheduled, task-based program using StarSs.
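    To make the kernel-and-stream structure described above concrete, the following is a minimal sketch in C (all names and sizes are illustrative assumptions, not code from the thesis): two kernels communicate through a bounded one-way FIFO, and fusing them would amount to merging the two loop bodies and deleting the intermediate buffer.

```c
#include <stdio.h>

#define N        1024   /* items per run (illustrative)          */
#define QSIZE     128   /* channel capacity, set by queue sizing */

/* One-way channel: a bounded FIFO of floats. */
typedef struct { float buf[QSIZE]; int head, tail; } stream_t;

static void  push(stream_t *s, float v) { s->buf[s->tail++ % QSIZE] = v; }
static float pop(stream_t *s)           { return s->buf[s->head++ % QSIZE]; }

/* Kernel 1: produce a ramp signal onto its output stream. */
static void source(stream_t *out, int n) {
    for (int i = 0; i < n; i++) push(out, (float)i);
}

/* Kernel 2: scale each item; reads only from its input stream. */
static void scale(stream_t *in, int n) {
    for (int i = 0; i < n; i++) printf("%f\n", 2.0f * pop(in));
}

int main(void) {
    stream_t q = {0};
    /* A static schedule alternates the kernels in batches of at most
       QSIZE items so the FIFO never overflows; fusing the kernels
       would merge both loop bodies and remove q entirely. */
    for (int done = 0; done < N; done += QSIZE) {
        source(&q, QSIZE);
        scale(&q, QSIZE);
    }
    return 0;
}
```

    Fusion removes channel traffic but also removes the freedom to place the two kernels on different processors, which is why the partitioning algorithm must model which fusions are legal and profitable.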
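    As rough intuition for the queue sizing problem (a generic bandwidth-delay argument, not the algorithm from the thesis): if a producer emits r items per cycle and the channel's round-trip latency is L cycles, the queue needs roughly r × L slots to keep both kernels busy, plus slack proportional to the variation in kernel computation times. Since the sum of all queue sizes must fit within each small local store, choosing the slack for every channel becomes a constrained optimization problem.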
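    To illustrate the class of bug StarssCheck detects, here is a hedged sketch in C using a StarSs-style task annotation (the exact clause spelling differs between StarSs and OmpSs releases, and vector_copy is a made-up example, not taken from the tool's documentation):

```c
/* StarSs-style task: the pragma promises that the task reads only
   a[0..n-1] and writes only b[0..n-1].  (Clause syntax is
   approximate; StarSs and OmpSs spell these clauses differently.) */
#pragma css task input(a[n]) output(b[n])
void vector_copy(const float *a, float *b, int n)
{
    for (int i = 0; i <= n; i++)   /* BUG: touches a[n] and b[n],     */
        b[i] = a[i];               /* one element past both regions   */
}
```

    The runtime schedules tasks from the declared regions alone, so the stray access to b[n] can silently race with another task that owns that element; under StarssCheck's Valgrind-based instrumentation, the access is reported as contradicting the pragma.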

    Hardware design of task superscalar architecture

    Exploiting concurrency to achieve greater performance is a difficult and important challenge for current high performance systems. Although the theory is straightforward, the complexity of traditional parallel programming models in most cases prevents the programmer from harvesting this performance.

    Several partitioning granularities have been proposed to better exploit concurrency at the task level. In this sense, different dynamic software task management systems, such as task-based dataflow programming models, apply dataflow principles to improve task-level parallelism and overcome the limitations of static task management systems. These models implicitly schedule computation and data and use tasks instead of instructions as a basic work unit, thereby relieving the programmer of explicitly managing parallelism.

    While these programming models share conceptual similarities with the well-known Out-of-Order superscalar pipelines (e.g., dynamic data dependency analysis and dataflow scheduling), they rely on software-based dependency analysis, which is inherently slow and limits their scalability when task granularity is fine and the number of tasks is large.

    The aforementioned problem grows with the number of available cores. In order to keep all the cores busy and accelerate overall application performance, it becomes necessary to partition the application into more, smaller tasks. Task scheduling in software (i.e., the creation and management of task execution) introduces overheads, and so becomes increasingly inefficient as the number of cores grows. In contrast, a hardware scheduling solution can achieve greater speed-ups, as a hardware task scheduler requires fewer cycles than the software version to dispatch a task.

    The Task Superscalar is a hybrid dataflow/von-Neumann architecture that exploits the task-level parallelism of the program. It combines the effectiveness of Out-of-Order processors with the task abstraction, and thereby provides a unified management layer for CMPs which effectively employs processors as functional units. Prior to this work, the Task Superscalar had been implemented in software, with limited parallelism and high memory consumption inherent to a software implementation.

    In this thesis, a Hardware Task Superscalar architecture is designed to be integrated into a future High Performance Computer with the ability to exploit fine-grained task parallelism. The main contributions of this thesis are: (1) a design of the operational flow of the Task Superscalar architecture, adapted and improved for hardware implementation; (2) an HDL prototype for latency exploration; (3) a full cycle-accurate simulator of the Hardware Task Superscalar (based on the previously obtained latencies); (4) a full design space exploration of the Task Superscalar component configuration (number and size) for systems with different numbers of processing elements (cores); (5) a comparison with a software implementation of a real task-based programming model runtime using real benchmarks; and (6) a hardware resource usage exploration of the selected configurations.
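    A back-of-envelope calculation shows why software dispatch caps scalability (the numbers here are hypothetical, chosen only to illustrate the argument, not measurements from the thesis): if dispatching one task costs o cycles in the runtime, no more than 1/o tasks can be started per cycle, so tasks of g cycles each can keep at most about g/o cores busy. With o = 3000 cycles of software overhead and tasks of g = 30000 cycles, the ceiling is about 10 cores; a hardware scheduler dispatching in o = 100 cycles lifts the same ceiling to about 300 cores.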
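    To make the out-of-order analogy concrete, below is a minimal OmpSs-style fragment in C (load_a and load_b are hypothetical helpers; the fragment targets the Mercurium OmpSs compiler, whereas plain OpenMP would spell the clauses depend(in:...) and depend(out:...)). The third task becomes ready only when x and y have been produced, exactly as an out-of-order core wakes an instruction when its source operands arrive; this dependency analysis, performed in software by the StarSs/OmpSs runtime, is what the Task Superscalar moves into hardware.

```c
#include <stdio.h>

/* Illustrative helpers, not part of any real API. */
static float load_a(void) { return 1.0f; }
static float load_b(void) { return 2.0f; }

int main(void)
{
    float x, y, z;

    #pragma omp task out(x)           /* producer of x                   */
    x = load_a();

    #pragma omp task out(y)           /* producer of y, independent of x */
    y = load_b();

    #pragma omp task in(x, y) out(z)  /* consumer: eligible to run only  */
    z = x + y;                        /* after both producers finish     */

    #pragma omp taskwait              /* wait for the task graph to drain */
    printf("%f\n", z);
    return 0;
}
```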