89 research outputs found

    Clustered VLIW architecture based on queue register files

    Get PDF
    Institute for Computing Systems Architecture
    Instruction-level parallelism (ILP) is a set of hardware and software techniques that allow parallel execution of machine operations. Superscalar architectures rely most heavily upon hardware schemes to identify parallelism among operations. Although successful in terms of performance, the hardware complexity involved might limit the scalability of this model. VLIW architectures use a different approach to exploit ILP. In this case all data dependence analyses and scheduling of operations are performed at compile time, resulting in a simpler hardware organization. This allows the inclusion of a larger number of functional units (FUs) into a single chip. In spite of this relative simplification, the scalability of VLIW architectures can be constrained by the size and number of ports of the register file. VLIW machines often use software pipelining techniques to improve the execution of loop structures, which can increase the register pressure. Furthermore, the access time of a register file can be compromised by the number of ports, causing a negative impact on the machine cycle time. For these reasons we understand that the benefits of having parallel FUs can be undermined by a centralised register file, which has motivated the investigation of alternative machine designs. This thesis presents a scalable VLIW architecture comprising clusters of FUs and private register files. Register files organised as queue structures are used as a mechanism for inter-cluster communication, allowing the enforcement of fixed latency in the process. This scheme presents better possibilities in terms of scalability as the size of the individual register files is not determined by the total number of FUs, suggesting that the silicon area may grow only linearly with respect to the total number of FUs. However, the effectiveness of such an organization depends on the efficiency of the code partitioning strategy. We have developed an algorithm for a clustered VLIW architecture integrating both software pipelining and code partitioning in a single procedure. Experimental results show it may allow performance levels close to those of an unclustered machine without communication restraints. Finally, we have developed silicon area and cycle time models to quantify the scalability of performance and cost for this class of architecture.
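    The abstract does not give the queue register file's exact interface, so the following is only a minimal sketch of the idea it describes: each cluster keeps a small private register file, and values crossing clusters travel through a FIFO "queue register file" whose fixed latency the compiler can rely on. All names (QueueRegisterFile, Cluster, depth, latency) are illustrative assumptions, not the thesis's actual design.

```python
from collections import deque

class QueueRegisterFile:
    """FIFO structure used for inter-cluster value transfer.

    A producing cluster enqueues a result; the consuming cluster
    dequeues it a fixed number of cycles later, which is how a
    fixed communication latency can be enforced.
    """
    def __init__(self, depth, latency):
        self.depth = depth          # number of queue entries
        self.latency = latency      # fixed transfer latency in cycles
        self.entries = deque()      # (ready_cycle, value) pairs

    def enqueue(self, value, cycle):
        if len(self.entries) >= self.depth:
            raise RuntimeError("queue register file full: stall producer")
        self.entries.append((cycle + self.latency, value))

    def dequeue(self, cycle):
        ready_cycle, value = self.entries[0]
        if cycle < ready_cycle:
            raise RuntimeError("value not yet available: stall consumer")
        self.entries.popleft()
        return value

class Cluster:
    """A cluster with a small private register file and its own FUs."""
    def __init__(self, num_regs):
        self.regs = [0] * num_regs  # private register file

# Example: cluster 0 produces a value that cluster 1 consumes 2 cycles later.
c0, c1 = Cluster(16), Cluster(16)
link = QueueRegisterFile(depth=4, latency=2)
c0.regs[3] = 42
link.enqueue(c0.regs[3], cycle=10)   # issued by cluster 0 at cycle 10
c1.regs[5] = link.dequeue(cycle=12)  # visible to cluster 1 at cycle 12
```

    Because a value leaves the queue in the order it entered, the size of each private register file stays independent of the total number of FUs, which is the scalability argument the abstract makes.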

    Distributed modulo scheduling

    Get PDF
    Wide-issue ILP machines can be built using the VLIW approach, as many of the hardware complexities found in superscalar processors can be transferred to the compiler. However, the scalability of VLIW architectures is still constrained by the size and number of ports of the register file required by a large number of functional units. Organizations composed of clusters of a few functional units and small private register files have been proposed to deal with this problem, an approach highly dependent on scheduling and partitioning strategies. This paper presents DMS, an algorithm that integrates modulo scheduling and code partitioning in a single procedure. Experimental results have shown the algorithm is effective for configurations of up to 8 clusters, or even more when targeting vectorizable loops.
    Keywords: ILP, VLIW, Clustering, Software Pipelining
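    The DMS algorithm itself is not detailed in this abstract, so the sketch below only shows the standard starting point that modulo schedulers share: computing the minimum initiation interval (MII) as the larger of the resource-constrained and recurrence-constrained lower bounds. The loop figures and function names are hypothetical.

```python
from math import ceil

def resource_mii(op_counts, fu_counts):
    """Resource-constrained lower bound: the most heavily used
    functional-unit class limits how often a new iteration can start."""
    return max(ceil(op_counts[k] / fu_counts[k]) for k in op_counts)

def recurrence_mii(cycles):
    """Recurrence-constrained lower bound: for each dependence cycle,
    total latency divided by the distance (in iterations) it spans."""
    return max(ceil(latency / distance) for latency, distance in cycles)

# Hypothetical loop: 6 ALU ops and 2 memory ops per iteration, on a
# machine with 4 ALUs and 1 memory port, plus one recurrence carrying
# 3 cycles of latency across 1 iteration.
op_counts = {"alu": 6, "mem": 2}
fu_counts = {"alu": 4, "mem": 1}
cycles = [(3, 1)]

mii = max(resource_mii(op_counts, fu_counts), recurrence_mii(cycles))
print(mii)  # scheduling starts at this initiation interval and
            # increases it if no valid modulo schedule is found
```

    In a clustered machine, DMS would additionally have to decide which cluster each operation lands on while searching for a schedule at that interval, since inter-cluster moves consume latency and queue capacity.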

    Partitioned schedules for clustered VLIW architectures

    Get PDF

    TaskGenX: A Hardware-Software Proposal for Accelerating Task Parallelism

    Get PDF
    As chip multi-processors (CMPs) are becoming more and more complex, software solutions such as parallel programming models are attracting a lot of attention. Task-based parallel programming models offer an appealing approach to utilizing complex CMPs. However, the increasing number of cores on modern CMPs is pushing research towards the use of fine-grained parallelism. Task-based programming models need to be able to handle such workloads and offer performance and scalability. Using specialized hardware to boost the performance of task-based programming models is a common practice in the research community. Our paper makes the observation that task creation becomes a bottleneck when we execute fine-grained parallel applications with many task-based programming models. As the number of cores increases, the time spent generating the tasks of the application becomes more critical to the entire execution. To overcome this issue, we propose TaskGenX. TaskGenX offers a solution for minimizing task creation overheads and relies on both the runtime system and dedicated hardware. On the runtime system side, TaskGenX decouples task creation from the other runtime activities. It then transfers this part of the runtime to specialized hardware. We draw up the requirements for this hardware in order to boost the execution of highly parallel applications. From our evaluation using 11 parallel workloads on both symmetric and asymmetric multicore systems, we obtain performance improvements of up to 15×, averaging 3.1×, over the baseline.
    This work has been supported by the RoMoL ERC Advanced Grant (GA 321253), by the European HiPEAC Network of Excellence, by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P), by the Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), and by the European Union’s Horizon 2020 research and innovation programme under grant agreements No. 671697 and No. 779877. M. Moretó has been partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal fellowship number RYC-2016-21104. Finally, the authors would like to thank Thomas Grass for his valuable help with the simulator.
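    TaskGenX's hardware is not specified in this abstract, so the following is a software-only analogue of the decoupling idea it describes, assuming a hypothetical runtime in which workers hand creation requests to a dedicated creator (standing in for the proposed hardware) instead of building task descriptors inline. The names spawn, creator and task_pool are illustrative, not the paper's API.

```python
import queue
import threading

task_pool = queue.Queue()        # ready tasks awaiting execution
create_requests = queue.Queue()  # creation requests, normally handled inline

def creator():
    """Stands in for the dedicated task-creation unit: it turns lightweight
    creation requests into full task descriptors, so the worker that spawned
    them never pays the creation cost on its critical path."""
    while True:
        func, args = create_requests.get()
        if func is None:
            break
        task_pool.put({"func": func, "args": args})

def worker():
    """Executes whatever descriptors the creator has produced."""
    while True:
        task = task_pool.get()
        if task is None:
            break
        task["func"](*task["args"])

def spawn(func, *args):
    """A spawning worker only enqueues a request; descriptor construction
    happens on the creator side."""
    create_requests.put((func, args))

creator_thread = threading.Thread(target=creator)
worker_thread = threading.Thread(target=worker)
creator_thread.start(); worker_thread.start()

spawn(print, "fine-grained task executed off the critical path")
create_requests.put((None, None))  # shut down the creator
creator_thread.join()
task_pool.put(None)                # shut down the worker
worker_thread.join()
```

    The point of the decoupling is that, as tasks get finer-grained, the creation cost per task stays off the worker's critical path instead of growing into the serial bottleneck the paper identifies.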

    Simultaneous multithreading: Operating system perspective

    Get PDF
    Developing a CPU architecture is a very complicated, iterative process that requires significant time and money investments. The motivation for this work is to find ways to decrease the amount of time and money needed for the development of hardware architectures. The main problem is that it is very difficult to determine the performance of the architecture, since it is impossible to take any performance measurements until the development process is complete. Consequently, it is impossible to improve the performance of the product or to predict the influence of different parts of the architecture on the architecture's overall performance. Another problem is that this type of development does not allow the developed system to be reconfigured or altered without complete re-development. The solution to the problems mentioned above is software simulators, which allow the architecture to be researched before even starting to cut the silicon. Simultaneous multithreading (SMT) is a modern approach to CPU design. This technique increases system throughput by decreasing both the total instruction delay and the stall times of the CPU. The gain in performance of a typical SMT processor is achieved by allowing instructions from several threads to be fetched into the CPU simultaneously. In order to function successfully the CPU needs software support. In modern computer systems the influence of an operating system on overall system performance can no longer be ignored. It is important to understand that the union of the CPU and the supporting operating system, and their interdependency, determines the overall performance of any computer system. In a system that has been implemented at the hardware level such analysis is impossible, since the hardware system is neither flexible nor configurable. However, in the SMT architecture, the system is capable of performing some useful work even if a task has generated an error. A wide range of simulators is described in the literature, and many of them are publicly accessible. The main goal of this work is to modify an existing SimOS/Topsy simulator to achieve a simple, configurable, publicly accessible SMT SimOS/Topsy simulator that must also include an SMT Topsy. The simulator should demonstrate the fetching process of the SMT MIPS, as well as scheduling aspects of the integrated CPU and operating system environment. This work covers a broad range of aspects, among which are: 1) Completion of the SMT MIPS and SMT Topsy specifications; 2) Integration of MXS into SimOS/Topsy; 3) Modifications to the fetching unit of MXS that allow it to support SMT; 4) Addition of SMT support to Topsy. This work uses the Topsy/R4000 simulator developed at the Swiss Federal Institute of Technology, and the MXS (R10000) part of the SimOS simulator developed at Stanford University. The development process utilizes the C high-level language and the Intel and MIPS assembly languages. The result of this work is the development of a complete computer system software simulator. The simulator allows performance measurements to be taken and the SMT Topsy and the fetching unit of the SMT MXS to be reconfigured. The simulator is modular: any of its parts can be substituted with other parts that perform similar functionality. This also means that the whole simulator can be integrated into a larger-scale simulation project.
The development of this simulator significantly decreases the amount of time and money needed for the development of hardware architectures and provides new ways of researching the influence of an operating system on the performance of the computer system as a whole.
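    The thesis modifies the MXS fetch unit to fetch from several threads per cycle, but the abstract does not state the fetch policy, so the snippet below assumes a simple round-robin fetch across hardware threads purely as an illustration of the decision an SMT fetch stage makes each cycle.

```python
def smt_fetch(threads, fetch_width, start=0):
    """Round-robin fetch across hardware threads in one cycle.

    `threads` maps a thread id to its pending instruction list; up to
    `fetch_width` instructions are pulled per cycle, cycling through
    threads so no single stalled thread monopolises the fetch slots.
    """
    fetched = []
    ids = list(threads)
    i = start
    while len(fetched) < fetch_width and any(threads[t] for t in ids):
        tid = ids[i % len(ids)]
        if threads[tid]:
            fetched.append((tid, threads[tid].pop(0)))
        i += 1
    return fetched

# Two threads sharing a hypothetical 4-wide fetch unit.
threads = {0: ["lw", "add", "beq"], 1: ["mul", "sub"]}
print(smt_fetch(threads, fetch_width=4))
# [(0, 'lw'), (1, 'mul'), (0, 'add'), (1, 'sub')]
```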