
    Instruction fetch architectures and code layout optimizations

    The design of higher performance processors has been following two major trends: increasing the pipeline depth to allow faster clock rates, and widening the pipeline to allow parallel execution of more instructions. Designing a higher performance processor implies balancing all the pipeline stages to ensure that overall performance is not dominated by any one of them. This means that a faster execution engine also requires a faster fetch engine, to ensure that enough instructions can be read and decoded to keep the pipeline full and the functional units busy. This paper explores the challenges faced by the instruction fetch stage for a variety of processor designs, from early pipelined processors to the more aggressive wide-issue superscalars. We describe the different fetch engines proposed in the literature, the performance issues involved, and some of the proposed improvements. We also show how compiler techniques that optimize the layout of the code in memory can be used to improve the fetch performance of the different engines described. Overall, we show how instruction fetch has evolved from fetching one instruction every few cycles, to one instruction per cycle, to a full basic block per cycle, and finally to several basic blocks per cycle, tracing the evolution of the mechanisms surrounding the instruction cache and the compiler optimizations used to better exploit them.
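
    As a rough illustration of the "full basic block per cycle" fetch engines surveyed here, the sketch below models a fetch unit that reads sequential instructions until it hits a predicted-taken control transfer, a fetch-width limit, or a cache-line boundary. The Instr fields, fetch width, line size, and the always-not-taken predictor are illustrative assumptions, not details from the paper.

```cpp
// Minimal sketch of a basic-block fetch engine: each cycle it reads sequential
// instructions from the instruction cache until it reaches a predicted-taken
// control-transfer instruction, the fetch-width limit, or the end of the
// cache line. All parameters here are assumed for illustration.
#include <cstdint>
#include <vector>

struct Instr {
    uint64_t pc;
    bool is_branch;      // control-transfer instruction?
    uint64_t target;     // branch target if taken
};

struct BranchPredictor {
    // Placeholder predictor: always predicts not-taken.
    bool predict_taken(uint64_t /*pc*/) const { return false; }
};

constexpr int kFetchWidth = 4;   // instructions per cycle (assumed)
constexpr int kLineBytes  = 64;  // I-cache line size (assumed)
constexpr int kInstrBytes = 4;   // fixed-length encoding (assumed)

// Fetch one block of sequential instructions starting at fetch_pc.
std::vector<Instr> fetch_block(uint64_t fetch_pc,
                               const BranchPredictor& bp,
                               Instr (*icache_read)(uint64_t)) {
    std::vector<Instr> block;
    uint64_t pc = fetch_pc;
    while ((int)block.size() < kFetchWidth) {
        Instr in = icache_read(pc);
        block.push_back(in);
        bool taken    = in.is_branch && bp.predict_taken(in.pc);
        bool line_end = ((pc + kInstrBytes) % kLineBytes) == 0;
        if (taken || line_end) break;   // single-block fetch stops here
        pc += kInstrBytes;
    }
    return block;
}
```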

    Utilizing block size variability to enhance instruction fetch rate

    In the past, instruction fetch speeds have been improved by using cache schemes that capture the actual program flow. In this paper, we elaborate on the architecture and operation of an instruction cache named the Variable-Sized Block Cache (VSBC), which also makes use of the dynamic behavior of a program. Current trace-based cache schemes usually store some instructions repeatedly; this redundancy is eliminated in VSBC. Our cache also allows storage of basic blocks of arbitrary sizes in a multiple-way cache structure. An overall comparison of trace miss rate and average trace length, using the SPECint2000 integer benchmarks, shows VSBC to be a better-performing cache scheme than the trace cache (TC).
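
    A minimal sketch of a VSBC-like structure, as I read the abstract: basic blocks of arbitrary length are stored exactly once, indexed by start address in a set-associative table. The set/way counts, indexing function, and rotation-based replacement are assumptions made for illustration and are not taken from the paper.

```cpp
// Illustrative variable-sized-block cache: each basic block is stored once,
// indexed by its start address, in a set-associative structure.
#include <array>
#include <cstdint>
#include <vector>

struct BasicBlock {
    uint64_t start_pc = 0;
    std::vector<uint32_t> instrs;   // variable-length block body
    bool valid = false;
};

template <int kSets = 256, int kWays = 4>
class VariableSizedBlockCache {
public:
    // Look up a block by its starting PC; returns nullptr on a miss.
    const BasicBlock* lookup(uint64_t start_pc) const {
        const auto& set = sets_[index(start_pc)];
        for (const auto& blk : set)
            if (blk.valid && blk.start_pc == start_pc) return &blk;
        return nullptr;
    }

    // Insert a block; each block is stored exactly once (no trace-cache-style
    // replication of the same instructions across multiple traces).
    void insert(uint64_t start_pc, std::vector<uint32_t> body) {
        size_t s = index(start_pc);
        BasicBlock& victim = sets_[s][victim_way_[s]];   // simple rotation
        victim.start_pc = start_pc;
        victim.instrs   = std::move(body);
        victim.valid    = true;
        victim_way_[s]  = (victim_way_[s] + 1) % kWays;
    }

private:
    static size_t index(uint64_t pc) { return (pc >> 2) % kSets; }
    std::array<std::array<BasicBlock, kWays>, kSets> sets_{};
    std::array<int, kSets> victim_way_{};
};
```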

    Methods and systems for ordering instructions using future values

    A method of ordering instructions. The method can include placing a first instruction that consumes a value of an object before a second instruction that produces the value of the object, such that the first instruction is processed before the second instruction and a physical location is allocated to the value of the object upon processing the first instruction.
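
    The toy sketch below illustrates the claimed ordering idea under stated assumptions: a consumer instruction processed before its producer allocates a physical location for the value, and the producer later reuses that same location. The FutureValueTable name, its API, and the integer object identifiers are hypothetical and not from the patent text.

```cpp
// A consumer processed before its producer allocates the physical location;
// the later-processed producer writes into that already-allocated location.
#include <cstdint>
#include <unordered_map>

struct PhysLoc { int reg; };          // physical register allocated to a value

class FutureValueTable {
public:
    // Consumer side: processed first. If no location exists yet for the
    // architectural object (e.g. a logical register), allocate one now.
    PhysLoc consume(int object_id) {
        auto it = map_.find(object_id);
        if (it != map_.end()) return it->second;
        PhysLoc loc{next_reg_++};     // allocate on first (consumer) use
        map_.emplace(object_id, loc);
        return loc;
    }

    // Producer side: processed later. It reuses the location the consumer
    // already allocated, so the dependence is honored despite the ordering.
    PhysLoc produce(int object_id) { return consume(object_id); }

private:
    std::unordered_map<int, PhysLoc> map_;
    int next_reg_ = 0;
};
```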

    Poor Man's Trace Cache: A Variable Delay Slot Architecture

    We introduce a novel fetch architecture called Poor Man's Trace Cache (PMTC). PMTC constructs taken-path instruction traces via instruction replication in static code and inserts them after unconditional direct and select conditional direct control transfer instructions. These traces extend to the end of the cache line. Since the space available for trace insertion varies with the position of the control transfer instruction within the line, we refer to these fetch slots as variable delay slots. This approach ensures that traces are fetched along with the control transfer instruction that initiated the trace. Branch, jump, and return instruction semantics, as well as the fetch unit, are modified to utilize traces in delay slots. PMTC yields the following benefits: (1) average fetch bandwidth increases, as the front end can fetch across taken control transfer instructions in a single cycle; (2) the dynamic number of instruction cache lines fetched by the processor is reduced, as multiple non-contiguous basic blocks along a given path are encountered in one fetch cycle; and (3) replication of a branch instruction along multiple paths provides path separability for branches, which positively impacts branch prediction accuracy. The PMTC mechanism requires minimal modifications to the processor's fetch unit, and the trace insertion algorithm can easily be implemented within the assembler without compiler support.
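
    A hedged sketch of what PMTC-style trace insertion at the assembler level might look like: after each unconditional direct control transfer, the remaining slots of the current cache line are filled with replicated taken-path instructions. The cache-line and instruction sizes, the Instr fields, and the address-lookup callback are assumptions for illustration only.

```cpp
// Fill the variable delay slots after each unconditional direct transfer with
// replicated instructions from the taken path, up to the end of the line.
#include <cstdint>
#include <vector>

struct Instr {
    uint64_t addr;
    bool is_uncond_direct;   // e.g. a jump/call with a static target
    uint64_t target;
    uint32_t encoding;
};

constexpr int kLineBytes  = 64;  // assumed
constexpr int kInstrBytes = 4;   // assumed

std::vector<Instr> insert_variable_delay_slots(const std::vector<Instr>& code,
                                               const Instr* (*at_addr)(uint64_t)) {
    std::vector<Instr> out;
    for (const Instr& in : code) {
        out.push_back(in);
        if (!in.is_uncond_direct) continue;
        // Slots left in the current cache line after the transfer instruction.
        int offset_next = (int)((in.addr + kInstrBytes) % kLineBytes);
        int slots = (offset_next == 0) ? 0
                                       : (kLineBytes - offset_next) / kInstrBytes;
        uint64_t t = in.target;
        for (int i = 0; i < slots; ++i, t += kInstrBytes) {
            const Instr* ti = at_addr(t);
            if (!ti) break;
            out.push_back(*ti);                  // replicate taken-path instruction
            if (ti->is_uncond_direct) break;     // stop at the next transfer
        }
    }
    return out;
}
```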

    Improving Instruction Fetch Rate with Code Pattern Cache for Superscalar Architecture

    In the past, instruction fetch speeds have been improved by using cache schemes that capture the actual program flow. In this proposal, we present the architecture of a new instruction cache named the code pattern cache (CPC), intended for use with superscalar processors. CPC's operation is based on two fundamental principles: common programs tend to repeat their execution patterns, and efficient storage of a program's flow can enhance the performance of an instruction fetch mechanism. CPC saves basic blocks (sets of instructions separated by control instructions) and their boundary addresses while the code is running. The boundary addresses and the basic blocks themselves are stored in two separate structures, called the block pointer cache (BPC) and the basic block cache (BBC), respectively. Later, if the same basic block sequence is expected to execute, it is fetched from the CPC instead of the instruction cache; this mechanism results in a higher likelihood of delivering a larger number of instructions in every clock cycle. We developed single- and multi-threaded simulators for TC, BC, and CPC, and used them with 10 SPECint2000 benchmarks. The simulation results demonstrated CPC's advantage over TC and BC in terms of trace miss rate and average trace length. Additionally, we used cache models to quantify the timing, area, and power of the three cache schemes. Using an aggregate performance index that combined the simulation and modeling results, CPC was shown to perform better than both TC and BC. During our research, each of the TC, BC, or CPC configurations took 4-6 hours to simulate, so performance comparison of these caches proved to be a very time-consuming process. Neural network models (NNMs) can be time-efficient alternatives to simulation, so we studied their feasibility for representing cache behavior. We developed two NNMs, one to predict the trace miss rate and the other to predict the average trace length for the three caches. The NNMs modeled the caches with reasonable accuracy and produced results in a fraction of a second.
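
    To make the BPC/BBC split concrete, here is a loose sketch under assumed data structures: the block pointer cache maps a leading fetch address to an ordered list of basic-block start addresses, and the basic block cache stores each block body once. The hash-map implementation, field names, and the single-pattern-per-lead-PC policy are illustrative assumptions, not the proposal's actual organization.

```cpp
// Code pattern cache split into a pointer structure (BPC) and a block
// structure (BBC); block bodies are never duplicated across patterns.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct BasicBlockEntry {
    std::vector<uint32_t> instrs;        // the block body, stored once
};

struct PatternEntry {
    std::vector<uint64_t> block_addrs;   // ordered start addresses of blocks
};

class CodePatternCache {
public:
    // Record a dynamically observed pattern of basic blocks.
    void record(uint64_t lead_pc, const std::vector<uint64_t>& addrs,
                const std::unordered_map<uint64_t, std::vector<uint32_t>>& bodies) {
        bpc_[lead_pc] = PatternEntry{addrs};
        for (uint64_t a : addrs)
            bbc_.emplace(a, BasicBlockEntry{bodies.at(a)});  // no duplication
    }

    // Fetch side: if the lead PC hits in the BPC, deliver the whole pattern
    // of blocks in one go instead of going to the instruction cache.
    std::vector<uint32_t> fetch_pattern(uint64_t lead_pc) const {
        std::vector<uint32_t> out;
        auto it = bpc_.find(lead_pc);
        if (it == bpc_.end()) return out;            // CPC miss
        for (uint64_t a : it->second.block_addrs) {
            const auto& blk = bbc_.at(a);
            out.insert(out.end(), blk.instrs.begin(), blk.instrs.end());
        }
        return out;
    }

private:
    std::unordered_map<uint64_t, PatternEntry> bpc_;       // block pointer cache
    std::unordered_map<uint64_t, BasicBlockEntry> bbc_;    // basic block cache
};
```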

    A Survey on Thread-Level Speculation Techniques

    Thread-Level Speculation (TLS) is a promising technique that allows the parallel execution of sequential code without relying on a prior compile-time dependence analysis. In this work, we introduce the technique, present a taxonomy of TLS solutions, and summarize and put into perspective the most relevant advances in this field. This work was supported by MICINN (Spain) and the ERDF program of the European Union: HomProg-HetSys project (TIN2014-58876-P), CAPAP-H5 network (TIN2014-53522-REDT), and COST Program Action IC1305: Network for Sustainable Ultrascale Computing (NESUS).
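
    As a toy model of the TLS idea the survey covers (not of any specific proposal in it), the sketch below runs iterations speculatively with buffered writes and recorded reads, commits them in order, and squashes and re-executes any iteration whose reads overlap an earlier iteration's writes. The sequential simulation and all data structures are assumptions for illustration.

```cpp
// Sequentially simulated thread-level speculation with in-order commit and
// read/write-set conflict detection.
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct SpecTask {
    std::unordered_set<uint64_t> read_set;
    std::unordered_map<uint64_t, int64_t> write_buffer;   // addr -> value
};

// Run `body(i, task, memory)` speculatively for each iteration (the body
// should check its own write buffer before reading memory), then commit in
// order, squashing and re-running iterations whose read set overlaps an
// earlier iteration's committed writes.
void run_tls(int iters, std::unordered_map<uint64_t, int64_t>& memory,
             const std::function<void(int, SpecTask&,
                 const std::unordered_map<uint64_t, int64_t>&)>& body) {
    std::vector<SpecTask> tasks(iters);
    for (int i = 0; i < iters; ++i) body(i, tasks[i], memory);  // "parallel" phase

    std::unordered_set<uint64_t> committed_writes;
    for (int i = 0; i < iters; ++i) {                 // in-order commit
        bool conflict = false;
        for (uint64_t a : tasks[i].read_set)
            if (committed_writes.count(a)) { conflict = true; break; }
        if (conflict) {                               // squash and re-execute
            tasks[i] = SpecTask{};
            body(i, tasks[i], memory);                // earlier writes now visible
        }
        for (auto& [a, v] : tasks[i].write_buffer) {
            memory[a] = v;
            committed_writes.insert(a);
        }
    }
}
```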

    The Pattern Cache: A Mechanism to Reduce Power Consumption in Out of Order Processors by Removing Duplicated Efforts

    Out-of-order engines are the basis for nearly every high-performance general-purpose processor today, due to their ability to mask the penalties associated with long-latency operations. Unfortunately, these benefits come at the cost of a substantial amount of power-consuming hardware. In addition, the prevalence of loops in code means that this hardware often duplicates its efforts, rescheduling the same sequence thousands of times within a typical program. While each iteration of a loop is not identical, due to branches and variable-length operations, for the SPEC2017 benchmarks tested the most common dynamically scheduled instruction pattern accounts for anywhere from 43% to 88% of the reorderings, and the four most common patterns account for anywhere from 70% to 98%. To eliminate some of the duplicated work of finding the same dynamic schedule, the execution patterns that the out-of-order engine creates can be recorded in a cache that indexes patterns by the branch that immediately precedes each pattern. If the same pattern is seen enough times, much of the dynamic scheduling hardware can be powered off and the previously determined schedule can be used. This powered-off hardware includes the common data bus that broadcasts the output registers written in a particular cycle to every reservation station. Instead, the system can replay instructions based on the recorded order. Many parameters within this system affect both the amount of time spent in this replay mode and the relative performance of the system. These include the start threshold, the stop threshold, the length of time to wait for a replayed instruction to become ready, the mechanisms for handling squashes and timeouts, the number of patterns to store per branch, and the length of history considered. While the general trade-offs of adjusting these parameters are often the same for most benchmarks, the optimum value depends both on the values of the other parameters and on the particular code being run. With this in mind, the proposed system has been shown to spend over 40% of its time in replay mode with only a 2% reduction in performance relative to a standard out-of-order processor. While this system has been shown to work well, the goal of reducing power by removing duplicated effort means that the pattern cache size must be limited, in both the length of the patterns and the number of patterns that can be stored. Limiting the pattern length to reasonable sizes has almost no effect on system operation in most cases, but limiting the number of patterns does. For some benchmarks it is still possible to obtain most of the benefit with a pattern cache on the order of kilobytes (30% replay utilization with a 1% performance drop in the best case), while other benchmarks reach only half of their maximum possible utilization rates. With this in mind, the proposed system shows promising initial results, but for it to be an effective power-saving tool, more work must be done to find ways to limit the size of the pattern cache with smaller reductions in utilization rates.
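
    A loose sketch of how a pattern cache indexed by the preceding branch might be trained and consulted, with assumed start/stop thresholds and an assumed representation of a schedule as a list of instruction IDs; none of these details are taken from the thesis.

```cpp
// Pattern cache keyed by the branch PC that precedes a dynamically built
// schedule; replay is enabled once the same schedule has repeated enough.
#include <cstdint>
#include <unordered_map>
#include <vector>

using Schedule = std::vector<uint32_t>;    // issue order as instruction IDs

struct PatternEntry {
    Schedule schedule;
    int hits = 0;          // times this exact order was observed
    int misses = 0;        // mismatches while replaying
};

class PatternCache {
public:
    static constexpr int kStartThreshold = 4;   // assumed
    static constexpr int kStopThreshold  = 2;   // assumed

    // Called when the out-of-order engine finishes scheduling the region
    // after `branch_pc`; trains the entry for that branch.
    void train(uint64_t branch_pc, const Schedule& observed) {
        auto& e = table_[branch_pc];
        if (e.schedule == observed) { e.hits++; }
        else { e.schedule = observed; e.hits = 1; e.misses = 0; }
    }

    // Replay is used (and the scheduling hardware can be gated off) only when
    // the pattern has proven stable enough.
    const Schedule* try_replay(uint64_t branch_pc) const {
        auto it = table_.find(branch_pc);
        if (it != table_.end() && it->second.hits >= kStartThreshold &&
            it->second.misses < kStopThreshold)
            return &it->second.schedule;
        return nullptr;
    }

    // Called when a replayed schedule turned out not to match execution.
    void report_mismatch(uint64_t branch_pc) { table_[branch_pc].misses++; }

private:
    std::unordered_map<uint64_t, PatternEntry> table_;
};
```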

    Software and hardware methods for memory access latency reduction on ILP processors

    While microprocessors have doubled their speed every 18 months, performance improvement of memory systems has continued to lag behind. To address the speed gap between CPU and memory, a standard multi-level caching organization has been built for fast data accesses before the data have to be accessed in the DRAM core. The existence of these caches in a computer system, such as L1, L2, L3, and DRAM row buffers, does not mean that data locality will be automatically exploited. The effective use of the memory hierarchy mainly depends on how data are allocated and how memory accesses are scheduled. In this dissertation, we propose several novel software and hardware techniques to effectively exploit data locality and to significantly reduce memory access latency.

    We first present a case study at the application level that restructures memory-intensive programs by utilizing program-specific knowledge. This study identifies the problem of bit-reversals, a set of data reordering operations extensively used in scientific computing programs such as FFT, whose special data access pattern can cause severe cache conflicts. We propose several software methods, including padding and blocking, to restructure the program and reduce those conflicts. Our methods outperform existing ones on both uniprocessor and multiprocessor systems.

    The access latency to the DRAM core has become increasingly long relative to CPU speed, causing memory accesses to be an execution bottleneck. In order to reduce the frequency of DRAM core accesses and effectively shorten the overall memory access latency, we conducted three studies at this level of the memory hierarchy. First, motivated by our evaluation of the DRAM row buffer's performance role and our findings on the causes of its access conflicts, we propose a simple and effective memory interleaving scheme to reduce or even eliminate row buffer conflicts. Second, we propose a fine-grain priority scheduling scheme to reorder the sequence of data accesses on multi-channel memory systems, effectively exploiting the available bus bandwidth and access concurrency. In the final part of the dissertation, we first evaluate the design of cached DRAM and its organizational alternatives for ILP processors. We then propose a new memory hierarchy integration that uses cached DRAM to construct a very large off-chip cache, and show that this structure outperforms a standard memory system with an off-chip L3 cache for memory-intensive applications.

    Memory access latency has become a major performance bottleneck for memory-intensive applications. As long as DRAM technology remains the most cost-effective choice for main memory, the memory performance problem will continue to exist. The studies conducted in this dissertation attempt to address this important issue. Our proposed software and hardware schemes are effective and applicable, and can be directly used in real-world memory system designs and implementations. Our studies also provide guidance for application programmers to understand memory performance implications, and for system architects to optimize memory hierarchies.
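
    As a small illustration of the padding idea for bit-reversal reorderings, the sketch below skews the destination layout by a few elements per "row" so that, for power-of-two sizes, source and destination elements no longer map to the same cache sets. The pad amount and the row/column decomposition are assumed values, not the dissertation's tuned parameters.

```cpp
// Bit-reversal copy with an optional padded destination layout to avoid
// cache conflicts at power-of-two sizes.
#include <cstdint>
#include <vector>

// Bit-reverse the low `bits` bits of i.
static uint32_t bit_reverse(uint32_t i, int bits) {
    uint32_t r = 0;
    for (int b = 0; b < bits; ++b) r |= ((i >> b) & 1u) << (bits - 1 - b);
    return r;
}

// Copy src into dst in bit-reversed order. `pad` extra elements are added per
// row of the destination so that dst[rev(i)] no longer collides in the cache
// with src[i] for power-of-two sizes (pad = 0 gives the naive layout).
void bit_reversal_copy(const std::vector<double>& src, std::vector<double>& dst,
                       int bits, int pad) {
    const uint32_t n   = 1u << bits;
    const uint32_t row = 1u << (bits / 2);          // treat index as row/col
    dst.assign(n + pad * (n / row), 0.0);           // padded destination
    for (uint32_t i = 0; i < n; ++i) {
        uint32_t j      = bit_reverse(i, bits);
        uint32_t padded = j + pad * (j / row);      // skew each row by `pad`
        dst[padded] = src[i];
    }
}
```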

    Microarchitectural Techniques to Exploit Repetitive Computations and Values

    Data dependences are among the most important hurdles that limit the performance of current microprocessors. Some studies have shown that certain applications cannot achieve more than a few tens of instructions per cycle even on an ideal processor whose sole limitation is data dependences. This suggests that techniques for avoiding the serialization they cause are important for boosting instruction-level parallelism and will be crucial for future microprocessors. Moreover, innovation and technological improvements in processor design have outpaced advances in memory design over the last ten years. The increasing gap between processor and memory speeds has therefore motivated current high-performance processors to rely on cache organizations to tolerate growing memory latencies. Caches attempt to bridge this gap, but do so at the expense of large amounts of die area, increased energy consumption, and higher demand for memory bandwidth, which can progressively become a greater limit to high performance.

    We propose several microarchitectural techniques that can be applied to various parts of current microprocessor designs, both to improve the memory system and to boost the execution of instructions. Some techniques attempt to ease the gap between processor and memory speeds, while others attempt to alleviate the serialization caused by data dependences. The underlying aim behind all the proposed techniques is to exploit the repetitive behaviour of conventional programs. Instructions executed by real-world programs tend to be repetitious, in the sense that most of the data consumed and produced by dynamic instructions are often the same. We refer to the repetition of any source or result value as Value Repetition, and to the repetition of both the source values and the operation as Computation Repetition. In particular, the techniques proposed for improving the memory system are based on exploiting the value repetition produced by store instructions, while the techniques proposed for boosting the execution of instructions are based on exploiting the computation repetition produced by all instructions.
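
    The sketch below is a toy illustration, under assumed structures, of the two kinds of repetition described: a computation reuse buffer keyed by (operation, source values), and a silent-store check that exploits value repetition on stores. The names, table organizations, and hash function are hypothetical.

```cpp
// Computation repetition: memoize (opcode, source values) -> result.
// Value repetition on stores: drop stores that rewrite the existing value.
#include <cstdint>
#include <functional>
#include <optional>
#include <unordered_map>

struct OpKey {
    uint32_t opcode;
    int64_t src1, src2;
    bool operator==(const OpKey& o) const {
        return opcode == o.opcode && src1 == o.src1 && src2 == o.src2;
    }
};
struct OpKeyHash {
    size_t operator()(const OpKey& k) const {
        return std::hash<int64_t>()(k.src1) ^ (std::hash<int64_t>()(k.src2) << 1)
             ^ (std::hash<uint32_t>()(k.opcode) << 2);
    }
};

// Reuse the result when the same operation sees the same source values again.
class ReuseBuffer {
public:
    std::optional<int64_t> lookup(const OpKey& k) const {
        auto it = tbl_.find(k);
        if (it == tbl_.end()) return std::nullopt;
        return it->second;
    }
    void update(const OpKey& k, int64_t result) { tbl_[k] = result; }
private:
    std::unordered_map<OpKey, int64_t, OpKeyHash> tbl_;
};

// A store writing the value the location already holds can be dropped,
// saving cache traffic and memory bandwidth.
bool is_silent_store(const std::unordered_map<uint64_t, int64_t>& mem,
                     uint64_t addr, int64_t value) {
    auto it = mem.find(addr);
    return it != mem.end() && it->second == value;
}
```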