
    Enlarging instruction streams

    The stream fetch engine is a high-performance fetch architecture based on the concept of an instruction stream. We call a sequence of instructions from the target of a taken branch to the next taken branch, potentially containing multiple basic blocks, a stream. The long length of instruction streams makes it possible for the stream fetch engine to provide a high fetch bandwidth and to hide the branch predictor access latency, leading to performance results close to those of a trace cache at a lower implementation cost and complexity. Therefore, enlarging instruction streams is an excellent way to improve the stream fetch engine. In this paper, we present several hardware and software mechanisms focused on enlarging those streams that terminate at particular branch types. However, our results point out that focusing on particular branch types is not a good strategy due to Amdahl's law. Consequently, we propose the multiple-stream predictor, a novel mechanism that deals with all branch types by combining single streams into long virtual streams. This proposal tolerates the prediction table access latency without requiring the complexity caused by additional hardware mechanisms like prediction overriding. Moreover, it provides high-performance results comparable to state-of-the-art fetch architectures but with a simpler design that consumes less energy.
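    The stream definition above can be sketched in a few lines. This is a minimal illustration, not the paper's hardware: it assumes a simplified dynamic trace of (pc, taken-branch?) pairs and cuts a new stream after every taken branch.

    ```python
    # Minimal sketch: partitioning a dynamic instruction trace into streams.
    # A stream runs from the target of a taken branch up to (and including)
    # the next taken branch. Trace entries are (pc, is_taken_branch) pairs;
    # this format is illustrative, not taken from the paper.

    def split_into_streams(trace):
        streams = []
        current = []
        for pc, is_taken_branch in trace:
            current.append(pc)
            if is_taken_branch:          # a taken branch ends the stream
                streams.append(current)
                current = []
        if current:                      # trailing instructions with no taken branch yet
            streams.append(current)
        return streams

    # Example: three basic blocks; only the branches at 0x104 and 0x208 are taken.
    trace = [(0x100, False), (0x104, True),
             (0x200, False), (0x204, False), (0x208, True),
             (0x300, False)]
    print(split_into_streams(trace))
    # -> [[256, 260], [512, 516, 520], [768]]
    ```

    The longer each sublist, the more fetch cycles a single prediction covers, which is why enlarging streams helps hide the predictor's access latency.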

    Predicting multiple streams per cycle

    The next stream predictor is an accurate branch predictor that provides stream-level sequencing. Every stream prediction contains a full stream of instructions, that is, a sequence of instructions from the target of a taken branch to the next taken branch, potentially containing multiple basic blocks. The length of instruction streams makes it possible for the stream predictor to provide high fetch bandwidth and to tolerate the prediction table access latency. Therefore, an excellent way to improve the behavior of the next stream predictor is to enlarge instruction streams. In this paper, we provide a comprehensive analysis of dynamic instruction streams, showing that focusing on particular kinds of streams is not a good strategy due to Amdahl's law. Consequently, we propose the multiple stream predictor, a novel mechanism that deals with all kinds of streams by combining single streams into long virtual streams. We show that our multiple stream predictor is able to tolerate the prediction table access latency without requiring the complexity caused by additional hardware mechanisms like prediction overriding, while also reducing the overall branch predictor energy consumption.
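    The virtual-stream idea can be sketched as follows. This is an illustrative model only: the table layout (start pc mapped to stream length and next start pc) and the length threshold are assumptions, not the paper's actual predictor organization.

    ```python
    # Sketch of the virtual-stream idea: short consecutive streams are chained
    # into one long "virtual stream" so that a single (slow) prediction table
    # access yields enough instructions to cover its own latency.

    MIN_VIRTUAL_LEN = 8   # instructions needed to hide the table access latency (illustrative)

    def predict_virtual_stream(table, start_pc):
        """Follow single-stream predictions, concatenating them until the
        combined (virtual) stream is long enough."""
        virtual, total, pc = [], 0, start_pc
        while total < MIN_VIRTUAL_LEN and pc in table:
            length, next_pc = table[pc]      # one single-stream prediction
            virtual.append((pc, length))
            total += length
            pc = next_pc
        return virtual, pc                   # stream components + next fetch address

    # Toy next-stream table: start_pc -> (stream length, next stream's start).
    table = {0x100: (3, 0x200), 0x200: (2, 0x300), 0x300: (5, 0x400)}
    virtual, next_pc = predict_virtual_stream(table, 0x100)
    print(virtual, hex(next_pc))
    # -> [(256, 3), (512, 2), (768, 5)] 0x400
    ```

    Three short streams (3, 2, and 5 instructions) become one 10-instruction virtual stream, so one table lookup feeds the fetch engine for several cycles.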

    Techniques for enlarging instruction streams

    This work presents several techniques for enlarging instruction streams. We call a sequence of instructions from the target of a taken branch to the next taken branch, potentially containing multiple basic blocks, a stream. The length of instruction streams makes it possible for a fetch engine based on streams to provide high fetch bandwidth, which leads to performance results comparable to those of a trace cache. The length of streams also enables the next stream predictor to tolerate the prediction table access latency. Therefore, enlarging instruction streams will improve the behavior of a fetch engine based on streams. We provide a comprehensive analysis of dynamic instruction streams, showing that focusing on particular kinds of streams is not a good strategy due to Amdahl's law. Consequently, we propose the multiple stream predictor, a novel mechanism that deals with all kinds of streams by combining single streams into long virtual streams. We show that our multiple stream predictor is able to tolerate the prediction table access latency without requiring the complexity caused by additional hardware mechanisms like prediction overriding.

    Reducing fetch architecture complexity using procedure inlining

    Fetch engine performance is seriously limited by the branch prediction table access latency. This fact has led to the development of hardware mechanisms, like prediction overriding, aimed at tolerating this latency. However, prediction overriding requires additional support and recovery mechanisms, which increase the fetch architecture complexity. In this paper, we show that this increase in complexity can be avoided if the interaction between the fetch architecture and software code optimizations is taken into account. We use aggressive procedure inlining to generate long streams of instructions that are used by the fetch engine as the basic prediction unit. We call a sequence of instructions from the target of a taken branch to the next taken branch an instruction stream. These instruction streams are long enough to feed the execution engine with instructions during multiple cycles while a new stream prediction is being generated, thus hiding the prediction table access latency. Our results show that the length of instruction streams compensates for the increase in the instruction cache miss rate caused by inlining. We show that, using procedure inlining, the need for a prediction overriding mechanism is avoided, reducing the fetch engine complexity.
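    Why inlining enlarges streams can be shown with a toy trace model: calls and returns are taken branches, so every call site ends a stream, and inlining removes the call/return pair, merging the surrounding streams. The trace encoding below is an illustrative assumption, not the paper's methodology.

    ```python
    # Sketch: counting stream boundaries in a trace, with and without inlining.
    # Trace entries are (pc, kind) with kind in {"seq", "call", "ret", "br"};
    # "br" is an ordinary taken branch. The encoding is illustrative.

    def count_streams(trace, inlined=False):
        streams = 1                          # the stream currently being fetched
        for _pc, kind in trace:
            if kind in ("call", "ret") and not inlined:
                streams += 1                 # call/return is a taken branch -> boundary
            elif kind == "br":
                streams += 1                 # ordinary taken branch -> boundary
        return streams

    trace = [(0x100, "seq"), (0x104, "call"),   # call into a procedure
             (0x500, "seq"), (0x504, "ret"),    # return to the caller
             (0x108, "seq"), (0x10C, "br")]     # ordinary taken branch

    print(count_streams(trace))               # -> 4
    print(count_streams(trace, inlined=True)) # -> 2
    ```

    With the procedure inlined, the same dynamic instructions form half as many, correspondingly longer, streams; each prediction then covers more fetch cycles, which is what makes prediction overriding unnecessary.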

    Improving Prediction Models for Mass Assessment: A Data Stream Approach

    Mass appraisal is the process of valuing a large collection of properties within a city or municipality, usually for tax purposes. The common methodology for mass appraisal is based on multiple regression, though this methodology has been found to be deficient. Data mining methods have been proposed and tested as an alternative, but the results are very mixed. This study introduces a new approach to building prediction models for assessing residential property values by treating past sales transactions as a data stream. The study used 110,525 sales transaction records from a municipality in the Midwest of the US. Our results show that a data-stream-based approach outperforms the traditional regression approach, thus showing its potential to improve the performance of prediction models for mass assessment.
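    One common way to treat sales as a data stream is a sliding-window regression: rather than fitting one model to all historical sales, refit on only the most recent transactions as each new sale arrives, so the model tracks market drift. The sketch below is a generic illustration of that idea, not the study's actual method; the single square-footage feature and the window size are assumptions.

    ```python
    # Sliding-window regression over a stream of sales (illustrative sketch).
    from collections import deque

    def fit_line(points):
        """Ordinary least squares for y = a + b*x on (x, y) pairs."""
        n = len(points)
        sx = sum(x for x, _ in points)
        sy = sum(y for _, y in points)
        sxx = sum(x * x for x, _ in points)
        sxy = sum(x * y for x, y in points)
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        a = (sy - b * sx) / n
        return a, b

    def stream_predictions(sales, window=3):
        """Predict each sale's price from a model fit on the previous `window` sales."""
        recent, preds = deque(maxlen=window), []
        for sqft, price in sales:
            if len(recent) == window:
                a, b = fit_line(recent)
                preds.append(a + b * sqft)
            recent.append((sqft, price))     # the observed sale joins the window
        return preds

    # Toy stream: prices per square foot drift upward over time.
    sales = [(1000, 100_000), (1500, 150_000), (2000, 200_000),
             (1200, 132_000), (1800, 216_000)]
    print(stream_predictions(sales))
    ```

    Because the window discards old sales, the fitted price-per-square-foot rises with the market, which a single regression over all history cannot do.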

    Region-Based Template Matching Prediction for Intra Coding

    Copy prediction is a well-known category of prediction techniques in video coding where the current block is predicted by copying the samples from a similar block that is present somewhere in the already decoded stream of samples. Motion-compensated prediction, intra block copy, and template matching prediction are examples. While the displacement information of the similar block is transmitted to the decoder in the bit-stream in the first two approaches, it is derived at the decoder in the last one by repeating the same search algorithm that was carried out at the encoder. Region-based template matching is a recently developed prediction algorithm that is an advanced form of standard template matching. In this method, the reference area is partitioned into multiple regions, and the region to be searched for the similar block(s) is conveyed to the decoder in the bit-stream. Further, its final prediction signal is a linear combination of already decoded similar blocks from the given region. It was demonstrated in previous publications that region-based template matching is capable of achieving coding efficiency improvements for intra- as well as inter-picture coding with considerably less decoder complexity than conventional template matching. In this paper, a theoretical justification for region-based template matching prediction, supported by experimental data, is presented. Additionally, test results of the aforementioned method on the latest H.266/Versatile Video Coding (VVC) test model (version VTM-14.0) yield average Bjøntegaard-Delta (BD) bit-rate savings of −0.75% in the all-intra (AI) configuration, with 130% encoder run-time and 104% decoder run-time for a particular parameter selection.
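    The core mechanism can be sketched in a few lines: match a template of already decoded samples against candidates inside one signaled region and average the best-matching blocks. Everything below is a simplification for illustration: the one-row template, 2x1 block, sum-of-absolute-differences cost, and two-block mean are assumptions, not the VVC design.

    ```python
    # Sketch of region-based template matching prediction (illustrative only).

    def sad(a, b):
        """Sum of absolute differences between two equal-length sample rows."""
        return sum(abs(x - y) for x, y in zip(a, b))

    def predict_block(frame, bx, by, w, h, region):
        """Predict the w*h block at (bx, by) from the two best template matches
        among candidate top-left positions listed in `region`."""
        template = frame[by - 1][bx:bx + w]          # row of decoded samples above the block
        scored = []
        for cx, cy in region:                        # search only the signaled region
            cand_template = frame[cy - 1][cx:cx + w]
            cand_block = [frame[cy + r][cx:cx + w] for r in range(h)]
            scored.append((sad(template, cand_template), cand_block))
        scored.sort(key=lambda t: t[0])              # best matches first
        best = [blk for _, blk in scored[:2]]        # linear combination: mean of top 2
        return [[sum(b[r][c] for b in best) / len(best) for c in range(w)]
                for r in range(h)]

    frame = [[1, 2, 1, 2],
             [3, 4, 3, 4],
             [1, 2, 5, 6],
             [0, 0, 0, 0]]
    # Predict the 2x1 block at (0, 3): its template [1, 2] matches the templates
    # above both candidates, whose blocks are [3, 4] and [3, 4].
    print(predict_block(frame, 0, 3, 2, 1, region=[(0, 1), (2, 1)]))
    # -> [[3.0, 4.0]]
    ```

    Because the decoder repeats this same search, only the region index needs to be transmitted; restricting the search to one region is what keeps decoder complexity below that of conventional template matching.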

    The cumulative impact of tidal stream turbine arrays on sediment transport in the Pentland Firth

    This contribution investigates the impact of the deployment of tidal stream turbine arrays on sediment dynamics and seabed morphology in the Pentland Firth, Scotland. The Pentland Firth is arguably the premier tidal stream site in the world, and engineering developments are progressing rapidly. Therefore, understanding and minimising impacts is vital to ensure the successful development of this nascent industry. Here, a three-dimensional coupled hydrodynamic and sediment transport numerical model is used to investigate the impact of tidal stream arrays on sediment transport and morphodynamics. The aim of the work presented here is twofold: firstly, to provide predictions of the changes caused by multiple tidal stream turbine array developments to some of the unique sandy seabed environments in the Pentland Firth; and secondly, as a case study, to determine the relationship between the impacts of individual tidal stream farms and the cumulative impacts of multiple farms. Due to connectivity in tidal flow, it has been hypothesized that the cumulative impact of multiple arrays on sediment dynamics might be non-linear. This work suggests that, for the Pentland Firth, this is not the case: the cumulative impact of the four currently proposed arrays in the area is equal to the sum of the impacts of the individual arrays. Additionally, array implementation has only a minimal effect on the baseline morphodynamics of the large sandbanks in the region; smaller, more local sandbanks were not considered. These two results are extremely positive for tidal stream developers in the region, since they remove the burden of assessing cumulative impact from individual developers and suggest that impacts on sub-sea morphodynamics are insignificant and hence unlikely to be an impediment to development in the Pentland Firth at the currently proposed levels of extraction.