8 research outputs found

    Alignment of Memory Transfers of a Time-Predictable Stack Cache

    Modern computer architectures use features that often complicate the WCET analysis of real-time software. Alternative time-predictable designs, in particular of caches, are thus gaining more and more interest. A recently proposed stack cache, for instance, avoids the need to analyze complex cache states; instead, only the occupancy level of the cache has to be determined. The memory transfers generated by the standard stack cache, however, are not generally aligned, and these unaligned accesses risk introducing complexity into the otherwise simple WCET analysis. In this work, we investigate three different approaches to handle the alignment problem in the stack cache: (1) unaligned transfers, (2) alignment through compiler-generated padding, and (3) a novel hardware extension ensuring the alignment of all transfers. Simulation results show that our hardware extension offers a good compromise between average-case performance and analysis complexity.
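    To make approach (3) concrete, the following C sketch shows how a transfer window can be widened outward to burst boundaries, as an alignment-ensuring hardware extension might do. The burst size, function names, and byte granularity are illustrative assumptions, not details taken from the paper.

```c
#include <stdint.h>
#include <stdio.h>

#define BURST 16u  /* assumed burst/alignment granularity in bytes (hypothetical) */

static inline uint32_t align_down(uint32_t a) { return a & ~(BURST - 1u); }
static inline uint32_t align_up(uint32_t a)   { return (a + BURST - 1u) & ~(BURST - 1u); }

/* Widen the spill/fill window [lo, hi) outward to burst boundaries and
   return the number of bytes actually transferred on the memory bus. */
uint32_t aligned_transfer_bytes(uint32_t lo, uint32_t hi)
{
    return align_up(hi) - align_down(lo);
}

int main(void)
{
    /* A 24-byte unaligned window becomes a 32-byte aligned burst. */
    printf("%u\n", aligned_transfer_bytes(0x104, 0x11c)); /* prints 32 */
    return 0;
}
```

    The trade-off the abstract describes follows directly: rounding costs a few extra bytes per transfer, but every access the WCET analysis has to consider is burst-aligned.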

    Lazy Spilling for a Time-Predictable Stack Cache: Implementation and Analysis

    The growing complexity of modern computer architectures increasingly complicates the prediction of the run-time behavior of software. For real-time systems, where a safe estimation of the program's worst-case execution time is needed, time-predictable computer architectures promise to resolve this problem. A stack cache, for instance, allows the compiler to efficiently cache a program's stack, while static analysis of its behavior remains easy. Likewise, its implementation requires little hardware overhead. This work introduces an optimization of the standard stack cache that avoids redundant spilling of the cache content to main memory when that content has not been modified in the meantime. At first sight, this appears to be a purely average-case optimization. Indeed, measurements show that the number of cache blocks spilled is reduced to about 17% to 30% on average, depending on the stack cache size. Furthermore, we show that lazy spilling can be analyzed with little extra effort, which also benefits the worst-case spilling behavior relevant for real-time systems.
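    A rough software model of the idea is sketched below in C, assuming the usual two-pointer stack-cache state (stack top, memory top) extended by a "lazy pointer" marking the part of the cached stack still coherent with main memory. Names, the struct layout, and byte-level granularity are illustrative, not the paper's hardware design.

```c
#include <stdint.h>

/* Minimal model of a downward-growing stack cache: the cache holds
   addresses [st, mt), and [lp, mt) is known to match main memory. */
typedef struct {
    uint32_t st;    /* stack top (lowest cached address)            */
    uint32_t mt;    /* memory top (addresses >= mt live in memory)  */
    uint32_t lp;    /* lazy pointer: st <= lp <= mt                 */
    uint32_t size;  /* cache capacity in bytes                      */
} stack_cache;

/* Reserve k bytes on the stack.  A standard stack cache would spill
   everything above the new memory top; lazy spilling skips the already
   coherent suffix [lp, mt) and writes back only the dirty part.
   Returns the number of bytes actually spilled. */
uint32_t reserve(stack_cache *c, uint32_t k)
{
    c->st -= k;
    if (c->mt - c->st <= c->size)
        return 0;                       /* fits entirely, no spill needed */
    uint32_t new_mt = c->st + c->size;  /* spill boundary                 */
    uint32_t spilled = 0;
    if (c->lp > new_mt) {               /* only [new_mt, lp) is dirty     */
        spilled = c->lp - new_mt;       /* write back just these bytes    */
        c->lp = new_mt;
    }
    c->mt = new_mt;
    return spilled;
}

/* A store of width w at address a dirties the cache: the coherent
   region shrinks so that a falls below the lazy pointer. */
void store(stack_cache *c, uint32_t a, uint32_t w)
{
    if (a + w > c->lp)
        c->lp = a + w;
}
```

    If a function reserves and frees a frame repeatedly without touching the data above it, the coherent region stays put and the repeated spills the abstract calls redundant simply never happen.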

    Time-predictable Stack Caching


    Eager Stack Cache Memory Transfers

    The growing complexity of modern computer architectures increasingly complicates the prediction of the run-time behavior of software. For real-time systems, where a safe estimation of the program's worst-case execution time is needed, time-predictable computer architectures promise to resolve this problem. The stack cache, for instance, allows the compiler to efficiently cache a program's stack, while static analysis of its behavior remains easy. This work introduces an optimization of the stack cache that anticipates memory transfers that might be initiated by future stack cache control instructions. These eager memory transfers reduce the average-case latency of those control instructions, very similar to the prefetching techniques known from conventional caches. However, the mechanism proposed here is guaranteed to have no impact on the worst-case execution time estimates computed by static analysis. Measurements on a dual-core platform using the Patmos processor and time-division-multiplexing-based memory arbitration show that our technique can eliminate up to 62% of the memory transfers from the stack cache and up to 7% of the transfers to it, on average over all programs of the MiBench benchmark suite.
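    Continuing the model from the lazy-spilling sketch above, a hedged sketch of the eager variant: whenever the memory interface would otherwise sit idle (for instance, an unused TDM slot on the shared bus), one block of dirty data is written back ahead of time, so a later reserve finds less left to spill. The block size and the idle-slot hook are assumptions for illustration; the key property claimed by the abstract is that such transfers use only otherwise-wasted bandwidth and thus cannot worsen the WCET estimate.

```c
#include <stdint.h>

#define BLOCK 4u   /* assumed transfer block size in bytes (hypothetical) */

typedef struct {   /* same model as in the lazy-spilling sketch above */
    uint32_t st, mt, lp, size;
} stack_cache;

/* Invoked once per idle memory slot: eagerly write back one block of
   dirty data [lp - blk, lp), extending the coherent region downward.
   The cache content itself is untouched, so occupancy and hits -- and
   hence the static WCET analysis -- are unaffected. */
void eager_spill_step(stack_cache *c)
{
    if (c->lp > c->st) {                 /* dirty data [st, lp) exists  */
        uint32_t blk = c->lp - c->st;
        if (blk > BLOCK)
            blk = BLOCK;
        /* ...burst-write cache bytes [lp - blk, lp) to main memory... */
        c->lp -= blk;                    /* now coherent with memory    */
    }
}
```

    An eager fill would be the symmetric operation, speculatively loading data back into the cache ahead of a future ensure instruction.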

    Patmos: a time-predictable microprocessor
