7 research outputs found
Fast, predictable and low energy memory references through architecture-aware compilation
The design of future high-performance embedded systems is hampered by two problems: First, the required hardware needs more energy than is available from batteries. Second, current cache-based approaches for bridging the increasing speed gap between processors and memories cannot guarantee predictable real-time behavior. A contribution to solving both problems is made in this paper, which describes a comprehensive set of algorithms that can be applied at design time in order to maximally exploit scratch pad memories (SPMs). We show that both the energy consumption and the computed worst case execution time (WCET) can be reduced by up to 80% and 48%, respectively, by establishing a strong link between the memory architecture and the compiler.
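The core design-time decision the abstract describes is which memory objects to place in the limited SPM so that the energy (or WCET) benefit is maximized; in its simplest form this is a 0/1 knapsack problem. The following is a minimal sketch of that formulation, not the paper's actual algorithms; the object sizes, energy savings, and capacity are invented for illustration.

```python
# Hedged sketch: static SPM allocation as a 0/1 knapsack.
# All numbers below (sizes, savings, capacity) are illustrative.

def allocate_spm(objects, capacity):
    """Pick the subset of memory objects (size, saving) that maximizes
    total energy saving within the SPM capacity (classic knapsack DP)."""
    # best[c] = (max saving achievable with capacity c, chosen object indices)
    best = [(0, frozenset()) for _ in range(capacity + 1)]
    for i, (size, saving) in enumerate(objects):
        # iterate capacities downwards so each object is used at most once
        for c in range(capacity, size - 1, -1):
            cand = best[c - size][0] + saving
            if cand > best[c][0]:
                best[c] = (cand, best[c - size][1] | {i})
    return best[capacity]

objects = [(4, 30), (3, 14), (2, 16), (5, 9)]  # (size in KB, energy saving)
saving, chosen = allocate_spm(objects, capacity=8)
# objects 0 and 2 fit together (6 KB) and yield the best saving of 46
```

A WCET-oriented variant would weight objects by their contribution along the worst-case path rather than by average energy, but the selection structure stays the same.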
Compiler-optimized Usage of Partitioned Memories
In order to meet the requirements concerning both performance and energy consumption in embedded systems, new memory architectures are being introduced. Besides the well-known use of caches in the memory hierarchy, processor cores today also include small on-chip memories called scratchpad memories, whose usage is controlled not by hardware but by the programmer or the compiler. Techniques for utilizing these scratchpads have been known for some time. Some new processors provide more than one scratchpad, making it necessary to enhance the workflow so that this complex memory architecture can be efficiently utilized. In this work, we present an energy model and an ILP formulation to optimally assign memory objects to different partitions of scratchpad memories at compile time, achieving energy savings of up to 22% compared to previous approaches.
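The assignment problem this abstract describes can be stated concretely: each memory object has a size and an access count, each scratchpad partition has a capacity and a per-access energy cost, and objects not placed on-chip fall back to more expensive main memory. The paper solves this with an ILP; below is a hedged sketch that solves a toy instance by exhaustive search instead, just to make the objective and constraints explicit. All sizes, access counts, and energy costs are made up.

```python
# Hedged sketch of partitioned-scratchpad assignment (toy instance).
# The paper uses an ILP solver; here a brute-force search over all
# assignments stands in for it. All numbers are illustrative.
from itertools import product

def assign(objects, partitions, e_main):
    """objects: list of (size, access_count); partitions: list of
    (capacity, energy_per_access); e_main: main-memory energy per access.
    Returns (min_energy, assignment), where assignment[i] is a partition
    index or None for main memory."""
    slots = list(range(len(partitions))) + [None]
    best = (float("inf"), None)
    for asg in product(slots, repeat=len(objects)):
        # capacity constraint per partition
        used = [0] * len(partitions)
        feasible = True
        for (size, _), p in zip(objects, asg):
            if p is not None:
                used[p] += size
                if used[p] > partitions[p][0]:
                    feasible = False
                    break
        if not feasible:
            continue
        # objective: total access energy over all objects
        energy = sum(acc * (e_main if p is None else partitions[p][1])
                     for (_, acc), p in zip(objects, asg))
        if energy < best[0]:
            best = (energy, asg)
    return best

objects = [(2, 100), (3, 80), (4, 50)]   # (size in KB, access count)
partitions = [(4, 1.0), (4, 1.5)]        # (capacity KB, energy/access)
energy, asg = assign(objects, partitions, e_main=5.0)
# hottest objects land in the cheapest partitions; the coldest one
# overflows to main memory
```

An ILP formulation expresses the same thing with binary variables x[i][p] (object i placed in partition p), the capacity sums as linear constraints, and the energy sum as the linear objective, which scales to realistic program sizes where brute force does not.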