An Energy Efficient GPGPU Memory Hierarchy with Tiny Incoherent Caches

By Alamelu Sankaranarayanan, Ehsan K. Ardestani, Jose Luis Briz and Jose Renau

Abstract

With each successive generation and the ever-increasing promise of computing power, GPGPUs have grown rapidly in size, and energy consumption has become a major bottleneck for them. The first-level data cache and the scratchpad memory are critical to the performance of a GPGPU, but they are extremely energy inefficient due to the large number of cores they must serve. This problem could be mitigated by introducing a cache higher up in the hierarchy that services fewer cores, but doing so introduces cache coherency issues that can become very significant, especially for a GPGPU with hundreds of thousands of in-flight threads. In this paper, we propose adding incoherent tinyCaches between each lane in an SM and the first-level data cache currently shared by all the lanes in an SM. In a conventional multiprocessor, this would require a hardware cache coherence scheme spanning all the SM lanes and capable of handling hundreds of thousands of threads. Our incoherent tinyCache architecture instead exploits certain unique features of the CUDA/OpenCL programming model to avoid complex coherence schemes. The tinyCache filters out 62% of the memory requests that would otherwise be serviced by the DL1G, and almost 81% of scratchpad memory requests, allowing us to achieve a 37% energy reduction in the on-chip memory hierarchy. We evaluate the tinyCache for different memory access patterns and show that it is beneficial in most cases.

Topics: Energy-efficiency, Memory hierarchy, Caches
Year: 2013
OAI identifier: oai:CiteSeerX.psu:10.1.1.363.5464
Provided by: CiteSeerX

