    Achieving Predictable Performance with On-Chip Shared L2 Caches for Manycore-Based Real-Time Systems

    Doubling the number of processing cores on a single processor chip with each technology generation has become conventional wisdom. While future manycore processors promise much greater computational throughput under a given power envelope, sharing critical on-chip resources, such as caches and core-to-core interconnects, makes it challenging to guarantee predictable performance to an application program. This paper focuses on the problem of sharing on-chip caching capacity among multiple programs scheduled together, especially at the L2 cache level. Specifically, two design aspects of a large shared L2 cache are considered: (1) non-uniform cache access latency and (2) cache contention. We observe that both aspects depend on where, among the many cache slices, a cache block is mapped, and we present an OS-based approach to managing the on-chip L2 cache by carefully mapping data to the cache at page granularity. We show that a reasonable extension to the OS memory management subsystem, together with simple architectural support, enables high-level policies that achieve application performance isolation and thereby improve program performance predictability.
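
    The abstract gives no implementation details, but the idea of page-granularity mapping can be illustrated with a minimal C sketch. The sketch below assumes that the L2 slice holding a page is determined by low-order bits of its physical frame number and that the OS keeps a pool of free frames it can filter by slice; the names, the bit layout, and NUM_SLICES are illustrative assumptions, not the authors' mechanism.

        /*
         * Rough sketch only: compute which L2 cache slice a physical page
         * maps to, and allocate frames that map to a requested slice so a
         * program's pages can be confined to "its" slices.
         */
        #include <stdint.h>
        #include <stddef.h>
        #include <stdio.h>

        #define PAGE_SHIFT 12          /* 4 KiB pages (assumption)          */
        #define NUM_SLICES 16          /* assumed number of L2 cache slices */

        /* Slice index from low bits of the physical frame number (assumption). */
        static unsigned slice_of(uint64_t paddr)
        {
            return (unsigned)((paddr >> PAGE_SHIFT) % NUM_SLICES);
        }

        /* Free-frame pool; a real kernel would keep per-slice free lists
         * inside its page allocator instead of scanning a flat array. */
        struct frame_pool {
            uint64_t *frames;          /* physical addresses of free frames */
            size_t    count;
        };

        /* Return a free frame whose data will reside in the requested slice,
         * or 0 if none is available. */
        static uint64_t alloc_frame_in_slice(struct frame_pool *pool, unsigned slice)
        {
            for (size_t i = 0; i < pool->count; i++) {
                if (slice_of(pool->frames[i]) == slice) {
                    uint64_t f = pool->frames[i];
                    pool->frames[i] = pool->frames[--pool->count];
                    return f;
                }
            }
            return 0;
        }

        int main(void)
        {
            uint64_t free_frames[] = { 0x100000, 0x101000, 0x102000, 0x110000 };
            struct frame_pool pool = { free_frames, 4 };

            /* Request a frame mapping to slice 1 (here 0x101000). */
            uint64_t f = alloc_frame_in_slice(&pool, 1);
            printf("frame for slice 1: 0x%llx\n", (unsigned long long)f);
            return 0;
        }

    Under these assumptions, restricting a program's page frames to a fixed set of slices is what provides isolation from co-scheduled programs and keeps its L2 access latency predictable.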