Aggressive prefetching: an idea whose time has come

Abstract

I/O prefetching serves to hide the latency of slow peripheral devices. Traditional OS-level prefetching strategies have tended to be conservative, fetching only those data that are very likely to be needed according to some simple heuristic, and only just in time for them to arrive before the first access. More aggressive policies, which might speculate more about which data to fetch, or fetch them earlier in time, have typically not been considered a prudent use of computational, memory, or bandwidth resources. We argue, however, that technological trends and emerging system design goals have dramatically reduced the potential costs and dramatically increased the potential benefits of highly aggressive prefetching policies. We propose that memory management be redesigned to embrace such policies.

1 Introduction

Prefetching, also known as prepaging or read-ahead, has been standard practice in operating systems for more than thirty years. It complements traditional caching policies, such as LRU, by hiding or reducing the latency of access to non-cached data. Its goal is to predict future data accesses and make data available in memory before they are requested.

A common debate about prefetching concerns how aggressive it should be. Prefetching aggressiveness ma
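The interplay described in the introduction, an LRU cache augmented with read-ahead that speculatively fetches data before it is requested, can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's mechanism: block identifiers, the `readahead` depth parameter, and the dictionary standing in for the slow device are all assumptions made for illustration; the `readahead` knob loosely corresponds to the "aggressiveness" the paper discusses.

```python
from collections import OrderedDict

class PrefetchingCache:
    """Toy sketch: an LRU cache of blocks with sequential read-ahead.

    `backing` (a dict of block id -> data) stands in for the slow
    peripheral device; `readahead` controls how many blocks past the
    demanded one are fetched speculatively.
    """

    def __init__(self, backing, capacity, readahead=2):
        self.backing = backing
        self.capacity = capacity        # keep comfortably above `readahead`
        self.readahead = readahead
        self.cache = OrderedDict()      # block id -> data, in LRU order
        self.hits = self.misses = 0

    def _install(self, block):
        # Bring a block into the cache, evicting the LRU block if full.
        if block in self.cache:
            self.cache.move_to_end(block)
            return
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        self.cache[block] = self.backing[block]

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)
        else:
            self.misses += 1                 # demand fetch: latency exposed
            self._install(block)
        # Speculative part: fetch the next few blocks before any request,
        # so a sequential scan finds them already resident.
        for b in range(block + 1, block + 1 + self.readahead):
            if b in self.backing:
                self._install(b)
        return self.cache[block]
```

Under this model, a sequential scan misses only on the first block; every later access is hidden by read-ahead, which is precisely the latency-hiding effect the conservative-versus-aggressive debate is about.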
