
    Joint Cache Partition and Job Assignment on Multi-Core Processors

    Multicore shared-cache processors pose a challenge for designers of embedded systems who try to achieve minimal and predictable execution times for workloads consisting of several jobs. To address this challenge, the cache is statically partitioned among the cores and the jobs are assigned to the cores so as to minimize the makespan. Several heuristic algorithms have been proposed that jointly decide how to partition the cache among the cores and how to assign the jobs. We initiate a theoretical study of this problem, which we call the joint cache partition and job assignment problem. By a careful analysis of the possible cache partitions we obtain a constant-factor approximation algorithm for this problem. For some practical special cases we obtain a 2-approximation algorithm, and we show how to improve the approximation factor even further by allowing the algorithm to use additional cache. We also study the improvements that can be obtained by allowing dynamic cache partitions and dynamic job assignments. We define a natural special case of the well-known problem of scheduling on unrelated machines, in which machines are ordered by "strength"; our joint cache partition and job assignment problem generalizes this scheduling problem, which we believe is of independent interest. We give a polynomial-time algorithm for this scheduling problem on instances obtained by fixing the cache partition in a practical case of the joint cache partition and job assignment problem, where job loads are step functions.
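    As an illustrative sketch only (not the paper's algorithm), the joint problem can be read as: choose a static split of the cache among the cores and an assignment of jobs to cores so that the makespan is minimized, where each job's running time depends on the cache its core receives (a step function in the practical case the abstract mentions). The brute-force Python below makes that structure concrete; the names partitions, greedy_makespan, best_joint, and load are hypothetical, and exhaustive enumeration is only feasible for tiny instances, which is exactly why the paper develops approximation algorithms.

        from itertools import combinations

        def partitions(total_cache, cores):
            # Enumerate every split of `total_cache` units among `cores` cores
            # (stars and bars); exponentially many, so tiny instances only.
            for bars in combinations(range(total_cache + cores - 1), cores - 1):
                prev, parts = -1, []
                for b in bars:
                    parts.append(b - prev - 1)
                    prev = b
                parts.append(total_cache + cores - 2 - prev)
                yield tuple(parts)

        def greedy_makespan(jobs, partition, load):
            # List-schedule: each job goes to the core where it would finish
            # earliest; load(job, cache) is the job's running time under a
            # given cache allocation.
            finish = [0.0] * len(partition)
            for job in jobs:
                i = min(range(len(partition)),
                        key=lambda k: finish[k] + load(job, partition[k]))
                finish[i] += load(job, partition[i])
            return max(finish)

        def best_joint(jobs, total_cache, cores, load):
            # Jointly search cache partitions and greedy job assignments.
            return min((greedy_makespan(jobs, p, load), p)
                       for p in partitions(total_cache, cores))

    For example, with load = lambda j, c: j / (1.0 + c), jobs [4, 3, 2, 1], 6 cache units, and 2 cores, best_joint returns the smallest greedy makespan together with the cache partition that achieves it.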

    Paging on Complex Architectures

    Advances in technology make it possible to build computer systems of ever-increasing performance and capability. However, the effective use of such computational resources is often made difficult by the complexity of the system itself. Crucial to the performance of a computing device is the orchestration of the flow of data across the memory hierarchy. Specifically, given a fast but small memory (a cache) through which all the data to be processed must pass, it is necessary to establish a set of rules, implemented by an algorithm, that determine which data to evict from this memory to make room for new incoming data. The goal is to minimize the number of times that requested data are outside the cache (faults), since fetching data from farther levels of the memory hierarchy incurs high costs in terms of both time and energy. This thesis studies two generalizations of this problem, known as the paging problem. The problem is intrinsically online, as the future data requests issued by a computer program are typically unknown.

    Motivated by the diffusion of multi-threaded and multi-core architectures, in which several threads or processes can execute simultaneously and/or there are several processing units, and by the rapidly growing interest in reducing the power consumption of computer systems, in the first part of the thesis we study a variation of paging that rewards the efficient use of memory resources. In this problem the goal is to minimize a combination of the number of faults and the cache occupancy of the process's data in fast memory. This part has two main results: the first is an impossibility result indicating that, roughly speaking, online algorithms cannot compete in practice with algorithms that know in advance all the data requests issued by the process; the second is the design of an online algorithm whose performance is close to the best achievable by any online algorithm.

    In the second part of the thesis we concentrate on the management of a cache shared among several concurrent processes. As outlined above, this has direct application to multi-threaded and multi-core architectures. In this problem the fast memory has to service a request sequence that is the interleaving of the requests issued by t different processes. Through its replacement decisions, the algorithm dynamically allocates the cache space among the processes, and this clearly impacts their progress. The main goal here is to minimize the time needed to complete the service of all the request sequences. We show tight lower and upper bounds on the performance of online algorithms for several variants of the problem.
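    For the classic online paging problem that both parts generalize, a short sketch may help fix ideas: the well-known LRU rule evicts the least recently used page and incurs a fault whenever a requested page is not in the cache. This is standard textbook material, not the thesis's own algorithm; the thesis's variants additionally charge for cache occupancy or interleave the requests of t processes.

        from collections import OrderedDict

        def lru_faults(requests, cache_size):
            # Serve a request sequence online with an LRU cache of
            # `cache_size` pages and count the faults incurred.
            cache = OrderedDict()
            faults = 0
            for page in requests:
                if page in cache:
                    cache.move_to_end(page)        # hit: refresh recency
                else:
                    faults += 1                    # fault: fetch into cache
                    if len(cache) == cache_size:
                        cache.popitem(last=False)  # evict least recently used
                    cache[page] = True
            return faults

    For instance, lru_faults("abcab", 2) returns 5: with a working set of three pages and only two slots, LRU keeps evicting the page that is requested next, so every request faults.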