    A Mazing 2+ε Approximation for Unsplittable Flow on a Path

    We study the problem of unsplittable flow on a path (UFP), which arises naturally in many applications such as bandwidth allocation, job scheduling, and caching. Here we are given a path with nonnegative edge capacities and a set of tasks, each characterized by a subpath, a demand, and a profit. The goal is to find the most profitable subset of tasks whose total demand does not violate the edge capacities. Not surprisingly, this problem has received a lot of attention in the research community. If the demand of each task is at most a small enough fraction δ of the capacity along its subpath (δ-small tasks), it has long been known [Chekuri et al., ICALP 2003] how to compute a solution of value arbitrarily close to the optimum via LP rounding. However, much remains unknown for the complementary case, that is, when the demand of each task is at least some fraction δ > 0 of the smallest capacity of its subpath (δ-large tasks). For this setting, a constant-factor approximation is known, improving on an earlier logarithmic approximation [Bonsma et al., FOCS 2011]. In this article, we present a polynomial-time approximation scheme (PTAS) for δ-large tasks, for any constant δ > 0. Key to this result is a complex geometrically inspired dynamic program. Each task is represented as a segment underneath the capacity curve, and we identify a suitable maze-like structure so that each corridor of the maze is crossed by only O(1) tasks in the optimal solution. The maze has a tree topology, which guides our dynamic program. Our result implies a (2 + ε)-approximation for UFP, for any constant ε > 0, improving on the previous best (7 + ε)-approximation by Bonsma et al. We remark that our improved approximation algorithm matches the best known approximation ratio for the considerably easier special case of uniform edge capacities.
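    To make the model concrete, here is a minimal Python sketch of the UFP feasibility check described above: a set of tasks is feasible if, on every edge, the demands of the selected tasks crossing that edge sum to at most its capacity. The tuple layout and function names are illustrative choices of this summary, not taken from the paper.

    def is_feasible(capacities, tasks, selected):
        """Check that the selected tasks respect every edge capacity.

        capacities: capacity of edge i (joining vertices i and i+1)
        tasks:      (start, end, demand, profit) tuples; a task uses
                    edges start .. end-1 of the path
        selected:   indices of the chosen tasks
        """
        load = [0] * len(capacities)
        for i in selected:
            start, end, demand, _ = tasks[i]
            for e in range(start, end):
                load[e] += demand
        return all(l <= c for l, c in zip(load, capacities))

    def total_profit(tasks, selected):
        return sum(tasks[i][3] for i in selected)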

    A Constant Factor Approximation Algorithm for Unsplittable Flow on Paths

    In the unsplittable flow problem on a path, we are given a capacitated path P and n tasks, each task having a demand, a profit, and start and end vertices. The goal is to compute a maximum-profit set of tasks such that, for each edge e of P, the total demand of the selected tasks that use e does not exceed the capacity of e. This well-studied problem has appeared under alternative names, such as resource allocation, bandwidth allocation, resource-constrained scheduling, temporal knapsack, and interval packing. We present a polynomial-time constant-factor approximation algorithm for this problem. This improves on the previous best known approximation ratio of O(log n). The approximation ratio of our algorithm is 7 + ε for any ε > 0. We introduce several novel algorithmic techniques, which might be of independent interest: a framework that reduces the problem to instances with a bounded range of capacities, and a new geometrically inspired dynamic program that solves a special case of the maximum-weight independent set of rectangles problem to optimality. In the setting of resource augmentation, wherein the capacities can be slightly violated, we give a (2 + ε)-approximation algorithm. In addition, we show that the problem is strongly NP-hard even if all edge capacities are equal and all demands are either 1, 2, or 3.

    Comment: 37 pages, 5 figures. Version 2 contains the same results as version 1, but the presentation has been greatly revised and improved. References have been added.
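    Because the algorithms above are approximations, a brute-force reference solver is useful for sanity checks on tiny instances. The sketch below exhaustively searches all subsets; the instance encoding matches the illustrative one used earlier and is not from the paper.

    from itertools import combinations

    def optimal_ufp(capacities, tasks):
        """Maximum-profit feasible subset by exhaustive search.

        Exponential in len(tasks): intended only for instances with
        roughly 15 tasks or fewer.
        """
        best = (0, ())
        for r in range(len(tasks) + 1):
            for subset in combinations(range(len(tasks)), r):
                load = [0] * len(capacities)
                for i in subset:
                    start, end, demand, _ = tasks[i]
                    for e in range(start, end):
                        load[e] += demand
                if all(l <= c for l, c in zip(load, capacities)):
                    p = sum(tasks[i][3] for i in subset)
                    best = max(best, (p, subset))
        return best  # (profit, tuple of task indices)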

    Paging on Complex Architectures

    Advances in technology make it possible to build computer systems of ever-increasing performance and capability. However, the effective use of such computational resources is often made difficult by the complexity of the system itself. Crucial to the performance of a computing device is the orchestration of the flow of data across the memory hierarchy. Specifically, given a fast but small memory (a cache) through which all the data to be processed must pass, it is necessary to establish a set of rules, implemented by an algorithm, that decide which data to evict from this memory to make room for new incoming data. The goal is to minimize the number of times that requested data is outside the cache (faults), since fetching data from farther levels of the memory hierarchy incurs high costs in both time and energy. This thesis studies two generalizations of this problem, known as the paging problem. The problem is intrinsically online, as the future data requests issued by a computer program are typically unknown.

    Motivated by the diffusion of multi-threaded and multi-core architectures, in which several threads or processes execute simultaneously and/or there are several processing units, and by the rapidly growing interest in reducing the power consumption of computer systems, the first part of the thesis studies a variant of paging that rewards the efficient use of memory resources. Here the goal is to minimize a combination of both the number of faults and the cache occupancy of the process's data in fast memory. This part has two main results: the first is an impossibility result indicating that, roughly speaking, online algorithms cannot compete in practice with algorithms that know all the data requests issued by the process in advance; the second is the design of an online algorithm whose performance is close to the best achievable by any online algorithm.

    The second part of the thesis concentrates on the management of a cache shared among several concurrent processes, with direct application to multi-threaded and multi-core architectures as outlined above. Here the fast memory must service a sequence of requests that is the interleaving of the requests issued by t different processes. Through its replacement decisions, the algorithm dynamically allocates the cache space among the processes, which clearly impacts their progress. The main goal is to minimize the time needed to complete the service of all the request sequences. We show tight lower and upper bounds on the performance of online algorithms for several variants of the problem.
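    As a baseline illustration of the classical problem the thesis generalizes, the toy simulator below counts the faults incurred by the standard LRU policy on a cache of k slots. It does not model the fault-plus-occupancy objective or the shared-cache setting studied in the thesis; it is only a sketch of plain fault counting.

    from collections import OrderedDict

    def lru_faults(requests, k):
        """Number of faults LRU incurs on `requests` with k cache slots."""
        cache = OrderedDict()              # pages kept in recency order
        faults = 0
        for page in requests:
            if page in cache:
                cache.move_to_end(page)    # hit: refresh recency
            else:
                faults += 1                # fault: fetch from slow memory
                if len(cache) == k:
                    cache.popitem(last=False)  # evict least recently used
                cache[page] = True
        return faults

    # Classic adversarial pattern: cycling through k+1 pages makes
    # every request a fault for LRU.
    assert lru_faults([1, 2, 3, 4] * 2, k=3) == 8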

    Models for Parallel Computation in Multi-Core, Heterogeneous, and Ultra Wide-Word Architectures

    Multi-core processors have become the dominant processor architecture, with 2, 4, and 8 cores on a chip widely available and an increasing number of cores predicted for the future. In addition, the decreasing cost and increasing programmability of Graphics Processing Units (GPUs) have made them an accessible source of parallel processing power in general-purpose computing. Among the many research challenges raised by this scenario are the fundamental problems of theoretically modeling computation in these architectures. In this thesis we study several aspects of computation in modern parallel architectures, from modeling computation in multi-cores and heterogeneous platforms, to multi-core cache management strategies, to the proposal of an architecture that exploits bit-parallelism on thousands of bits.

    Observing that in practice multi-cores have a small number of cores, we propose a model of low-degree parallelism for these architectures. We argue that assuming a small number of processors (logarithmic in a problem's input size) simplifies the design of parallel algorithms. We show that in this model a large class of divide-and-conquer and dynamic programming algorithms can be parallelized with simple modifications to sequential programs, while achieving optimal parallel speedups. We further explore low-degree parallelism, providing evidence of fundamental differences, in practice and in theory, between systems with a sublinear and a linear number of processors, and suggesting a sharp theoretical gap between the classes of problems that are efficiently parallelizable in each case.

    Efficient strategies to manage shared caches play a crucial role in multi-core performance. We propose a model for paging in multi-core shared caches, which extends classical paging to a setting in which several threads share the cache. We show that in this setting traditional cache management policies perform poorly, and that any effective strategy must partition the cache among threads, with a partition that adapts dynamically to the demands of each thread. Inspired by the shared-cache setting, we introduce the minimum cache usage problem, an extension of classical sequential paging in which algorithms must account for the amount of cache they use. This cache-aware model seeks algorithms that perform well in terms of both faults and the amount of cache used, and has applications in energy-efficient caching and in shared-cache scenarios.

    The wide availability of GPUs has added to the parallel power of multi-cores; however, most applications underutilize the available resources. We propose a model for hybrid computation in heterogeneous systems with multi-cores and a GPU, and describe strategies for generic parallelization and efficient scheduling of a large class of divide-and-conquer algorithms. Lastly, we introduce the Ultra-Wide Word architecture and model, an extension of the word-RAM model that allows constant-time operations on thousands of bits in parallel. We show that a large class of existing algorithms can be implemented in the Ultra-Wide Word model, achieving speedups comparable to those of multi-threaded computations while avoiding the more difficult aspects of parallel programming.
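    The ultra-wide-word idea can be mimicked in ordinary Python, where an arbitrary-precision integer stands in for one wide word and a single arithmetic operation updates many packed fields at once (the classic SWAR trick). The field width and layout below are illustrative choices, not parameters of the model itself.

    FIELD = 8       # bits per packed counter (illustrative)
    COUNT = 1024    # counters packed into one "wide word"

    def pack(values):
        word = 0
        for i, v in enumerate(values):
            word |= (v & ((1 << FIELD) - 1)) << (i * FIELD)
        return word

    def unpack(word):
        mask = (1 << FIELD) - 1
        return [(word >> (i * FIELD)) & mask for i in range(COUNT)]

    # One big-int addition increments all 1024 counters at once, valid
    # as long as no field overflows into its neighbor (sums < 2**FIELD).
    a = pack([i % 100 for i in range(COUNT)])
    b = pack([1] * COUNT)
    assert unpack(a + b) == [i % 100 + 1 for i in range(COUNT)]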

    Caching is hard, even in the fault model

    We prove strong NP-completeness for the four variants of caching with multi-size pages. These four variants are obtained by choosing either the fault cost or the bit cost model, and by combining it with either a forced or an optional caching policy. This resolves two questions in the area of paging and caching that had been open since the 1990s.
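    The 2x2 structure of the variants can be written down in a few lines; the encoding below is a reading of the abstract, not the paper's formalization. A miss costs 1 in the fault model and the page's size in the bit model; the forced policy requires a missed page to be brought into the cache, while the optional policy allows serving it without caching.

    # (miss cost as a function of page size, must the missed page be cached?)
    VARIANTS = {
        ("fault", "forced"):   (lambda size: 1,    True),
        ("fault", "optional"): (lambda size: 1,    False),
        ("bit",   "forced"):   (lambda size: size, True),
        ("bit",   "optional"): (lambda size: size, False),
    }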
