16 research outputs found

    Improved approximation bounds for Vector Bin Packing

    In this paper we propose an improved approximation scheme for the Vector Bin Packing problem (VBP), based on combining a (near-)optimal solution of the Linear Programming (LP) relaxation with a greedy (modified first-fit) heuristic. For dimension d ≥ 2, Vector Bin Packing admits no asymptotic polynomial-time approximation scheme unless P = NP. Our algorithm improves over the previously known guarantee of (ln d + 1 + ε) by Bansal et al. [1] for higher dimensions (d > 2). We provide a Θ(1)-approximation for a certain class of inputs in any dimension d; more precisely, we give a 2-OPT algorithm, a result that holds irrespective of the number of dimensions d. Comment: 15 pages, 3 algorithms
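
    The abstract does not spell out the greedy component, so the sketch below shows only a plain first-fit decreasing heuristic for d-dimensional vector bin packing; the ordering key (largest coordinate) and the function name first_fit_vector are illustrative assumptions, not necessarily the paper's modified first-fit.

        # Sketch of a first-fit decreasing heuristic for d-dimensional vector bin packing.
        # Items and bins live in [0, 1]^d; the sort key is an illustrative choice.
        def first_fit_vector(items, d):
            """items: list of length-d tuples with entries in [0, 1]; returns packed bins."""
            bins = []  # each entry: [load_vector, list_of_packed_items]
            for item in sorted(items, key=max, reverse=True):
                for load, packed in bins:
                    if all(load[k] + item[k] <= 1.0 for k in range(d)):
                        for k in range(d):
                            load[k] += item[k]
                        packed.append(item)
                        break
                else:
                    bins.append([list(item), [item]])  # no open bin fits: open a new one
            return [packed for _, packed in bins]

        # Example with 2-dimensional items: prints the number of bins used.
        print(len(first_fit_vector([(0.6, 0.2), (0.5, 0.5), (0.3, 0.7), (0.2, 0.2)], d=2)))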

    Optimal Placement Algorithms for Virtual Machines

    Cloud computing provides a computing platform for users to meet their demands in an efficient, cost-effective way. Virtualization technologies are used in clouds to aid the efficient usage of hardware. Virtual machines (VMs) are used to satisfy user needs and are placed on physical machines (PMs) of the cloud for effective usage of hardware resources and electricity. Minimizing the number of PMs used substantially reduces power consumption. In this paper, we present an optimal technique to map virtual machines to physical machines (nodes) such that the number of required nodes is minimized. We provide two approaches, based on linear programming and quadratic programming techniques, that significantly improve over the existing theoretical bounds and efficiently solve the problem of virtual machine (VM) placement in data centers.
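
    The abstract does not give the formulations themselves, so the following is only a minimal sketch of the kind of integer program such a placement reduces to, written with the PuLP library for illustration; the two-resource model and the variable names x and y are assumptions, not the paper's formulation.

        # Illustrative ILP: assign each VM to one PM and minimize the number of PMs used.
        import pulp

        def place_vms(vm_demands, pm_capacity):
            """vm_demands: list of (cpu, mem) per VM; pm_capacity: (cpu, mem) of one PM."""
            n = len(vm_demands)                       # worst case: one PM per VM
            prob = pulp.LpProblem("vm_placement", pulp.LpMinimize)
            x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(n)] for i in range(n)]
            y = [pulp.LpVariable(f"y_{j}", cat="Binary") for j in range(n)]
            prob += pulp.lpSum(y)                     # number of PMs switched on
            for i in range(n):                        # each VM is placed exactly once
                prob += pulp.lpSum(x[i][j] for j in range(n)) == 1
            for j in range(n):                        # respect capacity in every resource
                for r in range(2):
                    prob += pulp.lpSum(vm_demands[i][r] * x[i][j] for i in range(n)) <= pm_capacity[r] * y[j]
            prob.solve(pulp.PULP_CBC_CMD(msg=False))
            return sum(int(v.value()) for v in y)

        # Four VMs, PMs with 4 CPUs and 8 GB memory each: prints the minimum PM count.
        print(place_vms([(2, 4), (1, 2), (3, 6), (2, 2)], pm_capacity=(4, 8)))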

    Vector Bin Packing with Multiple-Choice

    We consider a variant of bin packing called multiple-choice vector bin packing. In this problem we are given a set of items, where each item can be selected in one of several D-dimensional incarnations. We are also given T bin types, each with its own cost and D-dimensional size. Our goal is to pack the items in a set of bins of minimum overall cost. The problem is motivated by scheduling in networks with guaranteed quality of service (QoS), but due to its general formulation it has many other applications as well. We present an approximation algorithm that is guaranteed to produce a solution whose cost is about ln D times the optimum. For the running time to be polynomial we require D = O(1) and T = O(log n). This extends previous results for vector bin packing, in which each item has a single incarnation and there is only one bin type. To obtain our result we also present a PTAS for the multiple-choice version of multidimensional knapsack, where we are given only one bin and the goal is to pack a maximum-weight set of (incarnations of) items in that bin.
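
    To make the input structure concrete, here is a toy greedy baseline for multiple-choice vector bin packing; it is not the ln D-approximation of the paper, and the incarnation-selection rule is an arbitrary assumption.

        # Toy baseline: each item offers several D-dimensional incarnations, each bin
        # type has a D-dimensional size and a cost; pick an incarnation per item and
        # pack first-fit, opening the cheapest feasible bin type when needed.
        def greedy_mcvbp(items, bin_types):
            """items: list of lists of incarnation tuples; bin_types: list of (size, cost).
            Assumes every chosen incarnation fits in at least one bin type."""
            open_bins = []   # entries: (bin_type_index, load_vector)
            total_cost = 0.0
            for incarnations in items:
                inc = min(incarnations, key=max)        # smallest largest coordinate
                for t, load in open_bins:
                    size = bin_types[t][0]
                    if all(load[k] + inc[k] <= size[k] for k in range(len(size))):
                        for k in range(len(size)):
                            load[k] += inc[k]
                        break
                else:
                    feasible = [t for t, (size, _) in enumerate(bin_types)
                                if all(inc[k] <= size[k] for k in range(len(size)))]
                    t = min(feasible, key=lambda t: bin_types[t][1])
                    open_bins.append((t, list(inc)))
                    total_cost += bin_types[t][1]
            return total_cost

        items = [[(0.6, 0.2), (0.3, 0.5)], [(0.5, 0.5)], [(0.9, 0.1)]]
        bin_types = [((1.0, 1.0), 1.0), ((2.0, 2.0), 1.8)]
        print(greedy_mcvbp(items, bin_types))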

    Locality-preserving allocation problems and coloured Bin Packing

    We study the following problem, introduced by Chung et al. in 2006. We are given, online or offline, a set of coloured items of different sizes, and wish to pack them into bins of equal size so that we use few bins in total (at most α times optimal) and the items of each colour span few bins (at most β times optimal). We call such allocations (α, β)-approximate. As usual in bin packing problems, we allow additive constants and consider (α, β) as the asymptotic performance ratios. We prove that, for every ε > 0, if we desire small α, no scheme can beat (1 + ε, Ω(1/ε))-approximate allocations, and similarly, if we desire small β, no scheme can beat (1.69103, 1 + ε)-approximate allocations. We give offline schemes that come very close to achieving these lower bounds. For the online case, we prove that no scheme can even achieve (O(1), O(1))-approximate allocations. However, a small restriction on item sizes permits a simple online scheme that computes (2 + ε, 1.7)-approximate allocations.
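
    As a concrete reading of the (α, β) measure, the sketch below estimates both ratios for a given allocation, using the volume bound as a stand-in for the unknown optima; the function name and the use of ceil(total size / capacity) as the lower bound are assumptions for illustration.

        # Estimate (alpha, beta) for a coloured-bin-packing allocation, using the
        # volume lower bound ceil(total size / capacity) in place of the true optimum.
        import math
        from collections import defaultdict

        def alpha_beta_estimate(allocation, bin_capacity=1.0):
            """allocation: list of bins, each a list of (colour, size) items."""
            total = sum(size for b in allocation for _, size in b)
            opt_bins = max(1, math.ceil(total / bin_capacity))   # lower bound on OPT
            alpha = len(allocation) / opt_bins
            span = defaultdict(set)      # bins touched by each colour
            volume = defaultdict(float)  # total size of each colour
            for idx, b in enumerate(allocation):
                for colour, size in b:
                    span[colour].add(idx)
                    volume[colour] += size
            beta = max(len(span[c]) / max(1, math.ceil(volume[c] / bin_capacity)) for c in span)
            return alpha, beta

        alloc = [[("red", 0.5), ("blue", 0.4)], [("red", 0.6)], [("blue", 0.7)]]
        print(alpha_beta_estimate(alloc))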

    Packing Sporadic Real-Time Tasks on Identical Multiprocessor Systems

    In real-time systems, in addition to functional correctness, recurrent tasks must fulfill timing constraints to ensure the correct behavior of the system. Partitioned scheduling is widely used in real-time systems, i.e., the tasks are statically assigned to processors while ensuring that all timing constraints are met. The decision version of the problem, which is to check whether the deadline constraints of the tasks can be satisfied on a given number of identical processors, is known to be NP-complete in the strong sense. Several studies of this problem are based on approximations involving resource augmentation, i.e., speeding up individual processors. This paper studies another type of resource augmentation, allocating additional processors, a topic that has not been explored until recently. We provide polynomial-time algorithms and analysis in which the approximation factors depend on the input instances. Specifically, the factors are related to the maximum ratio of the period to the relative deadline of a task in the given task set. We also show that these algorithms unfortunately cannot achieve a constant approximation factor for general cases. Furthermore, we prove that the problem does not admit any asymptotic polynomial-time approximation scheme (APTAS) unless P = NP when the task set has constrained deadlines, i.e., the relative deadline of a task is no more than its period. Comment: Accepted and to appear in ISAAC 2018, Yi-Lan, Taiwan
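
    For readers unfamiliar with partitioned scheduling, the sketch below packs sporadic tasks onto processors first-fit in deadline-monotonic order, using the textbook density test sum(C / min(D, T)) ≤ 1 per processor; this is a simplified sufficient test shown for illustration, not the algorithm analysed in the paper.

        # Simplified partitioned assignment of sporadic tasks: first-fit packing with a
        # pessimistic per-processor density test as the feasibility check.
        def partition_tasks(tasks, num_processors):
            """tasks: list of (C, D, T) = (worst-case execution time, relative deadline, period).
            Returns a per-processor task list, or None if this heuristic fails."""
            processors = [[] for _ in range(num_processors)]
            density = [0.0] * num_processors
            # Deadline-monotonic ordering: tighter deadlines are placed first.
            for c, d, t in sorted(tasks, key=lambda task: task[1]):
                delta = c / min(d, t)
                for p in range(num_processors):
                    if density[p] + delta <= 1.0:
                        processors[p].append((c, d, t))
                        density[p] += delta
                        break
                else:
                    return None  # heuristic could not place the task
            return processors

        print(partition_tasks([(1, 4, 5), (2, 5, 10), (3, 10, 10), (1, 2, 8)], num_processors=2))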

    Language and Compiler Support for Auto-Tuning Variable-Accuracy Algorithms

    Approximating ideal program outputs is a common technique for solving computationally difficult problems, for adhering to processing or timing constraints, and for performance optimization in situations where perfect precision is not necessary. To this end, programmers often use approximation algorithms, iterative methods, data resampling, and other heuristics. However, programming such variable-accuracy algorithms presents difficult challenges since the optimal algorithms and parameters may change with different accuracy requirements and usage environments. This problem is further compounded when multiple variable-accuracy algorithms are nested together, due to the complex way that accuracy requirements can propagate across algorithms and because of the size of the set of allowable compositions. As a result, programmers often deal with this issue in an ad-hoc manner that can sometimes violate sound programming practices such as maintaining library abstractions. In this paper, we propose language extensions that expose trade-offs between time and accuracy to the compiler. The compiler performs fully automatic compile-time and install-time autotuning and analyses in order to construct optimized algorithms to achieve any given target accuracy. We present novel compiler techniques and a structured genetic tuning algorithm to search the space of candidate algorithms and accuracies in the presence of recursion and sub-calls to other variable-accuracy code. These techniques benefit both the library writer, by providing an easy way to describe and search the parameter and algorithmic choice space, and the library user, by allowing high-level specification of accuracy requirements which are then met automatically without the need for the user to understand any algorithm-specific parameters. Additionally, we present a new suite of benchmarks, written in our language, to examine the efficacy of our techniques. Our experimental results show that by relaxing accuracy requirements, we can easily obtain performance improvements ranging from 1.1× to orders of magnitude of speedup.
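
    As a toy illustration of the time/accuracy trade-off being tuned (not the paper's language extensions or genetic tuner), the sketch below searches for the cheapest iteration count of a variable-accuracy kernel that meets a target error; the Newton-iteration kernel and the exhaustive search are assumptions made for the example.

        # Toy autotuner: pick the cheapest parameter setting of a variable-accuracy
        # kernel whose observed error on training inputs meets a target accuracy.
        import time

        def approx_sqrt(x, iterations):
            """Newton's method for sqrt(x); more iterations -> more accuracy, more time."""
            guess = x if x > 1 else 1.0
            for _ in range(iterations):
                guess = 0.5 * (guess + x / guess)
            return guess

        def autotune(target_error, train_inputs, max_iterations=30):
            for iterations in range(1, max_iterations + 1):   # cheapest setting first
                start = time.perf_counter()
                err = max(abs(approx_sqrt(x, iterations) - x ** 0.5) for x in train_inputs)
                elapsed = time.perf_counter() - start
                if err <= target_error:
                    return {"iterations": iterations, "error": err, "time": elapsed}
            return None  # no setting within the budget met the target

        print(autotune(target_error=1e-6, train_inputs=[2.0, 10.0, 12345.0]))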
