Selfish Bin Covering
In this paper, we address the selfish bin covering problem, which is closely
related both to the bin covering problem and to the weighted majority game.
Our main concern is how much the lack of coordination harms social welfare.
Besides the standard price of anarchy (PoA) and price of stability (PoS),
which are based on the Nash equilibrium, we also take into account the strong
Nash equilibrium and several other new equilibria. For each equilibrium
concept, we give the corresponding PoA and PoS, and we also consider the
problems of computing an arbitrary equilibrium and of approximating the best one.
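As a reminder of the measures involved (the standard definitions, not notation specific to this paper): writing SW(s) for the social welfare of a strategy profile s and E for the set of equilibria under the chosen solution concept (Nash, strong Nash, ...), for a maximization objective such as the number of covered bins,

```latex
\mathrm{PoA} = \frac{\mathrm{SW}(\mathrm{OPT})}{\min_{s \in \mathcal{E}} \mathrm{SW}(s)},
\qquad
\mathrm{PoS} = \frac{\mathrm{SW}(\mathrm{OPT})}{\max_{s \in \mathcal{E}} \mathrm{SW}(s)},
\qquad
\mathrm{PoA} \ge \mathrm{PoS} \ge 1.
```

The PoA thus measures the damage done by the worst equilibrium, the PoS the damage done by the best one.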
Optimizing egalitarian performance in the side-effects model of colocation for data center resource management
In data centers, up to dozens of tasks are colocated on a single physical
machine. Machines are used more efficiently, but tasks' performance
deteriorates, as colocated tasks compete for shared resources. As tasks are
heterogeneous, the resulting performance dependencies are complex. In our
previous work [18] we proposed a new combinatorial optimization model that uses
two parameters of a task - its size and its type - to characterize how a task
influences the performance of other tasks allocated to the same machine.
In this paper, we study the egalitarian optimization goal: maximizing the
worst-off performance. This problem generalizes the classic makespan
minimization on multiple processors (P||Cmax). We prove that
variants of multiprocessor scheduling that are polynomially solvable without
types become NP-hard and hard to approximate when the number of types is not
constant. For a constant
number of types, we propose a PTAS, a fast approximation algorithm, and a
series of heuristics. We simulate the algorithms on instances derived from a
trace of one of Google's clusters. Algorithms aware of jobs' types lead to
better performance than algorithms solving P||Cmax.
The notion of type enables us to model the performance degradation caused by
colocation using standard combinatorial optimization methods. Types add a
layer of additional complexity; however, our results - approximation
algorithms and good average-case performance - show that types can be handled
efficiently.
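The exact performance model is defined in [18]; as a rough illustration of the flavor of the problem only, here is a toy greedy sketch in Python. The interaction matrix, the degradation law, and the greedy rule below are all illustrative assumptions, not the paper's model or its PTAS:

```python
# Toy sketch of egalitarian colocation; the interaction matrix, the
# degradation law, and the greedy rule are illustrative assumptions,
# not the model of [18] or the paper's PTAS.

INFLUENCE = {  # hypothetical coefficients: how `other` type hurts `mine`
    ("cpu", "cpu"): 1.0, ("cpu", "io"): 0.2,
    ("io", "cpu"): 0.3, ("io", "io"): 1.0,
}

def performance(idx, machine):
    """Performance of machine[idx], degraded by its colocated tasks."""
    size, ttype = machine[idx]
    interference = sum(s * INFLUENCE[(t, ttype)]
                       for j, (s, t) in enumerate(machine) if j != idx)
    return 1.0 / (1.0 + interference)  # assumed degradation law

def worst_off(machines):
    """The egalitarian objective: performance of the worst-off task."""
    return min(performance(i, m) for m in machines for i in range(len(m)))

def _value_if_placed(machines, i, task):
    machines[i].append(task)
    value = worst_off(machines)
    machines[i].pop()
    return value

def greedy_egalitarian(tasks, n_machines):
    """Place tasks largest-first, each on the machine that keeps the
    worst-off performance highest (an LPT-style greedy heuristic)."""
    machines = [[] for _ in range(n_machines)]
    for task in sorted(tasks, reverse=True):  # largest size first
        best = max(range(n_machines),
                   key=lambda i: _value_if_placed(machines, i, task))
        machines[best].append(task)
    return machines

tasks = [(4, "cpu"), (3, "cpu"), (2, "io"), (2, "io"), (1, "cpu")]
placement = greedy_egalitarian(tasks, n_machines=2)
print(placement, worst_off(placement))
```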
Automated problem scheduling and reduction of synchronization delay effects
It is anticipated that, in order to make effective use of many future high-performance architectures, programs will have to exhibit at least medium-grained parallelism. A framework is presented for partitioning very sparse triangular systems of linear equations, designed to produce favorable performance results on a wide variety of parallel architectures. Efficient methods for solving these systems are of interest because (1) they provide a useful model problem for exploring heuristics for the aggregation, mapping, and scheduling of relatively fine-grained computations whose data dependencies are specified by directed acyclic graphs, and (2) such methods can find direct application in the development of parallel algorithms for scientific computation. Simple expressions are derived that describe how to schedule computational work with varying degrees of granularity. The Encore Multimax was used as a hardware simulator to investigate the performance effects of using the presented partitioning techniques in shared-memory architectures with varying relative synchronization costs.
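The abstract does not spell out the partitioning scheme itself; as background on the kind of computation being partitioned, a common way to expose medium-grained parallelism in a sparse triangular solve is level scheduling: unknowns are grouped into levels of the dependency DAG so that every unknown within a level can be computed independently. A minimal Python sketch follows (dense storage for brevity; a real solver would use a sparse format and actually run each level's inner loop in parallel):

```python
import numpy as np

def levels(L):
    """Level scheduling for a lower-triangular solve: unknown i's level is
    1 + the max level of the unknowns it depends on (nonzeros L[i, j], j < i).
    All unknowns within one level are independent of each other."""
    n = L.shape[0]
    level = [0] * n
    for i in range(n):
        deps = [j for j in range(i) if L[i, j] != 0]
        level[i] = 1 + max((level[j] for j in deps), default=-1)
    buckets = {}
    for i, lv in enumerate(level):
        buckets.setdefault(lv, []).append(i)
    return [buckets[lv] for lv in sorted(buckets)]

def solve_by_levels(L, b):
    """Forward substitution processed level by level; within a level the
    updates are independent (a real implementation would run them in parallel)."""
    x = np.zeros_like(b, dtype=float)
    for bucket in levels(L):
        for i in bucket:  # parallelizable loop
            x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2., 0, 0, 0],
              [1., 2, 0, 0],
              [0., 0, 2, 0],
              [1., 0, 1, 2]])
b = np.array([2., 4, 2, 6])
print(levels(L))              # [[0, 2], [1, 3]]
print(solve_by_levels(L, b))  # matches np.linalg.solve(L, b)
```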
Multilevel Hypergraph Partitioning with Vertex Weights Revisited
The balanced hypergraph partitioning problem (HGP) is to partition the vertex set of a hypergraph into k disjoint blocks of bounded weight, while minimizing an objective function defined on the hyperedges. Whereas real-world applications often use vertex and edge weights to accurately model the underlying problem, the HGP research community commonly works with unweighted instances.
In this paper, we argue that, in the presence of vertex weights, current balance constraint definitions either yield infeasible partitioning problems or allow unnecessarily large imbalances, and we propose a new definition that overcomes these problems. We show that state-of-the-art hypergraph partitioners often struggle considerably with weighted instances and tight balance constraints (even with our new balance definition). Thus, we present a recursive-bipartitioning technique that is able to reliably compute balanced (and hence feasible) solutions. The proposed method balances the partition by pre-assigning a small subset of the heaviest vertices to the two blocks of each bipartition (using an algorithm originally developed for the job scheduling problem) and optimizes the actual partitioning objective on the remaining vertices. We integrate our algorithm into the multilevel hypergraph partitioner KaHyPar and show that our approach is able to compute balanced partitions of high quality on a diverse set of benchmark instances.
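To make the pre-assignment idea concrete: treating the heaviest vertices as jobs and the two blocks as machines, even a simple LPT-style greedy keeps the block weights close. The Python sketch below illustrates that flavor only; the paper's actual pre-assignment algorithm and its integration into KaHyPar are more involved:

```python
def preassign_heaviest(vertex_weights, k_heaviest):
    """Pre-assign the k heaviest vertices to the two blocks of a bipartition,
    LPT-style: sort by weight descending and always give the next vertex to
    the currently lighter block (cf. 2-machine makespan scheduling)."""
    order = sorted(range(len(vertex_weights)),
                   key=lambda v: vertex_weights[v], reverse=True)
    blocks, loads = ([], []), [0, 0]
    for v in order[:k_heaviest]:
        b = 0 if loads[0] <= loads[1] else 1
        blocks[b].append(v)
        loads[b] += vertex_weights[v]
    remaining = order[k_heaviest:]  # left for the partitioning objective
    return blocks, loads, remaining

weights = [50, 40, 30, 8, 5, 5, 2, 1]
blocks, loads, rest = preassign_heaviest(weights, k_heaviest=3)
print(blocks, loads)  # ([0], [1, 2]) with loads [50, 70]
```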