An EPTAS for Scheduling on Unrelated Machines of Few Different Types
In the classical problem of scheduling on unrelated parallel machines, a set
of jobs has to be assigned to a set of machines. The jobs have a processing
time depending on the machine and the goal is to minimize the makespan, that is
the maximum machine load. It is well known that this problem is NP-hard and
does not allow polynomial time approximation algorithms with approximation
guarantees smaller than 3/2 unless P = NP. We consider the case that there
are only a constant number K of machine types. Two machines have the same
type if all jobs have the same processing time for them. This variant of the
problem is strongly NP-hard already for K = 1. We present an efficient
polynomial time approximation scheme (EPTAS) for the problem, that is, for any
ε > 0 an assignment with makespan of length at most (1 + ε) times the optimum
can be found in polynomial time in the input length, and the exponent is
independent of 1/ε. In particular, we achieve a running time of
2^{O(K log(K) · 1/ε · log^4(1/ε))} + poly(|I|), where |I| denotes the input
length. Furthermore, we study three other problem
variants and present an EPTAS for each of them: The Santa Claus problem, where
the minimum machine load has to be maximized; the case of scheduling on
unrelated parallel machines with a constant number of uniform types, where
machines of the same type behave like uniformly related machines; and the
multidimensional vector scheduling variant of the problem where both the
dimension and the number of machine types are constant. For the Santa Claus
problem we achieve the same running time. The results are achieved using mixed
integer linear programming and rounding techniques.
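The machine-type notion above admits a compact illustration. The following is a minimal sketch with illustrative names (not the paper's code): two machines share a type exactly when every job has the same processing time on both, and the makespan of an assignment is the maximum resulting machine load.

```python
def machine_types(p):
    """Group machines of an unrelated-machines instance into types.
    p[j][i] = processing time of job j on machine i; two machines have
    the same type iff their processing-time columns agree on all jobs."""
    cols = {}
    for i in range(len(p[0])):
        col = tuple(p[j][i] for j in range(len(p)))
        cols.setdefault(col, []).append(i)
    return list(cols.values())

def makespan(p, assign):
    """assign[j] = machine receiving job j; makespan = max machine load."""
    loads = [0] * len(p[0])
    for j, i in enumerate(assign):
        loads[i] += p[j][i]
    return max(loads)
```

With two jobs and three machines where machines 0 and 1 have identical columns, the instance has only two machine types even though it has three machines.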
The Submodular Santa Claus Problem in the Restricted Assignment Case
The submodular Santa Claus problem was introduced in a seminal work by Goemans, Harvey, Iwata, and Mirrokni (SODA 2009) as an application of their structural result. In the mentioned problem n unsplittable resources have to be assigned to m players, each with a monotone submodular utility function f_i. The goal is to maximize min_i f_i(S_i), where S_1, ..., S_m is a partition of the resources. The result by Goemans et al. implies a polynomial time O(n^{1/2+ε})-approximation algorithm.
Since then, progress on this problem was limited to the linear case, that is, all f_i are linear functions. In particular, a line of research has shown that there is a polynomial time constant approximation algorithm for linear valuation functions in the restricted assignment case. This is the special case where each player i is given a set of desired resources Γ_i and the individual valuation functions are defined as f_i(S) = f(S ∩ Γ_i) for a global linear function f. This can also be interpreted as maximizing min_i f(S_i) with additional assignment restrictions, i.e., resources can only be assigned to certain players.
In this paper we make comparable progress for the submodular variant: If f is a monotone submodular function, we can in polynomial time compute an O(log log(n))-approximate solution.
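Coverage functions are a standard example of monotone submodular functions, so the restricted-assignment objective min_i f(S_i restricted to player i's desired resources) can be made concrete as follows. This is a sketch of evaluating the objective only, not of the O(log log(n))-approximation algorithm; all names are illustrative.

```python
def coverage(sets, S):
    """A monotone submodular f: number of ground elements covered by the
    chosen resources, where sets[r] is the element set of resource r."""
    covered = set()
    for r in S:
        covered |= sets[r]
    return len(covered)

def objective(sets, desired, partition):
    """min over players of f applied to the assigned resources that the
    player actually desires (restricted assignment: others contribute 0)."""
    return min(
        coverage(sets, [r for r in S_i if r in desired[i]])
        for i, S_i in enumerate(partition)
    )
```

Submodularity here is the diminishing-returns property: adding a resource to a larger set covers at most as many new elements as adding it to a subset.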
Additive Approximation Schemes for Load Balancing Problems
We formalize the concept of additive approximation schemes and apply it to load balancing problems on identical machines. Additive approximation schemes compute a solution with an absolute error in the objective of at most ε · h for some suitable parameter h and any given ε > 0. We consider the problem of assigning jobs to identical machines with respect to common load balancing objectives like makespan minimization, the Santa Claus problem (on identical machines), and the envy-minimizing Santa Claus problem. For these settings we present additive approximation schemes for h = p_{max}, the maximum processing time of the jobs.
Our technical contribution is two-fold. First, we introduce a new relaxation based on integrally assigning slots to machines and fractionally assigning jobs to the slots. We refer to this relaxation as the slot-MILP. While it has a linear number of integral variables, we identify structural properties of (near-)optimal solutions, which allow us to compute those in polynomial time. The second technical contribution is a local-search algorithm which rounds any given solution to the slot-MILP, introducing an additive error on the machine loads of at most ε · p_{max}.
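For intuition on additive guarantees in terms of p_max: classical greedy list scheduling on identical machines already achieves makespan at most OPT + p_max, since the last job placed on the busiest machine starts no later than the average load. This is a far weaker guarantee than the ε · p_max schemes above; the sketch below (illustrative names) only shows the baseline.

```python
import heapq

def greedy_makespan(jobs, m):
    """Assign each job to the currently least-loaded of m identical
    machines (Graham's list scheduling). The resulting makespan is at
    most OPT + max(jobs): an additive, not multiplicative, error bound."""
    loads = [0] * m
    heapq.heapify(loads)
    for p in jobs:
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)
```

On jobs [3, 3, 2, 2, 2] with two machines the greedy makespan is 7 while the optimum is 6 (schedule {3, 3} against {2, 2, 2}), within the additive bound p_max = 3.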
What can economics teach us about Santa Claus?
In this paper we sketch a theory about the role of supernatural beliefs in incentivizing "good" behavior among children by parents. We present a simple theory on the production and the use of certain supernatural beliefs by parents to influence their children’s behavior. A prime example of this is the idea of Santa Claus and the idea that Santa Claus rewards children according to how well they have behaved during the year. We show that under standard conditions parents face a time inconsistency problem when trying to incentivize their offspring. We claim that the production of beliefs in certain supernatural or quasi-supernatural persons who allegedly have infinite lives can help parents discipline their children. Finally, we extend this logic to a community and its ruler or rulers. We show that rulers can have incentives to influence the beliefs of their subjects. This incentive is greater whenever the ruler is a monopolist and when he or she expects to rule for a long period. Rulers with limited ability and/or superior technology for producing beliefs will also supply more supernatural stories to enforce their rule
A General Framework for Learning-Augmented Online Allocation
Online allocation is a broad class of problems where items arriving online
have to be allocated to agents who have a fixed utility/cost for each assigned
item so to maximize/minimize some objective. This framework captures a broad
range of fundamental problems such as the Santa Claus problem (maximizing
minimum utility), Nash welfare maximization (maximizing geometric mean of
utilities), makespan minimization (minimizing maximum cost), minimization of
ℓ_p-norms, and so on. We focus on divisible items (i.e., fractional
allocations) in this paper. Even for divisible items, these problems are
characterized by strong super-constant lower bounds in the classical worst-case
online model.
In this paper, we study online allocations in the {\em learning-augmented}
setting, i.e., where the algorithm has access to some additional
(machine-learned) information about the problem instance. We introduce a {\em
general} algorithmic framework for learning-augmented online allocation that
produces nearly optimal solutions for this broad range of maximization and
minimization objectives using only a single learned parameter for every agent.
As corollaries of our general framework, we improve prior results of Lattanzi
et al. (SODA 2020) and Li and Xian (ICML 2021) for learning-augmented makespan
minimization, and obtain the first learning-augmented nearly-optimal algorithms
for the other objectives such as Santa Claus, Nash welfare,
ℓ_p-minimization, etc. We also give tight bounds on the resilience of our
algorithms to errors in the learned parameters, and study the learnability of
these parameters.
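As a toy illustration of the "single learned parameter for every agent" idea, and emphatically not the paper's algorithm, one can imagine splitting each arriving divisible item among agents in proportion to a learned per-agent weight. All names below are hypothetical.

```python
def allocate(items, weights):
    """Split each arriving divisible item in proportion to one learned
    weight per agent; returns each agent's accumulated utility, assuming
    utility equals the received fraction of each item's size."""
    total = sum(weights)
    utils = [0.0] * len(weights)
    for size in items:
        for i, w in enumerate(weights):
            utils[i] += size * w / total
    return utils
```

A single scalar per agent suffices here to steer the whole online allocation, which is the flavor of advice the framework uses; the actual algorithms and guarantees in the paper are of course far more involved.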