
    On-line load balancing

    The setup for our problem consists of n servers that must complete a set of tasks. Each task can be handled only by a subset of the servers, requires a different level of service, and once assigned cannot be reassigned. We make the natural assumption that the level of service is known at arrival time, but that the duration of service is not. The on-line load balancing problem is to assign each task to an appropriate server in such a way that the maximum load on the servers is minimized. In this paper we derive matching upper and lower bounds for the competitive ratio of the on-line greedy algorithm for this problem, namely $\left[(3n)^{2/3}/2\right](1+o(1))$, and derive a lower bound, $\Omega(n^{1/2})$, for any other deterministic or randomized on-line algorithm.
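
    A minimal sketch of the greedy rule this abstract describes: each arriving task goes to the currently least-loaded server among those able to handle it. The names, the weight model, and the omission of task departures are illustrative simplifications, not the paper's exact setting.

```python
import random

def greedy_assign(tasks, n_servers):
    """Greedy on-line assignment: each task goes to the least-loaded
    eligible server at its arrival. `tasks` is a list of
    (eligible_servers, weight) pairs; weights stand in for the known
    service levels, and departures are ignored for brevity."""
    load = [0.0] * n_servers
    assignment = []
    for eligible, weight in tasks:
        # Greedy rule: currently least-loaded server among the subset
        # that can handle this task (ties broken arbitrarily).
        server = min(eligible, key=lambda s: load[s])
        load[server] += weight
        assignment.append(server)
    return assignment, max(load)

# Hypothetical example: 4 servers, each task restricted to 2 random servers.
random.seed(0)
tasks = [(random.sample(range(4), 2), random.random()) for _ in range(20)]
print(greedy_assign(tasks, 4))
```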

    Tight Bounds for On-line Tree Embedding

    Many tree-structured computations are inherently parallel. As leaf processes are recursively spawned they can be assigned to independent processors in a multicomputer network. To maintain load balance, an on-line mapping algorithm must distribute processes equitably among processors. Additionally, the algorithm itself must be distributed in nature, and process allocation must be completed via message-passing with minimal communication overhead. This paper investigates bounds on the performance of deterministic and randomized algorithms for on-line tree embedding. In particular, we study tradeoffs between performance (load balance) and communication overhead (message congestion). We give a simple technique to derive lower bounds on the congestion that any on-line allocation algorithm must incur in order to guarantee load balance. This technique works for both randomized and deterministic algorithms, although we find the performance of randomized on-line algorithms to be somewhat better than that of deterministic algorithms. Optimal bounds are achieved for several networks including multi-dimensional grids and butterflies.
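
    A toy illustration of the load-balance versus congestion trade-off the abstract studies: spawned child processes either stay on the parent's processor (no message) or are shipped to a neighbouring processor (one message). The ring topology, the spread probability, and all names are assumptions made for illustration, not the paper's embedding algorithm.

```python
import random

def embed_tree(depth, n_procs, spread_prob=0.5, seed=0):
    """Toy on-line embedding of a complete binary tree of processes onto a
    ring of processors: each spawned child either stays on its parent's
    processor (no message) or moves to a random ring neighbour (one
    message). Returns the maximum processor load and the total number of
    migration messages (a crude proxy for congestion)."""
    rng = random.Random(seed)
    load = [0] * n_procs
    messages = 0
    frontier = [0]                      # processors holding the current leaves
    for _ in range(depth):
        next_frontier = []
        for p in frontier:
            for _ in range(2):          # each leaf spawns two children
                if rng.random() < spread_prob:
                    child = rng.choice([(p - 1) % n_procs, (p + 1) % n_procs])
                    messages += 1
                else:
                    child = p
                load[child] += 1
                next_frontier.append(child)
        frontier = next_frontier
    return max(load), messages

# Spreading more aggressively balances load but costs more messages.
print(embed_tree(depth=8, n_procs=16, spread_prob=0.2))
print(embed_tree(depth=8, n_procs=16, spread_prob=0.8))
```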

    The influence of line balancing on line feeding for mixed-model assembly lines

    Though recent research on mixed-model Assembly Line Balancing Problems (MALBP) and Assembly Line Feeding Problems (ALFP) aims to incorporate real-world aspects, research on the integration of both areas is still limited. This paper helps close this gap by studying the influence of different balancing objectives on line feeding decisions and costs. For line balancing, different objective functions were formulated and the results were used as input when solving the ALFP. Although no large cost differences were found, we observed that decision making in line feeding does depend on the balance.

    Asymptotically Optimal Load Balancing Topologies

    We consider a system of $N$ servers inter-connected by some underlying graph topology $G_N$. Tasks arrive at the various servers as independent Poisson processes of rate $\lambda$. Each incoming task is irrevocably assigned to whichever server has the smallest number of tasks among the one where it appears and its neighbors in $G_N$. Tasks have unit-mean exponential service times and leave the system upon service completion. The above model has been extensively investigated in the case $G_N$ is a clique. Since the servers are exchangeable in that case, the queue length process is quite tractable, and it has been proved that for any $\lambda < 1$, the fraction of servers with two or more tasks vanishes in the limit as $N \to \infty$. For an arbitrary graph $G_N$, the lack of exchangeability severely complicates the analysis, and the queue length process tends to be worse than for a clique. Accordingly, a graph $G_N$ is said to be $N$-optimal or $\sqrt{N}$-optimal when the occupancy process on $G_N$ is equivalent to that on a clique on an $N$-scale or $\sqrt{N}$-scale, respectively. We prove that if $G_N$ is an Erd\H{o}s-R\'enyi random graph with average degree $d(N)$, then it is with high probability $N$-optimal and $\sqrt{N}$-optimal if $d(N) \to \infty$ and $d(N)/(\sqrt{N}\log(N)) \to \infty$ as $N \to \infty$, respectively. This demonstrates that optimality can be maintained at $N$-scale and $\sqrt{N}$-scale while reducing the number of connections by nearly a factor $N$ and $\sqrt{N}/\log(N)$ compared to a clique, provided the topology is suitably random. It is further shown that if $G_N$ contains $\Theta(N)$ bounded-degree nodes, then it cannot be $N$-optimal. In addition, we establish that an arbitrary graph $G_N$ is $N$-optimal when its minimum degree is $N - o(N)$, and may not be $N$-optimal even when its minimum degree is $cN + o(N)$ for any $0 < c < 1/2$. Comment: A few relevant results from arXiv:1612.00723 are included for convenience.
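
    A small discrete-event sketch of the assignment rule described above: each arriving task joins the shortest queue among its arrival server and that server's neighbours, with unit-mean exponential service. The simulation structure, parameter values, and names are illustrative assumptions, not the paper's analysis.

```python
import heapq
import random

def simulate(graph, lam, T, seed=0):
    """Tasks arrive at each server as a Poisson process of rate `lam` and
    join the shortest queue among the arrival server and its neighbours in
    `graph` (dict: server -> list of neighbours). Service times are
    exponential with unit mean. Returns the fraction of servers holding
    two or more tasks at time T."""
    rng = random.Random(seed)
    n = len(graph)
    queue = [0] * n
    events = []                          # (time, kind, server)
    for v in range(n):
        heapq.heappush(events, (rng.expovariate(lam), 'arrival', v))
    while events:
        t, kind, v = heapq.heappop(events)
        if t > T:
            break
        if kind == 'arrival':
            # join the shortest queue among v and its neighbours
            target = min([v] + graph[v], key=lambda u: queue[u])
            queue[target] += 1
            if queue[target] == 1:       # server was idle: start service
                heapq.heappush(events, (t + rng.expovariate(1.0), 'departure', target))
            heapq.heappush(events, (t + rng.expovariate(lam), 'arrival', v))
        else:                            # departure
            queue[v] -= 1
            if queue[v] > 0:             # start serving the next queued task
                heapq.heappush(events, (t + rng.expovariate(1.0), 'departure', v))
    return sum(q >= 2 for q in queue) / n

# Example: clique on 50 servers; the fraction should be small for lambda < 1.
clique = {v: [u for u in range(50) if u != v] for v in range(50)}
print(simulate(clique, lam=0.7, T=100.0))
```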

    Reallocation Problems in Scheduling

    In traditional on-line problems, such as scheduling, requests arrive over time, demanding available resources. As each request arrives, some resources may have to be irrevocably committed to servicing that request. In many situations, however, it may be possible or even necessary to reallocate previously allocated resources in order to satisfy a new request. This reallocation has a cost. This paper shows how to service the requests while minimizing the reallocation cost. We focus on the classic problem of scheduling jobs on a multiprocessor system. Each unit-size job has a time window in which it can be executed. Jobs are dynamically added and removed from the system. We provide an algorithm that maintains a valid schedule, as long as a sufficiently feasible schedule exists. The algorithm reschedules only a total number of $O(\min\{\log^* n, \log^* \Delta\})$ jobs for each job that is inserted or deleted from the system, where $n$ is the number of active jobs and $\Delta$ is the size of the largest window. Comment: 9 pages, 1 table; extended abstract version to appear in SPAA 201
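
    To make the notion of reallocation cost concrete, here is a toy single-machine version: unit jobs are placed into time slots inside their windows, and when no slot is free a currently scheduled job is recursively moved (a plain augmenting-path step). This is only an illustrative sketch; it is not the paper's algorithm and gives no $O(\min\{\log^* n, \log^* \Delta\})$ guarantee.

```python
def insert_job(schedule, windows, job, reallocated, visited=None):
    """Insert a unit job into a single-machine schedule. `schedule` maps
    time slot -> job and `windows[j]` lists the slots job j may occupy.
    If every slot in the window is taken, an occupying job is recursively
    moved (an augmenting-path step) and the move is recorded in
    `reallocated`."""
    if visited is None:
        visited = set()
    for slot in windows[job]:
        if slot not in schedule:              # free slot: place the job
            schedule[slot] = job
            return True
    for slot in windows[job]:
        if slot in visited:
            continue
        visited.add(slot)
        other = schedule[slot]
        if insert_job(schedule, windows, other, reallocated, visited):
            schedule[slot] = job              # displace `other`, which has moved
            reallocated.append(other)
            return True
    return False

# Hypothetical instance: four unit jobs with small slot windows.
windows = {0: [0, 1], 1: [1, 2], 2: [0], 3: [2, 3]}
schedule, moved = {}, []
for j in windows:
    insert_job(schedule, windows, j, moved)
print(schedule, moved)    # `moved` lists the jobs that had to be rescheduled
```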

    Parallel Load Balancing on Constrained Client-Server Topologies

    We study parallel Load Balancing protocols for a client-server distributed model defined as follows. There is a set $\mathcal{C}$ of $n$ clients and a set $\mathcal{S}$ of $n$ servers where each client has (at most) a constant number $d \geq 1$ of requests that must be assigned to some server. The client set and the server one are connected to each other via a fixed bipartite graph: the requests of client $v$ can only be sent to the servers in its neighborhood $N(v)$. The goal is to assign every client request so as to minimize the maximum load of the servers. In this setting, efficient parallel protocols are available only for dense topologies. In particular, a simple symmetric, non-adaptive protocol achieving constant maximum load has been recently introduced by Becchetti et al. \cite{BCNPT18} for regular dense bipartite graphs. The parallel completion time is $O(\log n)$ and the overall work is $O(n)$, w.h.p. Motivated by proximity constraints arising in some client-server systems, we devise a simple variant of Becchetti et al.'s protocol \cite{BCNPT18} and we analyse it over almost-regular bipartite graphs where nodes may have neighborhoods of small size. In detail, we prove that, w.h.p., this new version has a cost equivalent to that of Becchetti et al.'s protocol (in terms of maximum load, completion time, and work complexity, respectively) on every almost-regular bipartite graph with degree $\Omega(\log^2 n)$. Our analysis significantly departs from that in \cite{BCNPT18} for the original protocol and requires coping with non-trivial stochastic-dependence issues on the random choices of the algorithmic process, which are due to the worst-case, sparse topology of the underlying graph.
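
    A toy round-based sketch in the spirit of the client-server model above: in each round every still-unassigned request proposes to a uniformly random server in its client's neighbourhood, and each server accepts a bounded number of proposals per round while the rest retry. The accept rule, the `capacity` parameter, and all names are assumptions for illustration, not the protocol of Becchetti et al. or the variant analysed in the paper.

```python
import random

def parallel_balance(neighbors, d=1, capacity=4, seed=0):
    """Round-based toy protocol: every unassigned request picks a uniformly
    random server from its client's neighbourhood; a server accepts at most
    `capacity` new requests per round and the rejected ones retry. Returns
    the number of rounds and the resulting maximum server load."""
    rng = random.Random(seed)
    pending = [(c, k) for c in neighbors for k in range(d)]
    load = {}
    rounds = 0
    while pending:
        rounds += 1
        proposals = {}
        for req in pending:
            s = rng.choice(neighbors[req[0]])   # random server in the neighbourhood
            proposals.setdefault(s, []).append(req)
        pending = []
        for s, reqs in proposals.items():
            for req in reqs[:capacity]:         # accept up to `capacity` per round
                load[s] = load.get(s, 0) + 1
            pending.extend(reqs[capacity:])     # the rest retry next round
    return rounds, max(load.values())

# Hypothetical instance: 100 clients and 100 servers, each client sees 20 servers.
rng = random.Random(1)
nbrs = {c: rng.sample(range(100), 20) for c in range(100)}
print(parallel_balance(nbrs, d=2))
```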

    Online Algorithms for Geographical Load Balancing

    It has recently been proposed that Internet energy costs, both monetary and environmental, can be reduced by exploiting temporal variations and shifting processing to data centers located in regions where energy currently has low cost. Lightly loaded data centers can then turn off surplus servers. This paper studies online algorithms for determining the number of servers to leave on in each data center, and then uses these algorithms to study the environmental potential of geographical load balancing (GLB). A commonly suggested algorithm for this setting is “receding horizon control” (RHC), which computes the provisioning for the current time by optimizing over a window of predicted future loads. We show that RHC performs well in a homogeneous setting, in which all servers can serve all jobs equally well; however, we also prove that differences in propagation delays, servers, and electricity prices can cause RHC to perform badly, so we introduce variants of RHC that are guaranteed to perform well in the face of such heterogeneity. These algorithms are then used to study the feasibility of powering a continent-wide set of data centers mostly by renewable sources, and to understand what portfolio of renewable energy is most effective.
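
    A toy sketch of the receding horizon control idea described above: at each time step, optimize the number of active servers over a short window of predicted load and commit only the first decision. The cost model (linear energy cost plus a cost for switching servers on) and all names are illustrative assumptions, not the paper's formulation or its RHC variants.

```python
import itertools

def rhc_step(prev_on, horizon, cap, energy, switch, max_servers):
    """Optimize the active-server counts over the prediction horizon and
    return only the first decision (the receding-horizon principle)."""
    best_plan, best_cost = None, float('inf')
    # brute force over all server-count trajectories within the horizon
    for plan in itertools.product(range(max_servers + 1), repeat=len(horizon)):
        if any(m * cap < load for m, load in zip(plan, horizon)):
            continue                    # infeasible: predicted load exceeds capacity
        cost, last = 0.0, prev_on
        for m in plan:
            cost += energy * m + switch * max(0, m - last)
            last = m
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    if best_plan is None:               # no feasible plan: run everything
        return max_servers
    return best_plan[0]

def receding_horizon(predicted_load, window, cap, energy, switch, max_servers):
    plan, prev = [], 0
    for t in range(len(predicted_load)):
        m = rhc_step(prev, predicted_load[t:t + window], cap, energy, switch, max_servers)
        plan.append(m)                  # commit only the current step's decision
        prev = m
    return plan

# Hypothetical load trace; capacity of 10 jobs per server.
loads = [10, 30, 80, 120, 90, 40, 20]
print(receding_horizon(loads, window=3, cap=10, energy=1.0, switch=5.0, max_servers=12))
```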