
    Delay, memory, and messaging tradeoffs in distributed service systems

    We consider the following distributed service model: jobs with unit mean, exponentially distributed, and independent processing times arrive as a Poisson process of rate $\lambda n$, with $0<\lambda<1$, and are immediately dispatched by a centralized dispatcher to one of $n$ First-In-First-Out queues associated with $n$ identical servers. The dispatcher is endowed with a finite memory, and with the ability to exchange messages with the servers. We propose and study a resource-constrained "pull-based" dispatching policy that involves two parameters: (i) the number of memory bits available at the dispatcher, and (ii) the average rate at which servers communicate with the dispatcher. We establish (using a fluid limit approach) that the asymptotic, as $n\to\infty$, expected queueing delay is zero when either (i) the number of memory bits grows logarithmically with $n$ and the message rate grows superlinearly with $n$, or (ii) the number of memory bits grows superlogarithmically with $n$ and the message rate is at least $\lambda n$. Furthermore, when the number of memory bits grows only logarithmically with $n$ and the message rate is proportional to $n$, we obtain a closed-form expression for the (now positive) asymptotic delay. Finally, we demonstrate an interesting phase transition in the resource-constrained regime where the asymptotic delay is non-zero. In particular, we show that for any given $\alpha>0$ (no matter how small), if our policy only uses a linear message rate $\alpha n$, the resulting asymptotic delay is upper bounded, uniformly over all $\lambda<1$; this is in sharp contrast to the delay obtained when no messages are used ($\alpha = 0$), which grows as $1/(1-\lambda)$ when $\lambda\uparrow 1$, or when the popular power-of-$d$-choices policy is used, in which case the delay grows as $\log(1/(1-\lambda))$.
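
    To make the policy description concrete, the following is a minimal discrete-event simulation sketch of one plausible reading of such a pull-based scheme: a server sends one message to the dispatcher when it becomes idle, the dispatcher stores at most memory_size idle-server IDs, and an arriving job is routed to a remembered idle server when one exists and to a uniformly random server otherwise. This is an illustration under stated assumptions, not the paper's exact construction; the function name simulate_pull_based, the parameter memory_size, and the initial memory contents are choices made for this example.

import heapq
import random

def simulate_pull_based(n=500, lam=0.9, memory_size=10, num_jobs=200_000, seed=0):
    """Estimate the mean queueing delay of a simple pull-based dispatching policy.

    Jobs arrive as a Poisson process of rate lam*n with exponential(1) service
    times. A server that becomes idle sends one message to the dispatcher,
    which remembers at most `memory_size` idle-server IDs. Arrivals go to a
    remembered idle server when possible, otherwise to a uniformly random
    server; each server runs a FIFO queue.
    """
    rng = random.Random(seed)
    queue = [0] * n                                        # jobs present at each server
    memory = set(rng.sample(range(n), min(memory_size, n)))  # all servers start idle
    events = [(rng.expovariate(lam * n), "arrival", -1)]     # (time, kind, server)
    arrivals, total_ahead = 0, 0.0

    while arrivals < num_jobs:
        t, kind, server = heapq.heappop(events)
        if kind == "arrival":
            arrivals += 1
            heapq.heappush(events, (t + rng.expovariate(lam * n), "arrival", -1))
            dest = memory.pop() if memory else rng.randrange(n)
            # With exponential(1) services and FIFO order, the expected wait of
            # this job equals the number of jobs already present (memorylessness).
            total_ahead += queue[dest]
            queue[dest] += 1
            if queue[dest] == 1:                           # server was idle: start service
                heapq.heappush(events, (t + rng.expovariate(1.0), "departure", dest))
        else:                                              # departure
            queue[server] -= 1
            if queue[server] > 0:                          # next job in FIFO order
                heapq.heappush(events, (t + rng.expovariate(1.0), "departure", server))
            elif len(memory) < memory_size:                # now idle: message the dispatcher
                memory.add(server)

    return total_ahead / arrivals                          # estimated mean queueing delay

if __name__ == "__main__":
    for mem in (0, 10, 100):
        print(mem, round(simulate_pull_based(memory_size=mem), 3))

    With memory_size=0 the sketch degenerates to uniform random routing, i.e. $n$ independent M/M/1 queues, whose mean queueing delay is $\lambda/(1-\lambda)$ (about 9 at $\lambda=0.9$), which is the $1/(1-\lambda)$ baseline mentioned in the abstract; with even a modest memory the estimated delay drops sharply.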