
    RL-QN: A Reinforcement Learning Framework for Optimal Control of Queueing Systems

    With the rapid advance of information technology, network systems have become increasingly complex, and hence the underlying system dynamics are often unknown or difficult to characterize. Finding a good network control policy is of significant importance for achieving desirable network performance (e.g., high throughput or low delay). In this work, we consider using model-based reinforcement learning (RL) to learn the optimal control policy for queueing networks so that the average job delay (or equivalently the average queue backlog) is minimized. Traditional approaches in RL, however, cannot handle the unbounded state spaces of the network control problem. To overcome this difficulty, we propose a new algorithm, called Reinforcement Learning for Queueing Networks (RL-QN), which applies model-based RL methods over a finite subset of the state space while applying a known stabilizing policy for the rest of the states. We establish that the average queue backlog under RL-QN with an appropriately constructed subset can be arbitrarily close to the optimal result. We evaluate RL-QN in dynamic server allocation, routing and switching problems. Simulation results show that RL-QN effectively minimizes the average queue backlog.
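    A minimal sketch of the policy-switching idea described in this abstract, assuming a simple serve-the-longest-queue rule as the known stabilizing policy; the names threshold, rl_policy, and stabilizing_policy are illustrative and not taken from the paper.

    import numpy as np

    def stabilizing_policy(queues):
        # Known stabilizing rule assumed here: serve the longest queue.
        return int(np.argmax(queues))

    def rl_qn_action(queues, rl_policy, threshold):
        # Use the learned (model-based RL) policy only while the state stays
        # inside a finite subset of the state space; otherwise fall back to
        # the stabilizing policy so the queues remain bounded.
        if all(q <= threshold for q in queues):
            return rl_policy(tuple(queues))
        return stabilizing_policy(queues)

    # Example with a trivial stand-in for the learned policy:
    toy_rl_policy = lambda state: int(np.argmin(state))
    print(rl_qn_action([3, 7, 1], toy_rl_policy, threshold=10))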

    On deciding stability of multiclass queueing networks under buffer priority scheduling policies

    One of the basic properties of a queueing network is stability. Roughly speaking, it is the property that the total number of jobs in the network remains bounded as a function of time. A key question related to the stability issue is how to determine the exact conditions under which a given queueing network, operating under a given scheduling policy, remains stable. While there was much initial progress in addressing this question, most of the results obtained were partial at best, and a complete characterization of stable queueing networks is still lacking. In this paper, we resolve this open problem, albeit in a somewhat unexpected way. We show that characterizing stable queueing networks is an algorithmically undecidable problem for the case of nonpreemptive static buffer priority scheduling policies and deterministic interarrival and service times. Thus, no constructive characterization of stable queueing networks operating under this class of policies is possible. The result is established for queueing networks with finite and infinite buffer sizes and possibly zero service times, although we conjecture that it also holds in the case of models with only infinite buffers and nonzero service times. Our approach extends an earlier related work [Math. Oper. Res. 27 (2002) 272-293] and uses the so-called counter machine device as a reduction tool. Comment: Published at http://dx.doi.org/10.1214/09-AAP597 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
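    To make the stability property concrete, here is a small slotted-time simulation of a two-station, four-buffer re-entrant line with deterministic unit service times under a nonpreemptive static buffer priority policy. The routing, priorities, and interarrival time are illustrative, and a finite-horizon run can at best suggest, never decide, whether the total job count stays bounded; deciding this in general is exactly what the paper shows to be impossible.

    def simulate(priorities, interarrival=3, horizon=10_000):
        # Buffer k is served at station k % 2; jobs visit buffers 0 -> 1 -> 2 -> 3 -> out.
        buffers = [0, 0, 0, 0]
        max_total = 0
        for t in range(horizon):
            if t % interarrival == 0:          # deterministic external arrivals to buffer 0
                buffers[0] += 1
            for station in (0, 1):
                # Serve the highest-priority nonempty buffer at this station.
                local = [k for k in range(4) if k % 2 == station and buffers[k] > 0]
                if local:
                    k = min(local, key=lambda b: priorities[b])
                    buffers[k] -= 1
                    if k + 1 < 4:
                        buffers[k + 1] += 1    # job moves to its next processing step
            max_total = max(max_total, sum(buffers))
        return max_total

    # Lower number = higher priority (illustrative static buffer priorities).
    print(simulate(priorities=[0, 1, 1, 0]))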

    When Backpressure Meets Predictive Scheduling

    Motivated by the increasing popularity of learning and predicting human user behavior in communication and computing systems, in this paper we investigate the fundamental benefit of predictive scheduling, i.e., predicting and pre-serving arrivals, in controlled queueing systems. Based on a lookahead window prediction model, we first establish a novel equivalence between the predictive queueing system with a fully-efficient scheduling scheme and a queueing system without prediction. This connection allows us to analytically demonstrate that predictive scheduling necessarily improves system delay performance and can drive it to zero with increasing prediction power. We then propose the Predictive Backpressure (PBP) algorithm for achieving optimal utility performance in such predictive systems. PBP efficiently incorporates prediction into stochastic system control and avoids the great complication due to the exponential growth of the state space in the prediction window size. We show that PBP can achieve a utility performance that is within O(ε) of the optimal, for any ε > 0, while guaranteeing that the system delay distribution is a shifted-to-the-left version of that under the original Backpressure algorithm. Hence, the average packet delay under PBP is strictly better than that under Backpressure, and vanishes with increasing prediction window size. This implies that the resulting utility-delay tradeoff with predictive scheduling beats the known optimal [O(ε), O(log(1/ε))] tradeoff for systems without prediction.
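    A minimal sketch, under assumed perfect prediction, of the pre-serving idea behind this result: with a fixed lookahead window, spare service capacity is used to serve predicted future arrivals early so that those packets never queue. This only illustrates the prediction model, not the PBP algorithm itself; all names and numbers are made up.

    def serve_with_prediction(arrivals, lookahead, capacity=1):
        # arrivals[t] = packets arriving at slot t, assumed perfectly predicted
        # within the lookahead window. Returns the per-slot backlog trace.
        arrivals = list(arrivals)
        backlog_trace, backlog = [], 0
        for t in range(len(arrivals)):
            backlog += arrivals[t]            # packets not pre-served join the queue
            served = min(capacity, backlog)   # serve the real backlog first
            backlog -= served
            future = t + 1
            # Use leftover capacity to pre-serve predicted arrivals in the window.
            while served < capacity and future <= t + lookahead and future < len(arrivals):
                if arrivals[future] > 0:
                    arrivals[future] -= 1     # this packet is pre-served and never queues
                    served += 1
                else:
                    future += 1
            backlog_trace.append(backlog)
        return backlog_trace

    # The backlog trace with lookahead 2 is shifted toward zero relative to lookahead 0:
    print(serve_with_prediction([0, 3, 0, 0, 2, 0, 0], lookahead=2))
    print(serve_with_prediction([0, 3, 0, 0, 2, 0, 0], lookahead=0))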

    The Network Effects of Prefetching

    Prefetching has been shown to be an effective technique for reducing user-perceived latency in distributed systems. In this paper we show that even when prefetching adds no extra traffic to the network, it can have serious negative performance effects. Straightforward approaches to prefetching increase the burstiness of individual sources, leading to increased average queue sizes in network switches. However, we also show that applications can avoid the undesirable queueing effects of prefetching. In fact, we show that applications employing prefetching can significantly improve network performance, to a level much better than that obtained without any prefetching at all. This is because prefetching offers increased opportunities for traffic shaping that are not available in the absence of prefetching. Using a simple transport rate control mechanism, a prefetching application can modify its behavior from a distinctly ON/OFF entity to one whose data transfer rate changes less abruptly, while still delivering all data in advance of the user's actual requests.
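    A minimal sketch, with made-up block sizes and request times, of the traffic-shaping opportunity described above: the same prefetched data can be sent as a single ON/OFF burst or paced evenly ahead of each request, and the paced schedule has a far lower peak rate while still delivering every block before it is needed.

    def burst_schedule(blocks, deadlines, horizon):
        # Naive prefetching: fetch every block immediately (ON/OFF behavior).
        rate = [0.0] * horizon
        rate[0] = float(sum(blocks))
        return rate

    def paced_schedule(blocks, deadlines, horizon):
        # Shaped prefetching: spread each block evenly over the slots before its
        # deadline, so the send rate changes much less abruptly.
        rate = [0.0] * horizon
        start = 0
        for size, deadline in zip(blocks, deadlines):
            for t in range(start, deadline):
                rate[t] += size / (deadline - start)
            start = deadline
        return rate

    blocks, deadlines = [10, 10, 10], [4, 8, 12]
    print(max(burst_schedule(blocks, deadlines, 12)))   # peak rate 30.0
    print(max(paced_schedule(blocks, deadlines, 12)))   # peak rate 2.5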