
    State space collapse and diffusion approximation for a network operating under a fair bandwidth sharing policy

    We consider a connection-level model of Internet congestion control, introduced by Massouli\'{e} and Roberts [Telecommunication Systems 15 (2000) 185--201], that represents the randomly varying number of flows present in a network. Here, bandwidth is shared fairly among elastic document transfers according to a weighted $\alpha$-fair bandwidth sharing policy introduced by Mo and Walrand [IEEE/ACM Transactions on Networking 8 (2000) 556--567], with $\alpha\in(0,\infty)$. Assuming Poisson arrivals and exponentially distributed document sizes, we focus on the heavy traffic regime in which the average load placed on each resource is approximately equal to its capacity. A fluid model (or functional law of large numbers approximation) for this stochastic model was derived and analyzed in a prior work [Ann. Appl. Probab. 14 (2004) 1055--1083] by two of the authors. Here, we use the long-time behavior of the solutions of the fluid model established in that paper to derive a property called multiplicative state space collapse, which, loosely speaking, shows that in diffusion scale, the flow count process for the stochastic model can be approximately recovered as a continuous lifting of the workload process. Comment: Published in the Annals of Applied Probability (http://www.imstat.org/aap/, http://dx.doi.org/10.1214/08-AAP591) by the Institute of Mathematical Statistics (http://www.imstat.org).
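
    For reference, the weighted $\alpha$-fair allocation of Mo and Walrand is usually stated in this connection-level setting as follows (the notation here is the standard one, not necessarily the paper's): given $n_i$ flows on each route $i$, the network allocates route rates $\Lambda(n)$ solving
    $$\Lambda(n) \in \arg\max\Big\{ \sum_i \kappa_i\, n_i^{\alpha}\, \frac{\Lambda_i^{1-\alpha}}{1-\alpha} \;:\; \sum_i A_{ji}\Lambda_i \le C_j \ \text{for every link } j \Big\},$$
    with the objective replaced by $\sum_i \kappa_i n_i \log \Lambda_i$ when $\alpha=1$; here $\kappa_i$ are the weights, $A$ is the link-route incidence matrix, and $C_j$ is the capacity of link $j$. Proportional fairness corresponds to $\alpha=1$ and max-min fairness to the limit $\alpha\to\infty$.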

    Store-Forward and its implications for Proportional Scheduling

    The Proportional Scheduler was recently proposed as a scheduling algorithm for multi-hop switch networks. For these networks, the BackPressure scheduler is the classical benchmark. For networks with fixed routing, the Proportional Scheduler is maximum stable and myopic and, furthermore, alleviates certain scaling issues found in BackPressure for large networks. Nonetheless, the equilibrium and delay properties of the Proportional Scheduler have not been fully characterized. In this article, we postulate the equilibrium behaviour of the Proportional Scheduler through the analysis of an analogous rule called the Store-Forward allocation. It has been shown that Store-Forward asymptotically allocates according to the Proportional Scheduler. Further, for Store-Forward networks, numerous equilibrium quantities are explicitly calculable. For FIFO networks under Store-Forward, we calculate the policy's stationary distribution and end-to-end route delay. We discuss network topologies for which the stationary distribution is product-form, a phenomenon which we call \emph{product form resource pooling}. We extend this product form notion to independent set scheduling on perfect graphs, where we show that non-neighbouring queues are statistically independent. Finally, we analyse the large deviations behaviour of the equilibrium distribution of Store-Forward networks in order to construct Lyapunov functions for FIFO switch networks.
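
    For context, the Proportional Scheduler is commonly described as follows (this restatement and its notation are mine, not quoted from the article): at each instant it serves the queues at rates
    $$\sigma(q) \in \arg\max\Big\{ \sum_j q_j \log \sigma_j \;:\; \sigma \in \langle\mathcal{S}\rangle \Big\},$$
    where $q_j$ is the length of queue $j$ and $\langle\mathcal{S}\rangle$ is the convex hull of the feasible schedules, i.e. a proportionally fair split of service with queue sizes as weights. The Store-Forward allocation analysed in the article can then be read as a product-form queueing counterpart of this optimization, which is what makes its equilibrium quantities explicitly calculable.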

    Counter-intuitive throughput behaviors in networks under end-to-end control

    It has been shown that as long as traffic sources adapt their rates to the aggregate congestion measure in their paths, they implicitly maximize a certain utility. In this paper we study some counter-intuitive throughput behaviors in such networks, pertaining to whether a fair allocation is always inefficient and whether increasing capacity always raises aggregate throughput. A bandwidth allocation policy can be defined in terms of a class of utility functions parameterized by a scalar alpha that can be interpreted as a quantitative measure of fairness. An allocation is fair if alpha is large and efficient if aggregate throughput is large. All examples in the literature suggest that a fair allocation is necessarily inefficient. We characterize exactly the tradeoff between fairness and throughput in general networks. The characterization allows us both to produce the first counter-example and to explain trivially all the previous supporting examples. Surprisingly, our counter-example has the property that a fairer allocation is always more efficient. In particular it implies that max-min fairness can achieve a higher throughput than proportional fairness. Intuitively, we might expect that increasing link capacities always raises aggregate throughput. We show that not only can throughput be reduced when some link increases its capacity, more strikingly, it can also be reduced when all links increase their capacities by the same amount. If all links increase their capacities proportionally, however, throughput will indeed increase. These examples demonstrate the intricate interactions among sources in a network setting that are missing in a single-link topology.
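
    As a concrete illustration of the fairness-throughput tradeoff, the sketch below works through the standard two-link linear network with unit capacities (a textbook example under assumed parameters, not one of the paper's counter-examples): one long flow crosses both links and one short flow uses each link. Here aggregate throughput decreases as alpha grows, which is exactly the conventional direction that the paper's counter-example shows can be reversed.

        # Illustrative sketch, not the paper's counter-example: alpha-fair sharing
        # on the two-link linear network with unit link capacities, one "long"
        # flow x0 using both links and one "short" flow on each link.
        # Maximizing sum_i x_i^(1-alpha)/(1-alpha) subject to x0+x1 <= 1 and
        # x0+x2 <= 1 gives, by symmetry and the first-order conditions,
        #   x0 = 1/(1 + 2**(1/alpha)),   x1 = x2 = 1 - x0.

        def alpha_fair_linear_network(alpha: float):
            """Closed-form alpha-fair rates for the two-link linear network."""
            x_long = 1.0 / (1.0 + 2.0 ** (1.0 / alpha))
            x_short = 1.0 - x_long
            return x_long, x_short

        if __name__ == "__main__":
            for alpha in (0.5, 1.0, 2.0, 5.0, 50.0):
                x_long, x_short = alpha_fair_linear_network(alpha)
                total = x_long + 2 * x_short
                print(f"alpha={alpha:5.1f}  long flow={x_long:.3f}  "
                      f"short flows={x_short:.3f}  aggregate throughput={total:.3f}")

    In this example the aggregate throughput falls from 5/3 at alpha = 1 (proportional fairness) towards 3/2 as alpha grows large (max-min fairness).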

    Fluid model for a network operating under a fair bandwidth-sharing policy

    We consider a model of Internet congestion control that represents the randomly varying number of flows present in a network where bandwidth is shared fairly between document transfers. We study critical fluid models obtained as formal limits under law of large numbers scalings when the average load on at least one resource is equal to its capacity. We establish convergence to equilibria for fluid models and identify the invariant manifold. The form of the invariant manifold gives insight into the phenomenon of entrainment whereby congestion at some resources may prevent other resources from working at their full capacity.
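
    For orientation, the fluid model in this line of work is typically written in the form (restated here in the common notation rather than quoted from the paper; boundary behaviour requires care)
    $$\frac{d}{dt}\, n_i(t) = \nu_i - \mu_i \, \Lambda_i(n(t)) \qquad \text{whenever } n_i(t) > 0,$$
    where $n_i(t)$ is the fluid number of flows on route $i$, $\nu_i$ the arrival rate of flows on that route, $1/\mu_i$ the mean document size, and $\Lambda(n)$ the fair bandwidth allocation; criticality means that the load $\rho_j = \sum_i A_{ji}\,\nu_i/\mu_i$ equals the capacity $C_j$ for at least one resource $j$.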

    A New Stable Peer-to-Peer Protocol with Non-persistent Peers

    Recent studies have suggested that the stability of peer-to-peer networks may rely on persistent peers, who dwell on the network after they obtain the entire file. In the absence of such peers, one piece becomes extremely rare in the network, which leads to instability. Technological developments, however, are poised to reduce the incidence of persistent peers, giving rise to a need for a protocol that guarantees stability with non-persistent peers. We propose a novel peer-to-peer protocol, the group suppression protocol, to ensure the stability of peer-to-peer networks in the scenario where all peers adopt non-persistent behavior. Using a suitable Lyapunov potential function, the group suppression protocol is proven to be stable when the file is broken into two pieces, and detailed experiments demonstrate the stability of the protocol for an arbitrary number of pieces. We define and simulate a decentralized version of this protocol for practical applications. Straightforward incorporation of the group suppression protocol into BitTorrent while retaining most of BitTorrent's core mechanisms is also presented. Subsequent simulations show that under certain assumptions, BitTorrent with the official protocol cannot escape from the missing piece syndrome, but BitTorrent with group suppression does. Comment: There are only a couple of minor changes in this version. The simulation tool is specified this time. Some repetitive figures have been removed.
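
    To make the instability mechanism concrete, the following toy simulation (entirely my own simplified setup with two pieces, one seed, uniformly random contacts, and illustrative rates LAMBDA, U, MU; it does not implement the group suppression protocol) tracks how many non-persistent peers hold only one of the two pieces. With the arrival rate above the seed's upload rate, the count of peers stuck missing the same piece tends to grow, which is the missing piece syndrome described above.

        import random

        LAMBDA, U, MU = 1.0, 0.4, 1.0   # assumed arrival, seed-upload and contact rates
        T_END = 2000.0                   # simulated time horizon

        def pick_type(rng, c0, c1, c2):
            """Pick a peer class (0: only piece 0, 1: only piece 1, 2: no piece)
            with probability proportional to the class counts."""
            return rng.choices([0, 1, 2], weights=[c0, c1, c2])[0]

        def simulate(seed=0):
            rng = random.Random(seed)
            n0 = n1 = ne = 0            # peers with only piece 0 / only piece 1 / nothing
            t, next_report = 0.0, 0.0
            while t < T_END:
                n = n0 + n1 + ne
                rates = [LAMBDA,                    # a new empty peer arrives
                         U if n > 0 else 0.0,       # the seed uploads to a random peer
                         MU * n if n > 1 else 0.0]  # some peer contacts another peer
                t += rng.expovariate(sum(rates))
                event = rng.choices([0, 1, 2], weights=rates)[0]
                if event == 0:
                    ne += 1
                elif event == 1:
                    # The seed gives a uniformly chosen peer one piece it lacks;
                    # a peer that completes the file departs at once (non-persistent).
                    r = rng.randrange(n)
                    if r < ne:
                        ne -= 1
                        n0, n1 = (n0 + 1, n1) if rng.random() < 0.5 else (n0, n1 + 1)
                    elif r < ne + n0:
                        n0 -= 1                     # received piece 1, done, departs
                    else:
                        n1 -= 1                     # received piece 0, done, departs
                else:
                    # A random peer downloads a useful piece (if any) from another
                    # uniformly chosen peer.
                    i = pick_type(rng, n0, n1, ne)
                    j = pick_type(rng, n0 - (i == 0), n1 - (i == 1), ne - (i == 2))
                    if i == 2 and j == 0:   ne -= 1; n0 += 1
                    elif i == 2 and j == 1: ne -= 1; n1 += 1
                    elif i == 0 and j == 1: n0 -= 1   # completes and departs
                    elif i == 1 and j == 0: n1 -= 1   # completes and departs
                if t >= next_report:
                    print(f"t={t:7.1f}  only piece 0: {n0:5d}  "
                          f"only piece 1: {n1:5d}  empty: {ne:4d}")
                    next_report += 200.0

        if __name__ == "__main__":
            simulate()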