
    Link Buffer Sizing: a New Look at the Old Problem

    In this paper, we revisit the question of how much buffer an IP router should allocate for its output link. For a long time, the intuitive answer of setting the buffer size to the bitrate-delay product was widely regarded as reasonable. Recent studies of the interaction between queueing at IP routers and TCP congestion control have proposed alternative answers. First, we expose and explain contradictions between existing guidelines for link buffer sizing. Then, we argue that the problem of link buffer sizing needs a different formulation. In particular, the chosen buffer size should accommodate not only common versions of TCP but also UDP traffic. In addition, our new formulation of the problem contains an explicit constraint of not engaging IP routers in any additional signaling. We conclude the paper by outlining a promising direction for solving the reformulated problem.
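    The tension between the guidelines can be made concrete with a back-of-the-envelope calculation. The sketch below compares the classical bitrate-delay-product rule with the small-buffer rule B = C·RTT/√N advanced by later studies for many desynchronized long-lived TCP flows; the link parameters are illustrative assumptions, not values from the paper.

    ```python
    # Sketch: classical bitrate-delay-product buffer rule vs. the
    # small-buffer rule B = C * RTT / sqrt(N) for N long-lived flows.
    # All link parameters below are illustrative assumptions.
    import math

    def bdp_buffer(link_bps: float, rtt_s: float) -> float:
        """Classical rule: buffer = bitrate * round-trip time (bits)."""
        return link_bps * rtt_s

    def small_buffer(link_bps: float, rtt_s: float, n_flows: int) -> float:
        """Alternative rule for many desynchronized TCP flows (bits)."""
        return link_bps * rtt_s / math.sqrt(n_flows)

    link = 10e9      # 10 Gbit/s output link (assumed)
    rtt = 0.1        # 100 ms round-trip time (assumed)
    flows = 10_000   # long-lived TCP flows sharing the link (assumed)

    print(f"BDP buffer:   {bdp_buffer(link, rtt) / 8e6:.2f} MB")
    print(f"Small buffer: {small_buffer(link, rtt, flows) / 8e6:.2f} MB")
    ```

    The two rules differ by two orders of magnitude here, which illustrates why the paper finds the existing guidelines contradictory.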

    Due-date setting and priority sequencing in a multiclass M/G/1 queue

    By Lawrence M. Wein. Includes bibliographical references (leaves 26-28).

    Predicting the System Performance by Combining Calibrated Performance Models of its Components : A Preliminary Study

    In this paper we consider the problem of combining calibrated performance models of system components in order to predict overall system performance. We focus on open workload system models, in which, under certain conditions, obtaining and validating the overall system performance measures can be a simple application of Little’s law. We discuss the conditions of applicability of such a simple validation methodology, including examples of successful application, as well as examples where this approach fails. Additionally, we propose to analyze the deviations between the model predictions and system measurements, so as to decide if they correspond to “measurement noise” or if an important system component has not been correctly represented. This approach can be used as an aid in the design of validated system performance models.
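    The validation step the abstract describes can be illustrated with Little's law, L = λW (mean number in system equals arrival rate times mean response time). The sketch below checks a measured mean population against the model's prediction; the numbers and the tolerance threshold are assumed for the example.

    ```python
    # Sketch: validating a component model with Little's law (L = lambda * W).
    # The figures and tolerance are illustrative assumptions, not measurements
    # from the paper.
    def littles_law_check(arrival_rate: float,
                          mean_response_time: float,
                          measured_in_system: float,
                          tol: float = 0.1) -> bool:
        """True if the measured mean population agrees with L = lambda * W
        to within a relative tolerance `tol` (treat larger deviations as a
        possible modelling error rather than measurement noise)."""
        predicted = arrival_rate * mean_response_time
        return abs(predicted - measured_in_system) <= tol * predicted

    # Example: 50 requests/s with 0.2 s mean response time predicts
    # L = 10 requests in the system on average.
    print(littles_law_check(50.0, 0.2, 10.3))   # small deviation: noise
    print(littles_law_check(50.0, 0.2, 20.0))   # large deviation: investigate
    ```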

    An approximation approach for the deviation matrix of continuous-time Markov processes with application to Markov decision theory

    We present an update formula that allows the expression of the deviation matrix of a continuous-time Markov process with denumerable state space having generator matrix Q* through a continuous-time Markov process with generator matrix Q. We show that under suitable stability conditions the algorithm converges at a geometric rate. By applying the concept to three different examples, namely, the M/M/1 queue with vacations, the M/G/1 queue, and a tandem network, we illustrate the broad applicability of our approach. For a problem in admission control, we apply our approximation algorithm to Markov decision theory for computing the optimal control policy. Numerical examples are presented to highlight the efficiency of the proposed algorithm. © 2010 INFORMS
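    For a finite ergodic chain the object of study is available in closed form, which makes the setting concrete: with stationary matrix Π (all rows equal to π), the deviation matrix D = ∫₀^∞ (e^{Qt} − Π) dt satisfies D = (Π − Q)⁻¹ − Π, with QD = DQ = Π − I. The sketch below computes it for a 2-state chain; the generator is an assumed toy, not one of the paper's denumerable-state models, where the approximation scheme is actually needed.

    ```python
    # Sketch: deviation matrix of a small ergodic continuous-time Markov chain.
    # For a finite generator Q with stationary matrix Pi (identical rows pi),
    # D = (Pi - Q)^{-1} - Pi, and it satisfies Q @ D = D @ Q = Pi - I.
    # The 2-state rates below are assumed for illustration.
    import numpy as np

    def deviation_matrix(Q: np.ndarray) -> np.ndarray:
        n = Q.shape[0]
        # Solve pi Q = 0 with pi summing to 1 for the stationary distribution.
        A = np.vstack([Q.T, np.ones(n)])
        b = np.zeros(n + 1)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        Pi = np.tile(pi, (n, 1))
        return np.linalg.inv(Pi - Q) - Pi

    Q = np.array([[-2.0,  2.0],
                  [ 3.0, -3.0]])   # assumed transition rates
    D = deviation_matrix(Q)
    print(D)   # rows of D sum to zero, and Q @ D equals Pi - I
    ```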

    Adaptive Replication in Distributed Content Delivery Networks

    We address the problem of content replication in large distributed content delivery networks, composed of a data center assisted by many small servers with limited capabilities located at the edge of the network. The objective is to optimize the placement of contents on the servers so as to offload the data center as much as possible. We model the system constituted by the small servers as a loss network, each loss corresponding to a request to the data center. Based on large system / storage behavior, we obtain an asymptotic formula for the optimal replication of contents and propose adaptive schemes related to those encountered in cache networks but reacting here to loss events, together with faster algorithms that generate virtual events at a higher rate while keeping the same target replication. We show through simulations that our adaptive schemes significantly outperform standard replication strategies both in terms of loss rates and adaptation speed.
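    A toy version of a loss-reactive replication scheme can be sketched as follows. The service model, popularity weights, and eviction rule below are illustrative assumptions, not the paper's actual algorithm: on each loss for a content, one replica of that content is added and one replica of another content is evicted, keeping total edge storage fixed.

    ```python
    # Toy sketch of loss-driven replication (illustrative assumptions
    # throughout; not the paper's algorithm). On a loss for content c,
    # add a replica of c and evict a replica of another content, so the
    # total number of replica slots at the edge stays fixed.
    import random

    random.seed(1)
    CONTENTS, TOTAL_SLOTS, REQUESTS = 50, 200, 20_000
    replicas = {c: TOTAL_SLOTS // CONTENTS for c in range(CONTENTS)}
    # Zipf-like popularity (assumed): content c is requested with weight 1/(c+1).
    weights = [1.0 / (c + 1) for c in range(CONTENTS)]

    losses = 0
    for _ in range(REQUESTS):
        c = random.choices(range(CONTENTS), weights=weights)[0]
        # Crude loss model (assumed): a request misses the edge servers with
        # probability shrinking in the number of replicas of the content.
        if random.random() < 1.0 / (1 + replicas[c]):
            losses += 1
            candidates = [k for k, v in replicas.items() if v > 0 and k != c]
            if candidates:
                replicas[random.choice(candidates)] -= 1
                replicas[c] += 1

    print("loss rate:", losses / REQUESTS)
    print("replicas of most popular content:", replicas[0])
    ```

    Over time, popular contents accumulate replicas because they generate more loss events, which is the qualitative behavior the adaptive schemes exploit.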

    Analysis of Error Control and Congestion Control Protocols

    This thesis presents an analysis of a class of error control and congestion control protocols used in computer networks. We address two kinds of packet errors: (a) independent errors and (b) congestion-dependent errors. Our performance measures are the expected time and the standard deviation of the time to transmit a large message consisting of N packets. The analysis of error control protocols assuming independent packet errors gives insight into how the error control protocols should really work when buffer overflows are minimal. Some pertinent results on the performance of go-back-n, selective repeat, blast with full retransmission on error (BFRE), and a variant of BFRE, the Optimal BFRE that we propose, are obtained. We then analyze error control protocols in the presence of congestion-dependent errors. We study the selective repeat and go-back-n protocols and find that, irrespective of retransmission strategy, the expected time as well as the standard deviation of the time to transmit N packets increases sharply in the face of heavy congestion. However, if the congestion level is low, the two retransmission strategies perform similarly. We conclude that congestion control is a far more important issue when errors are caused by congestion. We next study the performance of a queue with dynamically changing input rates that are based on implicit or explicit feedback. This is motivated by recent proposals for adaptive congestion control algorithms in which the sender's window size is adjusted based on the perceived congestion level of a bottleneck node. We develop a Fokker-Planck approximation for a simplified system; yet it is powerful enough to answer the important questions regarding stability, convergence (or oscillations), fairness, and the significant effect that delayed feedback has on performance. Specifically, we find that, in the absence of feedback delay, a linear increase/exponential decrease rate control algorithm is provably stable and fair. Delayed feedback, however, introduces cyclic behavior. This last result not only concurs with some recent simulation studies, it also expounds quantitatively on the real causes behind them.
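    The qualitative finding above, near-stable behavior without feedback delay versus pronounced cycles with it, can be reproduced with a deterministic toy model of linear increase/exponential decrease rate control. The capacity, step size, decrease factor, and delay below are assumed parameters, not values from the thesis.

    ```python
    # Toy model of linear-increase / exponential-decrease rate control with
    # delayed congestion feedback. All parameters are illustrative assumptions.
    def simulate(delay: int, steps: int = 400) -> list[float]:
        capacity, a, b = 10.0, 0.5, 0.7   # bottleneck rate, additive step, decrease factor
        rate, history = 1.0, []
        for t in range(steps):
            history.append(rate)
            # The sender observes the congestion state from `delay` steps ago.
            observed = history[max(0, t - delay)]
            rate = rate * b if observed > capacity else rate + a
        return history

    no_delay = simulate(delay=0)
    delayed = simulate(delay=8)
    # Without delay the rate settles into a narrow band around capacity;
    # with delayed feedback the overshoot grows and the rate cycles widely.
    print("band, no delay:", max(no_delay[200:]) - min(no_delay[200:]))
    print("band, delayed: ", max(delayed[200:]) - min(delayed[200:]))
    ```

    The delayed controller keeps increasing past capacity until stale feedback arrives, then over-corrects through several multiplicative decreases, producing the cyclic behavior the Fokker-Planck analysis predicts.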

    Parallel simulation techniques for telecommunication network modelling

    In this thesis, we consider the application of parallel simulation to the performance modelling of telecommunication networks. A largely automated approach was first explored using a parallelizing compiler to speed up the simulation of simple models of circuit-switched networks. This yielded reasonable results for relatively little effort compared with other approaches. However, more complex simulation models of packet- and cell-based telecommunication networks, requiring the use of discrete event techniques, need an alternative approach. A critical review of parallel discrete event simulation indicated that a distributed model components approach using conservative or optimistic synchronization would be worth exploring. Experiments were therefore conducted using simulation models of queuing networks and Asynchronous Transfer Mode (ATM) networks to explore the potential speed-up possible using this approach. Specifically, it is shown that these techniques can be used successfully to speed up the execution of useful telecommunication network simulations. A detailed investigation has demonstrated that conservative synchronization performs very well for applications with good lookahead properties and sufficient message traffic density and, given such properties, will significantly outperform optimistic synchronization. Optimistic synchronization, however, gives reasonable speed-up for models with a wider range of such properties and can be optimized for speed-up and memory usage at run time. Thus, it is confirmed as being more generally applicable, particularly as model development is somewhat easier than for conservative synchronization. This has to be balanced against the more difficult task of developing and debugging an optimistic synchronization kernel and the application models.
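    The conservative approach evaluated above can be illustrated with a minimal null-message sketch: each logical process (LP) may only process its next local event once the peer has promised, via a null message carrying its next event time plus the lookahead, that nothing earlier can arrive. The two-LP structure, pre-scheduled event times, and lookahead value are illustrative assumptions, not the thesis's implementation.

    ```python
    # Minimal sketch of conservative (null-message) synchronization between
    # two logical processes. Structure, event times, and lookahead are
    # illustrative assumptions.
    import math

    LOOKAHEAD = 0.5
    trace = []   # (timestamp, LP name) in the order events are processed

    class LP:
        def __init__(self, name: str, events: list[float]):
            self.name = name
            self.events = sorted(events)  # pre-scheduled local event timestamps
            self.peer_bound = 0.0         # lower bound promised by the peer

        def advance(self, peer: "LP") -> None:
            # Safe to process the next local event only if the peer cannot
            # possibly send anything with an earlier timestamp.
            if self.events and self.events[0] <= self.peer_bound:
                trace.append((self.events.pop(0), self.name))
            # Null message: any future message from this LP carries a
            # timestamp of at least next-local-event + lookahead.
            nxt = self.events[0] if self.events else math.inf
            peer.peer_bound = max(peer.peer_bound, nxt + LOOKAHEAD)

    a = LP("A", [1.0, 3.0, 5.0])
    b = LP("B", [2.0, 4.0, 6.0])
    while a.events or b.events:
        a.advance(b)
        b.advance(a)
    print(trace)   # processed in global timestamp order, with no global clock
    ```

    The null messages are what keep the LPs from deadlocking while blocked: good lookahead lets each promise advance further per exchange, which is why lookahead quality dominates conservative performance.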