
    A duality model of TCP and queue management algorithms

    We propose a duality model of end-to-end congestion control and apply it to understanding the equilibrium properties of TCP and active queue management schemes. The basic idea is to regard source rates as primal variables and congestion measures as dual variables, and congestion control as a distributed primal-dual algorithm over the Internet to maximize aggregate utility subject to capacity constraints. The primal iteration is carried out by TCP algorithms such as Reno or Vegas, and the dual iteration is carried out by queue management algorithms such as DropTail, RED or REM. We present these algorithms and their generalizations, derive their utility functions, and study their interaction.
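
    To make the primal-dual picture concrete, here is a minimal numerical sketch of such an iteration, assuming illustrative log utilities, a two-link/three-source topology, and step sizes chosen purely for demonstration (none of these values come from the paper):

    ```python
    import numpy as np

    # Sketch of a distributed primal-dual congestion control iteration:
    # sources adjust rates (primal), links adjust congestion prices (dual).
    # Topology, utilities U_i(x) = w_i * log(x), and step size are illustrative.
    R = np.array([[1, 1, 0],
                  [0, 1, 1]])            # routing: R[l, i] = 1 if source i uses link l
    c = np.array([10.0, 8.0])            # link capacities
    w = np.array([1.0, 2.0, 1.0])        # utility weights
    p = np.ones(2)                       # link prices (dual variables)
    gamma = 0.05                         # dual step size

    for _ in range(5000):
        q = R.T @ p                                    # path price seen by each source
        x = np.minimum(w / np.maximum(q, 1e-6), 1e3)   # primal: argmax U_i(x) - q_i * x
        y = R @ x                                      # aggregate rate on each link
        p = np.maximum(p + gamma * (y - c), 0.0)       # dual: raise price if overloaded

    print("equilibrium rates:", x.round(2), " link loads:", (R @ x).round(2))
    ```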

    Queue Dynamics With Window Flow Control

    This paper develops a new model that describes the queueing process of a communication network when data sources use window flow control. The model takes into account the burstiness in sub-round-trip time (RTT) timescales and the instantaneous rate differences of a flow at different links. It is generic and independent of actual source flow control algorithms. Basic properties of the model and its relation to existing work are discussed. In particular, for a general network with multiple links, it is demonstrated that spatial interaction of oscillations allows queue instability to occur even when all flows have the same RTTs and maintain constant windows. The model is used to study the dynamics of delay-based congestion control algorithms. It is found that the ratios of RTTs are critical to the stability of such systems, and previously unknown modes of instability are identified. Packet-level simulations and testbed measurements are provided to verify the model and its predictions.
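
    As a toy illustration of window flow control queue dynamics (a single-bottleneck caricature, far simpler than the paper's sub-RTT model), constant windows pin the backlog near the excess of the total window over the bandwidth-delay product:

    ```python
    # Toy single-bottleneck illustration (not the paper's sub-RTT model):
    # constant-window flows ack-clock roughly one window per RTT, and the
    # backlog settles near (total window) - (bandwidth-delay product).
    c = 100.0                     # capacity (packets per unit time)
    d = 0.5                       # round-trip propagation delay
    windows = [20.0, 30.0, 40.0]  # fixed windows, one per flow

    q, dt = 0.0, 0.01
    for _ in range(int(50 / dt)):
        rtt = d + q / c                            # queueing delay inflates the RTT
        arrival = sum(w / rtt for w in windows)    # each flow injects ~window/RTT
        q = max(q + (arrival - c) * dt, 0.0)

    print(f"simulated backlog: {q:.1f}   static prediction: {sum(windows) - c * d:.1f}")
    ```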

    An Improved Link Model for Window Flow Control and Its Application to FAST TCP

    This paper presents a link model which captures the queue dynamics in response to a change in a transmission control protocol (TCP) source's congestion window. By considering both self-clocking and the link integrator effect, the model generalizes existing models and is shown to be more accurate by both open-loop and closed-loop packet-level simulations. It reduces to the known static link model when flows' round trip delays are identical, and approximates the standard integrator link model when there is significant cross traffic. We apply this model to the stability analysis of fast active queue management scalable TCP (FAST TCP) including its filter dynamics. Under this model, the FAST control law is linearly stable for a single bottleneck link with an arbitrary distribution of round trip delays. This result resolves the notable discrepancy between empirical observations and previous theoretical predictions. The analysis highlights the critical role of self-clocking in TCP stability, and the proof technique is new and less conservative than existing ones.
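
    For orientation, here is a sketch of the FAST TCP window update as given in the FAST TCP literature, closed here with the static link model that this paper generalizes; all parameter values are illustrative, not the paper's:

    ```python
    # FAST TCP window iteration (per the published FAST TCP papers), driven
    # through the simple static link model. Parameters are illustrative.
    def fast_update(w, base_rtt, rtt, alpha=50.0, gamma=0.5):
        """w <- (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha)."""
        return (1.0 - gamma) * w + gamma * (base_rtt / rtt * w + alpha)

    c, d = 1000.0, 0.1               # capacity (pkt/s), propagation delay (s)
    w, q = 10.0, 0.0                 # window (pkts), backlog (pkts)
    for _ in range(100):
        rtt = d + q / c
        w = fast_update(w, d, rtt)
        q = max(w - c * d, 0.0)      # static link model: backlog = window - BDP

    print(f"window: {w:.0f} pkts, backlog: {q:.0f} pkts (FAST aims at alpha = 50)")
    ```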

    The cost of conservative synchronization in parallel discrete event simulations

    The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two-node Intel iPSC/2 distributed memory multiprocessor.
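
    A minimal sketch of one synchronous conservative round, assuming the common minimum-next-event-plus-lookahead horizon (the paper's exact protocol details may differ):

    ```python
    import heapq

    # One synchronous conservative round (sketch): all logical processors agree
    # on a safe horizon -- the minimum next-event time plus the lookahead -- and
    # then each processes its events below it before the next barrier.
    def synchronous_round(event_lists, lookahead):
        """event_lists: one heap of (timestamp, event) tuples per logical processor."""
        pending = [h[0][0] for h in event_lists if h]
        if not pending:
            return None                          # simulation finished
        horizon = min(pending) + lookahead       # nothing below this is remotely affected
        for heap in event_lists:                 # in a real run, one loop per processor
            while heap and heap[0][0] < horizon:
                t, ev = heapq.heappop(heap)
                # process ev here; any event it schedules lands at >= t + lookahead
        return horizon
    ```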

    When Backpressure Meets Predictive Scheduling

    Motivated by the increasing popularity of learning and predicting human user behavior in communication and computing systems, in this paper we investigate the fundamental benefit of predictive scheduling, i.e., predicting and pre-serving arrivals, in controlled queueing systems. Based on a lookahead window prediction model, we first establish a novel equivalence between the predictive queueing system with a fully-efficient scheduling scheme and an equivalent queueing system without prediction. This connection allows us to analytically demonstrate that predictive scheduling necessarily improves system delay performance and can drive it to zero with increasing prediction power. We then propose the Predictive Backpressure (PBP) algorithm for achieving optimal utility performance in such predictive systems. PBP efficiently incorporates prediction into stochastic system control and avoids the great complication due to the exponential state space growth in the prediction window size. We show that PBP can achieve a utility performance that is within O(ε) of the optimal, for any ε > 0, while guaranteeing that the system delay distribution is a shifted-to-the-left version of that under the original Backpressure algorithm. Hence, the average packet delay under PBP is strictly better than that under Backpressure, and vanishes with increasing prediction window size. This implies that the resulting utility-delay tradeoff with predictive scheduling beats the known optimal [O(ε), O(log(1/ε))] tradeoff for systems without prediction.
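
    The flavor of the mechanism in a few lines, as a simplified max-weight caricature rather than the paper's exact PBP: arrivals predicted inside the lookahead window are counted into the scheduling weights, so they can be pre-served before they queue up:

    ```python
    # Simplified caricature (not the paper's exact PBP): a max-weight server
    # also weighs arrivals that the lookahead window predicts.
    def pick_queue(backlogs, predicted, service_rates):
        """predicted[i]: arrivals foreseen for queue i within the lookahead window."""
        weight = lambda i: (backlogs[i] + predicted[i]) * service_rates[i]
        return max(range(len(backlogs)), key=weight)

    # Queue 1 is short now, but a predicted burst makes it the one to schedule:
    print(pick_queue(backlogs=[3, 1, 2], predicted=[0, 6, 0], service_rates=[1, 1, 1]))
    ```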

    Resource dimensioning through buffer sampling

    Link dimensioning, i.e., selecting a (minimal) link capacity such that the users’ performance requirements are met, is a crucial component of network design. It requires insight into the interrelationship among the traffic offered (in terms of the mean offered load M, but also its fluctuation around the mean, i.e., ‘burstiness’), the envisioned performance level, and the capacity needed. We first derive, for different performance criteria, theoretical dimensioning formulas that estimate the required capacity c as a function of the input traffic and the performance target. For the special case of Gaussian input traffic, these formulas reduce to c = M + αV, where α directly relates to the performance requirement (as agreed upon in a service level agreement) and V reflects the burstiness (at the timescale of interest). We also observe that Gaussianity applies for virtually all realistic scenarios; notably, already for a relatively low aggregation level, the Gaussianity assumption is justified. As estimating M is relatively straightforward, the remaining open issue concerns the estimation of V. We argue that, particularly if V corresponds to small timescales, it may be inaccurate to estimate it directly from the traffic traces. Therefore, we propose an indirect method that samples the buffer content, estimates the buffer content distribution, and ‘inverts’ this to the variance. We validate the inversion through extensive numerical experiments (using a sizeable collection of traffic traces from various representative locations); the resulting estimate of V is then inserted in the dimensioning formula. These experiments show that both the inversion and the dimensioning formula are remarkably accurate.
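
    A small sketch of the resulting dimensioning step; the mapping from the performance target ε to α below is an assumed Gaussian-style choice for illustration, not necessarily the paper's exact relation:

    ```python
    import math

    # Sketch of the dimensioning step c = M + alpha * V. The mapping from the
    # overflow target epsilon to alpha is an assumption made here for
    # illustration, in the Gaussian spirit of the abstract.
    def required_capacity(M, V, epsilon):
        """M: mean offered load; V: burstiness estimate; epsilon: exceedance target."""
        alpha = math.sqrt(2.0 * math.log(1.0 / epsilon))
        return M + alpha * V

    # E.g. mean load 400 Mbit/s, burstiness estimate 50, exceedance target 1%:
    print(f"required capacity: {required_capacity(400.0, 50.0, 0.01):.0f} Mbit/s")
    ```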

    A methodological approach to BISDN signalling performance

    Sophisticated signalling protocols are required to properly handle the complex multimedia, multiparty services supported by the forthcoming BISDN. The implementation feasibility of these protocols should be evaluated during their design phase, so that possible performance bottlenecks are identified and removed. In this paper we present a methodology for evaluating the performance of BISDN signalling systems under design. New performance parameters are introduced and their network-dependent values are extracted through a message flow model which can describe the impact of call and bearer control separation on signalling performance. Signalling protocols are modelled through a modular decomposition of the seven OSI layers, including the service user, into three submodels. The workload model is user-descriptive in the sense that it does not approximate the direct input traffic required for evaluating the performance of a layer protocol; instead, through a multi-level approach, it describes the actual implications of user signalling activity for the general signalling traffic. The signalling protocol model is derived from the global functional model of the signalling protocols and information flows using a network of queues incorporating synchronization and dependency functions. The same queueing approach is followed for the signalling transfer network, which is used to define processing speed and signalling bandwidth requirements and to identify possible performance bottlenecks stemming from the realization of the related protocols.
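
    As a generic illustration of the bottleneck hunt in such a network-of-queues view (simple M/M/1 stages, not the paper's signalling model):

    ```python
    # Generic illustration (not the paper's model): per-stage utilization and
    # M/M/1-style delay flag the processing bottlenecks such a queueing model
    # is meant to expose.
    def stage_report(arrival_rate, service_rates):
        for name, mu in service_rates.items():
            rho = arrival_rate / mu
            delay = 1.0 / (mu - arrival_rate) if rho < 1 else float("inf")
            print(f"{name}: utilization={rho:.0%}, mean sojourn={delay:.3f}s")

    # E.g. 80 signalling messages/s through three protocol-processing stages:
    stage_report(80.0, {"call control": 200.0, "bearer control": 120.0, "transfer": 90.0})
    ```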

    Robust control tools for traffic monitoring in TCP/AQM networks

    Several studies have considered control theory tools for traffic control in communication networks, for example for the congestion control issue in IP (Internet Protocol) routers. In this paper, we propose to design a linear observer for time-delay systems to address the traffic monitoring issue in TCP/AQM (Transmission Control Protocol/Active Queue Management) networks. Owing to several propagation delays and the queueing delay, the TCP/AQM system is modeled as a multiple time-delay system of a particular form. Hence, appropriate robust control tools such as quadratic separation are adopted to construct a delay-dependent observer for TCP flow estimation. Note that the developed mechanism also enables anomaly detection for a class of DoS (Denial of Service) attacks. Finally, simulations via the network simulator NS-2 and an emulation experiment validate the proposed methodology.
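
    A generic discrete-time Luenberger observer sketch of the idea; the paper's observer is delay-dependent and designed via quadratic separation, so the matrices below are placeholders only:

    ```python
    import numpy as np

    # Generic Luenberger observer sketch. The paper's observer handles the
    # delays explicitly; A, B, C, L here are placeholder values.
    A = np.array([[0.95, 0.10],
                  [0.00, 0.90]])        # state: [queue length, aggregate TCP rate]
    B = np.array([[0.0], [0.5]])        # input: AQM action (e.g. drop probability)
    C = np.array([[1.0, 0.0]])          # measurement: queue length at the router
    L = np.array([[0.4], [0.3]])        # observer gain (placeholder)

    def observer_step(x_hat, y, u):
        """One step: x_hat' = A x_hat + B u + L (y - C x_hat)."""
        residual = y - (C @ x_hat).item()   # a persistently large residual can
                                            # flag anomalous traffic (e.g. a DoS)
        return A @ x_hat + B * u + L * residual

    x_hat = observer_step(np.zeros((2, 1)), y=120.0, u=0.02)
    print("estimated TCP rate:", x_hat[1, 0])
    ```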

    Performance of the IEEE 802.16e sleep mode mechanism in the presence of bidirectional traffic

    We refine existing performance studies of the WiMAX sleep mode operation to take uplink as well as downlink traffic into account, whereas previous studies neglected the influence of uplink traffic. We obtain numerically efficient procedures to compute both delay and energy efficiency characteristics. A test scenario with an Individual Subscriber Internet traffic model in both directions shows that even a small amount of uplink traffic has a profound effect on the system performance.
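
    A Monte-Carlo caricature of the effect, with doubling sleep windows in the style of 802.16e Type I power saving; all parameters are illustrative and the model is far simpler than the paper's analytical one:

    ```python
    import random

    # Monte-Carlo caricature of 802.16e Type I sleep mode (simplified). Sleep
    # windows double up to a maximum; a downlink packet waits for the next
    # listening window, while an uplink arrival ends sleep immediately --
    # which is why even light uplink traffic matters.
    def mean_dl_delay(lam_dl, lam_ul, t_min=2.0, t_max=1024.0, runs=100_000):
        total = 0.0
        for _ in range(runs):
            dl = random.expovariate(lam_dl)      # next downlink packet (ms)
            ul = random.expovariate(lam_ul)      # next uplink packet (ms)
            t, win = 0.0, t_min
            while t + win < dl and t + win < ul: # sleep through empty cycles
                t, win = t + win, min(2.0 * win, t_max)
            wake = min(t + win, ul)              # listening window, or uplink wake-up
            total += max(wake, dl) - dl          # zero if the MS is already awake
        return total / runs

    print(f"mean downlink delay: {mean_dl_delay(1/500, 1/2000):.1f} ms")
    ```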