
    Long range dependence in network traffic and the closed loop behaviour of buffers under adaptive window control

    We consider an Internet link carrying http-like traffic, i.e., transfers of finite-volume files arriving at random time instants. These file transfers are controlled by an adaptive window protocol (AWP); an example of such a protocol is TCP. We provide an analysis of the auto-covariance function of the AWP-controlled traffic into the link's buffer; this traffic, in general, cannot be represented by an on-off process. The analysis establishes that, for TCP-controlled transfer of Pareto-distributed file sizes with infinite second moment, the traffic into the link buffer is long-range dependent (LRD). We also develop an analysis for obtaining the stationary distribution of the link buffer occupancy under AWP-controlled transfer of files sampled from some distribution. For any AWP, the analysis provides the Laplace-Stieltjes transform (LST) of the distribution of the link buffer occupancy process in terms of the functions defining the AWP and the file size distribution. The analysis also provides a necessary and a sufficient condition for the finiteness of the mean link buffer content; these conditions again depend explicitly on the AWP used and the file size distribution. This establishes the sensitivity of the buffer occupancy process to the file size distribution. Combining the results from the above analyses, we provide various examples in which the closed-loop control of an AWP results in finite mean link buffer occupancy even though the file sizes are Pareto-distributed (with infinite second moment) and the traffic into the link buffer is long-range dependent (with Hurst parameters that would suggest an infinite mean queue occupancy under open-loop analysis). We also study the effect of window reductions due to active queue management and find that window reductions further lighten the tail of the buffer occupancy distribution. The significance of this work is threefold: (i) by looking at the window evolution as a function of the amount of data served rather than as a function of time, it provides a new framework for analysing various processes related to the link buffer under AWP-controlled transfer of files with a general file size distribution; (ii) it indicates that buffer behaviour in the Internet may not be as poor as predicted from an open-loop analysis of a queue fed with LRD traffic; and (iii) it shows that the buffer behaviour (and hence the throughput performance for finite buffers) is sensitive to the distribution of file sizes.
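    The Pareto/LRD connection at the heart of this abstract can be illustrated numerically. Below is a minimal Python sketch (illustrative values, not the paper's analysis) that samples Pareto file sizes with tail index 1 < alpha < 2, so the mean is finite but the second moment is not, and reports the classical heuristic H = (3 - alpha)/2 linking the tail index to the Hurst parameter of aggregated heavy-tailed transfers.

```python
# A minimal sketch, not the paper's analysis: Pareto file sizes with
# 1 < alpha < 2 (finite mean, infinite variance) and the classical
# heuristic H = (3 - alpha) / 2 for the Hurst parameter of the aggregate.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.4        # tail index in (1, 2): finite mean, infinite second moment
x_min = 10_000     # minimum file size in bytes (hypothetical value)

# Inverse-CDF sampling: X = x_min * U**(-1/alpha) for U ~ Uniform(0, 1)
files = x_min * rng.random(1_000_000) ** (-1.0 / alpha)

print(f"sample mean: {files.mean():.0f} bytes (converges, since alpha > 1)")
print(f"sample variance: {files.var():.3e} (keeps growing with n, since alpha < 2)")

H = (3.0 - alpha) / 2.0
print(f"heuristic Hurst parameter H = (3 - alpha)/2 = {H:.2f} (LRD for H > 0.5)")
```

    For alpha = 1.4 this heuristic gives H = 0.8, the kind of Hurst value for which an open-loop queueing analysis would predict an infinite mean queue, which is precisely the prediction the paper's closed-loop analysis overturns.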

    A critical look at power law modelling of the Internet

    This paper takes a critical look at the usefulness of power law models of the Internet. The twin focuses of the paper are Internet traffic and topology generation. The aim of the paper is twofold. Firstly, it summarises the state of the art in power law modelling, giving particular attention to existing open research questions. Secondly, it provides insight into the failings of such models and where progress needs to be made for power law research to feed through to actual improvements in network performance.
    Comment: To appear in Computer Communications
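    As a concrete companion to the modelling discussion, here is a short sketch of the standard maximum-likelihood estimator for a continuous power-law exponent (in the style of Clauset, Shalizi and Newman); the function name and the assumption of a known cutoff x_min are ours, not the paper's.

```python
# A minimal sketch of the standard MLE for a continuous power-law tail,
# p(x) ~ x^(-alpha) for x >= x_min. Assumes x_min is known; in practice
# estimating x_min is itself a significant part of the fitting problem.
import numpy as np

def fit_power_law_alpha(x: np.ndarray, x_min: float) -> float:
    """MLE: alpha = 1 + n / sum(log(x_i / x_min)) over the tail x_i >= x_min."""
    tail = x[x >= x_min]
    return 1.0 + len(tail) / np.sum(np.log(tail / x_min))

# Synthetic check against a known exponent
rng = np.random.default_rng(1)
true_alpha, x_min = 2.5, 1.0
x = x_min * rng.random(100_000) ** (-1.0 / (true_alpha - 1.0))
print(f"true alpha = {true_alpha}, estimated = {fit_power_law_alpha(x, x_min):.3f}")
```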

    An edge-queued datagram service for all datacenter traffic

    Modern datacenters support a wide range of protocols and in-network switch enhancements aimed at improving performance. Unfortunately, the resulting protocols often do not coexist gracefully because they inevitably interact via queuing in the network. In this paper we describe EQDS, a new datagram service for datacenters that moves almost all of the queuing out of the core network and into the sending host. This enables it to support multiple (conflicting) higher-layer protocols, while only sending packets into the network according to a receiver-driven credit scheme. EQDS can transparently speed up legacy TCP and RDMA stacks, and enables transport protocol evolution, while benefiting from future switch enhancements without needing to modify higher-layer stacks. We show through simulation and multiple implementations that EQDS can reduce the flow completion times (FCT) of legacy TCP by 2x, improve NVMeOF-RDMA throughput by 30%, and safely run TCP alongside RDMA on the same network.
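    The queue-at-the-edge idea can be shown with a toy model. The sketch below is a hypothetical, minimal rendering of a receiver-driven credit scheme, not the EQDS implementation: packets wait in a queue at the sending host and enter the network only against credit granted by the receiver.

```python
# A toy sketch (all names hypothetical, not the EQDS code): queuing happens
# at the sending host, and packets are released into the network only when
# the receiver has granted enough credit, keeping the core nearly bufferless.
from collections import deque

class EdgeQueue:
    def __init__(self) -> None:
        self.queue = deque()   # per-destination queue held at the sender
        self.credits = 0       # bytes the receiver has authorised us to send

    def enqueue(self, packet: bytes) -> None:
        self.queue.append(packet)          # queue at the edge, not in-network

    def grant_credit(self, nbytes: int) -> None:
        self.credits += nbytes             # receiver-driven pacing signal

    def transmit(self) -> list:
        """Release only as many queued packets as the current credit covers."""
        sent = []
        while self.queue and len(self.queue[0]) <= self.credits:
            pkt = self.queue.popleft()
            self.credits -= len(pkt)
            sent.append(pkt)
        return sent

eq = EdgeQueue()
eq.enqueue(b"x" * 1500)
eq.enqueue(b"x" * 1500)
eq.grant_credit(2000)
print(len(eq.transmit()), "packet(s) released")   # 1: credit covers one MTU
```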

    TCP performance over end-to-end rate control and stochastic available capacity

    Motivated by TCP over end-to-end ABR, we study the performance of adaptive window congestion control when it operates over an explicit feedback rate-control mechanism, in a situation in which the bandwidth available to the elastic traffic is stochastically time varying. It is assumed that the sender and receiver of the adaptive window protocol are colocated with the rate-control endpoints. The objective of the study is to understand whether the interaction of the rate-control loop and the window-control loop is beneficial for end-to-end throughput, and how the parameters of the problem (propagation delay, bottleneck buffers, and rate of variation of the available bottleneck bandwidth) affect the performance. The available bottleneck bandwidth is modeled as a two-state Markov chain. We develop an analysis that explicitly models the bottleneck buffers, the delayed explicit rate feedback, and TCP's adaptive window mechanism. The analysis, however, applies only when the variations in the available bandwidth occur over periods larger than the round-trip delay. For fast variations of the bottleneck bandwidth, we provide results from a simulation on a TCP testbed that uses Linux TCP code, and a simulation/emulation of the network model inside the Linux kernel. We find that, over end-to-end ABR, the performance of TCP improves significantly if the network bottleneck bandwidth variations are slow compared to the round-trip propagation delay. Further, we find that TCP over ABR is relatively insensitive to bottleneck buffer size. These results are for a short-term average link capacity feedback at the ABR level (INSTCAP). We use the testbed to study EFFCAP feedback, which is motivated by the notion of the effective capacity of the bottleneck link. We find that EFFCAP feedback is adaptive to the rate of bandwidth variations at the bottleneck link, and thus yields good performance (compared to INSTCAP) over a wide range of rates of bottleneck bandwidth variation. Finally, we study whether TCP over ABR, with EFFCAP feedback, provides throughput fairness even when the connections have different round-trip propagation delays.
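    The bandwidth model in this study is simple to reproduce in miniature. The sketch below simulates two-state Markov-modulated capacity and contrasts a short-window average (INSTCAP-like) with a long-window average standing in for EFFCAP; the window lengths and rates are illustrative, and the smoothing is not the paper's exact EFFCAP computation.

```python
# A minimal sketch of the capacity model: available bandwidth follows a
# two-state Markov chain, and the rate feedback is a trailing average over
# either a short window (INSTCAP-like) or a long window (EFFCAP stand-in).
import numpy as np

rng = np.random.default_rng(2)
HIGH, LOW = 10.0, 2.0   # Mb/s in the two states (illustrative values)
p_stay = 0.95           # per-slot probability of remaining in the same state

state, trace = HIGH, []
for _ in range(2000):
    if rng.random() > p_stay:
        state = LOW if state == HIGH else HIGH
    trace.append(state)
trace = np.array(trace)

def trailing_avg(x: np.ndarray, window: int) -> np.ndarray:
    """Trailing moving average over `window` slots."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

print(f"INSTCAP-like feedback (5-slot window), last value: "
      f"{trailing_avg(trace, 5)[-1]:.2f} Mb/s")
print(f"EFFCAP-like feedback (100-slot window), last value: "
      f"{trailing_avg(trace, 100)[-1]:.2f} Mb/s")
```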

    Performance issues in optical burst/packet switching

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-01524-3_8
    This chapter summarises the activities on optical packet switching (OPS) and optical burst switching (OBS) carried out by the COST 291 partners over the last four years. It consists of an introduction, five sections with contributions on five different specific topics, and a final section dedicated to the conclusions. Each section contains an introductory state-of-the-art description of its topic and at least one contribution on that topic. The conclusions assess the current state of the OPS/OBS paradigms.

    Dynamic bandwidth allocation in ATM networks

    This thesis investigates bandwidth allocation methodologies for transporting newly emerging bursty traffic types in ATM networks. Existing ATM traffic management solutions are not readily able to handle the congestion that results from the bursty traffic generated by these new services. This research addresses bandwidth allocation for bursty traffic by proposing and exploring the concept of dynamic bandwidth allocation and comparing it with traditional static bandwidth allocation schemes.

    Traffic Profiles and Performance Modelling of Heterogeneous Networks

    This thesis considers the analysis and study of short- and long-term traffic patterns of heterogeneous networks. A large number of traffic profiles from different locations and network environments have been determined. Analysis of these patterns has led to a new parameter, the 'application signature'. It was found that these signatures manifest themselves at various granularities over time and are usually unique to an application, permanent virtual circuit (PVC), user or service. The differentiation of application signatures into different categories creates a foundation for short- and long-term management of networks. The thesis therefore looks at traffic management from both the micro and the macro perspective. The long-term traffic patterns have been used to develop a novel methodology for network planning and design, addressing interconnected systems that grow steadily in size and complexity and usually span different time zones and geographical and political areas. Part of the methodology is a new overbooking mechanism, which stands in contrast to existing overbooking methods created by companies like Bell Labs. The new overbooking mechanism provides companies with cheaper network designs and higher average throughput. In addition, new requirements such as risk factors, which historically lay outside the design process, have been incorporated into the methodology. A large network service provider has implemented the overbooking mechanism in their network planning process, enabling practical evaluation; a generic sketch of the underlying calculation follows below. The other aspect of the thesis looks at short-term traffic patterns, to analyse how congestion can be controlled. Recurring short-term traffic patterns, the application signatures, have been used in this research to develop the "packet train model" further. Through this research a new congestion control mechanism was created to investigate how the application signatures and the "extended packet train model" could be used. To validate the results, a software simulation was written that executes the proprietary congestion mechanism and the new mechanism for comparison. Application signatures for the TCP/IP protocols have been applied in the simulation, and the results are displayed and discussed in the thesis. The findings show the effects that frame relay congestion control mechanisms have on TCP/IP, comparing the re-sending of segments, buffer allocation, delay and throughput. The results prove that application signatures can be used effectively to enhance existing congestion control mechanisms.
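    The overbooking mechanism itself is proprietary and not detailed in the abstract, but the generic statistical-multiplexing calculation that such mechanisms refine can be sketched: provision for the mean aggregate demand plus a safety margin, rather than for the sum of per-PVC peaks. All figures below are hypothetical.

```python
# A generic statistical-multiplexing sketch, not the thesis's proprietary
# overbooking mechanism: provision mean aggregate demand plus k standard
# deviations instead of the sum of per-PVC peak rates.
import math

# Hypothetical PVC demand profiles: (mean_mbps, std_mbps, peak_mbps)
pvcs = [(2.0, 1.0, 8.0), (1.5, 0.8, 6.0), (3.0, 1.5, 10.0), (0.5, 0.3, 2.0)]

peak_provision = sum(peak for _, _, peak in pvcs)
agg_mean = sum(mean for mean, _, _ in pvcs)
agg_std = math.sqrt(sum(std ** 2 for _, std, _ in pvcs))  # assumes independent PVCs

k = 3.0  # safety margin in standard deviations, effectively a risk parameter
overbooked_provision = agg_mean + k * agg_std

print(f"peak-rate provisioning:  {peak_provision:.1f} Mb/s")
print(f"overbooked provisioning: {overbooked_provision:.1f} Mb/s")
print(f"overbooking factor:      {peak_provision / overbooked_provision:.2f}x")
```

    The margin k is where requirements like the thesis's risk factors would enter a design process: a smaller k yields a cheaper design at a higher probability of congestion.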

    Reactive traffic control mechanisms for communication networks with self-similar bandwidth demands

    Communication network architectures are in the process of being redesigned so that many different services are integrated within the same network. Due to this integration, traffic management algorithms need to balance the requirements of the traffic which the algorithms are directly controlling with the Quality of Service (QoS) requirements of other classes of traffic in the network. Of particular interest is one class of traffic, termed elastic traffic, that responds to dynamic feedback from the network regarding the amount of available resources within the network. Examples of this type of traffic include the Available Bit Rate (ABR) service in Asynchronous Transfer Mode (ATM) networks and connections using the Transmission Control Protocol (TCP) in the Internet. Both examples aim to utilise available bandwidth within a network. Reactive traffic management, like that which occurs in the ABR service and TCP, depends explicitly on the dynamic bandwidth requirements of other traffic currently using the network. In particular, there is significant evidence that a wide range of network traffic, including Ethernet, World Wide Web, Variable Bit Rate video and signalling traffic, is self-similar. The term self-similar refers to the particular characteristic of network traffic to remain bursty over a wide range of time scales. A closely associated characteristic of self-similar traffic is its long-range dependence (LRD), which refers to the significant correlations that occur within the traffic. By utilising these correlations, greater predictability of network traffic can be achieved, and hence the performance of reactive traffic management algorithms can be enhanced. A predictive rate control algorithm, called PERC (Predictive Explicit Rate Control), is proposed in this thesis, targeted at the ABR service in ATM networks. By incorporating the LRD stochastic structure of background traffic, measurements of the bandwidth requirements of background traffic, and the delay associated with a particular ABR connection, a predictive algorithm is defined which provides explicit rate information that is conveyed to ABR sources. An enhancement to PERC is also described. This algorithm, called PERC+, uses previous control information to correct prediction errors that occur for connections with larger round-trip delay. These algorithms have been extensively analysed with regard to their network performance, and simulation results show that queue lengths and cell loss rates are significantly reduced when these algorithms are deployed. An adaptive version of PERC has also been developed using real-time parameter estimates of self-similar traffic. This has excellent performance compared with standard ABR rate control algorithms such as ERICA. Since PERC and its enhancement PERC+ explicitly utilise the index of self-similarity, known as the Hurst parameter, the sensitivity of these algorithms to this parameter can be determined analytically. Research described in this thesis shows that the algorithms have an asymmetric sensitivity to the Hurst parameter, with significant sensitivity in the region where the parameter is underestimated as being close to 0.5. Simulation results reveal the same bias in the performance of the algorithm with regard to the Hurst parameter.
    In contrast, PERC is insensitive to estimates of the mean, using the sample mean estimator, and to estimates of the traffic variance, because the algorithm primarily utilises the correlation structure of the traffic to predict future bandwidth requirements. Sensitivity analysis falls into the area of investigative research, but it naturally leads to the area of robust control, where algorithms are designed so that uncertainty in traffic parameter estimation or modelling can be accommodated. An alternative robust design approach to the standard maximum entropy approach is proposed in this thesis, which uses the maximum likelihood function to develop the predictive rate controller. The likelihood function defines the proximity of a specific traffic model to the traffic data, and hence gives a measure of the performance of a chosen model. Maximising the likelihood function leads to optimising robust performance, and it is shown through simulations that the resulting system performance is close to the optimal performance obtained by maximising the spectral entropy. There is still debate regarding the influence of LRD on network performance. This thesis also considers the question of the influence of LRD on traffic predictability, and demonstrates that predictive rate control algorithms that use only short-term correlations perform nearly as well as algorithms that utilise long-term correlations. It is noted that predictors based on LRD still outperform ones which use short-term correlations, but that there is potential simplification in the design of predictors, since traffic predictability can largely be achieved using short-term correlations. This thesis forms a substantial contribution to the understanding of control in the case where self-similar processes form part of the overall system. Rather than doggedly pursuing self-similar control, a broader view has been taken in which the performance of the algorithms has been considered from a number of perspectives. A number of different research avenues lead on from this work, and these are outlined.
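    The abstract does not give PERC's internals, but the kind of predictor it describes, one that exploits the LRD correlation structure and depends on the Hurst parameter, can be sketched as a standard linear minimum mean-square error predictor driven by the fractional Gaussian noise autocovariance. The construction below is illustrative, not PERC itself.

```python
# A minimal sketch, not PERC: a p-tap linear MMSE predictor for LRD traffic
# built from the fractional Gaussian noise (fGn) autocovariance, which is
# where an estimate of the Hurst parameter H enters the design.
import numpy as np
from scipy.linalg import solve_toeplitz

def fgn_autocov(k: np.ndarray, H: float) -> np.ndarray:
    """Autocovariance of unit-variance fGn at integer lags k."""
    k = np.abs(k).astype(float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def predictor_coeffs(H: float, p: int) -> np.ndarray:
    """Solve the Yule-Walker equations R a = r for the predictor taps."""
    gamma = fgn_autocov(np.arange(p + 1), H)
    return solve_toeplitz(gamma[:p], gamma[1 : p + 1])

a = predictor_coeffs(H=0.8, p=10)
# One-step prediction from the ten most recent bandwidth measurements
recent = np.array([5.1, 4.8, 5.6, 6.0, 5.7, 5.9, 6.3, 6.1, 6.4, 6.6])
prediction = a @ recent[::-1]   # taps pair with samples newest-first
print(f"predicted next-slot bandwidth: {prediction:.2f}")
```

    Underestimating H toward 0.5 flattens the autocovariance and pushes the taps toward a short-memory predictor, which is consistent with the asymmetric sensitivity reported above.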

    Investigating the Effects of Network Dynamics on Quality of Delivery Prediction and Monitoring for Video Delivery Networks

    Video streaming over the Internet requires an optimized delivery system, given advances in network architecture such as Software Defined Networks. Machine Learning (ML) models have been deployed in an attempt to predict the quality of video streams. Some of these efforts have considered the prediction of Quality of Delivery (QoD) metrics of the video stream, in an effort to measure the quality of the video stream from the network perspective. In most cases, these models have either treated the ML algorithms as black boxes or failed to capture the network dynamics of the associated video streams. This PhD thesis investigates the effects of network dynamics on QoD prediction using ML techniques. The hypothesis investigated is that ML techniques that model the underlying network dynamics achieve accurate QoD and video quality predictions and measurements. The results demonstrate that the proposed techniques offer performance gains over approaches that fail to consider network dynamics, and highlight that adopting the correct model, by modelling the dynamics of the network infrastructure, is crucial to the accuracy of the ML predictions. These results are significant because they demonstrate that improved performance is achieved at no additional computational or storage cost. The techniques can help network managers, data center operators and video service providers take proactive and corrective actions for improved network efficiency and effectiveness.
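    The thesis's models and data are not specified in the abstract, so the sketch below makes the general point with synthetic data: the same regressor is trained once on a static snapshot feature and once on lagged samples that capture the temporal dynamics. All names and numbers are ours.

```python
# A generic sketch, not the thesis's models or data: a regressor fed lagged
# throughput samples (capturing dynamics) versus the same regressor fed only
# the latest snapshot, evaluated on a chronological train/test split.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
n, lags = 5000, 5

# Synthetic AR(2) throughput trace: history beyond the latest sample matters
tput = np.full(n, 20.0)
for t in range(2, n):
    tput[t] = 1.0 + 0.5 * tput[t - 1] + 0.45 * tput[t - 2] + rng.normal(0, 1)

X_dyn = np.stack([tput[i : n - lags + i] for i in range(lags)], axis=1)
y = tput[lags:]
X_static = X_dyn[:, -1:]          # snapshot baseline: latest sample only
split = int(0.7 * len(y))         # chronological split, no shuffling

for name, X in [("static snapshot", X_static), ("lagged dynamics", X_dyn)]:
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[:split], y[:split])
    mae = mean_absolute_error(y[split:], model.predict(X[split:]))
    print(f"{name}: MAE = {mae:.3f}")
```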