
    High Performance Network Evaluation and Testing

    Transport Architectures for an Evolving Internet

    In the Internet architecture, transport protocols are the glue between an application’s needs and the network’s abilities. But as the Internet has evolved over the last 30 years, the implicit assumptions of these protocols have held less and less well. This can cause poor performance on newer networks—cellular networks, datacenters—and makes it challenging to roll out networking technologies that break markedly with the past. Working with collaborators at MIT, I have built two systems that explore an objective-driven, computer-generated approach to protocol design. My thesis is that making protocols a function of stated assumptions and objectives can improve application performance and free network technologies to evolve. Sprout, a transport protocol designed for videoconferencing over cellular networks, uses probabilistic inference to forecast network congestion in advance. On commercial cellular networks, Sprout gives 2 to 4 times the throughput and 7 to 9 times lower delay than Skype, Apple FaceTime, and Google Hangouts. This work led to Remy, a tool that programmatically generates protocols for an uncertain multi-agent network. Remy’s computer-generated algorithms can achieve higher performance and greater fairness than some sophisticated human-designed schemes, including ones that put intelligence inside the network. The Remy tool can then be used to probe the difficulty of the congestion control problem itself: how easy is it to “learn” a network protocol that achieves desired goals, given a necessarily imperfect model of the networks where it will ultimately be deployed? We found weak evidence of a tradeoff between the breadth of the operating range of a computer-generated protocol and its performance, but also that a single computer-generated protocol was able to outperform existing schemes over a thousand-fold range of link rates.
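    The abstract does not give Sprout's inference machinery, but its core idea can be sketched loosely: keep a probability distribution over the cellular link's current delivery rate, update it from observed packet arrivals, and send against a cautious quantile of the forecast. The discretized rate grid, Poisson likelihood, and 5th-percentile budget below are illustrative assumptions, not Sprout's actual model.

import math

# Candidate link rates (packets per tick) and a uniform prior over them.
RATES = [r / 2 for r in range(1, 101)]
belief = [1.0 / len(RATES)] * len(RATES)

def poisson(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def observe(k):
    """Bayesian update of the rate belief after k arrivals in one tick."""
    global belief
    belief = [b * poisson(k, r) for b, r in zip(belief, RATES)]
    total = sum(belief)
    belief = [b / total for b in belief]

def cautious_budget(q=0.05):
    """q-th posterior quantile of the rate: a conservative sending budget."""
    acc = 0.0
    for b, r in zip(belief, RATES):
        acc += b
        if acc >= q:
            return r
    return RATES[-1]

for k in [12, 9, 14, 3, 2, 1]:   # invented per-tick arrival counts
    observe(k)
print(cautious_budget(), "packets/tick")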

    Queues with Congestion-dependent Feedback

    This dissertation expands the theory of feedback queueing systems and applies a number of these models to a performance analysis of the Transmission Control Protocol (TCP), a flow-control protocol commonly used in the Internet.
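    As a toy illustration of a queue with congestion-dependent feedback (the dissertation's actual models are not reproduced here), the following discrete-time simulation throttles the offered rate as a function of queue length, in the spirit of TCP's reaction to congestion signals; all constants are hypothetical.

import random

random.seed(1)
queue = 0
rate = 8.0        # offered packets per slot (hypothetical units)
SERVICE = 10      # packets served per slot

for slot in range(30):
    # Each offered packet independently materialises with probability 0.9.
    arrivals = sum(1 for _ in range(int(rate)) if random.random() < 0.9)
    queue = max(0, queue + arrivals - SERVICE)
    # Feedback: multiplicative decrease when congested, additive increase otherwise.
    rate = rate / 2 if queue > 20 else min(rate + 1.0, 30.0)
    print(f"slot {slot:2d}  queue {queue:3d}  rate {rate:5.1f}")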

    Performance of the transmission control protocol (TCP) over wireless with quality of service.

    Thesis (M.Sc.Eng.)-University of Natal, Durban, 2001. The Transmission Control Protocol (TCP) is the most widely used transport protocol in the Internet. TCP is a reliable transport protocol tuned to perform well in wired networks, where packet losses are mainly due to congestion. Wireless channels are characterized by losses due to transmission errors and handoffs. TCP interprets these losses as congestion and invokes congestion control mechanisms, resulting in degraded performance. TCP is usually layered over the Internet Protocol (IP) at the network layer. IP is not reliable and does not provide any Quality of Service (QoS). The Internet Engineering Task Force (IETF) has defined two techniques for providing QoS in the Internet: Integrated Services (IntServ) and Differentiated Services (DiffServ). IntServ provides flow-based quality of service, and its per-flow state means it does not scale to large numbers of flows. DiffServ has grown in popularity since it is scalable. A packet in a DiffServ domain is classified into a class of service according to its contract profile and treated according to its class. Providing end-to-end QoS therefore requires a strong interaction between the transport protocol and the network protocol. In this dissertation we consider the performance of TCP over a wireless channel. We study whether current TCP protocols can deliver the desired quality of service given the challenges they face on wireless channels. The dissertation discusses the methods of providing QoS in the Internet. We derive an analytical model for the TCP protocol, extend it to cater for the wireless channel, and then extend it further for differentiated services. The model is shown to be accurate when compared to simulation. We conclude by deducing to what degree the desired QoS can be provided with TCP on a wireless channel.
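    The dissertation's own analytical model is not reproduced in the abstract; as a stand-in, the well-known square-root relation of Mathis et al. illustrates the underlying problem: TCP throughput falls as 1/sqrt(p) with loss probability p, even when wireless losses signal corruption rather than congestion.

import math

def tcp_throughput(mss_bytes, rtt_s, loss_prob):
    """Approximate steady-state TCP throughput (bytes/s), square-root model."""
    return (mss_bytes / rtt_s) * math.sqrt(1.5 / loss_prob)

# Example: 1460-byte segments, 200 ms RTT, 1% random wireless loss.
print(tcp_throughput(1460, 0.2, 0.01) / 1e3, "kB/s")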

    Less-than-Best-Effort Service: A Survey of End-to-End Approaches

    MANETs: Internet Connectivity and Transport Protocols

    A Mobile Ad hoc Network (MANET) is a collection of mobile nodes connected together over a wireless medium, which self-organize into an autonomous multi-hop wireless network. This kind of network allows people and devices to seamlessly internetwork in areas with no pre-existing communication infrastructure, e.g., disaster recovery environments. Ad hoc networking is not a new concept, having been around in various forms for over 20 years. However, in the past only tactical networks followed the ad hoc networking paradigm. Recently, the introduction of new technologies such as IEEE 802.11 has moved the application field of MANETs towards commercial use. These developments have generated a renewed and growing interest in the research and development of MANETs. It is widely recognized that a prerequisite for the commercial penetration of ad hoc networking technologies is integration with existing wired/wireless infrastructure-based networks to provide easy and transparent access to the Internet and its services. However, most of the existing solutions for enabling the interconnection between MANETs and the Internet are based on complex and inefficient mechanisms, such as Mobile IP and IP tunnelling. This thesis describes an alternative approach to building multi-hop and heterogeneous proactive ad hoc networks, which can be used as flexible and low-cost extensions of traditional wired LANs. The proposed architecture provides transparent global Internet connectivity and address autoconfiguration capabilities to mobile nodes without requiring configuration changes in the pre-existing wired LAN, relying only on basic layer-2 functionalities. This thesis also includes an experimental evaluation of the proposed architecture and a comparison of this architecture with a well-known alternative NAT-based solution. The experimental outcomes confirm that the proposed technique ensures higher per-connection throughput than the NAT-based solution. This thesis also examines the problems encountered by TCP over multi-hop ad hoc networks. Research on efficient transport protocols for ad hoc networks is one of the most active topics in the MANET community. Such great interest is motivated by numerous observations showing that, in general, TCP is not able to deal efficiently with the unstable and very dynamic environment provided by multi-hop ad hoc networks. This is because some assumptions in TCP's design are clearly inspired by the characteristics of the wired networks dominant at the time it was conceived. More specifically, TCP implicitly assumes that packet loss is almost always due to congestion phenomena causing buffer overflows at intermediate routers. Furthermore, it also assumes that nodes are static (i.e., they do not change their position over time). Unfortunately, these assumptions do not hold in MANETs, since in this kind of network packet losses due to interference and link-layer contention are largely predominant, and nodes may be mobile. The typical approach to solving these problems is patching TCP to fix its inefficiencies while preserving compatibility with the original protocol. This thesis explores a different approach. Specifically, it presents a new transport protocol (TPA) designed from scratch, addressing TCP interoperability only at a late design stage. In this way, TPA can include all desired features in a neat and coherent way.
This thesis also includes an experimental as well as a simulative evaluation of TPA, and a comparison between TCP and TPA performance (in terms of throughput, number of unnecessary transmissions, and fairness). The presented analysis considers several possible configurations of the protocol parameters, different routing protocols, and various networking scenarios. In all the cases taken into consideration, TPA significantly outperforms TCP.
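    TPA's internal mechanisms are not detailed in the abstract, so the sketch below only illustrates the design principle the thesis argues for: a MANET transport should not treat every loss as congestion. The class, the route-failure and contention signals, and the backoff constants are all hypothetical.

class AdHocSender:
    """Hypothetical sender that reacts differently to each loss cause."""

    def __init__(self):
        self.cwnd = 10.0  # congestion window, in packets

    def on_loss(self, route_failed: bool, contention_seen: bool):
        if route_failed:
            # Mobility broke the path: freeze state and retransmit once
            # the route is rebuilt, instead of collapsing the window.
            pass
        elif contention_seen:
            # Link-layer contention: retransmit with only a mild backoff.
            self.cwnd = max(2.0, self.cwnd * 0.9)
        else:
            # Genuine congestion: TCP-style multiplicative decrease.
            self.cwnd = max(2.0, self.cwnd / 2)

s = AdHocSender()
s.on_loss(route_failed=False, contention_seen=True)
print(s.cwnd)  # 9.0: mild backoff rather than TCP's halving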

    Control of transport dynamics in overlay networks

    Transport control is an important factor in the performance of Internet protocols, particularly in next-generation network applications involving computational steering, interactive visualization, instrument control, and transfer of large data sets. The widely deployed Transmission Control Protocol is inadequate for these tasks due to its performance drawbacks. The purpose of this dissertation is to conduct a rigorous analytical study of the design and performance of transport protocols, and to systematically develop a new class of protocols that overcomes the limitations of current methods. Various sources of randomness exist in network performance measurements due to the stochastic nature of network traffic. We propose a new class of transport protocols that explicitly accounts for this randomness based on dynamic stochastic approximation methods. These protocols use the congestion window and idle time to dynamically control the source rate to achieve transport objectives. We conduct statistical analyses to determine the main effects of these two control parameters and their interaction effects. The application of stochastic approximation methods enables us to show the analytical stability of the transport protocols and to avoid pre-selecting the flow and congestion control parameters. These new protocols are successfully applied to transport control for both goodput stabilization and maximization. The experimental results show superior performance compared to current methods, particularly for Internet applications. To deploy these protocols effectively over the Internet, we develop an overlay network, which resides at the application level and provides data transmission service using the User Datagram Protocol. The overlay network, together with the new protocols based on the User Datagram Protocol, provides an effective environment for implementing transport control using application-level modules. We also study problems in overlay networks such as path bandwidth estimation and multiple quickest path computation. In wireless networks, most packet losses are caused by physical signal losses and do not necessarily indicate network congestion. Furthermore, the physical link connectivity in ad hoc networks deployed in unstructured areas is unpredictable. We develop the Connectivity-Through-Time protocols, which exploit node movements to deliver data under dynamic connectivity. We integrate this protocol into overlay networks and present experimental results using the network to support a team of mobile robots.
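    The dissertation's exact update rule is not given in the abstract; the following generic Robbins-Monro stochastic-approximation sketch conveys the idea of driving the source rate toward a goodput target from noisy measurements, with diminishing gains so the iteration settles despite the randomness. The network response and all constants are hypothetical.

import random

random.seed(0)

def measured_goodput(rate):
    """Hypothetical noisy network response; capacity saturates at 100 units."""
    return min(rate, 100.0) * random.uniform(0.9, 1.1)

TARGET = 80.0
rate = 10.0
for k in range(200):
    a_k = 2.0 / (k + 1)  # diminishing Robbins-Monro step sizes
    rate += a_k * (TARGET - measured_goodput(rate))
    rate = max(rate, 1.0)
print(round(rate, 1))    # ends near the 80-unit goodput target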

    Reactive traffic control mechanisms for communication networks with self-similar bandwidth demands

    Communication network architectures are in the process of being redesigned so that many different services are integrated within the same network. Due to this integration, traffic management algorithms need to balance the requirements of the traffic which the algorithms are directly controlling with the Quality of Service (QoS) requirements of other classes of traffic encountered in the network. Of particular interest is one class of traffic, termed elastic traffic, that responds to dynamic feedback from the network regarding the amount of available resources within the network. Examples of this type of traffic include the Available Bit Rate (ABR) service in Asynchronous Transfer Mode (ATM) networks and connections using the Transmission Control Protocol (TCP) in the Internet. Both examples aim to utilise available bandwidth within a network. Reactive traffic management, like that which occurs in the ABR service and TCP, depends explicitly on the dynamic bandwidth requirements of other traffic currently using the network. In particular, there is significant evidence that a wide range of network traffic, including Ethernet, World Wide Web, Variable Bit Rate video and signalling traffic, is self-similar. The term self-similar refers to the characteristic of network traffic to remain bursty over a wide range of time scales. A closely associated characteristic of self-similar traffic is its long-range dependence (LRD), which refers to the significant correlations that occur within the traffic. By utilising these correlations, greater predictability of network traffic can be achieved, and hence the performance of reactive traffic management algorithms can be enhanced. A predictive rate control algorithm, called PERC (Predictive Explicit Rate Control), is proposed in this thesis and targeted at the ABR service in ATM networks. By incorporating the LRD stochastic structure of background traffic, measurements of the bandwidth requirements of background traffic, and the delay associated with a particular ABR connection, a predictive algorithm is defined which provides explicit rate information that is conveyed to ABR sources. An enhancement to PERC is also described. This algorithm, called PERC+, uses previous control information to correct prediction errors that occur for connections with larger round-trip delay. These algorithms have been extensively analysed with regard to their network performance, and simulation results show that queue lengths and cell loss rates are significantly reduced when these algorithms are deployed. An adaptive version of PERC has also been developed using real-time parameter estimates of self-similar traffic. This has excellent performance compared with standard ABR rate control algorithms such as ERICA. Since PERC and its enhancement PERC+ explicitly utilise the index of self-similarity, known as the Hurst parameter, the sensitivity of these algorithms to this parameter can be determined analytically. Research described in this thesis shows that the algorithms have an asymmetric sensitivity to the Hurst parameter, with significant sensitivity in the region where the parameter is underestimated as being close to 0.5. Simulation results reveal the same bias in the performance of the algorithm with regard to the Hurst parameter.
In contrast, PERC is insensitive to estimates of the mean, using the sample mean estimator, and to estimates of the traffic variance, because the algorithm primarily utilises the correlation structure of the traffic to predict future bandwidth requirements. Sensitivity analysis falls into the area of investigative research, but it naturally leads to the area of robust control, where algorithms are designed so that uncertainty in traffic parameter estimation or modelling can be accommodated. An alternative robust design approach to the standard maximum-entropy approach is proposed in this thesis, which uses the maximum likelihood function to develop the predictive rate controller. The likelihood function defines the proximity of a specific traffic model to the traffic data, and hence gives a measure of the performance of a chosen model. Maximising the likelihood function leads to optimising robust performance, and it is shown, through simulations, that the resulting system performance is close to optimal compared with maximising the spectral entropy. There is still debate regarding the influence of LRD on network performance. This thesis also considers the question of the influence of LRD on traffic predictability, and demonstrates that predictive rate control algorithms that use only short-term correlations perform close to algorithms that utilise long-term correlations. It is noted that predictors based on LRD still out-perform ones which use short-term correlations, but that there is potential simplification in the design of predictors, since traffic predictability can be achieved using short-term correlations. This thesis forms a substantial contribution to the understanding of control in the case where self-similar processes form part of the overall system. Rather than doggedly pursuing self-similar control, a broader view has been taken in which the performance of the algorithms has been considered from a number of perspectives. A number of different research avenues lead on from this work, and these are outlined.
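    As a minimal illustration of the mechanism PERC builds on (not PERC itself, whose estimator is not given here), the autocovariance of fractional Gaussian noise with Hurst parameter H yields optimal linear one-step prediction weights for the next bandwidth sample from the last p samples; the sample values below are invented.

import numpy as np

def fgn_autocov(k, H):
    """Autocovariance of unit-variance fractional Gaussian noise at lag k."""
    k = abs(k)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + abs(k - 1) ** (2 * H))

def predictor_weights(H, p):
    """Solve the Yule-Walker normal equations R w = r for one-step weights."""
    R = np.array([[fgn_autocov(i - j, H) for j in range(p)] for i in range(p)])
    r = np.array([fgn_autocov(i + 1, H) for i in range(p)])
    return np.linalg.solve(R, r)

w = predictor_weights(H=0.8, p=5)
recent = np.array([52.0, 55.0, 61.0, 58.0, 64.0])  # last five rate samples
# Weights apply to lags 1..p, so feed the most recent sample first.
print(float(w @ recent[::-1]))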

    Enterprise networks (modern techniques for analysis, measurement and performance improvement)

    As the Internet evolves over the years, a large number of applications emerge with varying service requirements in terms of bandwidth, delay, loss rate and so on. Still, Internet traffic exhibits a high-variability property: the majority of flows are of small size, while a small percentage of very long flows contributes a large portion of the traffic volume. Several studies reveal that small flows are in general related to interactive applications, for which one expects good user-perceived performance, most often in terms of short response time. However, the classical FIFO/drop-tail scheme deployed in today's routers/switches is well known to be biased against short flows. To tackle this issue over a best-effort network, we have proposed a novel and simple scheduling algorithm named EFD (Early Flow Discard). In this manuscript, we first evaluate the performance of EFD in a single-bottleneck wired network through extensive simulations. We then discuss possible variants of EFD and its adaptations to 802.11 WLANs, mainly EFDACK and PEFD, which record the volumes exchanged in both directions or simply count packets in one direction, aiming to improve flow-level fairness and interactivity in WLANs. Finally, we devote ourselves to profiling enterprise traffic, and further devise two workload models, one that takes into account the enterprise topological structure and the other that incorporates the impact of the applications on top of TCP, to help evaluate and compare the performance of scheduling policies in typical enterprise networks.
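    A compressed sketch of the idea behind EFD follows (the published algorithm has more machinery than this): track how much each flow currently has queued and, when the buffer is full, discard from the flow occupying the most space, which shields short interactive flows from long bulk transfers. The buffer size and packet sizes are hypothetical.

from collections import defaultdict, deque

CAPACITY = 8                # buffer size in packets (hypothetical)
queue = deque()             # FIFO of (flow_id, size_bytes) packets
backlog = defaultdict(int)  # queued bytes per flow

def enqueue(flow_id, size):
    if len(queue) >= CAPACITY:
        victim = max(backlog, key=backlog.get)  # flow with most queued bytes
        for i, (f, s) in enumerate(queue):
            if f == victim:
                del queue[i]
                backlog[f] -= s
                break
    queue.append((flow_id, size))
    backlog[flow_id] += size

for _ in range(7):
    enqueue("bulk", 1500)   # one long flow fills the buffer
enqueue("web", 60)          # a short flow still gets in
enqueue("web", 60)          # buffer full: a "bulk" packet is discarded
print([f for f, _ in queue])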