13 research outputs found

    Highly Scalable, UDP-Based Network Transport Protocols for Lambda Grids and 10 GE Routed Networks


    Control of transport dynamics in overlay networks

    Transport control is an important factor in the performance of Internet protocols, particularly in next-generation network applications involving computational steering, interactive visualization, instrument control, and transfer of large data sets. The widely deployed Transmission Control Protocol is inadequate for these tasks due to its performance drawbacks. The purpose of this dissertation is to conduct a rigorous analytical study of the design and performance of transport protocols, and to systematically develop a new class of protocols that overcomes the limitations of current methods. Various sources of randomness exist in network performance measurements due to the stochastic nature of network traffic. We propose a new class of transport protocols that explicitly accounts for this randomness, based on dynamic stochastic approximation methods. These protocols use the congestion window and idle time to dynamically control the source rate and achieve transport objectives. We conduct statistical analyses to determine the main effects of these two control parameters and their interaction effects. The application of stochastic approximation methods enables us to show the analytical stability of the transport protocols and to avoid pre-selecting the flow and congestion control parameters. These new protocols are successfully applied to transport control for both goodput stabilization and maximization. The experimental results show superior performance compared to current methods, particularly for Internet applications. To effectively deploy these protocols over the Internet, we develop an overlay network, which resides at the application level and provides data transmission service using the User Datagram Protocol. The overlay network, together with the new protocols based on the User Datagram Protocol, provides an effective environment for implementing transport control using application-level modules. We also study related problems in overlay networks, such as path bandwidth estimation and multiple quickest path computation. In wireless networks, most packet losses are caused by physical signal losses and do not necessarily indicate network congestion. Furthermore, the physical link connectivity in ad-hoc networks deployed in unstructured areas is unpredictable. We develop Connectivity-Through-Time protocols that exploit node movements to deliver data under dynamic connectivity. We integrate these protocols into overlay networks and present experimental results using an overlay network to support a team of mobile robots.
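
    The abstract above does not give the protocols' actual update equations; the following Python sketch only illustrates the general idea of stochastic-approximation-based rate control, i.e., adjusting a source rate toward a target goodput from noisy measurements with a decaying gain. The function names, the 1/n gain, and the toy network model are illustrative assumptions, not the dissertation's algorithm.

        import random

        def stochastic_approx_rate_control(measure_goodput, target_gbps,
                                           initial_rate_gbps=1.0, steps=50):
            """Robbins-Monro style source-rate update toward a target goodput.

            measure_goodput(rate) returns a noisy goodput observation for the
            offered rate; the decaying gain a_n damps the measurement noise
            without hand-tuned flow control parameters.
            """
            rate = initial_rate_gbps
            for n in range(1, steps + 1):
                a_n = 1.0 / n                           # decaying stochastic-approximation gain
                observed = measure_goodput(rate)        # noisy goodput sample at the current rate
                rate += a_n * (target_gbps - observed)  # move the rate toward the target goodput
                rate = max(rate, 0.01)                  # keep the source rate positive
            return rate

        # Toy usage: the "network" delivers about 90% of the offered rate plus noise.
        if __name__ == "__main__":
            noisy_net = lambda r: 0.9 * r + random.gauss(0.0, 0.05)
            print(stochastic_approx_rate_control(noisy_net, target_gbps=5.0))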

    Consistent high performance and flexible congestion control architecture

    The part of the TCP software stack that controls how fast a data sender transfers packets is usually referred to as congestion control, because it was originally introduced to avoid network congestion among multiple competing flows. Over the past 30 years of Internet evolution, the traditional TCP congestion control architecture, despite an army of specially engineered implementations and improvements over the original software, has suffered increasingly from surprisingly poor performance under today's complicated network conditions. We argue that the traditional TCP congestion control family has little hope of achieving consistently high performance due to a fundamental architectural deficiency: hardwiring packet-level events to control responses. In this thesis, we propose Performance-oriented Congestion Control (PCC), a new congestion control architecture in which each sender continuously observes the connection between its rate control actions and empirically experienced performance, enabling it to use intelligent control algorithms to consistently adopt actions that result in high performance. We first build the foundation of the PCC architecture and analytically prove the viability of this new congestion control architecture. Specifically, we show that, contrary to intuition, with a certain form of utility function and a theoretically simplified rate control algorithm, selfishly competing senders converge to a fair and stable Nash equilibrium. With this architectural and theoretical guideline, we then design and implement the first congestion control protocol in the PCC family: PCC Allegro. PCC Allegro immediately demonstrates its architectural benefits with significant, often more than 10X, performance gains over a wide spectrum of challenging network conditions. With this encouraging performance validation, we further advance PCC's architecture in both the utility function framework and the learning rate control algorithm. Taking a principled approach using online learning theory, we design PCC Vivace with a new strictly socially concave utility function framework and a gradient-ascent-based learning rate control algorithm. PCC Vivace significantly improves performance on fast-changing networks, yields a better tradeoff between convergence speed and stability, and offers better TCP friendliness compared to PCC Allegro and other state-of-the-art congestion control protocols. Moreover, PCC Vivace's expressive utility function framework can be tuned differently for different competing flows to produce predictable converged throughput ratios for each flow. This opens significant future potential for PCC Vivace in centrally controlled networking paradigms such as Software-Defined Networking (SDN). Finally, with all these research advances, we aim to push the PCC architecture toward production use with a user-space tunneling proxy and a successful integration with Google's QUIC transport framework.
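
    As a loose illustration of PCC's core loop (try rate-control actions, observe the resulting performance, keep the action with the higher utility), the sketch below probes slightly higher and lower sending rates and follows the better one. The utility function, step size, and toy link model are assumptions made for this example; they are not the actual PCC Allegro or PCC Vivace utility frameworks.

        import random

        def pcc_style_rate_probe(send_at_rate, rate_mbps, step_mbps=1.0, epochs=60):
            """Probe-and-compare rate control in the spirit of PCC.

            send_at_rate(rate) is assumed to return (throughput_mbps, loss_rate);
            the sender scores a slightly higher and a slightly lower rate with a
            loss-penalized utility and adopts whichever performed better.
            """
            def utility(throughput, loss):
                # Hypothetical utility: reward delivered throughput, penalize loss heavily.
                return throughput * (1.0 - loss) - 10.0 * throughput * loss

            for _ in range(epochs):
                up, down = rate_mbps + step_mbps, max(rate_mbps - step_mbps, 0.1)
                u_up = utility(*send_at_rate(up))
                u_down = utility(*send_at_rate(down))
                rate_mbps = up if u_up > u_down else down   # adopt the better-performing action
            return rate_mbps

        # Toy usage: a 50 Mb/s bottleneck that starts dropping packets above capacity.
        if __name__ == "__main__":
            def toy_link(rate):
                loss = (rate - 50.0) / rate if rate > 50.0 else 0.0
                return min(rate, 50.0) + random.gauss(0.0, 0.5), loss
            print(pcc_style_rate_probe(toy_link, rate_mbps=10.0))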

    An integrated transport solution to big data movement in high-performance networks

    Extreme-scale e-Science applications in various domains such as earth science and high energy physics across multiple national institutions within the U.S. are generating colossal amounts of data, now frequently termed “big data”. This big data must be stored, managed, and moved to different geographical locations for distributed data processing and analysis. Such big data transfers require stable and high-speed network connections, which are not readily available in traditional shared IP networks such as the Internet. High-performance networking technologies and services featuring high bandwidth and advance reservation are being rapidly developed and deployed across the nation and around the globe to support such scientific applications. However, these networking technologies and services have not been fully utilized, mainly because: i) the use of these technologies and services often requires considerable domain knowledge, and many application users are not even aware of their existence; and ii) the end-to-end data transfer performance largely depends on the transport protocol being used on the end hosts. The high-speed network path with reserved bandwidth in high-performance networks has shifted the data transfer bottleneck from network segments in traditional IP networks to the end hosts, which most existing transport protocols are not well suited to handle. In this dissertation, an integrated transport solution is proposed in support of data- and network-intensive applications in various science domains. This solution integrates three major components, i.e., i) transport-support workflow optimization, ii) transport profile generation, and iii) transport protocol design, into a unified framework. First, a class of transport-support workflow optimization problems is formulated, where an appropriate set of resources and services is selected to compose the best transport-support workflow to meet a user’s data transfer request in terms of various performance requirements. Second, a transport profiler named Transport Profile Generator (TPG) and its extended and accelerated version named FastProf are designed and implemented to characterize and enhance the end-to-end data transfer performance of a selected transport method over an established network path. Finally, several approaches based on rate and error threshold control are proposed to design a suite of data transfer protocols specifically tailored for big data transfer over dedicated connections. The proposed integrated transport solution is implemented and evaluated in: i) a local testbed with a single 10 Gb/s back-to-back connection and dual 10 Gb/s NIC-to-NIC connections; and ii) several wide-area networks with 10 Gb/s long-haul connections at collaborative sites including Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Chicago.
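
    The abstract mentions rate and error threshold control for dedicated, bandwidth-reserved connections, where losses usually signal an overloaded end host rather than network congestion. The minimal sketch below shows that idea under assumed names and numbers; it is not the dissertation's actual protocol suite.

        def dedicated_path_rate_control(report_loss_rate, reserved_gbps,
                                        error_threshold=0.001, step_gbps=0.5,
                                        rounds=30):
            """Rate control with an error threshold over a dedicated connection.

            report_loss_rate(rate) is assumed to return the receiver-observed loss
            fraction at the given sending rate. The sender ramps toward the reserved
            bandwidth and only backs off when the loss exceeds the error threshold,
            since on a dedicated circuit losses indicate end-host limits.
            """
            rate = step_gbps
            for _ in range(rounds):
                loss = report_loss_rate(rate)
                if loss > error_threshold:
                    rate = max(rate - step_gbps, step_gbps)      # ease off the struggling end host
                else:
                    rate = min(rate + step_gbps, reserved_gbps)  # ramp toward the reservation
            return rate

        # Toy usage: a host whose NIC/disk path starts dropping packets above 8 Gb/s.
        if __name__ == "__main__":
            host_limit = lambda r: 0.0 if r <= 8.0 else 0.01
            print(dedicated_path_rate_control(host_limit, reserved_gbps=10.0))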

    Service-oriented models for audiovisual content storage

    What are the important topics to understand when working with storage services that hold digital audiovisual content? This report looks at how content is created and moves into and out of storage, the storage service value networks and architectures found today and expected in the future, the kinds of data transfer expected to and from an audiovisual archive, and the transfer protocols to use, and it concludes with a summary of security and interface issues.

    HyperSCSI: Design and development of a new protocol for storage networking

    Ph.D. (Doctor of Philosophy)

    Performance Optimization and Dynamics Control for Large-scale Data Transfer in Wide-area Networks

    Transport control plays an important role in the performance of large-scale scientific and media streaming applications involving the transfer of large data sets, media streaming, online computational steering, interactive visualization, and remote instrument control. In general, these applications have two distinct classes of transport requirements: large-scale scientific applications require high bandwidths to move bulk data across wide-area networks, while media streaming applications require stable bandwidths to ensure smooth media playback. Unfortunately, the widely deployed Transmission Control Protocol is inadequate for such tasks due to its performance limitations. The purpose of this dissertation is to conduct a rigorous analytical study of the design and performance of transport solutions, and to develop an integrated transport solution in a systematic way to overcome the limitations of current transport methods. One of the primary challenges is to explore and compose a set of feasible route options under multiple constraints. Another challenge arises from the randomness inherent in wide-area networks, particularly the Internet. This randomness must be explicitly accounted for to achieve both goodput maximization and stabilization over the constructed routes by suitably adjusting the source rate in response to both network and host dynamics. The superior and robust performance of the proposed transport solution is extensively evaluated in a simulated environment and further verified through real-life implementations and deployments over both the Internet and dedicated connections under disparate network conditions, in comparison with existing transport methods.
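
    To make the route-composition challenge concrete, here is a small sketch of one common simplification: prune overlay links that violate a bandwidth constraint, then pick the lowest-delay path among what remains. The graph format, node names, and the single bandwidth/delay pair per link are assumptions for illustration; the dissertation's multi-constraint route composition is more general.

        import heapq

        def constrained_route(links, src, dst, min_bandwidth_gbps):
            """Minimum-delay route among links meeting a bandwidth floor.

            links maps (u, v) to (delay_ms, bandwidth_gbps); edges below the
            bandwidth floor are pruned and Dijkstra's algorithm finds the
            lowest-delay path over the remaining graph.
            """
            graph = {}
            for (u, v), (delay, bw) in links.items():
                if bw >= min_bandwidth_gbps:              # drop links violating the constraint
                    graph.setdefault(u, []).append((v, delay))
            queue, settled = [(0.0, src, [src])], {}
            while queue:
                cost, node, path = heapq.heappop(queue)
                if node == dst:
                    return cost, path
                if node in settled and settled[node] <= cost:
                    continue
                settled[node] = cost
                for nxt, delay in graph.get(node, []):
                    heapq.heappush(queue, (cost + delay, nxt, path + [nxt]))
            return None

        # Toy usage with hypothetical overlay nodes A..D.
        if __name__ == "__main__":
            links = {("A", "B"): (10.0, 10.0), ("B", "D"): (10.0, 10.0),
                     ("A", "C"): (5.0, 1.0), ("C", "D"): (5.0, 1.0)}
            print(constrained_route(links, "A", "D", min_bandwidth_gbps=5.0))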

    Transport Control Protocol (TCP) over Optical Burst Switched Networks

    The Transport Control Protocol (TCP) is the dominant protocol in modern communication networks, in which the issues of reliability, flow, and congestion control must be handled efficiently. This thesis studies the impact of next-generation bufferless optical burst-switched (OBS) networks on the performance of TCP congestion-control implementations (i.e., dropping-based, explicit-notification-based, and delay-based). The burst contention phenomenon caused by the bufferless nature of OBS occurs randomly and has a negative impact on dropping-based TCP, since it causes a false indication of network congestion that leads to an improper reaction to a burst drop event. In this thesis we study the impact of these random burst losses on dropping-based TCP throughput. We introduce a novel congestion control scheme for TCP over OBS networks, called Statistical Additive Increase Multiplicative Decrease (SAIMD). SAIMD maintains and analyzes a number of previous round-trip times (RTTs) at the TCP sender in order to identify the confidence with which a packet-loss event is due to network congestion. The confidence is derived by positioning the short-term RTT in the spectrum of long-term historical RTTs. The confidence corresponding to the packet loss is then taken into account by the policy developed for TCP congestion-window adjustment. For explicit-notification TCP, we propose a new TCP implementation over OBS networks, called TCP with Explicit Burst Loss Contention Notification (TCP-BCL). We examine the throughput performance of a number of representative TCP implementations over OBS networks, and analyze the TCP performance degradation due to the misinterpretation of timeout and packet-loss events. We also demonstrate that the proposed TCP-BCL scheme can counter the negative effect of OBS burst losses and is superior to conventional TCP architectures in OBS networks. For delay-based TCP, we observe that this type of TCP implementation cannot detect network congestion when deployed over typical OBS networks, since RTT fluctuations are minor. Delay-based TCP can also falsely detect network congestion when the underlying OBS network provides burst retransmission and/or deflection. Because burst retransmission and deflection schemes introduce additional delays for bursts that are retransmitted or deflected, TCP cannot determine whether a sudden delay is due to network congestion or simply to burst recovery at the OBS layer. In this thesis we study the behaviour of delay-based TCP Vegas over OBS networks, and propose a threshold-based version of TCP Vegas that is suited to the characteristics of OBS networks. The threshold-based TCP Vegas is able to distinguish increases in packet delay due to network congestion from those due to burst contention at low traffic loads. The evolution of OBS technology is highly coupled with its ability to support upper-layer applications. Without fully understanding the burst transmission behaviour and the associated impact on the TCP congestion-control mechanism, it will be difficult to fully exploit the advantages of OBS networks.
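
    The SAIMD description above lends itself to a short sketch: keep a long-term RTT history, place the recent RTTs within it to estimate how likely a loss is congestion-related, and scale the multiplicative decrease by that confidence. The clipped z-score used below is an illustrative stand-in, not the exact statistic defined in the thesis.

        from collections import deque
        from statistics import mean, pstdev

        class SAIMDWindow:
            """Sketch of a statistically modulated AIMD congestion window."""

            def __init__(self, cwnd=10.0, history=1000, recent=8):
                self.cwnd = cwnd
                self.long_rtts = deque(maxlen=history)   # long-term RTT spectrum
                self.short_rtts = deque(maxlen=recent)   # short-term RTT window

            def on_ack(self, rtt_ms):
                self.long_rtts.append(rtt_ms)
                self.short_rtts.append(rtt_ms)
                self.cwnd += 1.0 / self.cwnd             # standard additive increase

            def on_loss(self):
                if len(self.long_rtts) < 2 or not self.short_rtts:
                    confidence = 1.0                     # too little history: react like TCP
                else:
                    mu, sigma = mean(self.long_rtts), pstdev(self.long_rtts) or 1e-6
                    z = (mean(self.short_rtts) - mu) / sigma
                    confidence = min(max(z, 0.0), 1.0)   # 0 = likely random burst loss, 1 = congestion
                self.cwnd = max(self.cwnd * (1.0 - 0.5 * confidence), 1.0)

        # Toy usage: steady RTTs, then a loss judged non-congestive (cwnd is left unchanged).
        if __name__ == "__main__":
            w = SAIMDWindow()
            for _ in range(10):
                w.on_ack(20.0)
            w.on_loss()
            print(round(w.cwnd, 2))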

    Evaluation of explicit congestion control for high-speed networks

    Recently, there has been a significant surge of interest in the design and development of a new global-scale communication network that can overcome the limitations of the current Internet. Among the numerous directions of improvement in networking technology, recent efforts to achieve better flow control of network traffic have led to the emergence of several explicit-feedback congestion control methods. As a first step towards understanding these methods, we analyze the stability and transient performance of the Rate Control Protocol (RCP). We find that RCP can become unstable in certain topologies and may exhibit very high buffering requirements at routers. To address these limitations, we propose a new controller called Proportional Integral Queue Independent RCP (PIQI-RCP), prove its stability under heterogeneous delay, and use simulations to show that the new method has significantly lower transient queue lengths, better transient dynamics, and tractable stability properties. As a second step in understanding explicit congestion control, we experimentally evaluate proposed methods such as XCP, JetMax, RCP, and PIQI-RCP using Linux implementations we developed. Our experiments show that these protocols scale with increasing link capacity and round-trip propagation delay. In steady state, they have low queuing delay and an almost zero packet-loss rate. We confirm that XCP cannot achieve max-min fairness in certain topologies. We find that JetMax's link utilization drops significantly in the presence of short flows competing with a long flow, and that RCP requires a large buffer size at bottleneck routers to prevent transient packet losses and converges to steady state more slowly than the other methods. We observe that PIQI-RCP performs better than RCP in most of the experiments.
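
    RCP-style schemes have routers compute a single advertised rate from spare capacity and queue backlog. The toy feedback loop below illustrates that structure with a proportional term on spare capacity and a queue-drain term; the gains, update law, and scenario are assumptions for illustration and not the controllers analyzed or derived in the thesis.

        def rcp_style_rate_controller(capacity_mbps, n_flows, rtt_s=0.1, intervals=200,
                                      k_p=0.5, k_q=0.05):
            """Toy router-side rate feedback with spare-capacity and queue-drain terms.

            Every control interval the router measures aggregate arrivals and the
            queue backlog, then nudges the single per-flow rate it advertises so
            that the link stays full while the queue drains.
            """
            rate = 1.0            # advertised per-flow rate, started conservatively
            queue_mbit = 0.0
            for _ in range(intervals):
                arrivals = n_flows * rate                                # flows obey the advertised rate
                queue_mbit = max(queue_mbit + (arrivals - capacity_mbps) * rtt_s, 0.0)
                spare = capacity_mbps - arrivals                         # unused link capacity
                backlog = queue_mbit / rtt_s                             # rate needed to drain the queue
                rate = max(rate + (k_p * spare - k_q * backlog) * rtt_s / n_flows, 0.1)
            return rate, queue_mbit

        # Toy usage: 10 flows sharing a 1000 Mb/s link settle near 100 Mb/s each with an empty queue.
        if __name__ == "__main__":
            print(rcp_style_rate_controller(capacity_mbps=1000.0, n_flows=10))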

    Dimensioning and workload-distribution algorithms for lambda Grids

    Grids consist of a collection of computing and storage elements that may be geographically distributed but whose combined capacity one wishes to exploit. To that end, these elements must be interconnected by a network. Since many scientific applications make use of a Grid, and these applications typically process large amounts of data, it is necessary to provide a network that can transport such large data streams reliably. Optical transport networks are ideally suited to this task. Grids that use such a network are called lambda Grids. This thesis describes a framework in which the design and dimensioning of optical networks for lambda Grids can be expressed. It also discusses how workload can be distributed over a Grid once it has been dimensioned. A large part of the results was obtained through simulation, using our own Grid simulation package that focuses specifically on network and Grid elements. The design of this simulator, and the accompanying implementation choices, are therefore discussed in detail in this work.
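
    As a small illustration of the workload-distribution side of the problem (the thesis itself relies on its own simulator and algorithms), the sketch below greedily assigns each job to the Grid site with the lowest estimated finish time, counting both compute time and the time to move the input data over the optical network. Site names, job sizes, and the greedy rule are assumptions for this example.

        def assign_jobs(jobs, sites):
            """Greedy workload distribution across lambda-Grid sites.

            jobs is a list of (cpu_hours, input_gb); sites maps a site name to
            (cores, gbps_to_data_source). Each job goes to the site with the
            lowest estimated finish time: queued backlog + compute time +
            input transfer time over the lambda.
            """
            backlog = {name: 0.0 for name in sites}          # hours of work already queued per site
            plan = []
            for cpu_hours, input_gb in jobs:
                def finish(name):
                    cores, gbps = sites[name]
                    transfer_h = input_gb * 8.0 / gbps / 3600.0   # GB -> Gbit, then hours on the lambda
                    return backlog[name] + cpu_hours / cores + transfer_h
                best = min(sites, key=finish)
                backlog[best] = finish(best)
                plan.append((best, round(backlog[best], 3)))
            return plan

        # Toy usage: one site with more cores, one with a faster lambda to the data source.
        if __name__ == "__main__":
            sites = {"siteA": (64, 10.0), "siteB": (256, 1.0)}
            jobs = [(128.0, 500.0), (32.0, 2000.0), (64.0, 100.0)]
            print(assign_jobs(jobs, sites))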