
    Performance Optimization and Dynamics Control for Large-scale Data Transfer in Wide-area Networks

    Transport control plays an important role in the performance of large-scale scientific and media streaming applications involving the transfer of large data sets, media streaming, online computational steering, interactive visualization, and remote instrument control. In general, these applications have two distinct classes of transport requirements: large-scale scientific applications require high bandwidths to move bulk data across wide-area networks, while media streaming applications require stable bandwidths to ensure smooth playback. Unfortunately, the widely deployed Transmission Control Protocol is inadequate for such tasks due to its performance limitations. The purpose of this dissertation is to conduct a rigorous analytical study of the design and performance of transport solutions, and to develop an integrated transport solution that systematically overcomes the limitations of current transport methods. One of the primary challenges is to explore and compose a set of feasible route options under multiple constraints. Another challenge arises from the randomness inherent in wide-area networks, particularly the Internet; this randomness must be explicitly accounted for to achieve both goodput maximization and stabilization over the constructed routes, by suitably adjusting the source rate in response to both network and host dynamics. The superior and robust performance of the proposed transport solution is extensively evaluated in a simulated environment and further verified through real-life implementations and deployments over both Internet and dedicated connections, under disparate network conditions, in comparison with existing transport methods.
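
    The dissertation's actual rate-control algorithm is not reproduced in this abstract; the following is only a minimal sketch of the general idea of stabilizing goodput against random network and host dynamics, using a stochastic-approximation loop that perturbs the rate, measures noisy goodput, and steps along the estimated gradient with a decaying gain. The measure_goodput response curve and all constants are illustrative assumptions.

    ```python
    # A hedged sketch (not the dissertation's algorithm) of source-rate
    # stabilization: perturb the rate, observe noisy goodput, and step
    # along the finite-difference gradient with a decaying gain so the
    # measurement randomness is averaged out over rounds.
    import random

    def measure_goodput(rate_mbps):
        # Illustrative stand-in for a real path measurement: a noisy
        # concave response with a knee near 800 Mbps.
        capacity = 800.0
        noise = random.gauss(0, 10)
        return min(rate_mbps, capacity) - 0.5 * max(0.0, rate_mbps - capacity) + noise

    def stabilize_rate(rate=100.0, rounds=200, delta=5.0):
        for k in range(1, rounds + 1):
            gain = 50.0 / k  # decaying step size damps the noise
            grad = (measure_goodput(rate + delta) -
                    measure_goodput(rate - delta)) / (2 * delta)
            rate = max(1.0, rate + gain * grad)
        return rate

    print(f"settled rate: {stabilize_rate():.1f} Mbps")
    ```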

    STCP: A New Transport Protocol for High-Speed Networks

    Transmission Control Protocol (TCP) is the dominant transport protocol today and is likely to be adopted in future high-speed and optical networks. A number of works in the literature have modified or tuned the Additive Increase Multiplicative Decrease (AIMD) principle in TCP to enhance network performance. In this work, to efficiently take advantage of the high bandwidth available from high-speed and optical infrastructures, we propose Stratified TCP (STCP), which employs parallel virtual transmission layers in high-speed networks. In this technique, the AIMD principle of TCP is modified to probe the available link bandwidth more aggressively and efficiently, which in turn increases performance. Simulation results show that STCP offers a considerable improvement in performance compared with other TCP variants such as conventional TCP and Layered TCP (LTCP).
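
    The abstract does not give STCP's exact layering rules, so the sketch below only illustrates the stratified principle under stated assumptions: several virtual layers each run plain AIMD, the sender transmits at the sum of their windows, and a loss backs off a single layer, so the aggregate probes the link more aggressively than one AIMD flow would.

    ```python
    # Hedged sketch of stratified AIMD: K virtual layers, aggregate
    # window = sum of layer windows, and a loss halves only one layer,
    # so the aggregate decrease is ~1/(2K) rather than 1/2.

    class VirtualLayer:
        def __init__(self):
            self.cwnd = 1.0  # segments

        def on_ack(self):
            self.cwnd += 1.0 / self.cwnd  # additive increase: +1 segment per RTT

        def on_loss(self):
            self.cwnd = max(1.0, self.cwnd / 2)  # multiplicative decrease

    class StratifiedSender:
        def __init__(self, layers=4):
            self.layers = [VirtualLayer() for _ in range(layers)]
            self.next_loss = 0  # rotate the back-off across layers

        def aggregate_window(self):
            return sum(layer.cwnd for layer in self.layers)

        def on_ack(self):
            for layer in self.layers:  # every layer grows for the aggregate flow
                layer.on_ack()

        def on_loss(self):
            self.layers[self.next_loss].on_loss()
            self.next_loss = (self.next_loss + 1) % len(self.layers)
    ```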

    CryptoLight: An Electro-Optical Accelerator for Fully Homomorphic Encryption

    Fully homomorphic encryption (FHE) protects data privacy in cloud computing by enabling computations to occur directly on ciphertexts. Although the speed of computationally expensive FHE operations can be significantly boosted by prior ASIC-based FHE accelerators, the performance of key-switching, the dominant primitive in various FHE operations, is seriously limited by their small bit-width datapaths and frequent matrix transpositions. In this paper, we present an electro-optical (EO) FHE accelerator, CryptoLight, to accelerate FHE operations. Its 512-bit datapath supporting 510-bit residues greatly reduces the key-switching cost. We also create an in-scratchpad-memory transpose unit to transpose matrices quickly. Compared to prior FHE accelerators, on average, CryptoLight reduces the latency of various FHE applications by >94.4% and the energy consumption by >95%.
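
    CryptoLight's transpose unit is hardware, so the sketch below is only a software analogue of the blocked access pattern such an in-scratchpad unit exploits: square tiles small enough to sit in fast local storage are swapped in place, so the large matrix is not repeatedly re-streamed. The tile size and matrix contents are illustrative.

    ```python
    # Tiled in-place transpose of a square matrix: process tile pairs
    # so each pair fits in fast local storage (the "scratchpad"), and
    # swap only above-diagonal elements so each pair moves exactly once.

    def transpose_in_place(m, tile=2):
        n = len(m)  # square n x n matrix
        for bi in range(0, n, tile):
            for bj in range(bi, n, tile):
                for i in range(bi, min(bi + tile, n)):
                    # within a diagonal tile, start past the diagonal
                    for j in range(max(bj, i + 1), min(bj + tile, n)):
                        m[i][j], m[j][i] = m[j][i], m[i][j]
        return m

    mat = [[r * 4 + c for c in range(4)] for r in range(4)]
    print(transpose_in_place(mat))
    ```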

    Agile-SD: A Linux-based TCP Congestion Control Algorithm for Supporting High-speed and Short-distance Networks

    High-speed, short-distance networks are now widely deployed, and their importance is growing every day. This type of network is used in several settings, such as Local Area Networks (LANs) and Data Center Networks (DCNs), where it connects computing and storage elements to provide rapid services. Indeed, the overall performance of such networks is significantly influenced by the Congestion Control Algorithm (CCA), which suffers from bandwidth under-utilization, especially when the applied buffer regime is very small. In this paper, a novel loss-based CCA tailored for high-speed and Short-Distance (SD) networks, namely Agile-SD, is proposed. Its main contribution is the mechanism of the agility factor. Intensive simulation experiments were carried out to evaluate the performance of Agile-SD against Compound and Cubic, the default CCAs of the most commonly used operating systems. The results show that the proposed CCA outperforms the compared CCAs in terms of average throughput, loss ratio, and fairness, especially when a small buffer is applied. Moreover, Agile-SD shows lower sensitivity to buffer size and packet error rate variation, which increases its efficiency.
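
    The published Agile-SD algorithm is more elaborate than this, but a hedged sketch of the agility-factor idea might look as follows: the per-ACK window increment is scaled up while cwnd is below the window observed at the last loss, so small-buffer bandwidth is regained quickly, and the factor decays to plain AIMD probing near the old ceiling. The class and its scaling rule are illustrative assumptions, not the published code.

    ```python
    # Hedged sketch of an agility factor in a loss-based CCA: grow the
    # window faster the further it sits below the window at the last
    # loss, then fall back to ordinary AIMD probing near that ceiling.

    class AgileSender:
        def __init__(self):
            self.cwnd = 1.0
            self.wmax = 1.0  # window at the most recent loss

        def agility(self):
            if self.cwnd >= self.wmax:
                return 1.0  # at/above the old ceiling: plain AIMD
            # large when far below wmax, tending to 1 as we approach it
            return 1.0 + (self.wmax - self.cwnd) / self.wmax

        def on_ack(self):
            self.cwnd += self.agility() / self.cwnd

        def on_loss(self):
            self.wmax = self.cwnd
            self.cwnd = max(1.0, self.cwnd / 2)
    ```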

    Congestion control schemes for single and parallel TCP flows in high bandwidth-delay product networks

    In this work, we focus on congestion control mechanisms in the Transmission Control Protocol (TCP) for emerging very high bandwidth-delay product networks, and suggest several congestion control schemes for parallel and single-flow TCP. Recently, several high-speed TCP proposals have been suggested to overcome the limited throughput achievable by single-flow TCP by modifying its congestion control mechanisms. In the meantime, users overcome the throughput limitations in high bandwidth-delay product networks by using multiple parallel TCP flows, without modifying TCP itself. However, the evident lack of fairness between the high-speed TCP proposals (or parallel TCP) and existing standard TCP has increasingly become an issue. In many scenarios where flows require high throughput, such as grid computing or content distribution networks, multiple connections often go to the same or nearby destinations and tend to share long portions of their paths (and bottlenecks). In such cases, benefits can be gained by sharing congestion information. To take advantage of this additional information, we first propose a collaborative congestion control scheme for parallel TCP flows. Although the use of parallel TCP flows is an easy and effective way to achieve reliable high-speed data transfer, parallel TCP flows are inherently unfair with respect to single TCP flows. In this thesis we propose, implement, and evaluate a natural extension for aggregated aggressiveness control in parallel TCP flows. To improve the effectiveness of single TCP flows over high bandwidth-delay product networks without causing fairness problems, we suggest a new TCP congestion control scheme that effectively and fairly utilizes such networks by adaptively controlling the flow's aggressiveness according to network conditions, using a competition detection mechanism. We argue that competition detection is more appropriate than congestion detection or bandwidth estimation. We further extend the adaptive aggressiveness control mechanism and the competition detection mechanism from single flows to parallel flows, thereby achieving adaptive aggregated aggressiveness control. Our evaluations show that the resulting implementation is effective and fair. As a result, we show that single or parallel TCP flows in end-hosts can achieve high performance over emerging high bandwidth-delay product networks without requiring special support from networks or modifications to receivers.
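
    The thesis mechanism is richer, but as a minimal sketch under stated assumptions, aggregated aggressiveness control can be pictured as follows: N parallel flows to the same destination share one controller, so that together they add roughly one segment per RTT, like a single standard TCP flow, and a loss on any member halves the whole aggregate. The AggregateController class is hypothetical.

    ```python
    # Hedged sketch of aggregated aggressiveness: the per-ACK additive
    # increase is divided by the aggregate window, so the flows jointly
    # grow like one TCP flow, and a loss halves the aggregate.

    class AggregateController:
        def __init__(self, n_flows):
            self.n = n_flows
            self.windows = [1.0] * n_flows  # per-flow cwnd, in segments

        def on_ack(self, i):
            total = sum(self.windows)
            # aggregate gains ~1 segment per RTT, split across flows
            self.windows[i] += 1.0 / total

        def on_loss(self, i):
            # loss on any member halves the aggregate, spread evenly
            for j in range(self.n):
                self.windows[j] = max(1.0, self.windows[j] / 2)
    ```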

    Improved transmission control protocol congestion control technique for high bandwidth long distance networks

    The Transmission Control Protocol (TCP) is responsible for reliable communication of data in high-bandwidth, long-distance networks, and owes its reliability to its congestion control technique. Many TCP congestion control techniques have been developed for different operating systems: TCP Compound and TCP CUBIC are the current techniques used in Microsoft Windows and Linux respectively, while TCP Reno is the standard TCP congestion control technique. TCP CUBIC does not perform well in high-bandwidth, long-distance networks due to its exponential growth and insufficient reduction of the congestion window. This leads to bursts of packet losses, unfair allocation of unused link bandwidth, long convergence times, and poor TCP friendliness among competing flows. The aim of this research is to develop an improved congestion control technique based on TCP CUBIC for high-bandwidth, long-distance networks. The improved technique consists of three components: Congestion Control Technique for Slow Start (CCT-SS), Congestion Control Technique for Loss Occurrence (CCT-LO), and an Enhanced Response Function of TCP CUBIC (ERFC). CCT-SS raises the lower boundary of the congestion window, which in turn decreases the packet loss rate. CCT-LO introduces a new congestion window reduction parameter to achieve fairer and quicker allocation of link bandwidth among competing flows. ERFC reduces the average congestion window size of TCP CUBIC to improve TCP friendliness. The improved congestion control technique is developed by combining the CCT-SS, CCT-LO, and ERFC components. Network Simulator 2 is used to evaluate the performance of the proposed technique and to compare it with current and other congestion control techniques. Results show that the proposed technique outperforms the current congestion control technique by 8.4%.
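
    For reference, the response function the ERFC component modifies is the standard CUBIC window curve of RFC 8312, W(t) = C(t - K)^3 + W_max with K = (W_max(1 - beta)/C)^(1/3); the abstract does not quote the modified parameters, so only stock CUBIC is sketched here.

    ```python
    # Standard CUBIC window growth (RFC 8312): after a loss at window
    # w_max, the window follows a cubic curve that plateaus near w_max
    # before probing beyond it.

    def cubic_window(t, w_max, c=0.4, beta=0.7):
        """Window size t seconds after the last loss event.

        w_max: window at the last loss; K is the time to climb back
        to w_max if no further loss occurs.
        """
        k = ((w_max * (1 - beta)) / c) ** (1.0 / 3.0)
        return c * (t - k) ** 3 + w_max

    for t in (0.0, 1.0, 2.0, 4.0):
        print(f"t={t:.1f}s  cwnd={cubic_window(t, w_max=100):.1f}")
    ```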

    A simple stability condition for RED using TCP mean-field modeling

    Congestion on the Internet is an old problem, but still a subject of intensive research. The TCP protocol, with its AIMD (Additive Increase, Multiplicative Decrease) behavior, hides very challenging problems; one of them is understanding the interaction between a large number of users with delayed feedback. This article focuses on two modeling issues of TCP that proved important for tackling concrete scenarios when implementing the model proposed in [Baccelli McDonald Reynier 02]: first, the modeling of the maximum TCP window size, since this maximum can be reached quickly in many practical cases; second, the delay structure, since the usual Little-like formula behaves poorly when queuing delays are variable and may change the evolution of the predicted queue size dramatically, making it useless for studying drop-tail or RED (Random Early Detection) mechanisms. With the proposed TCP modeling improvements, we can examine a concrete example where RED should be used in FIFO routers instead of the default drop-tail behavior. We study mathematically the fixed points of the window-size distribution and the local stability of RED. An interesting case is when RED operates at the limit where congestion starts: there it avoids unwanted loss of bandwidth and delay variations.
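
    For concreteness, the RED mechanism whose local stability is analysed works roughly as follows: the router keeps an exponentially weighted moving average of the queue length and drops (or marks) arrivals with a probability that rises linearly between two thresholds. The sketch below uses illustrative parameter values and omits full RED's count-based probability adjustment.

    ```python
    # Basic RED drop rule: EWMA of the queue length, zero drop below
    # min_th, certain drop at/above max_th, and a linear ramp between.

    class Red:
        def __init__(self, min_th=5, max_th=15, max_p=0.1, wq=0.002):
            self.min_th, self.max_th = min_th, max_th
            self.max_p, self.wq = max_p, wq
            self.avg = 0.0  # EWMA of the instantaneous queue length

        def drop_probability(self, queue_len):
            self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
            if self.avg < self.min_th:
                return 0.0
            if self.avg >= self.max_th:
                return 1.0
            return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
    ```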

    Re-feedback: freedom with accountability for causing congestion in a connectionless internetwork

    This dissertation concerns adding resource accountability to a simplex internetwork such as the Internet, with constraints on freedom that are necessary but sufficient: freedom for applications to evolve new innovative behaviours while still responding responsibly to congestion, and freedom for network providers to structure their pricing in any way, including flat pricing. The big idea on which the research is built is a novel feedback arrangement termed 're-feedback'. A general form is defined, as well as a specific proposal (re-ECN) to alter the Internet protocol so that self-contained datagrams carry a metric of expected downstream congestion. Congestion is chosen because of its central economic role as the marginal cost of network usage. The aim is to ensure that Internet resource allocation can be controlled either by local policies or by market selection (or indeed by a local lack of any control). The current Internet architecture is designed to reveal path congestion only to end-points, not to networks. The collective actions of self-interested consumers and providers should drive Internet resource allocations towards the maximisation of total social welfare, but without visibility of a cost metric, network operators are violating the architecture to improve their customers' experience. The resulting fight against the architecture is destroying the Internet's simplicity and its ability to evolve. Although accountability with freedom is the goal, the focus is the congestion metric, and whether an incentive system is possible that assures its integrity as it is passed between parties around the system, despite proposed attacks motivated by self-interest and malice. This dissertation defines the protocol and canonical examples of accountability mechanisms. Designs are all derived from carefully motivated principles. The resulting system is evaluated by analysis and simulation against the constraints and principles originally set. The mechanisms are proven to be agnostic to specific transport behaviours, but they could not be made flow-ID-oblivious.
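
    The dissertation defines the actual re-ECN encoding; the following is only a conceptual sketch of the re-feedback bookkeeping it rests on: the sender re-inserts the whole-path congestion level learned from receiver feedback, each hop's marking is subtracted in transit, and the field observed at any point therefore estimates the congestion still downstream, which is the quantity networks can hold senders accountable for. All names and values are illustrative.

    ```python
    # Conceptual re-feedback bookkeeping (not the re-ECN wire format):
    # the declared whole-path congestion is consumed hop by hop, so the
    # remainder at any point estimates downstream congestion; a negative
    # remainder exposes a sender understating its path congestion.

    class Packet:
        def __init__(self, declared_path_congestion):
            # sender declares expected whole-path congestion up front
            self.downstream = declared_path_congestion

    def traverse(packet, hop_marking_levels):
        for level in hop_marking_levels:
            packet.downstream -= level  # each hop subtracts its marking
            print(f"after hop marking {level:.3f}: "
                  f"downstream estimate {packet.downstream:+.3f}")

    pkt = Packet(declared_path_congestion=0.05)
    traverse(pkt, hop_marking_levels=[0.01, 0.02, 0.02])
    ```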