
    SSthreshless Start: A Sender-Side TCP Intelligence for Long Fat Network

    Measurements show that 85% of TCP flows in the Internet are short-lived flows that spend most of their lifetime in the TCP startup phase. However, many previous studies indicate that the traditional TCP Slow Start algorithm does not perform well, especially in long fat networks. Two well-known problems impact Slow Start performance: the blind initial setting of the Slow Start threshold, and the aggressive increase of the probing rate during the startup phase regardless of the buffer sizes along the path. Existing efforts that tune the Slow Start threshold and/or the probing rate during the startup phase have not proved very effective, which motivates a different approach. In this paper, we present a novel TCP startup method, called threshold-less slow start or SSthreshless Start, which does not need a Slow Start threshold to operate. Instead, SSthreshless Start uses the backlog status at the bottleneck buffer to adaptively adjust the probing rate, which allows it to seize the available bandwidth more effectively. Compared to the traditional and other major modified startup methods, our simulation results show that SSthreshless Start achieves significant performance improvement during the startup phase. Moreover, SSthreshless Start scales well over a wide range of buffer sizes, propagation delays and network bandwidths, and it shows excellent friendliness when operating alongside the currently popular TCP NewReno connections. Comment: 25 pages, 10 figures, 7 tables
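    The abstract only sketches the idea at a high level. The following is a minimal Python sketch of a backlog-driven, threshold-less startup loop in this spirit: the backlog at the bottleneck is estimated from the gap between the current RTT and the minimum observed RTT. The class name, the backlog_target parameter and the growth rules are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of a threshold-less, backlog-driven TCP startup phase.
# The backlog estimate and the growth rules are illustrative assumptions,
# not the algorithm published in the paper.

class SSthreshlessLikeStartup:
    def __init__(self, backlog_target=8):
        self.cwnd = 2                         # congestion window in segments
        self.min_rtt = float("inf")           # best observed RTT (propagation delay proxy)
        self.backlog_target = backlog_target  # queued segments tolerated at the bottleneck

    def on_ack(self, rtt_sample):
        """Update cwnd from one RTT sample; called once per ACK."""
        self.min_rtt = min(self.min_rtt, rtt_sample)
        # Segments estimated to be sitting in the bottleneck buffer:
        # the fraction of the window that is queuing rather than in flight.
        backlog = self.cwnd * (rtt_sample - self.min_rtt) / rtt_sample
        if backlog < self.backlog_target / 2:
            self.cwnd += 1                    # far from filling the pipe: exponential growth
        elif backlog < self.backlog_target:
            self.cwnd += 1 / self.cwnd        # nearing the target backlog: linear growth
        else:
            self.cwnd = max(2, self.cwnd - 1) # buffer building up: back off slightly
        return self.cwnd
```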

    The Effects of Parallel Processing on Update Response Time in Distributed Database Design

    Network latency and local update processing are the most significant components of update response time in a distributed database system. Effectively designed distributed database systems can take advantage of parallel processing to minimize this time. We present a design approach that minimizes response time for update transactions in a distributed database. Response time is calculated as the sum of local processing and communication, including transmit time, queuing delays, and network latency. We demonstrate that parallelism has a significant impact on the efficiency of data allocation strategies in the design of high transaction-volume distributed databases.
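    To make the parallelism argument concrete, here is a toy Python comparison of update response time when remote updates are applied serially (costs add up) versus concurrently (response time is set by the slowest site). The per-site numbers and the additive cost model are made-up illustrations, not figures from the paper.

```python
# Toy comparison of update response time with and without parallel processing.
# The site costs below are illustrative numbers only.

def serial_response_time(local_costs, comm_costs):
    """All remote updates are applied one after another."""
    return sum(l + c for l, c in zip(local_costs, comm_costs))

def parallel_response_time(local_costs, comm_costs):
    """Remote updates proceed concurrently; response time is set by the slowest site."""
    return max(l + c for l, c in zip(local_costs, comm_costs))

# Per-site cost (ms): local update processing, and communication
# (transmit time + queuing delay + network latency) for three replicas.
local = [12, 9, 15]
comm  = [40, 55, 35]

print("serial:  ", serial_response_time(local, comm), "ms")   # 166 ms
print("parallel:", parallel_response_time(local, comm), "ms") # 64 ms
```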

    Minimum-Latency Transport Protocols with Modulo-N Incarnation Numbers

    To provide reliable connection management, a transport protocol uses 3-way handshakes in which user incarnations are identified by bounded incarnation numbers from some modulo-$N$ space. Caching schemes have been proposed to reduce the 3-way handshake to a 2-way handshake, providing the minimum latency desired for transaction-oriented applications. In this paper, we define a class of caching protocols and determine the minimum $N$ and the optimal cache residency time as a function of real-time constraints (e.g. message lifetime, incarnation creation rate, inactivity duration, etc.). The protocols use the client-server architecture and handle failures and recoveries. Both clients and servers generate incarnation numbers from a local counter (e.g. clock). These protocols assume a maximum duration for each incarnation; without this assumption, there is a very small probability ($\approx \frac{1}{N^2}$) of misinterpretation of incarnation numbers. This restriction can be overcome with some additional caching. (Also cross-referenced as UMIACS-TR-93-24.1)
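    As a rough illustration of the mechanism, the Python sketch below compares incarnation numbers under modulo-$N$ wraparound and uses a bounded-lifetime server-side cache to decide whether a new incarnation can be accepted with a 2-way rather than a 3-way handshake. The value of N, the residency time and the acceptance rule are illustrative assumptions, not the protocol parameters derived in the paper.

```python
# Sketch of modulo-N incarnation-number comparison plus a server-side cache
# that allows a 2-way (rather than 3-way) handshake. N, the cache residency
# time, and the acceptance rule are illustrative assumptions only.

import time

N = 256  # size of the incarnation-number space

def newer(a, b, n=N):
    """True if incarnation number a is 'newer' than b under modulo-n wraparound
    (serial-number arithmetic, valid while the true distance is below n/2)."""
    return a != b and (a - b) % n < n // 2

class ServerCache:
    """Remembers the last incarnation seen per client for a bounded residency time."""
    def __init__(self, residency=30.0):            # seconds; must exceed message lifetime
        self.residency = residency
        self.entries = {}                          # client_id -> (incarnation, timestamp)

    def accept_2way(self, client_id, incarnation):
        """Accept a connection request without the third handshake message
        only if the cache can prove the incarnation is new."""
        entry = self.entries.get(client_id)
        now = time.time()
        if entry is None or now - entry[1] > self.residency:
            return False                           # no usable cache entry: fall back to 3-way
        if newer(incarnation, entry[0]):
            self.entries[client_id] = (incarnation, now)
            return True
        return False
```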

    A simulation study on HTTP performance analysis in terms of its interaction with TCP

    Ankara: Department of Electrical and Electronics Engineering and Institute of Engineering and Science, Bilkent University, 1998. Thesis (Master's), Bilkent University, 1998. Includes bibliographical references (leaves 47-49). In this thesis, we have performed a simulation study on the performance of HTTP (HyperText Transfer Protocol) in terms of its interaction with TCP (Transmission Control Protocol). The latency of Internet connections can be reduced by modifying the application and transport layer protocols. For the simulations, we built models of HTTP/1.0 and HTTP/1.1 using the Network Simulator package. Four different connection mechanisms have been realized: serial, parallel, pipelined and segment-filled connections. Serial and parallel connections, the connection mechanisms of HTTP/1.0, are simulated for comparison purposes. The modification proposed in HTTP/1.1 is the pipelined connection; we obtained the segment-filled connection by modifying the pipelined case. We examined the performance of each modification and compared the simulation results with the HTTP/1.0 connections. For the traffic conditions used in the simulations, segment-filled and pipelined connections performed better in terms of effective web page retrieval rate. In addition, as a modification to TCP, we increased the initial window size and compared it with the one-segment initial window case. Changing the initial window size from 1 to 2 and 4 segments increased the performance of each connection mechanism individually. Gürkan, Deniz. M.S.
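    To illustrate why the connection mechanism matters, here is a back-of-the-envelope Python model of the time to fetch one page plus its inline objects over serial, parallel and pipelined HTTP connections. The model (one RTT per request, one RTT per TCP handshake, transfer time ignored) and all numbers are simplifying assumptions, not the ns-based simulations of the thesis, and the segment-filled variant is omitted.

```python
# Back-of-the-envelope latency model for fetching one HTML page plus n inline
# objects over serial, parallel, and pipelined HTTP connections. This is an
# illustrative simplification, not the thesis's simulation setup.

RTT = 0.1          # seconds
HANDSHAKE = RTT    # TCP three-way handshake costs roughly one RTT before data

def serial_http10(n_objects):
    """HTTP/1.0: one TCP connection per object, requests issued one at a time."""
    fetch_one = HANDSHAKE + RTT
    return fetch_one + n_objects * fetch_one        # page first, then each object in turn

def parallel_http10(n_objects, max_conns=4):
    """HTTP/1.0 with up to max_conns simultaneous connections for the objects."""
    fetch_one = HANDSHAKE + RTT
    rounds = -(-n_objects // max_conns)             # ceiling division
    return fetch_one + rounds * fetch_one

def pipelined_http11(n_objects):
    """HTTP/1.1 pipelining: one connection, all object requests sent back to back."""
    return HANDSHAKE + RTT + RTT                    # page, then one extra RTT for the batch

for n in (4, 10):
    print(n, serial_http10(n), parallel_http10(n), pipelined_http11(n))
```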

    Improving the Performance of Internet Data Transport

    With the explosion of the World Wide Web, the Internet infrastructure faces new challenges in providing high performance for data traffic. First, it must be able to provide a fair share of congested link bandwidth to every flow. Second, since web traffic is inherently interactive, it must minimize the delay for data transfer. Recent studies have shown that queue management algorithms such as Tail Drop, RED and Blue are deficient in providing high-throughput, low-delay paths for a data flow. Two major shortcomings of the current algorithms are that they allow TCP flows to become synchronized, and thus require large buffers during congestion to sustain high throughput, and that they allow unfair bandwidth usage by shorter round-trip time TCP flows. We propose algorithms using multiple queues and discard policies with hysteresis at bottleneck routers to address both issues. Using ns-2 simulations, we show that these algorithms can significantly outperform RED and Blue, especially at smaller buffer sizes. Using multiple queues raises two new concerns: scalability, and the excess memory bandwidth consumed by dropping packets that have already been queued. We propose and evaluate an architecture using Bloom filters to evenly distribute flows among queues to improve scalability. We have also developed new intelligent packet discard algorithms that discard packets on arrival and achieve performance close to that of policies that may discard packets that have already been queued. Finally, we propose better methods for evaluating the performance of fair-queueing methods. In the current literature, fair-queueing methods are evaluated based on their worst-case performance. This can exaggerate the differences among algorithms, since the worst-case behavior depends on the precise timing of packet arrivals. This work seeks to understand what happens under more typical circumstances.
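    As a loose illustration of distributing flows across multiple queues, the Python sketch below hashes a flow's 5-tuple with several independent hash functions and places arriving packets in the least-loaded candidate queue. This is a plain multiple-hash assignment, not the dissertation's actual Bloom-filter-based architecture; the queue count, hash count and placement policy are all assumptions.

```python
# Sketch of spreading TCP flows across multiple router queues using several
# independent hashes of the flow identifier. The number of queues, number of
# hashes, and the "least-loaded candidate" policy are illustrative assumptions.

import hashlib

NUM_QUEUES = 32
NUM_HASHES = 4
queue_load = [0] * NUM_QUEUES          # bytes currently enqueued per queue

def candidate_queues(flow_tuple):
    """Map a flow 5-tuple to NUM_HASHES candidate queue indices."""
    key = repr(flow_tuple).encode()
    out = []
    for i in range(NUM_HASHES):
        digest = hashlib.sha256(key + bytes([i])).digest()
        out.append(int.from_bytes(digest[:4], "big") % NUM_QUEUES)
    return out

def enqueue(flow_tuple, pkt_len):
    """Place an arriving packet in the least-loaded of the flow's candidate queues."""
    q = min(candidate_queues(flow_tuple), key=lambda idx: queue_load[idx])
    queue_load[q] += pkt_len
    return q

# Example: a flow identified by (src_ip, src_port, dst_ip, dst_port, proto)
q = enqueue(("10.0.0.1", 40000, "10.0.0.2", 80, "tcp"), 1500)
```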

    Improved algorithms for TCP congestion control

    Reliable and efficient data transfer on the Internet is an important issue. Since the late 1970s, the protocol responsible for it has been the de facto standard TCP, which has proven successful throughout the years; its self-managed congestion control algorithms have maintained the stability of the Internet for decades. However, a variety of new technologies, such as high-speed networks (e.g. fibre optics) in high-speed, long-delay settings (e.g. cross-Atlantic links) and wireless technologies, have posed many challenges to TCP congestion control algorithms. The congestion control research community has proposed solutions to most of these challenges. This dissertation adds to the existing work as follows. First, tackling the high-speed, long-delay problem of TCP, we propose enhancements to one of the existing TCP variants (part of the Linux kernel stack) and then propose our own variant, TCP-Gentle. Second, tackling the challenge of passively differentiating wireless loss from congestive loss, we propose a novel loss differentiation algorithm which quantifies the noise in packet inter-arrival times and uses this information, together with the span (the ratio of maximum to minimum packet inter-arrival times), to adapt the multiplicative decrease factor according to a predefined logical formula. Finally, extending the well-known drift model of TCP to account for wireless loss and some hypothetical cases (e.g. a variable multiplicative decrease), we undertake a stability analysis for the new version of the model.
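    The loss-differentiation idea can be illustrated with a short Python sketch: quantify how noisy the packet inter-arrival times are, compute the span (max/min inter-arrival ratio), and pick the multiplicative decrease factor accordingly. The thresholds, the coefficient-of-variation noise measure and the two-valued back-off rule are assumptions for illustration, not the dissertation's actual formula.

```python
# Sketch of a passive loss-differentiation step: estimate inter-arrival noise,
# compute the span (max/min inter-arrival time), and use both to pick the
# multiplicative-decrease factor applied on loss. Thresholds and the decision
# rule are illustrative assumptions, not the dissertation's formula.

import statistics

def multiplicative_decrease_factor(inter_arrivals,
                                   noise_threshold=0.5,
                                   span_threshold=4.0):
    """Return the cwnd back-off factor to use when a loss is detected."""
    mean = statistics.fmean(inter_arrivals)
    noise = statistics.pstdev(inter_arrivals) / mean      # coefficient of variation
    span = max(inter_arrivals) / min(inter_arrivals)      # max/min inter-arrival ratio
    if noise > noise_threshold and span > span_threshold:
        return 0.875    # loss looks wireless/random: back off gently
    return 0.5          # loss looks congestive: standard halving

# Example: jittery arrivals suggest a non-congestive (wireless) loss
samples = [0.010, 0.045, 0.012, 0.060, 0.011, 0.052]
new_cwnd = 20 * multiplicative_decrease_factor(samples)   # 20 * 0.875
```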

    A TCP-layer name service

    Master's in Electronics and Telecommunications Engineering. The Internet is the largest network deployed worldwide, but beyond that it is also, and essentially, a way of accessing and distributing knowledge and of interacting with services. Using the IP routing protocol, it is possible to address and communicate with other people, services, hosts or network-enabled devices. A usual way of establishing a dialogue between Internet endpoints is based on the TCP protocol, permitting bidirectional, reliable and fault-tolerant data exchange. In TCP a service is identified by an associated port number, which by itself has some less positive consequences. The most obvious is discovering which services are available by finding out the open port numbers (port scanning), so that attacks on service vulnerabilities can take place. The purpose of this thesis is to extend the current concept used for addressing TCP services by associating them with names, that is, to provide TCP with an in-band name resolution service. The connection establishment phase, the three-way handshake, can be extended to support simple name resolution mechanisms or even complex authentication. Security, as a means of avoiding several types of attack, was a major concern and is present in the foundations of the proposed architecture. The name resolution model can be integrated with several mechanisms for authentication/validation, implemented as logic defined within domains of interpretation (DOI). DOIs allow a flexible and extensible way of adding those mechanisms to the connection establishment procedures of TCP.
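    Purely as a hypothetical illustration of carrying a service name during connection establishment, the Python sketch below encodes a domain-of-interpretation identifier plus a service name as a small TLV that could ride in a TCP experimental option, and shows a server-side table resolving names to local services. The option kind, layout, names and resolver are all assumptions for illustration; the thesis defines its own formats and DOI logic.

```python
# Hypothetical sketch of encoding a service name (plus a domain-of-interpretation
# identifier) for use during TCP connection establishment, and a server-side
# table resolving names to local services. Layout and names are assumptions.

import struct

EXP_OPTION_KIND = 253          # TCP experimental option kind (RFC 6994 range)

def encode_name_option(doi_id: int, service_name: str) -> bytes:
    """kind(1) | length(1) | doi(2) | name bytes -- a simple TLV layout."""
    name = service_name.encode("utf-8")
    length = 4 + len(name)
    if length > 40:
        raise ValueError("TCP option space is limited to 40 bytes")
    return struct.pack("!BBH", EXP_OPTION_KIND, length, doi_id) + name

def decode_name_option(blob: bytes):
    """Return (doi_id, service_name) from a TLV built by encode_name_option."""
    kind, length, doi_id = struct.unpack("!BBH", blob[:4])
    return doi_id, blob[4:length].decode("utf-8")

# Server-side name table: the handshake resolves a name, not a well-known port.
SERVICES = {"printer.accounting": 6310, "db.replica-3": 5433}

opt = encode_name_option(doi_id=1, service_name="db.replica-3")
doi, name = decode_name_option(opt)
backend_port = SERVICES.get(name)      # None would trigger a reset or a DOI fallback
```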