
    Evaluation of a replicated DBMS using network simulation

    Replication of database management systems (DBMS) is a fundamental mechanism for the reliability of information systems. In geographically distributed systems it is also useful for disaster recovery and ubiquitous data availability. A recently proposed replication technique is the Database State Machine (DBSM), which promises to combine reliability with high performance by taking advantage of group communication systems. The performance of this technique has, however, been evaluated only on overly simple or unrealistic communication networks and with unrepresentative workloads. This paper proposes a rigorous evaluation of an implementation of this replication technique, combining a realistic simulation model of communication networks with workload generation that follows the benchmarks defined by the Transaction Processing Performance Council (TPC). The results confirm the appeal of this technique on local area networks, but show that its performance is constrained by the characteristics of the network and of the workload. Funded by FCT under the ESCADA project - POSI-CHS-33792-
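    The DBSM follows the deferred-update replication model: transactions execute locally at one replica, their readsets and writesets are atomically broadcast to all replicas, and each replica applies the same deterministic certification test, so all replicas commit or abort identically without further coordination. The sketch below illustrates only that certification step; the class names and the exact conflict rule are illustrative assumptions, not the paper's code, and the atomic-broadcast layer is assumed to exist.

        # Minimal sketch of DBSM-style certification (illustrative names).
        class Transaction:
            def __init__(self, readset, writeset, start_version):
                self.readset = set(readset)          # items the transaction read
                self.writeset = dict(writeset)       # item -> new value
                self.start_version = start_version   # DB version at transaction start

        class Replica:
            """Certifies transactions in atomic-broadcast delivery order;
            identical input order yields identical commit/abort decisions."""
            def __init__(self):
                self.db = {}
                self.version = 0
                self.history = []                    # (commit_version, written_keys)

            def certify_and_apply(self, txn):
                # Abort if a transaction that committed after txn's snapshot
                # wrote any item that txn read (stale read).
                for ver, keys in self.history:
                    if ver > txn.start_version and keys & txn.readset:
                        return False                 # abort, at every replica
                self.version += 1
                self.history.append((self.version, set(txn.writeset)))
                self.db.update(txn.writeset)
                return True                          # commit, at every replica

    Because the decision depends only on the totally ordered delivery sequence, no replica needs to ask any other how to vote.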

    Smartacking: Improving TCP Performance from the Receiving End

    We present smartacking, a technique that improves performance of Transmission Control Protocol (TCP) via adaptive generation of acknowledgments (ACKs) at the receiver. When the bottleneck link is underutilized, the receiver transmits an ACK for each delivered data segment and thereby allows the connection to acquire the available capacity promptly. When the bottleneck link is at its capacity, the smartacking receiver sends ACKs with a lower frequency, reducing the control traffic overhead and slowing down the congestion window growth to utilize the network capacity more effectively. To promote quick deployment of the technique, our primary implementation of smartacking modifies only the receiver. This implementation estimates the sender's congestion window using a novel algorithm of independent interest. We also consider different implementations of smartacking where the receiver relies on explicit assistance from the sender or network. Our experiments for a wide variety of settings show that TCP performance can substantially benefit from smartacking, especially in environments with low levels of connection multiplexing on bottleneck links. Whereas our extensive evaluation reveals no scenarios where the technique undermines the overall performance, we believe that smartacking represents a promising direction for enhancing TCP.
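    A minimal sketch of the receiver-side policy the abstract describes, assuming some estimate of bottleneck utilization is available (in the paper this comes from their congestion-window estimation algorithm, which is not reproduced here); the function names, the threshold, and the stretch factor are illustrative assumptions.

        # Sketch of a smartacking-style ACK schedule (illustrative values).
        ACK_EVERY_SEGMENT = 1    # ACK each segment: fastest window growth
        ACK_EVERY_FOURTH = 4     # sparse ACKs: less overhead, slower growth

        def ack_interval(utilization, threshold=0.95):
            """Choose how many data segments to cover per ACK, given an
            estimated bottleneck utilization in [0, 1]."""
            if utilization < threshold:
                # Underutilized link: ACK every segment so the sender's
                # congestion window opens as fast as possible.
                return ACK_EVERY_SEGMENT
            # Saturated link: stretch ACKs to reduce control traffic and
            # damp further congestion window growth.
            return ACK_EVERY_FOURTH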

    Self-healing and SDN: bridging the gap

    Achieving high programmability has become an essential aim of network research due to ever-increasing Internet traffic. Software-Defined Networking (SDN) is an emerging architecture aimed at addressing this need. However, maintaining accurate knowledge of the network after a failure is one of the largest challenges in SDN. Motivated by this reality, this paper focuses on the use of self-healing properties to boost SDN robustness. Unlike traditional schemes, this approach is not based on proactively configuring multiple (and memory-intensive) backup paths in each switch, nor on performing reactive and time-consuming route computation at the controller. Instead, control paths are quickly recovered by local switch actions and subsequently optimized using the controller's global knowledge. The results show that the proposed approach recovers the control topology effectively, in terms of both time and message load, over a wide range of generated networks, thereby avoiding the scalability issues of traditional fault recovery strategies.
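    The recovery scheme the abstract outlines has two phases, sketched below under stated assumptions: the topology representation, the function names, and the use of plain BFS for the controller's path computation are illustrative, not taken from the paper.

        from collections import deque

        # Phase 1: immediate local failover at the switch, no controller needed.
        def local_failover(live_neighbors, failed_neighbor):
            """Redirect control traffic to any still-alive neighbor right away."""
            candidates = [n for n in live_neighbors if n != failed_neighbor]
            return candidates[0] if candidates else None   # None: switch isolated

        # Phase 2: the controller later replaces the ad-hoc path with a
        # shortest control path computed over its global topology view.
        def controller_shortest_path(adjacency, switch, controller):
            parent, frontier = {switch: None}, deque([switch])
            while frontier:
                node = frontier.popleft()
                if node == controller:                     # rebuild the path
                    path = []
                    while node is not None:
                        path.append(node)
                        node = parent[node]
                    return path[::-1]
                for nxt in adjacency.get(node, ()):
                    if nxt not in parent:
                        parent[nxt] = node
                        frontier.append(nxt)
            return None                                    # controller unreachable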

    An Experimental Investigation of TCP Performance in High Bandwidth-Delay Product Paths.

    The performance of the Internet is determined not only by the network and hardware technologies that underlie it, but also by the software protocols that govern its use. In particular, the TCP transport protocol is responsible for carrying the great majority of traffic in the current Internet, including web traffic, email, file transfers, and music and video downloads. TCP provides two main functions. First, it detects and retransmits packets lost during a transfer, thereby providing a reliable transport service to higher-layer applications. Second, it enforces congestion control: it seeks to match the rate at which packets are injected into the network to the available network capacity. A particular aim here is to avoid so-called congestion collapse, prevalent in the late 1980s prior to the inclusion of congestion control functionality in TCP. Over the last decade or so, link speeds within networks have increased by several orders of magnitude. While the TCP congestion control algorithm has proved remarkably successful, it is now recognised that its performance is poor on paths with a high bandwidth-delay product, e.g. see [13, 8, 14, 26, 12] and references therein. With the increasing prevalence of high-speed links, this issue is becoming of widespread concern. This is reflected, for example, in the fact that the Linux operating system now employs an experimental algorithm called BIC-TCP [26], while Microsoft is actively studying new algorithms such as Compound-TCP [25]. While a number of proposals have been made to modify the TCP congestion control algorithm, all of these are still experimental and pending evaluation, as they change congestion control in new and significant ways and their effects on the network are not well understood. In fact, the basic properties of networks employing these algorithms may be very different from those of networks of standard TCP flows. The aim of this thesis is to address, in part, this basic observation.
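    A back-of-the-envelope calculation makes the problem concrete (the numbers below are illustrative, not from the thesis). On a 10 Gb/s path with a 100 ms round-trip time and 1500-byte segments, the window needed to fill the pipe, and the time standard TCP takes to rebuild it after a single loss, are roughly:

        % Bandwidth-delay product in segments, then AIMD recovery time:
        % after a loss, cwnd is halved and grows by one segment per RTT.
        \[ W = \frac{C \cdot \mathit{RTT}}{\mathit{MSS}}
             = \frac{10^{10}\,\mathrm{b/s} \times 0.1\,\mathrm{s}}{1500 \times 8\,\mathrm{b}}
             \approx 83{,}000 \ \text{segments} \]
        \[ t_{\mathrm{recover}} \approx \frac{W}{2} \cdot \mathit{RTT}
             \approx 41{,}500 \times 0.1\,\mathrm{s}
             \approx 4{,}150\,\mathrm{s} \]

    Over an hour to return to full utilization after one lost packet is what motivates alternatives such as the BIC-TCP and Compound-TCP algorithms cited above.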

    Measuring TCP Congestion Control Behaviour in the Internet

    The Internet is constantly changing and evolving. In this thesis the behaviour of various aspects of the TCP implementations underlying the Internet is measured. These include the Initial Congestion Window (ICW), the type of reaction to loss, Selective Acknowledgment (SACK) support, and Explicit Congestion Notification (ECN) support. We develop a new method to measure the congestion window reduction triggered by loss inferred from three duplicate ACKs. In a previous study 94% of classified servers showed window halving, whereas we found that 50% of classified servers exhibited Binary Increase Congestion control (BIC) or CUBIC-style behaviour, a departure from the Request For Comments (RFC) requirement to reduce the congestion window by at least 50%. ECN is predicted to improve Internet performance, but previous studies revealed low support for it (0.5%) and a high rate of ECN connection failure due to middlebox interference (9%); in this thesis we show a steady increase over time in ECN implementation and support (from 7.2% to 10.3%). ECN testing of webservers with globally routable IPv6 addresses showed a higher success rate (21.9%). Analysis of congestion control behaviour such as Tahoe, Reno and New Reno showed New Reno dominating more strongly than before, increasing from 35% to 70% of popular webservers. SACK sending analysis revealed that 45% of popular webservers implement it properly, compared to 18% in earlier studies. SACK receiving analysis showed better results than earlier studies, with the success rate increasing from 64.7% to 81.1%. For both SACK studies, webservers with globally routable IPv6 addresses showed a higher success rate when errors remained low. Analysis of ICW indicates that 75% of popular webservers implement the older regime of an initial congestion window of two or fewer segments, compared to 96% in previous studies; the newer regime of an ICW of three or four segments, depending on segment size, was implemented by 20%. We see from these results that RFCs do affect TCP implementation, but change can be slow. However, implementation and support for modern TCP features is increasing.
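    The ICW measurement can be illustrated with the classic TBIT-style probing idea: request an object, deliberately withhold every ACK, and count the distinct segments the server sends before its retransmission timeout fires, since without ACKs it can emit only its initial window. The toy function below takes observed sequence numbers directly; whether the thesis uses exactly this procedure is an assumption.

        def measure_icw(observed_seqs):
            """observed_seqs: data-segment sequence numbers seen after the
            request, with all ACKs withheld. The first repeated sequence
            number is the server's RTO retransmission and marks the end
            of the initial congestion window."""
            seen = set()
            for seq in observed_seqs:
                if seq in seen:              # retransmission: stop counting
                    break
                seen.add(seq)
            return len(seen)                 # ICW in segments

        # Example: server sends 3 segments, times out, retransmits the first.
        assert measure_icw([1, 1461, 2921, 1]) == 3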

    Implementation and performance evaluation of explicit congestion control algorithms

    Internship carried out at INESC-Porto, supervised by Eng.º Filipe Lameiro Abrantes. Integrated master's thesis in Electrical and Computer Engineering, Faculty of Engineering, University of Porto. 200

    TCP operation: problems and improvements

    This article gives a detailed description of several versions of the flow and congestion control algorithms in TCP, in the chronological (and "logical") order of their appearance. Using simple simulation examples, we illustrate the shortcomings of each version and the problems reported in several publications [13, 14], as well as those we observed ourselves [3]. We propose solutions to the problems identified in the New-Reno algorithm, which mainly concern the unnecessary retransmission of packets. We also propose a method for detecting the loss of a retransmission using duplicate acknowledgments. Finally, we analyse some new mechanisms introduced by TCP Vegas and propose improvements to them (slow-start).
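    The abstract does not spell out the duplicate-ACK method, so the sketch below is only one plausible reading of the idea: if duplicate ACKs for a retransmitted sequence number keep arriving well after the retransmission has had time to reach the receiver, the retransmission itself was probably lost. All names and the margin factor are assumptions.

        def retransmission_lost(retx_time, dupack_times, srtt, margin=1.5):
            """retx_time: when the segment was retransmitted; dupack_times:
            arrival times of duplicate ACKs for that sequence number;
            srtt: smoothed round-trip-time estimate."""
            deadline = retx_time + margin * srtt
            # A duplicate ACK arriving after the deadline was triggered by
            # a segment sent after the retransmission, yet the receiver is
            # still asking for the same data: the retransmission was lost.
            return any(t > deadline for t in dupack_times)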

    Mandatory security and performance of services in Asbestos

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 61-66). This thesis presents the design and implementation of several system services, including network access and database storage, on a new operating system design, Asbestos. Using the security mechanism provided by Asbestos, Asbestos labels, these services support the construction of secure Web applications. The network and database services serve as the foundation for a Web server that supports mandatory security policies, such that even a compromised Web application cannot improperly disclose private data. This approach frees Web application developers from worrying about flawed applications, provided they are willing to trust the underlying services. By David Patrick Ziegler. M.Eng.
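    To give a feel for how Asbestos labels enforce such mandatory policies, here is a much-simplified model: a label maps taint handles to levels, and a message may flow only where the sender's level never exceeds the receiver's clearance. The real Asbestos rules (levels 0-3, the star privilege, per-process defaults) are richer; the default levels and example below are a simplification, not the system's actual API.

        DEFAULT_SEND, DEFAULT_RECV = 1, 2    # simplified Asbestos-like defaults

        def can_send(send_label, recv_label):
            """Allow a message only if, for every taint handle, the sender's
            level does not exceed the receiver's clearance level."""
            handles = set(send_label) | set(recv_label)
            return all(send_label.get(h, DEFAULT_SEND)
                       <= recv_label.get(h, DEFAULT_RECV)
                       for h in handles)

        # A worker tainted with one user's data (level 3) cannot write to
        # an untainted network connection, so even a compromised worker
        # cannot leak that user's data.
        worker  = {"alice_data": 3}
        network = {}                          # default clearance only
        assert not can_send(worker, network)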