
    A Survey on TCP-Friendly Congestion Control (extended version)

    New trends in communication, in particular the deployment of multicast and real-time audio/video streaming applications, are likely to increase the percentage of non-TCP traffic in the Internet. These applications rarely perform congestion control in a TCP-friendly manner, i.e., they do not share the available bandwidth fairly with applications built on TCP, such as web browsers, FTP or email clients. The Internet community strongly fears that the current evolution could lead to congestion collapse and starvation of TCP traffic. For this reason, TCP-friendly protocols are being developed that behave fairly with respect to coexisting TCP flows. In this article, we present a survey of current approaches to TCP-friendliness and discuss their characteristics. Both unicast and multicast congestion control protocols are examined, and an evaluation of the different approaches is presented.
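    The TCP-friendly rate that such protocols aim to stay at or below is commonly characterized by the TCP throughput equation (as used, for example, in TFRC, RFC 5348). The following is a minimal sketch, purely to illustrate the notion of TCP-friendliness the survey discusses; parameter names and the example values are illustrative:

```python
from math import sqrt

def tcp_friendly_rate(s, rtt, p, t_rto=None, b=1):
    """Approximate TCP-compatible sending rate in bytes per second.

    s     -- segment size in bytes
    rtt   -- round-trip time in seconds
    p     -- loss event rate (0 < p <= 1)
    t_rto -- retransmission timeout, commonly approximated as 4 * rtt
    b     -- packets acknowledged per ACK (1 without delayed ACKs)

    This is the well-known TCP throughput equation (Padhye et al.), also
    used by TFRC (RFC 5348): a TCP-friendly flow keeps its rate at or
    below what a conforming TCP flow would achieve under the same loss
    and delay conditions.
    """
    if t_rto is None:
        t_rto = 4 * rtt
    denom = (rtt * sqrt(2 * b * p / 3)
             + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom

# Example: 1460-byte segments, 100 ms RTT, 1% loss -> roughly 160 KB/s
print(tcp_friendly_rate(s=1460, rtt=0.1, p=0.01))
```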

    A Model-based Scalable Reliable Multicast Transport Protocol for Satellite Networks

    In this paper, we design a new scalable reliable multicast transport protocol for satellite networks (RMT). This paper extends the work presented in [18]. The proposed protocol does not require inspection or interception of packets at intermediate nodes, nor any modification of the satellites, which may be bent-pipe or onboard-processing satellites. The proposed protocol is divided into two parts: an error control part and a congestion control part. In the error control part, we address feedback implosion and improve scalability by using a new hybrid of ARQ (Automatic Repeat reQuest) and adaptive forward error correction (AFEC). The AFEC algorithm adapts the proactive redundancy level to the number of receivers and the average packet loss rate, so that the number of transmissions and the number of feedback signals are virtually independent of the number of receivers. Therefore, the wireless link utilization of the proposed protocol is virtually independent of the number of multicast receivers. In the congestion control part, the proposed protocol employs a new window-based congestion control scheme optimized for satellite networks. To be fair to other traffic, the congestion control mimics the congestion control of the well-known Transmission Control Protocol (TCP), which relies on the “packet conservation” principle. To reduce feedback implosion, only a few receivers, called ACKers, are selected to report their receiving status. In addition, to avoid the “drop-to-zero” problem, we use a new, simple wireless loss filter algorithm. This loss filter algorithm significantly reduces the probability of the congestion window being unnecessarily reduced because of common wireless losses. Furthermore, to improve achievable throughput, we employ slow start threshold adaptation based on estimated bandwidth. The congestion control also deals with variations in network conditions by dynamically electing ACKers.
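    The abstract only summarizes the AFEC algorithm. As a rough, hypothetical illustration of the general idea (scaling proactive redundancy with the number of receivers and the average loss rate so that nearly all receivers can decode a block without requesting retransmissions), one could pick the smallest repair count that meets a decoding-probability target. The sketch below is an assumption-laden illustration, not the algorithm from the paper:

```python
from math import comb

def block_decode_prob(k, r, p):
    """Probability that one receiver recovers a block of k data packets
    protected by r repair packets, assuming an ideal (MDS-like) erasure
    code and independent losses with rate p: the block is decodable as
    long as at most r of the k + r packets are lost."""
    n = k + r
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(r + 1))

def proactive_redundancy(k, receivers, loss_rate, target=0.99, r_max=64):
    """Smallest number of proactive repair packets r such that, under an
    independence assumption, all receivers decode the block with
    probability at least `target`.  Hypothetical sketch of how an AFEC
    scheme could scale redundancy with receiver count and loss rate."""
    for r in range(r_max + 1):
        if block_decode_prob(k, r, loss_rate) ** receivers >= target:
            return r
    return r_max

# More receivers or higher loss -> more proactive redundancy, fewer retransmissions.
print(proactive_redundancy(k=32, receivers=10, loss_rate=0.02))
print(proactive_redundancy(k=32, receivers=1000, loss_rate=0.02))
```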

    Poor Man's Content Centric Networking (with TCP)

    A number of different architectures have been proposed in support of data-oriented or information-centric networking. Besides a similar vision, they share the need to design a new networking architecture. We present an incrementally deployable approach to content-centric networking based upon TCP. Content-aware senders cooperate with probabilistically operating routers for scalable content delivery (to unmodified clients), effectively supporting opportunistic caching for time-shifted access as well as de-facto synchronous multicast delivery. Our approach is application protocol-independent and provides support beyond HTTP caching or managed CDNs. We present our protocol design along with a Linux-based implementation and some initial feasibility checks.
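    The abstract does not spell out the mechanism; the following is a purely illustrative sketch of probabilistic on-path caching in general. The class name, probability, and capacity are assumptions for illustration, not the paper's design:

```python
import random

class ProbabilisticCachingRouter:
    """Toy model of a router that caches passing content objects with a
    fixed probability and answers later requests from its cache.  This
    illustrates probabilistic on-path caching in general, not the
    TCP-based mechanism described in the paper."""

    def __init__(self, cache_prob=0.1, capacity=1024):
        self.cache_prob = cache_prob
        self.capacity = capacity
        self.cache = {}  # content name -> payload

    def on_request(self, name):
        """Return cached content if present, otherwise None (forward upstream)."""
        return self.cache.get(name)

    def on_data(self, name, payload):
        """Cache a passing content object with probability cache_prob."""
        if len(self.cache) < self.capacity and random.random() < self.cache_prob:
            self.cache[name] = payload
```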

    Issues in designing transport layer multicast facilities

    Multicasting denotes a facility in a communications system for providing efficient delivery from a message's source to some well-defined set of locations using a single logical address. While modern network hardware supports multidestination delivery, first-generation Transport Layer protocols (e.g., the DoD Transmission Control Protocol (TCP) (15) and ISO TP-4 (41)) did not anticipate the changes over the past decade in underlying network hardware, transmission speeds, and communication patterns that have enabled and driven the interest in reliable multicast. Much recent research has focused on integrating the underlying hardware multicast capability with the reliable services of Transport Layer protocols. Here, we explore the communication issues surrounding the design of such a reliable multicast mechanism. Approaches and solutions from the literature are discussed, and four experimental Transport Layer protocols that incorporate reliable multicast are examined.

    IETF standardization in the field of the Internet of Things (IoT): a survey

    Smart embedded objects will become an important part of what is called the Internet of Things. However, the integration of embedded devices into the Internet introduces several challenges, since many of the existing Internet technologies and protocols were not designed for this class of devices. In the past few years, there have been many efforts to enable the extension of Internet technologies to constrained devices. Initially, this resulted in proprietary protocols and architectures. Later, the integration of constrained devices into the Internet was embraced by the IETF, moving towards standardized IP-based protocols. In this paper, we briefly review the history of integrating constrained devices into the Internet, followed by an extensive overview of IETF standardization work in the 6LoWPAN, ROLL and CoRE working groups. This is complemented with a broad overview of related research results that illustrate how this work can be extended or used to tackle other problems, and with a discussion of open issues and challenges. As such, the aim of this paper is twofold: apart from giving readers solid insights into IETF standardization work on the Internet of Things, it also aims to encourage readers to further explore the world of Internet-connected objects, pointing to future research opportunities.

    TCP FTAT (Fast Transmit Adaptive Transmission): a New End-To-End Congestion Control Algorithm

    Congestion control in TCP is the algorithm that controls the allocation of network resources among a number of competing users sharing a network. The nature of computer networks, which from the TCP protocol's perspective can be described as unknown resources serving unknown user traffic, means that the congestion control algorithm would ideally require explicit feedback from the network on which it operates. Unfortunately, this is not how TCP works: one of the fundamental principles of the TCP protocol is to be end-to-end, so that it can operate on any network, which may consist of hundreds of routers and links with varying bandwidths and capacities. This requires the congestion control algorithm to be adaptive by nature, to adapt to the network environment under any given circumstances, and to obtain the required feedback implicitly through observation and measurements. In this thesis we propose a new end-to-end TCP congestion control algorithm that provides performance improvements over existing TCP congestion control algorithms in computer networks in general, and an even greater improvement in wireless and/or high bandwidth-delay product networks.
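    The abstract does not describe FTAT's actual rules, so the following is a generic, hypothetical sketch of the class of controllers it refers to: an end-to-end sender that derives its feedback implicitly from ACK arrivals and RTT measurements, here with a Westwood-style bandwidth estimate standing in for the real algorithm. All constants and update rules are assumptions for illustration:

```python
class EndToEndCongestionControl:
    """Minimal sketch of an end-to-end, measurement-driven congestion
    controller (implicit feedback via observed ACKs and RTTs).  The
    constants and update rules are generic illustrations, not the
    TCP FTAT algorithm itself."""

    def __init__(self, mss=1460):
        self.mss = mss
        self.cwnd = 1 * mss          # congestion window in bytes
        self.ssthresh = 64 * mss
        self.min_rtt = None          # lowest RTT seen (propagation-delay estimate)
        self.bw_est = 0.0            # smoothed delivery-rate estimate (bytes/s)

    def on_ack(self, acked_bytes, rtt):
        """Implicit feedback: each ACK yields a delivery-rate sample."""
        self.min_rtt = rtt if self.min_rtt is None else min(self.min_rtt, rtt)
        sample = acked_bytes / rtt
        self.bw_est = 0.9 * self.bw_est + 0.1 * sample       # EWMA of measured bandwidth
        if self.cwnd < self.ssthresh:
            self.cwnd += acked_bytes                          # slow start: exponential growth
        else:
            self.cwnd += self.mss * acked_bytes / self.cwnd   # congestion avoidance

    def on_loss(self):
        """On loss, fall back to the measured bandwidth-delay product
        (Westwood-style) instead of blindly halving the window."""
        bdp = self.bw_est * (self.min_rtt or 0)
        self.ssthresh = max(2 * self.mss, bdp)
        self.cwnd = self.ssthresh
```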

    A Design and Prototyping of In-Network Processing Platform to Enable Adaptive Network Services

    The explosive growth in usage, along with a greater diversification of communication technologies and applications, requires the Internet to manage further scalability and diversity, calling for more adaptive and flexible schemes for sharing network resources. Especially when a number of large-scale distributed applications concurrently share resources, comprehensive and effective use of network, computation, and storage resources is needed from the viewpoint of information processing performance. Therefore, a reconsideration of the coordination and partitioning of functions between networks (providers) and applications (users) has become a recent research topic. In this paper, we first address the need for, and discuss the feasibility of, adaptive network services realized by introducing special processing nodes inside the network. Then, a design and an implementation of an advanced relay node platform are presented, by which we can easily prototype and test a variety of advanced in-network processing on Linux and off-the-shelf PCs. A key feature of the proposed platform is that the integration between kernel and userland spaces allows various advanced relay processing functions to be developed easily and quickly. Finally, on top of the advanced relay node platform, we implement and test an adaptive packet compression scheme that we previously proposed. The experimental results show the feasibility of both the developed platform and the proposed adaptive packet compression.
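    The adaptive packet compression scheme is only mentioned in the abstract; a plausible, hypothetical sketch of the core decision (compress a payload only when the transmission time saved is expected to exceed the compression cost) might look like the following. The function name, threshold, and link-rate parameter are assumptions for illustration:

```python
import time
import zlib

def maybe_compress(payload, link_rate_bps, min_gain=0.05):
    """Compress a packet payload only if doing so is expected to pay off.

    Hypothetical illustration of adaptive in-network packet compression:
    compare the transmission time saved by the size reduction against the
    CPU time spent compressing, and skip compression when it does not
    help (e.g. already-compressed or very small payloads).
    """
    start = time.perf_counter()
    compressed = zlib.compress(payload, level=1)   # fast compression level
    cpu_cost = time.perf_counter() - start

    saved_bits = (len(payload) - len(compressed)) * 8
    time_saved = saved_bits / link_rate_bps

    if time_saved > cpu_cost and len(compressed) < (1 - min_gain) * len(payload):
        return compressed, True
    return payload, False

# Example: on a 10 Mbit/s link, highly redundant payloads are worth compressing.
data = b"sensor-reading:23.5;" * 100
pkt, was_compressed = maybe_compress(data, link_rate_bps=10e6)
print(was_compressed, len(data), len(pkt))
```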