
    Maximum Production Of Transmission Messages Rate For Service Discovery Protocols

    Get PDF
    Minimizing the number of dropped User Datagram Protocol (UDP) messages in a network is regarded as a challenge by researchers. This issue represents a serious problem for many protocols, particularly those that depend on sending messages as part of their strategy, such as service discovery protocols. This paper proposes and evaluates an algorithm to predict the minimum period of time required between two or more consecutive messages and suggests the minimum queue sizes for the routers, to manage the traffic and minimise the number of dropped messages caused by congestion, queue overflow, or both. The algorithm has been applied to the Universal Plug and Play (UPnP) protocol using the ns-2 simulator. It was tested with the routers connected in two configurations, centralized and decentralized. The message length and the bandwidth of the links among the routers were taken into consideration. The results show an improvement in the number of dropped messages among the routers
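
    As a rough illustration of the timing and queue-sizing arithmetic such an algorithm must perform, the sketch below assumes a simple store-and-forward bottleneck model; the function names and the fluid-drain calculation are illustrative, not the paper's algorithm.

```python
# Hypothetical sketch: space consecutive UDP announcements so a router queue
# of a given size does not overflow.  The store-and-forward model and all
# names are assumptions for illustration, not the paper's algorithm.

def min_message_interval(msg_len_bytes: int, link_bw_bps: float) -> float:
    """Serialization time of one message on the bottleneck link (seconds)."""
    return (msg_len_bytes * 8) / link_bw_bps

def min_queue_size(burst_msgs: int, msg_len_bytes: int,
                   send_interval_s: float, link_bw_bps: float) -> int:
    """Queue slots needed so a burst of spaced messages drains without loss."""
    burst_window = (burst_msgs - 1) * send_interval_s            # arrival span
    drained = burst_window * link_bw_bps / (msg_len_bytes * 8)   # sent meanwhile
    return max(1, burst_msgs - int(drained))

if __name__ == "__main__":
    bw, msg = 10e6, 512          # 10 Mb/s bottleneck, 512-byte announcement
    gap = min_message_interval(msg, bw)
    print(f"send no faster than one message every {gap * 1e3:.3f} ms")
    print("queue slots for a 20-message burst:", min_queue_size(20, msg, gap, bw))
```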

    System Support for Bandwidth Management and Content Adaptation in Internet Applications

    Full text link
    This paper describes the implementation and evaluation of an operating system module, the Congestion Manager (CM), which provides integrated network flow management and exports a convenient programming interface that allows applications to be notified of, and adapt to, changing network conditions. We describe the API by which applications interface with the CM, and the architectural considerations that factored into the design. To evaluate the architecture and API, we describe our implementations of TCP; a streaming layered audio/video application; and an interactive audio application using the CM, and show that they achieve adaptive behavior without incurring much end-system overhead. All flows, including TCP, benefit from the sharing of congestion information, and applications are able to incorporate new functionality such as congestion control and adaptive behavior. Comment: 14 pages, appeared in OSDI 2000
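
    The callback style of interface the abstract describes can be pictured with a toy example; the class and method names below are hypothetical and do not reproduce the CM's actual API.

```python
# Toy callback-style congestion interface in the spirit of the CM described
# above.  All names are hypothetical; this is not the CM's actual API.

from typing import Callable, List

class CongestionManager:
    def __init__(self, initial_rate_bps: float = 64_000.0):
        self._rate = initial_rate_bps
        self._listeners: List[Callable[[float], None]] = []

    def register(self, on_rate_change: Callable[[float], None]) -> None:
        """An application asks to be notified whenever the shared rate changes."""
        self._listeners.append(on_rate_change)

    def feedback(self, loss: bool) -> None:
        """Aggregate congestion feedback shared by all flows to one destination."""
        self._rate = self._rate / 2 if loss else self._rate + 8_000  # AIMD-like
        for notify in self._listeners:
            notify(self._rate)

def layered_video_app(rate_bps: float) -> None:
    """A layered sender adapting its number of layers to the shared rate."""
    layers = max(1, int(rate_bps // 32_000))
    print(f"rate {rate_bps / 1000:.0f} kb/s -> sending {layers} layer(s)")

cm = CongestionManager()
cm.register(layered_video_app)
cm.feedback(loss=False)   # rate grows, the app may add a layer
cm.feedback(loss=True)    # rate halves, the app sheds layers
```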

    A Survey on TCP-Friendly Congestion Control (extended version)

    Full text link
    New trends in communication, in particular the deployment of multicast and real-time audio/video streaming applications, are likely to increase the percentage of non-TCP traffic in the Internet. These applications rarely perform congestion control in a TCP-friendly manner, i.e., they do not share the available bandwidth fairly with applications built on TCP, such as web browsers, FTP clients, or email clients. The Internet community strongly fears that the current evolution could lead to a congestion collapse and starvation of TCP traffic. For this reason, TCP-friendly protocols are being developed that behave fairly with respect to co-existent TCP flows. In this article, we present a survey of current approaches to TCP-friendliness and discuss their characteristics. Both unicast and multicast congestion control protocols are examined, and an evaluation of the different approaches is presented
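
    A common concrete reference point for TCP-friendliness is the simplified TCP throughput equation, T ≈ 1.22·MSS / (RTT·√p); the sketch below simply evaluates it for assumed parameter values.

```python
# Minimal sketch of the simplified TCP-friendly rate equation used to bound
# non-TCP senders: T ~= 1.22 * MSS / (RTT * sqrt(p)).  The example parameter
# values are illustrative.

import math

def tcp_friendly_rate(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Upper bound (bytes/s) that a TCP-friendly flow should not exceed."""
    if loss_rate <= 0:
        return float("inf")       # no observed loss: the equation does not bind
    return (1.22 * mss_bytes) / (rtt_s * math.sqrt(loss_rate))

# 1460-byte segments, 100 ms RTT, 1% loss -> roughly 178 kB/s
print(f"{tcp_friendly_rate(1460, 0.100, 0.01):.0f} bytes/s")
```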

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Get PDF
    Datacenters provide cost-effective and flexible access to scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements. This includes user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters and not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks that connect geographically dispersed datacenters, which have been receiving increasing attention recently and pose interesting and novel research problems. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
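
    Among the traffic-shaping building blocks mentioned above, a token bucket is a simple representative; the parameters and per-packet admission check below are illustrative rather than taken from the paper.

```python
# Minimal token-bucket traffic shaper.  Rates, burst size, and the
# drop-on-empty policy are illustrative assumptions.

import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8          # refill rate in bytes per second
        self.capacity = burst_bytes       # maximum burst the bucket can absorb
        self.tokens = float(burst_bytes)  # start full
        self.last = time.monotonic()

    def allow(self, pkt_bytes: int) -> bool:
        """Admit the packet if enough tokens have accumulated, else drop/queue it."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        return False

shaper = TokenBucket(rate_bps=100e6, burst_bytes=64_000)  # 100 Mb/s, 64 kB burst
print(shaper.allow(1500))   # True: the bucket starts full
```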

    Analysis domain model for shared virtual environments

    Get PDF
    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions that share great functional overlap. However, there is little system interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex raising difficult challenges to the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first contribution is a broad domain analysis of shared virtual environments, which enables developers to have a better understanding of the whole rather than the part(s). The second contribution is a reference domain model for discussing and describing solutions - the Analysis Domain Model

    Scalable reliable on-demand media streaming protocols

    Get PDF
    This thesis considers the problem of delivering streaming media, on-demand, to potentially large numbers of concurrent clients. The problem has motivated the development in prior work of scalable protocols based on multicast or broadcast. However, previous protocols do not allow clients to efficiently: 1) recover from packet loss; 2) share bandwidth fairly with competing flows; or 3) maximize the playback quality at the client for any given client reception rate characteristics. In this work, new protocols, namely Reliable Periodic Broadcast (RPB) and Reliable Bandwidth Skimming (RBS), are developed that efficiently recover from packet loss and achieve close to the best possible server bandwidth scalability for a given set of client characteristics. To share bandwidth fairly with competing traffic such as TCP, these protocols can employ the Vegas Multicast Rate Control (VMRC) protocol proposed in this work. The VMRC protocol exhibits TCP Vegas-like behavior. In comparison to prior rate control protocols, VMRC provides less oscillatory reception rates to clients, and operates without inducing packet loss when the bottleneck link is lightly loaded. The VMRC protocol incorporates a new technique for dynamically adjusting the TCP Vegas threshold parameters based on measured characteristics of the network. This technique implements fair sharing of network resources with other types of competing flows, including widely deployed versions of TCP such as TCP Reno. This fair sharing is not possible with the previously defined static Vegas threshold parameters. The RPB protocol is extended to efficiently support quality adaptation. The Optimized Heterogeneous Periodic Broadcast (HPB) is designed to support a range of client reception rates and efficiently support static quality adaptation by allowing clients to work-ahead before beginning playback to receive a media file of the desired quality. A dynamic quality adaptation technique is developed and evaluated which allows clients to achieve more uniform playback quality given time-varying client reception rates
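
    A rough sketch of the Vegas-style adjustment that VMRC builds on is shown below, using the classic static alpha/beta thresholds rather than the dynamic threshold adaptation the thesis proposes; the rate units and step size are illustrative.

```python
# Vegas-style rate adjustment sketch: estimate the packets queued in the path
# from RTT inflation and steer the rate between two thresholds.  Static
# alpha/beta are used here; the dynamic adaptation described above is omitted.

def vegas_adjust(rate_pkts_s: float, base_rtt_s: float, current_rtt_s: float,
                 alpha: float = 1.0, beta: float = 3.0) -> float:
    """Return the new sending rate (packets/s) after one RTT of feedback."""
    backlog = rate_pkts_s * (current_rtt_s - base_rtt_s)  # est. queued packets
    if backlog < alpha:
        return rate_pkts_s + 1.0 / base_rtt_s   # queue nearly empty: probe up
    if backlog > beta:
        return rate_pkts_s - 1.0 / base_rtt_s   # queue building: back off early
    return rate_pkts_s

rate = 1000.0                                   # packets per second
rate = vegas_adjust(rate, base_rtt_s=0.050, current_rtt_s=0.055)
print(f"{rate:.1f} packets/s")                  # backs off before any loss occurs
```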

    Active congestion control using ABCD (available bandwidth-based congestion detection).

    Get PDF
    With the growth of the Internet, the problem of congestion has attained the distinction of being a perennial problem. The Internet community has been trying several approaches for improved congestion control techniques. The end-to-end approach is considered to be the most robust one, and it served quite well until recently, when researchers started to explore the information available at the intermediate node level. This approach triggered a new field called Active Networks, in which intermediate nodes have a much larger role to play than that of naive nodes. This thesis proposes an active congestion control (ACC) scheme based on Available Bandwidth-based Congestion Detection (ABCD), which regulates the traffic according to network conditions. Dynamic changes in the available bandwidth can trigger re-negotiation of the flow rate. We have introduced packet size adjustment at the intermediate router in addition to rate control at the sender node, scaled according to the available bandwidth, which is estimated using three packet probes. To verify the improved scheme, we have extended Ted Faber's ACC work in the NS-2 simulator. With this simulator we verify ACC-ABCD's gains, such as a marginal improvement in average TCP throughput at each endpoint, fewer packet drops, and an improved fairness index. Our tests on NS-2 show that the ACC-ABCD technique yields better results than TCP congestion control with or without cross traffic. Source: Masters Abstracts International, Volume: 43-03, page: 0870. Adviser: A. K. Aggarwal. Thesis (M.Sc.)--University of Windsor (Canada), 2004
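
    A minimal packet-dispersion sketch gives the flavour of a probe-based available-bandwidth estimate; the arrival times and the simple dispersion formula below are illustrative assumptions, not the thesis's ABCD implementation.

```python
# Packet-dispersion sketch: back-to-back probes are spread out by the
# bottleneck, and their arrival spacing yields a bandwidth estimate.  The
# numbers and the averaging are illustrative, not the ABCD implementation.

from typing import List

def available_bandwidth(probe_bytes: int, arrival_times_s: List[float]) -> float:
    """Estimate bandwidth (bits/s) from the spacing of probe arrivals."""
    gaps = [t2 - t1 for t1, t2 in zip(arrival_times_s, arrival_times_s[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return (probe_bytes * 8) / avg_gap

# Three 1000-byte probes arriving 0.8 ms apart -> about 10 Mb/s
arrivals = [0.0000, 0.0008, 0.0016]
print(f"{available_bandwidth(1000, arrivals) / 1e6:.1f} Mb/s")
```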