
    Improving latency for interactive, thin-stream applications over reliable transport

    A large number of network services use IP and reliable transport protocols. For applications with a constant pressure of data, loss is handled satisfactorily, even if the application is latency-sensitive. For applications whose data streams consist of intermittently sent small packets, users experience extreme latencies more frequently. Because such thin-stream applications are commonly interactive and time-dependent, increased delay may severely reduce the experienced quality of the application. When TCP is used for thin-stream applications, events of highly increased latency are common, caused by the way retransmissions are handled. Other transport protocols deployed in the Internet, like SCTP, model their congestion control and reliability on TCP, as do many frameworks that provide reliability on top of unreliable transport. We have tested several application- and transport-layer solutions, and based on our findings, we propose sender-side enhancements that reduce the application-layer latency in a manner that is compatible with unmodified receivers. We have implemented the mechanisms as modifications to the Linux kernel, both for TCP and SCTP. The mechanisms are dynamically triggered so that they are only active when the kernel identifies the stream as thin. To evaluate the performance of our modifications, we have conducted a wide range of experiments using replayed thin-stream traces captured from real applications as well as artificially generated thin-stream data patterns. From the experiments, effects on latency, redundancy and fairness were evaluated. The analysis of the performed experiments shows great improvements in latency for thin streams when applying the modifications. Surveys in which users evaluated their experience of several applications' quality using the modified transport mechanisms confirmed the improvements seen in the statistical analysis. The positive effects of our modifications were shown to be possible without notable effects on fairness for competing streams. We therefore conclude that it is advisable to handle thin streams separately, using our modifications, when transmitting over reliable protocols, to reduce retransmission latency.
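    Sender-side thin-stream mechanisms of this kind are exposed in mainline Linux as per-socket options that an application can enable; the kernel then applies them only while it classifies the stream as thin (few packets in flight). A minimal sketch, assuming a Linux host whose headers define the thin-stream options (TCP_THIN_DUPACK was removed from newer kernels, hence the guards):

    ```c
    /* Sketch: opting a TCP socket into Linux's thin-stream
     * retransmission handling. Assumes a Linux kernel and headers
     * that expose the TCP_THIN_* options; error handling is reduced
     * to perror() for brevity. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    int make_thin_stream_socket(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return -1;
        }

        int on = 1;

    #ifdef TCP_THIN_LINEAR_TIMEOUTS
        /* Back off linearly (not exponentially) on retransmission
         * timeouts while the kernel classifies the stream as thin. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_THIN_LINEAR_TIMEOUTS,
                       &on, sizeof(on)) < 0)
            perror("TCP_THIN_LINEAR_TIMEOUTS");
    #endif

    #ifdef TCP_THIN_DUPACK
        /* Trigger fast retransmit after a single duplicate ACK;
         * removed from newer kernels, hence the guard. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_THIN_DUPACK,
                       &on, sizeof(on)) < 0)
            perror("TCP_THIN_DUPACK");
    #endif

        return fd;
    }
    ```

    With these options set, a thin stream retransmits sooner after loss instead of waiting out exponentially backed-off timeouts, which is where the large application-layer latencies described above originate.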

    Reducing Internet Latency: A Survey of Techniques and their Merit

    Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl. Peer reviewed. Preprint.

    Managing Network Delay for Browser Multiplayer Games

    Latency is one of the key performance elements affecting the quality of experience (QoE) in computer games. Latency in the context of games can be defined as the time between the user input and the result on the screen. For the QoE to be satisfactory, the game needs to react quickly enough to player input. In networked multiplayer games, latency is composed of network delay and local delays. Major sources of network delay are queuing delay and head-of-line (HOL) blocking delay; network delay in the Internet can even be on the order of seconds. In this thesis we discuss what feasible networking solutions exist for browser multiplayer games. We conduct a literature study to analyze the Differentiated Services architecture, some salient Active Queue Management (AQM) algorithms (RED, PIE, CoDel and FQ-CoDel), the Explicit Congestion Notification (ECN) concept and network protocols for web browsers (WebSocket, QUIC and WebRTC). RED, PIE and CoDel as single-queue implementations would be sub-optimal for providing low latency to game traffic. FQ-CoDel is a multi-queue AQM that provides flow separation, preventing queue-building bulk transfers from notably hampering latency-sensitive flows. The WebRTC data channel seems promising for games since it can carry arbitrary application data and can avoid HOL blocking. None of the network protocols, however, provides completely satisfactory support for the transport needs of multiplayer games: WebRTC is not designed for client-server connections, QUIC is not designed for the traffic patterns typical of multiplayer games, and WebSocket would require parallel connections to mitigate the effects of HOL blocking.
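    To make the AQM comparison concrete: CoDel (RFC 8289) keeps the queue's standing delay near a small target by, once the delay has exceeded the target for a full interval, dropping packets at a rate that grows with the square root of the drop count. A simplified sketch of that control law using the RFC's default constants (the full algorithm also tracks per-packet sojourn times and dropping-state transitions, omitted here):

    ```c
    /* Simplified sketch of CoDel's drop-scheduling law (RFC 8289). */
    #include <math.h>
    #include <stdint.h>

    #define CODEL_TARGET_US   5000    /* 5 ms: acceptable standing delay   */
    #define CODEL_INTERVAL_US 100000  /* 100 ms: ~ a worst-case RTT        */

    /* While in the dropping state, the time of the next drop shrinks
     * with sqrt(count), gently increasing the drop rate until the
     * standing queue drains back below the target. */
    uint64_t codel_next_drop(uint64_t now_us, uint32_t drop_count)
    {
        if (drop_count == 0)            /* count starts at 1 when the    */
            drop_count = 1;             /* dropping state is entered     */
        return now_us +
               (uint64_t)((double)CODEL_INTERVAL_US / sqrt((double)drop_count));
    }
    ```

    FQ-CoDel applies this same law per flow queue, which is why a small, latency-sensitive game flow is insulated from a queue-building bulk transfer sitting in another queue.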

    Network Factors Influencing Packet Loss in Online Games

    In real-time communications it is often vital that data arrive at its destination in a timely fashion. Whether for the user experience of online games or the reliability of tele-surgery, a reliable, consistent and predictable communications channel between source and destination is important. However, the Internet as we know it was designed to ensure that data arrives at the desired destination, not for predictable, low-latency communication. Data traveling from point to point on the Internet is broken into smaller units known as packets. As these packets traverse the Internet, they encounter routers or similar devices that will often queue the packets before sending them toward their destination. Queuing introduces a delay that depends greatly on the router configuration and the number of other packets on the network. In times of high demand, packets may be discarded by the router or even lost in transmission. Protocols exist that retransmit lost packets, but they introduce additional overhead and delays, costs that may be prohibitive in some applications. Being able to predict when packets may be delayed or lost could allow applications to compensate for unreliable data channels. In this thesis I investigate the effects of cross traffic and router configuration on a low-bandwidth traffic stream such as that which is common in games. The experiments investigate the effects of cross-traffic packet size, bit-rate, inter-packet timing and protocol used, as well as router configurations including queue management type and the number of queues. These experiments are compared to real-world data, and a mitigation strategy, where the n previous packets are bundled with each new packet, is applied to both the simulated data and the real-world captures. The experiments indicate that most of the parameters explored had an impact on packet loss. However, the real-world data and simulated data differ, and additional work would be required to apply the lessons learned to real-world applications. The mitigation strategy appeared to work well, allowing 90% of all runs to complete without data loss; however, it was evaluated analytically, and implementation and testing are left for future work.
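    The bundling mitigation is easy to illustrate. A hypothetical sketch (the framing, names and fixed-size slots are invented here, not taken from the thesis): each outgoing datagram carries the new payload plus the previous n payloads, so a receiver can recover a lost payload from any of the following n datagrams without a retransmission round trip:

    ```c
    /* Hypothetical sketch of n-packet bundling: every datagram carries
     * the current payload plus the previous BUNDLE_N payloads, so one
     * received datagram can recover up to BUNDLE_N lost predecessors. */
    #include <string.h>
    #include <stdint.h>

    #define BUNDLE_N    3   /* redundant copies per datagram  */
    #define PAYLOAD_MAX 64  /* fixed slot size for simplicity */

    struct bundle {
        uint32_t last_seq;                        /* seq of newest slot */
        uint8_t  data[BUNDLE_N + 1][PAYLOAD_MAX]; /* slot i has seq     */
    };                                            /* last_seq - i       */

    static uint32_t next_seq;
    static uint8_t  history[BUNDLE_N][PAYLOAD_MAX]; /* previous payloads */

    /* Build a datagram from the new payload plus the last BUNDLE_N
     * payloads, then shift the new payload into the history ring. */
    void bundle_build(struct bundle *b, const uint8_t payload[PAYLOAD_MAX])
    {
        b->last_seq = next_seq++;
        memcpy(b->data[0], payload, PAYLOAD_MAX);
        for (int i = 0; i < BUNDLE_N; i++)
            memcpy(b->data[i + 1], history[i], PAYLOAD_MAX);

        /* Slide history down; newest payload enters at index 0. */
        memmove(history[1], history[0], (size_t)(BUNDLE_N - 1) * PAYLOAD_MAX);
        memcpy(history[0], payload, PAYLOAD_MAX);
    }
    ```

    A receiver simply delivers any slot whose sequence number it has not yet seen; loss bursts shorter than n packets are then repaired without waiting for a retransmission.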

    Satellite Networks: Architectures, Applications, and Technologies

    Since global satellite networks are moving to the forefront in enhancing the national and global information infrastructures, owing to communication satellites' unique networking characteristics, a workshop was organized to assess the progress made to date and to chart the future. The workshop provided a forum to assess the current state of the art, identify key issues, and highlight emerging trends in next-generation architectures, data protocol development, communication interoperability, and applications. Presentations covering overviews, the state of the art in research, development, deployment and applications, and future trends in satellite networks are assembled.

    End-to-End Resilience Mechanisms for Network Transport Protocols

    The universal reliance on, and hence the need for resilience in, network communications has been well established. Current transport protocols provide fixed mechanisms for error remediation (if any), using techniques such as ARQ, and offer little or no adaptability to underlying network conditions or to different sets of application requirements. The ubiquitous TCP transport protocol makes too many assumptions about underlying layers to provide resilient end-to-end service in all network scenarios, especially those with significant heterogeneity. Additionally, the properties of reliability, performability, availability, dependability, and survivability are not explicitly addressed in its design, so there is no support for resilience. This dissertation presents considerations that must be taken into account when designing resilience mechanisms for future transport protocols to meet service requirements in the face of various attacks and challenges. The primary mechanisms addressed include diverse end-to-end paths and multi-mode operation for changing network conditions.

    A distributed intelligent network based on CORBA and SCTP

    The telecommunications services marketplace is undergoing radical change due to the rapid convergence and evolution of telecommunications and computing technologies. Traditionally, telecommunications service providers' ability to deliver network services has been through Intelligent Network (IN) platforms. The IN may be characterised as envisioning centralised processing of distributed service requests from a limited number of quasi-proprietary nodes, with inflexible connections to the network management system and third-party networks. The nodes are inter-linked by the operator's highly reliable but expensive SS7 network. To leverage this technology as the core of new multi-media services, several key technical challenges must be overcome. These include: integration of the IN with new technologies for service delivery, enhanced integration with network management services, enabling third-party service providers, and reducing operating costs by using more general-purpose computing and networking equipment. In this thesis we present a general architecture that defines the framework and techniques required to realise an open, flexible, middleware (CORBA)-based distributed intelligent network (DIN). This extensible architecture naturally encapsulates the full range of traditional service network technologies, for example IN (fixed network), GSM-MAP and CAMEL. Fundamental to this architecture are mechanisms for inter-working with the existing IN infrastructure, to enable gradual migration within a domain and inter-working between IN and DIN domains. The DIN architecture complements current research on third-party service provision, service management and the integration of Internet-based servers. Given the dependence of such a distributed service platform on the transport network that links computational nodes, this thesis also includes a detailed study of the emergent IP-based telecommunications transport protocol of choice, the Stream Control Transmission Protocol (SCTP). In order to comply with the rigorous performance constraints of this domain, prototyping, simulation and analytic modelling of the DIN based on SCTP have been carried out. This includes the first detailed analysis of the operation of SCTP congestion controls under a variety of network conditions, leading to a number of suggested improvements in the operation of the protocol. Finally, we describe a new analytic framework for dimensioning networks with competing multi-homed SCTP flows in a DIN. This framework can be used for any multi-homed SCTP network, e.g. one transporting SIP or HTTP.
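    Multi-homing, the SCTP feature that the dimensioning framework models, is set up through the standard SCTP sockets extension (RFC 6458). A minimal sketch, assuming lksctp-tools on Linux; the addresses and port are documentation-range placeholders:

    ```c
    /* Sketch: binding an SCTP endpoint to two local addresses so the
     * association is multi-homed. Assumes the RFC 6458 sockets
     * extension (lksctp-tools on Linux); addresses are placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/sctp.h>

    int make_multihomed_sctp_socket(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
        if (fd < 0) {
            perror("socket");
            return -1;
        }

        struct sockaddr_in addrs[2];
        memset(addrs, 0, sizeof(addrs));
        for (int i = 0; i < 2; i++) {
            addrs[i].sin_family = AF_INET;
            addrs[i].sin_port   = htons(5060);  /* same port on both */
        }
        /* Two local interfaces: a path failure on one lets the
         * association fail over to the other. */
        inet_pton(AF_INET, "192.0.2.10",    &addrs[0].sin_addr);
        inet_pton(AF_INET, "198.51.100.10", &addrs[1].sin_addr);

        if (sctp_bindx(fd, (struct sockaddr *)addrs, 2,
                       SCTP_BINDX_ADD_ADDR) < 0) {
            perror("sctp_bindx");
            return -1;
        }
        return fd;
    }
    ```

    Each bound address becomes a potential path for the association, which is precisely the competing-paths behaviour the analytic dimensioning framework has to capture.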

    Virtualization of network I/O on modern operating systems

    Network I/O of modern operating systems is incomplete. In this network age, users and their applications are still unable to control their own traffic, even on their local host. Network I/O is a shared resource of a host machine, and traditionally, to address problems with a shared resource, systems research has virtualized the resource. Therefore, it is reasonable to ask whether virtualization can provide solutions to problems in network I/O of modern operating systems, in the same way as for the other components of computer systems, such as memory and CPU. With the aim of establishing the virtualization of network I/O as a design principle of operating systems, this dissertation first presents a virtualization model: hierarchical virtualization of network interfaces. Systematic evaluation illustrates that the virtualization model possesses desirable properties for virtualization of network I/O, namely flexible control granularity, resource protection, partitioning of resource consumption, proper access control and generality as a control model. The implemented prototype exhibits practical performance with expected functionality, and allows flexible and dynamic network control by users and applications, unlike existing systems designed solely for system administrators. However, because the implementation was hardcoded in kernel source code, the prototype was not perfect in its functional coverage and flexibility. Accordingly, this dissertation investigated how to decouple OS kernels and packet processing code through virtualization, and studied three degrees of code virtualization, namely limited virtualization, partial virtualization, and complete virtualization. In this process, a novel programming model was presented, based on embedded Java technology, and the prototype implementation exhibited the following characteristics, which are desirable for network code virtualization. First, users program in Java, allowing safe and simple programming of packet processing. Second, anyone, even untrusted applications, can inject packet processing code into the kernel, due to isolation of code execution. Third, the prototype implementation empirically demonstrated that such virtualization does not jeopardize system performance. These cases illustrate the advantages of virtualization, and suggest that the hierarchical virtualization of network interfaces can be an effective solution to problems in network I/O of modern operating systems, both in the control model and in implementation.