
    Improving latency for interactive, thin-stream applications over reliable transport

    A large number of network services use IP and reliable transport protocols. For applications with a constant pressure of data, loss is handled satisfactorily, even if the application is latency-sensitive. For applications whose data streams consist of intermittently sent small packets, users experience extreme latencies more frequently. Because such thin-stream applications are commonly interactive and time-dependent, increased delay may severely reduce the experienced quality of the application. When TCP is used for thin-stream applications, events of highly increased latency are common, caused by the way retransmissions are handled. Other transport protocols deployed in the Internet, like SCTP, model their congestion control and reliability on TCP, as do many frameworks that provide reliability on top of unreliable transport. We have tested several application- and transport-layer solutions, and based on our findings, we propose sender-side enhancements that reduce the application-layer latency in a manner that is compatible with unmodified receivers. We have implemented the mechanisms as modifications to the Linux kernel, both for TCP and SCTP. The mechanisms are dynamically triggered so that they are only active when the kernel identifies the stream as thin. To evaluate the performance of our modifications, we have conducted a wide range of experiments using replayed thin-stream traces captured from real applications as well as artificially generated thin-stream data patterns. From the experiments, effects on latency, redundancy and fairness were evaluated. The analysis of the performed experiments shows great improvements in latency for thin streams when applying the modifications. Surveys in which users evaluated their experience of several applications' quality using the modified transport mechanisms confirmed the improvements seen in the statistical analysis. The positive effects of our modifications were shown to be possible without notable effects on fairness for competing streams. We therefore conclude that it is advisable to handle thin streams separately, using our modifications, when transmitting over reliable protocols to reduce retransmission latency.
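
    The dynamic trigger described above is simple enough to sketch. The Python below is an illustrative reconstruction, not the kernel patch itself: the threshold of four packets in flight mirrors the heuristic used by the Linux thin-stream patches (exposed through sysctls such as tcp_thin_linear_timeouts), while the function names and the linear-timeout simplification are assumptions made for this example.

        THIN_STREAM_THRESHOLD = 4  # assumed cutoff: too few packets in flight
                                   # to trigger fast retransmit reliably

        def stream_is_thin(packets_in_flight: int, in_initial_slow_start: bool) -> bool:
            """Classify a stream as thin: few packets in flight and past the
            initial slow-start phase, so loss recovery depends on timeouts."""
            return packets_in_flight < THIN_STREAM_THRESHOLD and not in_initial_slow_start

        def retransmission_timeout(base_rto: float, backoff_count: int, thin: bool) -> float:
            """Thin streams skip exponential backoff (linear timeouts), since
            repeated doubling is what produces the multi-second latency spikes."""
            if thin:
                return base_rto                      # linear: retransmit every base RTO
            return base_rto * (2 ** backoff_count)   # standard exponential backoff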

    Reducing Internet Latency : A Survey of Techniques and their Merit

    Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl. Peer reviewed. Preprint

    Network Factors Influencing Packet Loss in Online Games

    In real-time communications, it is often vital that data arrive at its destination in a timely fashion. Whether for the user experience of online games or the reliability of tele-surgery, a consistent and predictable communications channel between source and destination is important. However, the Internet as we know it was designed to ensure that data arrives at its destination, not for predictable, low-latency communication. Data traveling from point to point on the Internet is carried in smaller units known as packets. As these packets traverse the Internet, they encounter routers or similar devices that will often queue the packets before sending them toward their destination. Queuing introduces a delay that depends greatly on the router configuration and the number of other packets that exist on the network. In times of high demand, packets may be discarded by the router or even lost in transmission. Protocols exist that retransmit lost packets, but these protocols introduce additional overhead and delays - costs that may be prohibitive in some applications. Being able to predict when packets may be delayed or lost could allow applications to compensate for unreliable data channels. In this thesis I investigate the effects of cross traffic and router configuration on a low-bandwidth traffic stream such as is common in games. The experiments investigate the effects of cross-traffic packet size, bit-rate, inter-packet timing and protocol used. The experiments also investigate router configurations, including queue management type and the number of queues. These experiments are compared with real-world data, and a mitigation strategy, in which the n previous packets are bundled with each new packet, is applied to both the simulated data and the real-world captures. The experiments indicate that most of the parameters explored had an impact on packet loss. However, the real-world data and simulated data differ, and additional work would be required to apply the lessons learned to real-world applications. The mitigation strategy appeared to work well, allowing 90% of all runs to complete without data loss. However, the mitigation strategy was evaluated analytically; its actual implementation and testing are left for future work.
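
    The bundling mitigation lends itself to a short sketch. The Python below is a hypothetical illustration of the scheme described in the abstract (piggybacking the n previous payloads onto each new packet, so a lost datagram can be recovered from any of the next n arrivals); the class and method names are invented for the example.

        from collections import deque

        class Bundler:
            """Sender side: every datagram carries the n previous payloads too."""
            def __init__(self, n: int):
                self.history = deque(maxlen=n)  # the n most recent payloads
                self.next_seq = 0

            def send(self, payload: bytes) -> tuple[int, list[bytes]]:
                first_seq = self.next_seq - len(self.history)
                bundle = list(self.history) + [payload]
                self.history.append(payload)
                self.next_seq += 1
                return first_seq, bundle  # transmitted as one datagram

        class Unbundler:
            """Receiver side: fill gaps using the redundant copies."""
            def __init__(self):
                self.next_seq = 0

            def receive(self, first_seq: int, bundle: list[bytes]) -> list[bytes]:
                delivered = []
                for offset, payload in enumerate(bundle):
                    seq = first_seq + offset
                    if seq >= self.next_seq:  # new or recovered payload
                        delivered.append(payload)
                        self.next_seq = seq + 1
                return delivered  # runs of losses longer than n remain lost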

    Modeling and acceleration of content delivery in world wide web

    Ph.D. thesis (Doctor of Philosophy)

    Optimization in Web Caching: Cache Management, Capacity Planning, and Content Naming

    Caching is fundamental to performance in distributed information retrieval systems such as the World Wide Web. This thesis introduces novel techniques for optimizing performance and cost-effectiveness in Web cache hierarchies. When requests are served by nearby caches rather than distant servers, server loads and network traffic decrease and transactions are faster. Cache system design and management, however, face extraordinary challenges in loosely-organized environments like the Web, where the many components involved in content creation, transport, and consumption are owned and administered by different entities. Such environments call for decentralized algorithms in which stakeholders act on local information and private preferences. In this thesis I consider problems of optimally designing new Web cache hierarchies and optimizing existing ones. The methods I introduce span the Web from point of content creation to point of consumption: I quantify the impact of content-naming practices on cache performance; present techniques for variable-quality-of-service cache management; describe how a decentralized algorithm can compute economically-optimal cache sizes in a branching two-level cache hierarchy; and introduce a new protocol extension that eliminates redundant data transfers and allows “dynamic” content to be cached consistently. To evaluate several of my new methods, I conducted trace-driven simulations on an unprecedented scale. This in turn required novel workload measurement methods and efficient new characterization and simulation techniques. The performance benefits of my proposed protocol extension are evaluated using two extraordinarily large and detailed workload traces collected in a traditional corporate network environment and an unconventional thin-client system. My empirical research follows a simple but powerful paradigm: measure on a large scale an important production environment’s exogenous workload; identify performance bounds inherent in the workload, independent of the system currently serving it; identify gaps between actual and potential performance in the environment under study; and finally devise ways to close these gaps through component modifications or through improved inter-component integration. This approach may be applicable to a wide range of Web services as they mature. Ph.D. thesis, Computer Science and Engineering, University of Michigan. http://deepblue.lib.umich.edu/bitstream/2027.42/90029/1/kelly-optimization_web_caching.pdf http://deepblue.lib.umich.edu/bitstream/2027.42/90029/2/kelly-optimization_web_caching.ps.bz
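
    One way to picture the proposed elimination of redundant transfers is a digest-first exchange: the server names a payload by its content hash before sending it, so a client that already holds those bytes under any URL can skip the transfer. The Python below is a minimal assumed sketch of that idea, not the thesis's actual protocol extension; the Client/Server shapes and the SHA-256 choice are illustrative.

        import hashlib

        def digest(body: bytes) -> str:
            return hashlib.sha256(body).hexdigest()

        class Server:
            def __init__(self, content: dict[str, bytes]):
                self.content = content

            def head_digest(self, url: str) -> str:
                return digest(self.content[url])   # cheap digest-only reply

            def get_body(self, url: str) -> bytes:
                return self.content[url]           # full payload transfer

        class Client:
            def __init__(self):
                self.by_digest: dict[str, bytes] = {}  # keyed by content, not URL

            def get(self, server: Server, url: str) -> bytes:
                d = server.head_digest(url)
                if d in self.by_digest:
                    return self.by_digest[d]       # redundant transfer eliminated
                body = server.get_body(url)
                self.by_digest[digest(body)] = body
                return body

        # Two aliased URLs, one transfer: the second GET is served from the
        # digest-keyed cache even though the URL differs.
        srv = Server({"/a/logo.png": b"same bytes", "/b/logo.png": b"same bytes"})
        cli = Client()
        cli.get(srv, "/a/logo.png")
        assert cli.get(srv, "/b/logo.png") == b"same bytes"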

    Satellite Networks: Architectures, Applications, and Technologies

    Because global satellite networks are moving to the forefront in enhancing the national and global information infrastructures, owing to communication satellites' unique networking characteristics, a workshop was organized to assess the progress made to date and to chart the future. The workshop provided a forum to assess the current state of the art, identify key issues, and highlight emerging trends in next-generation architectures, data protocol development, communication interoperability, and applications. Presentations on the overview, the state of the art in research, development, deployment, and applications, and future trends in satellite networks are assembled.

    A distributed intelligent network based on CORBA and SCTP

    The telecommunications services marketplace is undergoing radical change due to the rapid convergence and evolution of telecommunications and computing technologies. Traditionally, telecommunications service providers have delivered network services through Intelligent Network (IN) platforms. The IN may be characterised as envisioning centralised processing of distributed service requests from a limited number of quasi-proprietary nodes with inflexible connections to the network management system and third-party networks. The nodes are inter-linked by the operator's highly reliable but expensive SS7 network. To leverage this technology as the core of new multi-media services, several key technical challenges must be overcome. These include integrating the IN with new technologies for service delivery, improving integration with network management services, enabling third-party service providers, and reducing operating costs by using more general-purpose computing and networking equipment. In this thesis we present a general architecture that defines the framework and techniques required to realise an open, flexible, middleware (CORBA)-based distributed intelligent network (DIN). This extensible architecture naturally encapsulates the full range of traditional service network technologies, for example IN (fixed network), GSM-MAP and CAMEL. Fundamental to this architecture are mechanisms for inter-working with the existing IN infrastructure, to enable gradual migration within a domain and inter-working between IN and DIN domains. The DIN architecture complements current research on third-party service provision, service management, and integration with Internet-based servers. Given the dependence of such a distributed service platform on the transport network that links computational nodes, this thesis also includes a detailed study of the emergent IP-based telecommunications transport protocol of choice, the Stream Control Transmission Protocol (SCTP). In order to comply with the rigorous performance constraints of this domain, prototyping, simulation and analytic modelling of the DIN based on SCTP have been carried out. This includes the first detailed analysis of the operation of SCTP congestion controls under a variety of network conditions, leading to a number of suggested improvements in the operation of the protocol. Finally, we describe a new analytic framework for dimensioning networks with competing multi-homed SCTP flows in a DIN. This framework can be used for any multi-homed SCTP network, e.g. one transporting SIP or HTTP.
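
    Because SCTP models its congestion control on TCP's, a first-order dimensioning estimate can reuse the standard TCP-friendly throughput bound (the Mathis approximation: throughput ≈ (MSS/RTT) · sqrt(3/2) / sqrt(p)). The Python below applies it per path of a multi-homed association; it is a back-of-envelope illustration, not the analytic framework developed in the thesis, and the 1452-byte MSS and the best-single-path simplification are assumptions made here.

        import math

        def tcp_friendly_throughput(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
            """Approximate steady-state throughput in bytes/s:
            (MSS / RTT) * sqrt(3/2) / sqrt(p)."""
            return (mss_bytes / rtt_s) * math.sqrt(1.5) / math.sqrt(loss_rate)

        def association_estimate(paths: list[tuple[float, float]], mss_bytes: int = 1452) -> float:
            """paths: (rtt_s, loss_rate) per destination address of a multi-homed
            SCTP association. SCTP sends new data on one primary path at a time,
            so take the best single path rather than the sum."""
            return max(tcp_friendly_throughput(mss_bytes, rtt, p) for rtt, p in paths)

        # Example: a 40 ms path at 1% loss vs. a 120 ms path at 0.1% loss.
        print(association_estimate([(0.040, 0.01), (0.120, 0.001)]))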

    Fourth NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of all those technical papers received in time for publication just prior to the Fourth Goddard Conference on Mass Storage Systems and Technologies, held March 28-30, 1995, at the University of Maryland, University College Conference Center, in College Park, Maryland. This series of conferences continues to serve as a unique medium for the exchange of information on topics relating to the ingestion and management of substantial amounts of data and the attendant problems involved. This year's discussion topics include new storage technology, stability of recorded media, performance studies, storage system solutions, the National Information Infrastructure (Infobahn), the future for storage technology, and lessons learned from various projects. There will also be an update on the IEEE Mass Storage System Reference Model Version 5, on which the final vote was taken in July 1994.

    Proceedings of the First Karlsruhe Service Summit Workshop - Advances in Service Research, Karlsruhe, Germany, February 2015 (KIT Scientific Reports; 7692)

    Since April 2008, KSRI has fostered interdisciplinary research in order to support and advance progress in the service domain. KSRI brings together academia and industry while serving as a European research hub for service science. For the KSS2015 Research Workshop, we invited submissions of theoretical and empirical research dealing with relevant topics in the context of services, including energy, mobility, health care, social collaboration, and web technologies.