7 research outputs found

    Is there a case for parallel connections with modern web protocols?

    Modern web protocols like HTTP/2 and QUIC aim to make the web faster by addressing well-known problems of HTTP/1.1 running on top of TCP. Both HTTP/2 and QUIC are specified to run on a single connection, in contrast to the multiple TCP connections used by HTTP/1.1. Reducing the number of open connections benefits the network infrastructure and improves fairness among applications. However, using a single connection may result in poor application performance in common adverse scenarios, such as under high packet loss. In this paper we first investigate these scenarios, confirming that the use of a single connection sometimes impairs application performance. We then propose a practical solution (here called H2-Parallel) that implements a multiple-TCP-connection mechanism for HTTP/2 in the Chromium browser. We compare H2-Parallel with HTTP/1.1 over TCP, QUIC over UDP, and HTTP/2 over Multipath TCP, which creates parallel connections at the transport layer, opaque to the application layer. Experiments with popular live websites as well as controlled emulations show that H2-Parallel is simple and effective: by opening only two connections to load a page, it can reduce page load time substantially in adverse network conditions.
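
    H2-Parallel itself is implemented inside Chromium, so it cannot be reproduced in a few lines; the sketch below only illustrates the underlying idea of partitioning a page's resources across a small number of parallel connections to the same origin. It uses Python's standard library with HTTP/1.1 keep-alive connections rather than HTTP/2 streams, and the host and resource paths are hypothetical.

```python
# Illustrative sketch only: partition a page's resources across two persistent
# connections to the same origin. H2-Parallel does this inside Chromium with
# HTTP/2 streams; here we approximate the idea with HTTP/1.1 keep-alive
# connections from the standard library. Host and paths are hypothetical.
import http.client
from concurrent.futures import ThreadPoolExecutor

HOST = "www.example.com"          # hypothetical origin
RESOURCES = ["/", "/style.css", "/app.js", "/logo.png", "/font.woff2"]
NUM_CONNECTIONS = 2               # the paper finds two connections to be enough

def fetch_on_one_connection(paths):
    """Fetch a batch of resources sequentially over a single persistent connection."""
    conn = http.client.HTTPSConnection(HOST)
    results = []
    for path in paths:
        conn.request("GET", path)
        resp = conn.getresponse()
        results.append((path, resp.status, len(resp.read())))
    conn.close()
    return results

# Round-robin partition of the resource list over the available connections.
batches = [RESOURCES[i::NUM_CONNECTIONS] for i in range(NUM_CONNECTIONS)]

with ThreadPoolExecutor(max_workers=NUM_CONNECTIONS) as pool:
    for batch_result in pool.map(fetch_on_one_connection, batches):
        for path, status, size in batch_result:
            print(path, status, size)
```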

    Experimentation and Characterization of Mobile Broadband Networks

    The Internet has brought substantial changes to our lives as the main tool to access a large variety of services and applications. The Internet's distributed nature and rapid technological improvements create new challenges for researchers, service providers, and network administrators. Internet traffic measurement and analysis is one of the most fundamental and powerful tools to study such a complex environment from different angles. Mobile BroadBand (MBB) networks have become one of the main means to access the Internet, and they are evolving at a rapid pace with technology enhancements that promise drastic improvements in capacity, connectivity, and coverage, i.e., better performance in general. Open experimentation with operational MBB networks in the wild is currently a fundamental requirement of the research community in its endeavor to deliver innovative solutions for mobile communications. There is a strong need for objective data on the stability and performance of MBB networks (e.g., 2G, 3G, 4G, and the forthcoming 5G) and for tools that rigorously and scientifically assess their performance. Measuring end-user performance in such an environment therefore calls for large-scale measurements and in-depth analysis of the collected data. The intertwining of technologies, protocols, and setups makes it complicated to design scientifically sound and robust measurement campaigns, and the randomness of the wireless access channel, coupled with often unknown operator configurations, makes the scenario even more challenging. In this thesis, we introduce the MONROE measurement platform: an open-access, flexible, hardware-based platform for measurements on operational MBB networks. The MONROE platform enables accurate, realistic, and meaningful assessment of the performance and reliability of MBB networks. We detail the challenges we overcame while building and testing the MONROE testbed and justify our design and implementation choices accordingly. Our measurements are designed to stress the performance of MBB networks at different network layers through scalable experiments and methodologies. We study: (i) network-layer performance, characterizing and, where possible, estimating the download speed offered by commercial MBB networks; (ii) end users' Quality of Experience (QoE), specifically targeting the web performance of HTTP/1.1 over TLS and HTTP/2 on various popular web sites; (iii) the implications of roaming in Europe, understanding the roaming ecosystem after the "Roam like Home" initiative; and (iv) multihomed content upload, proposing a novel family of deadline-aware adaptive schedulers that require only very coarse knowledge of the wireless bandwidth. Our results comprise different contributions within each research topic. In a nutshell, we pinpoint the impact of different network configurations that further complicate the picture and hopefully contribute to the debate about performance assessment in MBB networks. Our large-scale web performance measurements show that HTTP/1.1 over TLS and HTTP/2 perform very similarly for MBB users. Furthermore, we observe that roaming is well supported by the monitored operators, which all use the same approach for routing roaming traffic. The proposed adaptive schedulers for content upload on multihomed devices are evaluated both in numerical simulations and on real mobile nodes. Simulation results show that the adaptive solutions can effectively navigate the fundamental tradeoff between upload cost and completion time despite unpredictable variations in the available bandwidth of the wireless interfaces, and experiments on real mobile nodes provided by the MONROE platform confirm these findings.
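
    The abstract does not spell out the scheduling policy itself. As a hedged illustration of the cost/completion-time tradeoff it targets, the sketch below shows one plausible deadline-driven policy: upload on the cheap interface while time allows, and enlist the costly interface only when a coarse bandwidth estimate suggests the deadline is at risk. The policy, names, and numbers are assumptions, not the thesis' algorithm.

```python
# Hypothetical deadline-driven scheduler for a multihomed upload; not the
# thesis' actual algorithm. It prefers the cheap interface (e.g. WiFi) and
# adds the costly one (e.g. LTE) only when a coarse bandwidth estimate says
# the deadline is otherwise at risk.
from dataclasses import dataclass

@dataclass
class Interface:
    name: str
    est_bandwidth: float   # coarse throughput estimate, bytes/s
    cost_per_byte: float   # monetary or energy cost per byte

def pick_interfaces(remaining_bytes, time_left, cheap, costly):
    """Choose the interfaces to use for the next scheduling interval."""
    if time_left <= 0:
        return [cheap, costly]                  # out of time: use everything available
    needed_rate = remaining_bytes / time_left   # rate required to finish by the deadline
    if cheap.est_bandwidth >= needed_rate:
        return [cheap]                          # the cheap interface alone still suffices
    return [cheap, costly]                      # enlist the costly interface as well

# Toy upload: 20 MB due in 60 s, 1 s scheduling intervals. The real links
# deliver only a fraction of their coarse estimates, which is what eventually
# forces the scheduler to add the costly interface.
wifi = Interface("wifi", est_bandwidth=500_000, cost_per_byte=0.0)
lte = Interface("lte", est_bandwidth=2_000_000, cost_per_byte=1e-6)
actual_factor = {"wifi": 0.5, "lte": 1.0}

remaining, deadline, t, cost = 20_000_000, 60.0, 0.0, 0.0
while remaining > 0 and t < deadline:
    for iface in pick_interfaces(remaining, deadline - t, wifi, lte):
        sent = min(remaining, iface.est_bandwidth * actual_factor[iface.name] * 1.0)
        remaining -= sent
        cost += sent * iface.cost_per_byte
    t += 1.0

print(f"done at t={t:.0f} s, leftover={remaining:.0f} B, cost={cost:.4f}")
```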

    Network Traffic Measurements, Applications to Internet Services and Security

    Over the years, the Internet has become a pervasive network interconnecting billions of users, and it now serves as the collector for a multitude of tasks, ranging from professional activities to personal interactions. From a technical standpoint, novel architectures, e.g., cloud-based services and content delivery networks, innovative devices, e.g., smartphones and connected wearables, and security threats, e.g., DDoS attacks, pose new challenges in understanding network dynamics. In such a complex scenario, network measurements play a central role in guiding traffic management, improving network design, and evaluating application requirements. In addition, increasing importance is devoted to the quality of experience provided to final users, which requires thorough investigation of both the transport network and the design of Internet services. In this thesis, we stress the centrality of users by focusing on the traffic they exchange with the network. To do so, we design methodologies that complement passive and active measurements with post-processing techniques from the machine learning and statistics domains. Traffic exchanged by Internet users can be classified into three macro-groups: (i) outbound, produced by users' devices and pushed to the network; (ii) unsolicited, part of malicious attacks threatening users' security; and (iii) inbound, directed to users' devices and retrieved from remote servers. For each of these categories, we address a specific research topic: the benchmarking of personal cloud storage services, the automatic identification of Internet threats, and the assessment of quality of experience in the Web domain, respectively. Our results comprise several contributions in the scope of each research topic. In short, they shed light on (i) the interplay among design choices of cloud storage services, which severely impacts the performance provided to end users; (ii) the feasibility of designing a general-purpose classifier to detect malicious attacks without chasing threat specificities; and (iii) the relevance of appropriate means to evaluate the perceived quality of Web page delivery, strengthening the need for user feedback to reach a factual assessment.
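
    The abstract does not detail the classifier design. The snippet below is only a minimal sketch of the general approach of detecting malicious traffic without chasing threat specificities: train a supervised model on generic per-flow features. The feature set, model choice, and synthetic data are assumptions standing in for the thesis' actual methodology.

```python
# Minimal sketch of a general-purpose flow classifier; the features, model,
# and data are assumptions, not the design used in the thesis.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic per-flow features: duration (s), packets, bytes, mean inter-arrival (s).
benign = rng.normal([10.0, 80, 60_000, 0.12], [4.0, 30, 20_000, 0.05], size=(2000, 4))
attack = rng.normal([0.5, 400, 25_000, 0.001], [0.3, 150, 10_000, 0.001], size=(2000, 4))
X = np.vstack([benign, attack])
y = np.array([0] * len(benign) + [1] * len(attack))   # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["benign", "malicious"]))
```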

    Improving Mobile Network Performance Through Measurement-driven System Design Approaches

    Mobile networks are complex, dynamic, and often perform poorly. Many factors affect network performance and energy consumption: examples include highly varying network latencies and loss rates, diurnal user movement patterns in cellular networks that impact network congestion, and how radio energy states interact with application traffic. Because mobile devices experience uniquely dynamic and complex network conditions and resource tradeoffs, incorporating ongoing, continuous measurements of network performance, resource usage, and user and app behavior into mobile systems is essential to addressing the pervasive performance problems in these systems. This dissertation examines five different approaches to this problem. First, we discuss three measurement studies that help us understand mobile systems and how to improve them. The first examines how RRC state performance impacts network performance in the wild and argues that carriers should measure RRC state performance from the user's perspective when managing their networks. The second looks at trends in applications' background network energy consumption and shows that more systematic approaches are needed to manage app behavior. The third examines how Server Push, a new feature of HTTP/2, can in certain cases improve mobile performance, but shows that measurements are necessary to determine whether Server Push will be helpful or harmful. Two other projects show how measurements can be incorporated directly into systems that predict and manage network traffic. One project examines how a carrier can support prefetching over time spans of hours by predicting the network loads a user will see in the future and scheduling highly delay-tolerant traffic accordingly. The other examines how the network requests of mobile apps can be predicted, a first step towards an automated and general app prefetching system. Overall, measurements of network performance and of app and user behavior are powerful tools in building better mobile systems. PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/136944/1/sanae_1.pd
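
    As a hedged illustration of the "predict the load, shift the traffic" idea behind the prefetching project, the toy sketch below places a delay-tolerant transfer in the least-loaded predicted slot before its deadline. The hourly load forecast and slot granularity are invented; a carrier would derive them from its own measurements.

```python
# Hypothetical illustration of delay-tolerant prefetch scheduling: place a
# transfer in the least-loaded predicted hour before its deadline. The load
# forecast values are made up; a carrier would derive them from measurements.
def schedule_prefetch(predicted_load, deadline_slot):
    """Return the index of the least-loaded slot in [now, deadline_slot]."""
    window = predicted_load[: deadline_slot + 1]
    return min(range(len(window)), key=lambda slot: window[slot])

# Predicted normalized cell load for the next 8 hours (0 = idle, 1 = congested).
forecast = [0.9, 0.8, 0.6, 0.3, 0.2, 0.4, 0.7, 0.9]
slot = schedule_prefetch(forecast, deadline_slot=5)   # content needed within 6 hours
print(f"prefetch in hour {slot} (predicted load {forecast[slot]:.1f})")
```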

    Towards Faster Web Page Loads Over Multiple Network Paths

    Get PDF
    The rising popularity of mobile devices as the main way people access the web has fuelled a corresponding need for faster web downloads on these devices. Emerging web protocols like HTTP/2 and QUIC employ several features that minimise page load times, but fail to take advantage of the availability of at least two network interfaces on today's mobile devices. On the other hand, this spread of devices with access to multiple paths has prompted the design of Multipath TCP (MPTCP), a transport protocol that pools bandwidth across these paths. Although MPTCP was originally evaluated for bandwidth-limited bulk transfers, in this work we determine whether using MPTCP can reduce web page load times, which are often latency bound. To investigate the behaviour of web browsing over MPTCP, we instrumented the Chrome web browser's retrieval of 300 popular web sites in detail and computed their dependency graph structure. Furthermore, we implemented PCP, an emulation framework that uses these dependency graphs to ask "what-if" questions about the interactions between a wide range of web site designs, varied network conditions, and different web and transport protocols. Using PCP, we first confirm previous results with respect to the improvements HTTP/2 offers over HTTP/1.1. One obstacle, though, is that many web sites have been sharded to improve performance with HTTP/1.1, spreading their content across multiple subdomains. We therefore examine whether the advice to unshard these domains is beneficial. We find that unsharding is generally advantageous, but not consistently so. Finally, we examine the behaviour of HTTP/2 over MPTCP. We find that MPTCP can improve web page load times under some regimes; in other cases, using regular TCP on the "best" path is more advantageous. We present enhancements to multipath web browsing that allow it to perform as well as or better than regular TCP on the best path.
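
    PCP itself is an emulation framework built on recorded dependency graphs and is not shown here. The sketch below merely illustrates the kind of "what-if" computation such graphs enable: given per-resource fetch times and dependencies, the page load time is the length of the critical (longest) path through the DAG. The graph and timings are invented for illustration.

```python
# Toy "what-if" computation over a page dependency graph: the page load time
# is the longest path through the DAG of resource fetches. The graph and the
# per-resource fetch times (ms) are invented, not taken from the PCP framework.
from functools import lru_cache

fetch_ms = {"html": 120, "css": 80, "js": 150, "font": 60, "img": 90}
depends_on = {                      # resource -> resources it must wait for
    "html": [],
    "css": ["html"],
    "js": ["html"],
    "font": ["css"],
    "img": ["js"],
}

@lru_cache(maxsize=None)
def finish_time(resource):
    """Earliest completion time of a resource given its dependencies."""
    start = max((finish_time(dep) for dep in depends_on[resource]), default=0)
    return start + fetch_ms[resource]

page_load_time = max(finish_time(r) for r in fetch_ms)
print(f"emulated page load time: {page_load_time} ms")   # critical path length
```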