515 research outputs found

    SPDY vs HTTP/1.1: An Empirical Evaluation of Network Protocol Performance

    As the Internet evolves, reducing page load time has become increasingly important. The application layer is an ideal place to make changes, since doing so avoids altering existing lower-layer implementations. HTTP was designed without anticipating what the Internet would become, and is now outdated. Google has developed SPDY as a replacement for HTTP. Its major improvements are header compression to reduce network congestion, multiplexing to maximize throughput, and allowing the server to suggest or even push unsolicited data. My research aims to measure the throughput of the two protocols while varying both the latency and the packet loss of a network.
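    The kind of latency/loss sweep this abstract describes can be scripted with Linux traffic shaping. Below is a minimal, hypothetical harness, not the author's actual setup: it uses tc/netem to impose a grid of delay and loss values and times a plain HTTP/1.1 fetch. The interface name, URL, and parameter values are assumptions, and a real comparison would additionally need a SPDY-capable client and server.

```python
import subprocess
import time
import urllib.request

IFACE = "eth0"                  # assumption: name of the shaped interface
URL = "https://example.com/"    # assumption: page served by the test server

def set_netem(delay_ms: int, loss_pct: float) -> None:
    """Replace the root qdisc with a netem discipline (requires root)."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
         "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
        check=True,
    )

def timed_fetch(url: str) -> float:
    """Return wall-clock seconds to download the response body once."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.monotonic() - start

if __name__ == "__main__":
    for delay in (0, 50, 100, 200):        # added latency, ms (illustrative grid)
        for loss in (0.0, 0.5, 1.0, 2.0):  # random packet loss, %
            set_netem(delay, loss)
            print(f"delay={delay}ms loss={loss}%: {timed_fetch(URL):.3f}s")
    # restore the default qdisc when done
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)
```

    In practice each cell of the grid would be repeated many times and averaged, since loss makes individual fetch times highly variable.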

    The Effect of Network and Infrastructural Variables on SPDY's Performance

    HTTP is a successful Internet technology on top of which much of the web resides. However, limitations of its current specification, i.e., HTTP/1.1, have encouraged the search for the next generation of HTTP. With SPDY, Google has produced such a proposal, one with growing community acceptance, especially after being adopted by the IETF HTTPbis-WG as the basis for HTTP/2.0. SPDY has the potential to greatly improve the web experience with little deployment overhead. However, we still lack an understanding of its true potential in different environments. This paper seeks to resolve these issues, offering a comprehensive evaluation of SPDY's performance through extensive experiments. We identify the impact of network characteristics and website infrastructure on SPDY's potential page loading benefits, finding that these factors are decisive for SPDY and its optimal deployment strategy. Through this, we feed into the wider debate regarding HTTP/2.0, exploring the key aspects that impact the performance of this future protocol.
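    To see why network characteristics such as round-trip time can be decisive for SPDY's benefit, a back-of-the-envelope model helps. The sketch below is my own illustration, not the paper's methodology: it counts only request rounds, comparing HTTP/1.1 with six persistent, non-pipelined connections against a single fully multiplexed connection that issues all requests in one round. It ignores bandwidth, TCP slow start, and server time, so the numbers are purely illustrative.

```python
import math

def http11_rounds(n_objects: int, connections: int = 6) -> int:
    """Sequential request rounds with persistent, non-pipelined HTTP/1.1."""
    return math.ceil(n_objects / connections)

def load_time_ms(rounds: int, rtt_ms: float, handshake_rtts: int = 2) -> float:
    """Connection setup cost plus one RTT per request round (transfer time ignored)."""
    return (handshake_rtts + rounds) * rtt_ms

N_OBJECTS = 40  # objects on a hypothetical page
for rtt in (20, 100, 300):
    t_http = load_time_ms(http11_rounds(N_OBJECTS), rtt)
    t_mux = load_time_ms(1, rtt)  # all requests multiplexed in one round
    print(f"RTT={rtt}ms: HTTP/1.1 ~ {t_http:.0f}ms, multiplexed ~ {t_mux:.0f}ms")
```

    Even this toy model shows the multiplexing gain growing linearly with RTT, which is consistent with the paper's point that network characteristics shape SPDY's optimal deployment strategy.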

    Performance analysis of next generation web access via satellite

    Acknowledgements: This work was partially funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 644334 (NEAT). The views expressed are solely those of the author(s). Peer-reviewed postprint.

    Efficient HTTP based I/O on very large datasets for high performance computing with the libdavix library

    Remote data access for data analysis in high performance computing is commonly done with specialized data access protocols and storage systems. These protocols are highly optimized for high throughput on very large datasets, multi-streaming, high availability, low latency, and efficient parallel I/O. The purpose of this paper is to describe how we have adapted a generic protocol, the Hypertext Transfer Protocol (HTTP), to make it a competitive alternative for high performance I/O and data analysis applications in a global computing grid: the Worldwide LHC Computing Grid. In this work, we first analyze the design differences between the HTTP protocol and the most common high performance I/O protocols, pointing out the main performance weaknesses of HTTP. Then, we describe in detail how we solved these issues. Our solutions have been implemented in a toolkit called davix, available through several recent Linux distributions. Finally, we describe the results of our benchmarks, in which we compare the performance of davix against an HPC-specific protocol for a data analysis use case. Comment: Presented at Very Large Data Bases (VLDB) 2014, Hangzhou.
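    One standard HTTP feature that makes partial I/O on very large remote files practical is the byte-range request (RFC 7233), which lets a client fetch an arbitrary slice of a file without downloading it whole. The snippet below is a generic Python illustration of that mechanism, not the davix API; the URL and offsets are placeholders.

```python
import urllib.request

def read_range(url: str, offset: int, length: int) -> bytes:
    """Fetch bytes [offset, offset+length) of a remote file via an HTTP Range request."""
    req = urllib.request.Request(
        url, headers={"Range": f"bytes={offset}-{offset + length - 1}"}
    )
    with urllib.request.urlopen(req) as resp:
        # 206 Partial Content confirms the server honoured the range;
        # a 200 would mean it is sending the whole file instead.
        assert resp.status == 206, "server ignored the Range header"
        return resp.read()

# e.g. read 4 KiB starting at a 1 MiB offset of a hypothetical dataset
chunk = read_range("https://example.org/big-dataset.root", 1 << 20, 4096)
print(len(chunk), "bytes")
```

    Vector reads, as used in analysis workloads, extend the same idea by listing several disjoint ranges in one request, amortizing the per-request latency that is otherwise HTTP's main weakness against HPC protocols.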