    Measuring Web Speed From Passive Traces

    Understanding the Quality of Experience (QoE) of web browsing is key to optimizing services and keeping users' loyalty. This is crucial for both Content Providers and Internet Service Providers (ISPs). Quality is subjective, and the complexity of today's pages challenges its measurement. OnLoad time and SpeedIndex are notable attempts to quantify web performance with objective metrics. However, these metrics can only be computed by instrumenting the browser and, thus, are not available to ISPs. We designed PAIN: PAssive INdicator for ISPs. It is an automatic system to monitor the performance of web pages from passive measurements. It is open source and available for download. It leverages only flow-level and DNS measurements, which are still possible in the network despite the deployment of HTTPS. With unsupervised learning, PAIN automatically creates a machine learning model from the timeline of requests issued by browsers to render web pages, and uses it to measure web performance in real time. We compared PAIN to indicators based on in-browser instrumentation and found strong correlations between the approaches. PAIN correctly highlights worsening network conditions and provides visibility into web performance. We let PAIN run on a real ISP network and found that it is able to pinpoint performance variations across time and groups of users.
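
    As a rough illustration of the kind of passive timeline analysis described above, the sketch below extracts simple features from the flows or DNS queries observed for one page visit and derives a crude proxy for load time. It is a minimal, hypothetical Python example; the event fields, the idle-gap heuristic, and the feature names are assumptions for illustration, not PAIN's actual model.

    # Hypothetical sketch: summarizing the passively observed request timeline of one page visit.
    from dataclasses import dataclass

    @dataclass
    class PassiveEvent:
        ts: float        # timestamp (seconds) when the DNS query or new flow was first seen
        domain: str      # name resolved or associated with the flow
        bytes_down: int  # bytes delivered to the client on this flow

    def timeline_features(events, idle_gap=1.0):
        """Summarize one page visit; idle_gap is a crude stand-in for 'no more objects fetched'."""
        events = sorted(events, key=lambda e: e.ts)
        t0, settle = events[0].ts, events[0].ts
        for prev, cur in zip(events, events[1:]):
            if cur.ts - prev.ts > idle_gap:
                break
            settle = cur.ts
        return {
            "num_domains": len({e.domain for e in events}),
            "num_flows": len(events),
            "bytes_total": sum(e.bytes_down for e in events),
            "proxy_load_time": settle - t0,  # to be calibrated against in-browser metrics
        }

    # Example: three flows observed for one page view
    visit = [PassiveEvent(0.00, "example.org", 40_000),
             PassiveEvent(0.15, "cdn.example.org", 300_000),
             PassiveEvent(0.40, "ads.example.net", 20_000)]
    print(timeline_features(visit))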

    Quality in Measurement: Beyond the deployment barrier

    Network measurement stands at an intersection in the development of the science. We explore possible futures for the area and propose some guidelines for the development of stronger measurement techniques. The paper concludes with a discussion of the work of the NLANR and WAND network measurement groups, including the NLANR Network Analysis Infrastructure, AMP, PMA, the analysis of Voice over IP traffic, and the separation of HTTP delays into queuing delay, network latency, and server delay.
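
    The separation of HTTP delays into queuing, network, and server components mentioned above can be illustrated with a small, hypothetical calculation; the timestamp inputs and the max-based split below are assumptions for illustration, not the NLANR/WAND methodology.

    # Hypothetical split of one passively observed HTTP transaction's delay.
    def split_http_delay(syn_rtt, base_rtt, first_byte_delay):
        """All values in seconds, taken at a monitor near the client.

        syn_rtt:          SYN -> SYN/ACK round trip seen on this connection
        base_rtt:         minimum RTT ever seen to this server (propagation estimate)
        first_byte_delay: last byte of the request to first byte of the response
        """
        network_latency = base_rtt                           # propagation component
        queuing_delay = max(syn_rtt - base_rtt, 0.0)         # handshake RTT inflation over the base
        server_delay = max(first_byte_delay - syn_rtt, 0.0)  # what remains after one round trip
        return {"queuing": queuing_delay, "network": network_latency, "server": server_delay}

    # Roughly 30 ms of queuing, 50 ms of network latency, and 130 ms of server delay:
    print(split_http_delay(syn_rtt=0.080, base_rtt=0.050, first_byte_delay=0.210))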

    Monitoring Challenges and Approaches for P2P File-Sharing Systems

    Since the release of Napster in 1999, P2P file-sharing has enjoyed a dramatic rise in popularity. A 2000 study by Plonka on the University of Wisconsin campus network found that file-sharing accounted for a volume of traffic comparable to HTTP, while a 2002 study by Saroiu et al. on the University of Washington campus network found that file-sharing accounted for more than treble the volume of Web traffic observed, thus affirming the significance of P2P in the context of Internet traffic. Empirical studies of P2P traffic are essential for supporting the design of next-generation P2P systems, informing the provisioning of network infrastructure, and underpinning the policing of P2P systems. The latter is of particular significance as P2P file-sharing systems have been implicated in supporting criminal behaviour, including copyright infringement and the distribution of illegal pornography.

    Relaxing state-access constraints in stateful programmable data planes

    Supporting the programming of stateful packet forwarding functions in hardware has recently attracted the interest of the research community. When designing such switching chips, the challenge is to guarantee the ability to program functions that can read and modify the data plane's state, while keeping line-rate performance and state consistency. Current state-of-the-art designs are based on a very conservative all-or-nothing model: programmability is limited to those functions that are guaranteed to sustain line rate under any traffic workload. In effect, this limits the maximum time available to execute state update operations. In this paper, we explore possible options to relax these constraints by using simulations on real traffic traces. We then propose a model in which functions can be executed in a larger but bounded time, while preventing data hazards with memory locking. We present results showing that such flexibility can be supported with little or no throughput degradation. Comment: 6 pages.
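
    To make the data-hazard point above concrete, here is a toy simulation, purely for illustration, of per-flow state locking in a pipeline where update functions may take several clock cycles: packets that touch a flow whose state is still being updated are held back. The names and the cycle budget are assumptions, not the paper's model.

    # Toy simulation of memory locking for stateful functions with a bounded execution time.
    from collections import deque

    def simulate(packets, update_cycles, max_cycles=4):
        """packets: list of (arrival_cycle, flow_id); update_cycles: cycles each state update takes."""
        assert update_cycles <= max_cycles, "function exceeds the bounded execution time"
        lock_free_at = {}             # flow_id -> first cycle at which its state lock is released
        pending = deque(packets)
        stalls, cycle = 0, 0
        while pending:
            arrival, flow = pending[0]
            cycle = max(cycle, arrival)
            if cycle < lock_free_at.get(flow, 0):
                stalls += lock_free_at[flow] - cycle    # packet waits for the lock
                cycle = lock_free_at[flow]
            lock_free_at[flow] = cycle + update_cycles  # lock the flow state while updating
            pending.popleft()
            cycle += 1                                  # one packet enters the pipeline per cycle
        return stalls

    # Back-to-back packets of the same flow stall; a packet of a different flow does not.
    print(simulate([(0, "A"), (1, "A"), (2, "B")], update_cycles=3))  # prints 2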

    HLOC: Hints-Based Geolocation Leveraging Multiple Measurement Frameworks

    Geographically locating an IP address is of interest for many purposes. There are two major ways to obtain the location of an IP address: querying commercial databases or conducting latency measurements. For structural Internet nodes, such as routers, commercial databases are limited by low accuracy, while current measurement-based approaches overwhelm users with setup overhead and scalability issues. In this work we present our system HLOC, which aims to combine the ease of database use with the accuracy of latency measurements. We evaluate HLOC on a comprehensive router data set of 1.4M IPv4 and 183k IPv6 routers. HLOC first extracts location hints from rDNS names and then conducts multi-tier latency measurements. Configuration complexity is minimized by using publicly available large-scale measurement frameworks such as RIPE Atlas. Using these measurements, we can confirm or disprove the location hints found in domain names. We publicly release HLOC's ready-to-use source code, enabling researchers to easily increase geolocation accuracy with minimum overhead. Comment: As published in the TMA'17 conference: http://tma.ifip.org/main-conference
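
    The hint-then-verify idea can be sketched in a few lines: pull a location hint out of an rDNS name, then check whether a measured RTT from a vantage point with a known position is physically consistent with that hint. The hint table, the token matching, and the helper names below are assumptions for illustration, not HLOC's actual code or data.

    # Hypothetical sketch of confirming an rDNS location hint with a latency constraint.
    import math, re

    CITY_HINTS = {"fra": (50.11, 8.68), "nyc": (40.71, -74.01)}  # IATA-style codes -> lat/lon

    def extract_hint(rdns_name):
        for token in re.split(r"[.\-]", rdns_name.lower()):
            if token in CITY_HINTS:
                return token
        return None

    def haversine_km(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = math.sin((lat2 - lat1) / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * math.asin(math.sqrt(h))

    def hint_is_plausible(rdns_name, vantage_latlon, rtt_ms):
        """Light in fibre covers roughly 100 km per millisecond of round-trip time."""
        hint = extract_hint(rdns_name)
        if hint is None:
            return None
        return haversine_km(CITY_HINTS[hint], vantage_latlon) <= rtt_ms * 100

    # An rDNS name hinting at Frankfurt, probed with a 12 ms RTT from a vantage point in New York:
    print(hint_is_plausible("core1.fra.example.net", CITY_HINTS["nyc"], rtt_ms=12))  # False: too far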

    Hypersparse Neural Network Analysis of Large-Scale Internet Traffic

    The Internet is transforming our society, necessitating a quantitative understanding of Internet traffic. Our team collects and curates the largest publicly available Internet traffic data set, containing 50 billion packets. A novel hypersparse neural network analysis of "video" streams of this traffic, run on 10,000 processors in the MIT SuperCloud, reveals a new phenomenon: the importance of otherwise unseen leaf nodes and isolated links in Internet traffic. Our neural network approach further shows that a two-parameter modified Zipf-Mandelbrot distribution accurately describes a wide variety of source/destination statistics on moving sample windows ranging from 100,000 to 100,000,000 packets over collections that span years and continents. The inferred model parameters distinguish different network streams, and the model's leaf parameter strongly correlates with the fraction of the traffic in different underlying network topologies. The hypersparse neural network pipeline is highly adaptable, and different network statistics and training models can be incorporated with simple changes to the image filter functions. Comment: 11 pages, 10 figures, 3 tables, 60 citations; to appear in IEEE High Performance Extreme Computing (HPEC).
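
    A two-parameter Zipf-Mandelbrot model of the kind mentioned above is commonly written as p(k) proportional to 1/(k + delta)**alpha over ranks k = 1..N. The sketch below fits alpha and delta to a toy ranked histogram of packets per source; the fitting routine and the data are illustrative assumptions, not the paper's hypersparse pipeline.

    # Hypothetical sketch: fitting a two-parameter Zipf-Mandelbrot model to ranked packet counts.
    import numpy as np
    from scipy.optimize import minimize

    def zipf_mandelbrot_logpmf(ranks, alpha, delta):
        logw = -alpha * np.log(ranks + delta)
        return logw - np.log(np.exp(logw).sum())  # normalize over the observed ranks

    def fit(counts):
        """counts: packets per source, sorted descending (rank 1 = heaviest source)."""
        ranks = np.arange(1, len(counts) + 1, dtype=float)
        def nll(params):
            alpha, delta = params
            if alpha <= 0 or delta <= -1:
                return np.inf
            return -(counts * zipf_mandelbrot_logpmf(ranks, alpha, delta)).sum()
        return minimize(nll, x0=[1.0, 0.5], method="Nelder-Mead").x  # (alpha, delta)

    # Toy ranked counts with a heavy head and a long tail of low-volume 'leaf' sources
    counts = np.array([5000, 2600, 1700, 1200, 900, 700, 550, 450, 380, 320] + [50] * 90, float)
    alpha, delta = fit(counts)
    print(f"alpha={alpha:.2f}, delta={delta:.2f}")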