
    MONROE-Nettest: A Configurable Tool for Dissecting Speed Measurements in Mobile Broadband Networks

    Full text link
    As the demand for mobile connectivity continues to grow, there is a strong need to evaluate the performance of Mobile Broadband (MBB) networks. In recent years, mobile "speed", most commonly quantified by data rate, has gained popularity as the widely accepted metric to describe their performance. However, there is a lack of consensus on how mobile speed should be measured. In this paper, we design and implement MONROE-Nettest to dissect mobile speed measurements and to investigate the effect of different factors on speed measurements in the complex mobile ecosystem. MONROE-Nettest is built as an Experiment as a Service (EaaS) on top of the MONROE platform, an open, dedicated platform for experimentation in operational MBB networks. Using MONROE-Nettest, we conduct a large-scale measurement campaign and quantify the effects of measurement duration, number of TCP flows, and server location on measured downlink data rate in 6 operational MBB networks in Europe. Our results indicate that differences in parameter configuration can significantly affect the measurement results. We provide the complete MONROE-Nettest toolset as open source and our measurements as open data. Comment: 6 pages, 3 figures, submitted to INFOCOM CNERT Workshop 201
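    The configuration parameters dissected above (duration, flow count, server location) map directly onto how a speed test is implemented. The snippet below is a minimal sketch of that measurement logic, not the MONROE-Nettest implementation; the server URL, flow count, and duration are illustrative placeholders.

```python
# Minimal multi-flow downlink speed test sketch (illustrative, not the
# MONROE-Nettest code). Downloads from a test server over several
# parallel TCP flows for a fixed duration and reports the aggregate rate.
import threading
import time
import urllib.request

TEST_URL = "http://example.com/largefile.bin"  # hypothetical measurement server
NUM_FLOWS = 4        # number of parallel TCP flows
DURATION_S = 10.0    # measurement duration in seconds

total_bytes = 0
lock = threading.Lock()
deadline = time.monotonic() + DURATION_S

def download():
    global total_bytes
    with urllib.request.urlopen(TEST_URL) as resp:
        while time.monotonic() < deadline:
            chunk = resp.read(64 * 1024)
            if not chunk:
                break
            with lock:
                total_bytes += len(chunk)

threads = [threading.Thread(target=download) for _ in range(NUM_FLOWS)]
start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

print(f"{NUM_FLOWS} flows, {elapsed:.1f} s: "
      f"{total_bytes * 8 / elapsed / 1e6:.1f} Mbit/s")
```

    Sweeping NUM_FLOWS, DURATION_S, and the server location in such a loop is exactly the kind of parameter variation whose impact on measured data rate the paper quantifies.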

    Informing protocol design through crowdsourcing measurements

    Get PDF
    International Mention in the doctoral degree. Middleboxes, such as proxies, firewalls, and NATs, play an important role in the modern Internet ecosystem. On one hand, they perform advanced functions, e.g., traffic shaping, security, or enhancing application performance. On the other hand, they turn the Internet into a hostile ecosystem for innovation, as they limit deviation from deployed protocols. It is therefore essential, when designing a new protocol, to first understand its interaction with the elements of the path. The emerging area of crowdsourcing solutions can help shed light on this issue: such an approach allows us to reach large and diverse sets of users, device types, and networks to perform Internet measurements. In this thesis, we show how to make informed protocol design choices by expanding the traditional crowdsourcing focus beyond the human element and using large-scale crowdsourced measurement platforms. We consider specific use cases, namely pervasive encryption in the modern Internet, TCP Fast Open, and ECN++. We use these cases to advance the global understanding of whether wide adoption of encryption is possible in today's Internet and whether encryption is necessary to guarantee the proper functioning of HTTP/2. We target ECN, and particularly ECN++, given its succession of deployment problems, and we measure ECN deployment over both mobile and fixed networks. In the process, we discovered some bad news for the base ECN protocol: more than half the mobile carriers we tested wipe the ECN field at the first upstream hop. This thesis also reports the good news that, wherever ECN gets through, we found no deployment problems for the ECN++ enhancement. The thesis includes the results of other, more in-depth tests to check whether servers that claim to support ECN actually respond correctly to explicit congestion feedback, including some surprising congestion behaviour unrelated to ECN.
    This thesis also explores the possible causes that ossify the modern Internet and hinder innovation. Network Address Translators (NATs) are commonplace in the Internet nowadays; it is fair to say that most residential and mobile users are connected to the Internet through one or more NATs. Like any other technology, NAT has upsides and downsides. Probably its most acknowledged downside is that it introduces additional difficulties for some applications, such as peer-to-peer applications and gaming, to function properly. This is partially due to the nature of the NAT technology itself, but also to the diversity of behaviours among the NAT implementations deployed in the Internet. Understanding the properties of the currently deployed NAT base provides useful input for application and protocol developers regarding what to expect when deploying new applications in the Internet. We develop NATwatcher, a tool to test NAT boxes using a crowdsourcing-based measurement methodology. We also perform large-scale active measurement campaigns to detect CGNs in fixed broadband networks using NAT Revelio, a tool we have developed and validated. Revelio enables us to actively determine, from within residential networks, the type of upstream network address translation, namely NAT at the home gateway (customer-grade NAT) or NAT in the ISP (Carrier Grade NAT). We deploy Revelio in the FCC Measuring Broadband America testbed operated by SamKnows and in the RIPE Atlas testbed.
    Part of this thesis focuses on characterizing CGNs in Mobile Network Operators (MNOs). We develop a measurement tool, called CGNWatcher, that executes a number of active tests to fully characterize CGN deployments in MNOs. CGNWatcher systematically tests more than 30 behavioural requirements for NATs defined by the Internet Engineering Task Force (IETF), as well as multiple CGN behavioural metrics. We deploy CGNWatcher in MONROE and perform large measurement campaigns to characterize the real CGN deployments of the MNOs serving the MONROE nodes. Using the tools described above, we perform a large measurement campaign, recruiting over 6,000 users from 65 different countries and over 280 ISPs. We validate our results with the ISPs at the IP level against the ground truth we collected. To the best of our knowledge, this represents the largest active measurement study of (confirmed) NAT or CGN deployments at the IP level in fixed and mobile networks to date.
    As part of the thesis, we also characterize roaming across Europe. The goal of this experiment is to understand whether an MNO changes CGN while roaming; to this end, we run a series of measurements that identify the roaming setup, infer the network configuration of the 16 MNOs we measure, and quantify end-user performance for the roaming configurations we detect. We build a unique roaming measurement platform deployed in six countries across Europe. Using this platform, we measure different aspects of international roaming in 3G and 4G networks, including mobile network configuration, performance characteristics, and content discrimination. We find that operators adopt common approaches to implementing roaming, resulting in additional latency penalties of 60 ms or more, depending on geographical distance. Considering content accessibility, roaming poses additional constraints that lead to only minimal deviations when accessing content from the home country. However, geographical restrictions in the visited country make the picture more complicated and less intuitive. The results included in this thesis provide useful input for application and protocol designers, ISPs, and researchers who aim to make their applications and protocols work across the modern Internet.
    Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Chair: Gonzalo Camarillo González. Secretary: María Carmen Guerrero López. Committee member: Andrés García Saavedr
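    Revelio's full methodology combines several active tests; one building block is an address-space check: RFC 6598 reserves 100.64.0.0/10 as shared address space for Carrier Grade NATs, so observing it between the home gateway and the public Internet is a strong CGN signal. Below is a minimal sketch of that single check, assuming the gateway's WAN address has already been obtained (e.g., from its status page); the echo service is an illustrative choice.

```python
# Revelio-style address-space check (one piece of the full methodology):
# an address from RFC 6598 shared space (100.64.0.0/10) upstream of the
# home gateway strongly suggests a Carrier Grade NAT.
import ipaddress
import urllib.request

SHARED = ipaddress.ip_network("100.64.0.0/10")  # reserved for CGN (RFC 6598)

def classify(addr: str) -> str:
    a = ipaddress.ip_address(addr)
    if a in SHARED:
        return "shared (RFC 6598): CGN very likely"
    if a.is_private:
        return "private (RFC 1918): NAT here, CGN possibly upstream"
    return "public: no NAT at this hop"

# WAN address reported by the home gateway (illustrative value, obtained
# e.g. via UPnP or the gateway's status page).
gateway_wan_ip = "100.70.1.23"
print(gateway_wan_ip, "->", classify(gateway_wan_ip))

# Address the outside world sees (public echo service, illustrative choice).
public_ip = urllib.request.urlopen("https://api.ipify.org").read().decode()
print(public_ip, "->", classify(public_ip))
```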

    Crowdsourced network measurements: Benefits and best practices

    Get PDF
    Network measurements are of high importance both for the operation of networks and for the design and evaluation of new management mechanisms. Several approaches therefore exist for running network measurements, ranging from analyzing live traffic traces from campus or Internet Service Provider (ISP) networks to performing active measurements on distributed testbeds, e.g., PlanetLab, or involving volunteers. However, each method falls short, offering only a partial view of the network. For instance, the scope of passive traffic traces is limited to an ISP's network and its customers' habits, whereas active measurements might be biased by the population or node locations involved. To complement these techniques, we propose to use (commercial) crowdsourcing platforms for network measurements. They permit a controllable, diverse, and realistic view of the Internet and provide better control than measurements with voluntary participants do. In this study, we compare crowdsourcing with traditional measurement techniques, describe possible pitfalls and limitations, and present best practices to overcome these issues. The contribution of this paper is a guideline for researchers on when and how to exploit crowdsourcing for network measurements.

    Smartphone-based crowdsourcing for estimating the bottleneck capacity in wireless networks

    Get PDF
    Crowdsourcing enables the fine-grained characterization and performance evaluation of today's large-scale networks using the power of the masses and distributed intelligence. This paper presents SmartProbe, a system that assesses the bottleneck capacity of Internet paths using smartphones, from a mobile crowdsourcing perspective. With SmartProbe, measurement activities are more bandwidth-efficient than in similar systems, so a larger number of users can be supported. An application based on SmartProbe is also presented: georeferenced measurements are mapped and used to compare the performance of mobile broadband operators over wide areas. Results from one year of operation are included.
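    The abstract does not spell out SmartProbe's probing scheme; the classic low-overhead way to estimate bottleneck capacity is the packet-pair technique: two packets sent back-to-back are spread apart by the bottleneck link, and capacity follows from the inter-arrival gap. A receiver-side sketch of that arithmetic, with illustrative numbers:

```python
# Packet-pair capacity estimation (classic technique, shown here as a
# plausible building block; SmartProbe's exact scheme may differ).
# capacity ~= packet_size / inter-arrival gap at the receiver.
from statistics import median

PACKET_SIZE_BYTES = 1500  # probe packet size

def capacity_mbps(arrival_gaps_s):
    """Estimate bottleneck capacity from packet-pair inter-arrival gaps.
    Taking the median filters out gaps distorted by cross traffic."""
    return PACKET_SIZE_BYTES * 8 / median(arrival_gaps_s) / 1e6

# Inter-arrival gaps (seconds) measured for 5 probe pairs (illustrative):
gaps = [0.00121, 0.00119, 0.00200, 0.00120, 0.00118]
print(f"estimated bottleneck capacity: {capacity_mbps(gaps):.1f} Mbit/s")
# median gap 1.2 ms -> 1500 B * 8 / 0.0012 s = 10.0 Mbit/s
```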

    Passport: Enabling Accurate Country-Level Router Geolocation using Inaccurate Sources

    Full text link
    When does Internet traffic cross international borders? This question has major geopolitical, legal and social implications and is surprisingly difficult to answer. A critical stumbling block is a dearth of tools that accurately map routers traversed by Internet traffic to the countries in which they are located. This paper presents Passport: a new approach for efficient, accurate country-level router geolocation and a system that implements it. Passport provides location predictions with limited active measurements, using machine learning to combine information from IP geolocation databases, router hostnames, whois records, and ping measurements. We show that Passport substantially outperforms existing techniques, and identify cases where paths traverse countries with implications for security, privacy, and performance.
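    One way ping measurements can constrain router location is a speed-of-light feasibility filter, a standard constraint in measurement-based geolocation (sketched here as an assumption; Passport's published model fuses this kind of evidence with geolocation databases, hostnames, and whois via machine learning). An RTT of t ms bounds the router's distance from the vantage point to roughly (t/2) × 200 km, since signals propagate in fiber at about two thirds of the speed of light:

```python
# Speed-of-light feasibility filter: prune candidate countries whose
# distance from a vantage point exceeds what the measured RTT allows.
import math

C_FIBER_KM_PER_MS = 200.0  # ~2/3 of c, typical propagation speed in fiber

def max_distance_km(rtt_ms: float) -> float:
    return (rtt_ms / 2) * C_FIBER_KM_PER_MS

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

# Vantage point in Frankfurt; candidate country centroids (illustrative).
vantage = (50.11, 8.68)
candidates = {"DE": (51.2, 10.4), "US": (39.8, -98.6), "FR": (46.6, 2.2)}
rtt_ms = 15.0  # ping RTT from the vantage point to the target router

bound = max_distance_km(rtt_ms)
feasible = {cc for cc, (lat, lon) in candidates.items()
            if haversine_km(*vantage, lat, lon) <= bound}
print(f"RTT {rtt_ms} ms -> max {bound:.0f} km; feasible: {feasible}")
# A 15 ms RTT bounds the router to ~1500 km: the US is ruled out.
```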

    Macro- and microscopic analysis of the internet economy from network measurements

    Get PDF
    Thesis by compendium of publications. The growth of the Internet impacts multiple areas of the world economy, and it has become a permanent part of the economic landscape at both the macro- and the microeconomic level. Online traffic and information are currently assets with large business value. Even though the commercial Internet has been a part of our lives for more than two decades, its impact on the global and everyday economy still holds many unknowns. In this work we analyse important macro- and microeconomic aspects of the Internet. First, we investigate the characteristics of interdomain traffic, which is an important part of the macroscopic economy of the Internet. Then, we investigate the microeconomic phenomenon of price discrimination in the Internet.
    At the macroscopic level, we quantitatively describe the interdomain traffic matrix (ITM), as seen from the perspective of a large research network. The ITM describes the traffic flowing between autonomous systems (ASes) in the Internet. It depicts the traffic between the largest Internet business entities and therefore has an important impact on the Internet economy. In particular, we analyse the sparsity and statistical distribution of the traffic, and observe that the shape of the statistical distribution of the traffic sourced from an AS might be related to congestion within the network. We also investigate the correlations between rows of the ITM. Finally, we propose a novel method to model interdomain traffic that stems from first principles and recognizes that the traffic is a mixture of different Internet applications and can have regional artifacts. We present and evaluate a tool that generates such matrices from open, available data. Our results show that our first-principles approach is a promising alternative to the existing solutions in this area, enabling the investigation of what-if scenarios and their impact on the Internet economy.
    At the microscopic level, we investigate the rising phenomenon of price discrimination (PD). We find empirical evidence that Internet users can be subject to price and search discrimination. In particular, we present examples of PD on several e-commerce websites and uncover the information vectors facilitating PD. We then show that crowdsourcing is a feasible method to help users infer whether they are subject to PD. We also build and evaluate a system that allows any Internet user to examine whether she is subject to PD. The system has been deployed and used by multiple users worldwide, and has uncovered further examples of PD. The methods presented in the included papers are backed by thorough data analysis and experiments.
    Postprint (published version)
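    As a toy illustration of the first-principles idea above (not the tool from the thesis), the sketch below builds a small ITM as a mixture of per-application gravity models: each application class contributes flows proportional to the product of source and destination "masses", drawn heavy-tailed to mimic the skew of real interdomain traffic. All numbers are illustrative.

```python
# Toy interdomain traffic matrix (ITM) generator: total traffic is a
# mixture of per-application gravity models (illustrative sketch only;
# the thesis tool is driven by real, openly available data).
import numpy as np

rng = np.random.default_rng(7)
n_as = 6  # number of autonomous systems

# Per-AS "masses" (e.g. subscriber population vs. hosted content),
# drawn heavy-tailed to mimic the skew observed in real ITMs.
eyeballs = rng.pareto(1.5, n_as) + 1  # traffic sinks
content = rng.pareto(1.5, n_as) + 1   # traffic sources

def gravity(src_mass, dst_mass, total_volume):
    """Gravity model: flow(i, j) proportional to src_mass[i] * dst_mass[j]."""
    m = np.outer(src_mass, dst_mass)
    np.fill_diagonal(m, 0)  # no self-traffic within an AS
    return total_volume * m / m.sum()

# Mixture of two application classes with different source/sink roles.
itm = (gravity(content, eyeballs, 800.0)      # client-server, e.g. video/web
       + gravity(eyeballs, eyeballs, 200.0))  # symmetric, e.g. p2p-like

print(np.round(itm, 1))  # rows: source AS, columns: destination AS
```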