
    Challenges of future multimedia QoE monitoring for internet service providers

    Ever-increasing network traffic and user expectations of high quality at reduced cost make the delivery of high Quality of Experience (QoE) for multimedia services more vital than ever in the eyes of Internet Service Providers (ISPs). Real-time quality monitoring, with a focus on the user, has become essential as the first step in the cost-effective provisioning of high-quality services. With the recent changes in the perception of user privacy, the rising level of application-layer encryption, and the introduction and deployment of virtualized networks, QoE monitoring solutions need to be adapted to the fast-changing Internet landscape. In this contribution, we provide an overview of state-of-the-art quality monitoring models and probing technologies, and highlight the major challenges ISPs have to face when they want to ensure high service quality for their customers.

    Machine Learning and Big Data Methodologies for Network Traffic Monitoring

    Over the past 20 years, the Internet has seen exponential growth in traffic, users, services, and applications. Currently, it is estimated that the Internet is used every day by more than 3.6 billion users, who generate 20 TB of traffic per second. Such a huge amount of data challenges network managers and analysts to understand how the network is performing, how users are accessing resources, how to properly control and manage the infrastructure, and how to detect possible threats. Along with mathematical, statistical, and set theory methodologies, machine learning and big data approaches have emerged to build systems that aim at automatically extracting information from the raw data that network monitoring infrastructures offer. In this thesis I will address different network monitoring solutions, evaluating several methodologies and scenarios. I will show how, following a common workflow, it is possible to exploit mathematical, statistical, set theory, and machine learning methodologies to extract meaningful information from the raw data. Particular attention will be given to machine learning and big data methodologies such as DBSCAN and the Apache Spark big data framework. The results show that, while mathematical, statistical, and set theory tools help characterize a problem, machine learning methodologies are very useful for discovering hidden information in the raw data.

    Using the DBSCAN clustering algorithm, I will show how to use YouLighter, an unsupervised methodology, to group caches serving YouTube traffic into edge-nodes and, later, how to use the notion of Pattern Dissimilarity to identify changes in their usage over time. By running YouLighter over 10-month-long traces, I will pinpoint sudden changes in YouTube edge-node usage, changes that also impair the end users' Quality of Experience. I will also apply DBSCAN in the deployment of SeLINA, a self-tuning tool implemented in the Apache Spark big data framework to autonomously extract knowledge from network traffic measurements. By using SeLINA, I will show how to automatically detect the changes in the YouTube CDN previously highlighted by YouLighter.

    Along with these machine learning studies, I will show how to use mathematical and set theory methodologies to investigate the browsing habits of Internauts. Using a two-week dataset, I will show that over this period the Internauts keep discovering new websites, and that DNS information alone is not enough to build a reliable user profile. By exploiting mathematical and statistical tools, I will instead show how to characterize Anycast-enabled CDNs (A-CDNs). I will show that A-CDNs are widely used for both stateless and stateful services; that A-CDNs are quite popular, as more than 50% of web users contact an A-CDN every day; and that stateful services can benefit from A-CDNs, since their paths are very stable over time, as demonstrated by the presence of only a few anomalies in their Round Trip Time.

    Finally, I will conclude by showing how I used BGPStream, an open-source software framework for the analysis of both historical and real-time Border Gateway Protocol (BGP) measurement data. Using BGPStream in real-time mode, I will show how I detected a Multiple Origin AS (MOAS) event and how I studied the propagation of the black-holing community, showing the effect of this community on the network. Then, using BGPStream in historical mode and the Apache Spark big data framework over 16 years of data, I will show results such as the continuous growth of IPv4 prefixes and the growth of MOAS events over time. All these studies aim to show that monitoring is a fundamental task in different scenarios, highlighting in particular the importance of machine learning and big data methodologies.
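
    To make the clustering step above concrete, the sketch below groups a handful of hypothetical cache servers into edge-nodes with DBSCAN via scikit-learn. The feature matrix (address octets plus a median RTT) and the eps/min_samples values are illustrative assumptions, not the features or parameters actually used by YouLighter.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# One row per observed cache server:
# [first octet, second octet, third octet, median RTT in ms] -- hypothetical values.
caches = np.array([
    [173.0, 194.0, 55.0, 12.0],
    [173.0, 194.0, 55.0, 12.5],
    [173.0, 194.0, 56.0, 13.1],
    [208.0, 117.0, 229.0, 45.0],
    [208.0, 117.0, 230.0, 44.2],
])

# Standardize features so address octets and RTT are on comparable scales.
scaled = (caches - caches.mean(axis=0)) / caches.std(axis=0)

# eps and min_samples are placeholders, not YouLighter's tuned parameters.
labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(scaled)

for server, label in zip(caches, labels):
    print(f"cache {server[:3].astype(int)} -> edge-node {label}")
```

    With these toy values, DBSCAN separates the servers into two edge-nodes; in the thesis the same idea is applied to traffic traces at far larger scale.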

    The Economics of Net Neutrality: Implications of Priority Pricing in Access Networks

    This work systematically analyzes Net Neutrality from an economic point of view. To this end, a framework is developed that helps to structure the Net Neutrality debate. Furthermore, the introduction of prioritization is studied by analyzing the potential effects of Quality of Service (QoS) on Content and Service Providers (CSPs) and Internet Users (IUs).

    Alternative revenue sources for Internet service providers

    The Internet has evolved from a small research network into a large, globally interconnected network. The deregulation of the Internet attracted commercial entities that provide various network and application services for profit. While Internet Service Providers (ISPs) offer network connectivity services, Content Service Providers (CSPs) offer online content and application services. Further, ISPs that provide transit services to other ISPs and CSPs are known as transit ISPs, while ISPs that provide Internet connections to end users are known as access ISPs. Though it lacks a central regulatory body, the Internet grows through complex economic cooperation between service providers that also compete with each other for revenues. Currently, CSPs derive high revenues from online advertising, revenues that increase with content popularity. On the other hand, ISPs face low transit revenues, caused by persistent declines in per-unit traffic prices, and rising network costs fueled by increasing traffic volumes. In this thesis, we analyze various approaches by which ISPs can sustain their network infrastructures by earning extra revenues. First, we study the economics of traffic attraction by ISPs to boost transit revenues. This study demonstrates that traffic attraction and the reaction to it redistribute traffic on links between Autonomous Systems (ASes) and create camps of winning, losing, and neutral ASes with respect to changes in transit payments. Despite various countermeasures by losing ASes, traffic attraction remains effective unless ASes from the winning camp cooperate with the losing ASes. While our study shows that traffic attraction has a solid potential to increase revenues for transit ISPs, this source of revenues might have negative reputational and legal consequences for the ISPs. Next, we look at hosting as an alternative source of revenues and examine the hosting of online content by transit ISPs. Using real Internet-scale measurements, this work reports a pervasive trend of content hosting throughout the transit hierarchy, validating hosting as a prominent source of revenues for transit ISPs. In our final work, we consider a model where access ISPs derive extra revenues from online advertisements (ads). Our analysis demonstrates that the ad-based revenue model opens a significant revenue potential for access ISPs, suggesting its economic viability.

    This work has been supported by IMDEA Networks Institute. Official Doctoral Programme in Telematic Engineering (Programa Oficial de Doctorado en Ingeniería Telemática). Thesis committee: President: Jordi Domingo-Pascual; Member: Víctor López Álvarez; Secretary: Alberto García Martínez.

    A network paradigm for very high capacity mobile and fixed telecommunications ecosystem sustainable evolution

    For very high capacity (VHC) networks, the main objective is to improve the quality of the end-user experience. This implies compliance with the key performance indicators (KPIs) required by applications. Key performance indicators at the application level are throughput, download time, round-trip time, and video delay. They depend on the end-to-end connection between the server and the end-user device. For VHC networks, Telco operators must provide the required application quality. Moreover, they must meet the objectives of economic sustainability. Today, Telco operators rarely achieve the above objectives, mainly due to the push to increase the bit-rate of access networks without considering the end-to-end KPIs of the applications. The main contribution of this paper is the definition of a deployment framework to address performance and cost issues for VHC networks. We show three actions on which it is necessary to focus: first, limit the bit-rate through video compression; second, contain the packet loss rate through artificial intelligence algorithms for line stabilization; third, reduce latency (i.e., round-trip time) with edge-cloud computing. The concerted and gradual application of these measures can allow a Telco to get out of the ultra-broadband "trap" of the access network, as defined in the paper. We propose to work on end-to-end optimization of the bandwidth utilization ratio. This leads to better performance experienced by the end-user. It also allows a Telco operator to create new business models and obtain new revenue streams at a sustainable cost. To give a clear example, we describe how to realize mobile virtual and augmented reality, which is one of the most challenging future services.

    Comment: 42 pages, 4 tables, 6 figures. v2: Revised English.
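
    As a rough, hedged illustration of how the three levers above (bit-rate, packet loss, round-trip time) bound the quality an application can obtain, the sketch below applies the well-known Mathis approximation for TCP throughput and compares it with the access bit-rate. The sample values, and the exact definition of the "bandwidth utilization ratio", are assumptions for illustration and are not taken from the paper.

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. approximation: throughput <= (MSS/RTT) * 1.22 / sqrt(p)."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))

# Example inputs (assumptions, not figures from the paper).
access_bitrate_bps = 100e6   # provisioned access bit-rate: 100 Mbit/s
rtt_s = 0.030                # 30 ms round-trip time
loss_rate = 1e-4             # packet loss probability

achievable = mathis_throughput_bps(1460, rtt_s, loss_rate)

# One plausible reading of a "bandwidth utilization ratio":
# achievable application throughput over the provisioned access bit-rate.
utilization = min(achievable, access_bitrate_bps) / access_bitrate_bps

print(f"achievable ~= {achievable / 1e6:.1f} Mbit/s, "
      f"utilization ratio ~= {utilization:.2f}")

# Halving the RTT (edge-cloud computing) or reducing the loss rate
# (line stabilization) raises the achievable throughput, which is the
# intuition behind the paper's three actions.
```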

    Measuring named data networks

    Named Data Networking (NDN) is a promising information-centric networking (ICN) Internet architecture that addresses content directly rather than addressing servers. NDN provides new features, such as content-centric security, stateful forwarding, and in-network caches, to better satisfy the needs of today's applications. After many years of technological research and experimentation, the community has started to explore the deployment path for NDN. One NDN deployment challenge is measurement. Unlike IP, which has a suite of measurement approaches and tools, NDN has only a few. NDN routing and forwarding are based on name prefixes that do not refer to individual endpoints. While rich NDN functionalities facilitate data distribution, they also break traditional end-to-end probing-based measurement methods. In this dissertation, we present our work to investigate NDN measurements and fill some research gaps in the field. The thesis of this dissertation is that we can capture a substantial amount of useful and actionable measurements of NDN networks from end hosts. We start by comparing IP and NDN to propose a conceptual framework for NDN measurements. We claim that NDN can be seen as a superset of IP: NDN supports functionalities similar to those provided by IP, but it has unique features to facilitate data retrieval. The framework helps identify that NDN lacks measurements in various aspects. This dissertation focuses on investigating active measurements from end hosts. We present our studies in two directions to support the thesis statement. We first present a study that leverages the similarities to replicate IP approaches in NDN networks. We show the first work to measure the NDN-DPDK forwarder, a high-speed NDN forwarder designed and implemented by the National Institute of Standards and Technology (NIST), in a real testbed. The results demonstrate that Data payload sizes dominate the forwarding performance and that efficiently using every fragment improves the goodput. We then present the first work to replicate packet dispersion techniques in NDN networks. Based on the findings from the NDN-DPDK forwarder benchmark, we devise techniques to measure the interarrivals of Data packets. The results show that the techniques successfully estimate the capacity on end hosts when 1 Gbps network cards are used. Our measurements also indicate that the NDN-DPDK forwarder introduces variance in Data packet interarrivals; we identify the potential bottlenecks and the possible causes of this variance. We then address NDN-specific measurements, measuring the caching state in NDN networks from end hosts. We propose a novel method to extract fingerprints for various caching decision mechanisms. Our simulation results demonstrate that the method can detect caching decisions in a few rounds. We also show that the method is not sensitive to cross-traffic and can be deployed on real topologies for caching policy detection.
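
    The packet dispersion idea referenced above can be sketched as follows: for back-to-back packets of known size, the bottleneck capacity is approximately the packet size divided by the interarrival gap. The sizes and gaps below are made-up examples consistent with a 1 Gbps link, not measurements from the dissertation.

```python
from statistics import median

def capacity_bps(packet_bytes: int, interarrival_s: float) -> float:
    """Packet-pair estimate: bottleneck capacity ~= packet size / dispersion."""
    return packet_bytes * 8 / interarrival_s

# (Data packet size in bytes, interarrival gap in seconds) for several
# back-to-back pairs; values are illustrative, not NDN testbed measurements.
pairs = [
    (8800, 7.10e-05),
    (8800, 7.00e-05),
    (8800, 7.30e-05),
    (8800, 6.90e-05),
]

estimates = [capacity_bps(size, gap) for size, gap in pairs]
print(f"median capacity estimate: {median(estimates) / 1e9:.2f} Gbit/s")
```

    Taking the median over many pairs damps the interarrival variance that the dissertation attributes to the forwarder itself.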

    Comparative Modalities of Network Neutrality

    This project examines the ongoing debate over internet content discrimination, more commonly referred to as network neutrality. It offers a new approach to examining this issue by combining a critical, political economy approach with Lawrence Lessig's four modalities of regulation: policy, architecture, markets, and norms. It presents a critical, comparative case study analysis of how architecture, markets, and norms have shaped United States policy, along with comparative examples from select international case studies facing similar regulatory issues. Its findings suggest that while each of the four modalities plays a significant role in the regulation and persistence of network neutrality, there is a need for clearer, more robust policy measures to address content discrimination online. Based on these analyses, the author offers policy recommendations for future network neutrality regulation.

    The Internet as recommendation engine : implications of online behavioral targeting

    Thesis (S.M. in Technology and Policy), Massachusetts Institute of Technology, Engineering Systems Division, 2010. This thesis discusses the economic implications of Internet behavioral advertising, which targets ads to individuals based on extensive, detailed data about the specific websites users have visited. Previous literature on behavioral advertising has focused almost exclusively on privacy issues; there has been less study of how it might affect industry structure. This thesis examines which parties in the online advertising value chain would benefit the most from the demand for detailed behavioral data; in particular, it examines whether aggregators (such as advertising networks) that track behavior across a large number of websites would derive the greatest benefit. Qualitative stakeholder analysis is used to identify the strengths and weaknesses of several categories of actors: advertisers, advertising agencies, publishers, advertising networks, advertising exchanges, Internet service providers, and users. Advertising agencies might attempt to bypass networks and work directly with publishers, becoming aggregators in their own right. Publishers might need to become interactive "information experiences" in order to collect valuable behavioral data. Users might demand more transparency about what is happening with their data, or even more control over the data collection process. Overall, agencies, networks, and advertising exchanges appear to be in the best position; publishers are faced with a harder task. Furthermore, behavioral targeting may not result in a dramatic increase in overall online advertising spending. By Anthony N. Smith-Grieco.