
    Characterizing and Improving the Reliability of Broadband Internet Access

    In this paper, we empirically demonstrate the growing importance of reliability by measuring its effect on user behavior. We present an approach for broadband reliability characterization using data collected by the many emerging national initiatives to study broadband, and apply it to the data gathered by the Federal Communications Commission's Measuring Broadband America project. Motivated by our findings, we present the design, implementation, and evaluation of a practical approach for improving the reliability of broadband Internet access with multihoming. (15 pages, 14 figures, 6 tables)
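    The abstract does not detail the paper's multihoming design, so the following is only a minimal sketch, assuming a simple failover policy across two access links; the link names and gateway addresses are hypothetical placeholders.

        # Minimal failover-style multihoming sketch (not the paper's system):
        # probe two access links and prefer the primary while it is reachable.
        import subprocess

        LINKS = [
            {"name": "isp_a", "gateway": "192.0.2.1"},     # primary (placeholder address)
            {"name": "isp_b", "gateway": "198.51.100.1"},  # backup  (placeholder address)
        ]

        def link_is_up(gateway: str, timeout_s: int = 1) -> bool:
            """Probe a link by pinging its gateway once (Linux ping flags assumed)."""
            result = subprocess.run(
                ["ping", "-c", "1", "-W", str(timeout_s), gateway],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
            )
            return result.returncode == 0

        def select_active_link():
            """Return the first reachable link, i.e. fail over in list order."""
            for link in LINKS:
                if link_is_up(link["gateway"]):
                    return link
            return None  # both links are down

        if __name__ == "__main__":
            active = select_active_link()
            print("active link:", active["name"] if active else "none")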

    A telecom analytics framework for dynamic quality of service management

    Since the beginning of the Internet, Internet Service Providers (ISPs) have seen the need to give users' traffic different treatments defined by agreements between the ISP and its customers. This procedure, known as Quality of Service Management, has not changed much in recent years (DiffServ and Deep Packet Inspection have been the most widely chosen mechanisms). However, the incremental growth of Internet users and services, together with the application of recent Machine Learning techniques, opens up the possibility of going one step forward in the smart management of network traffic. In this paper, we first survey current tools and techniques for QoS Management. We then introduce clustering and classification Machine Learning techniques for traffic characterization, along with the concept of Quality of Experience. Finally, with all these components, we present a brand-new framework that will manage Quality of Service in a smart way in a telecom Big Data scenario, for both mobile and fixed communications.
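    As one concrete instance of the clustering step the paper introduces (an illustrative assumption, not taken from the paper), the sketch below groups per-flow traffic features with k-means; the feature set, sample values, and cluster count are all assumed.

        # Illustrative sketch: clustering per-flow traffic features with k-means,
        # one plausible instance of ML-based traffic characterization for QoS.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # Hypothetical per-flow features: mean packet size (bytes),
        # mean inter-arrival time (ms), flow duration (s), bytes transferred.
        flows = np.array([
            [1400, 0.8, 120.0, 5.0e8],   # bulk-transfer-like flow
            [160, 20.0, 300.0, 2.0e6],   # VoIP-like flow
            [900,  5.0,  60.0, 4.0e7],   # video-streaming-like flow
            [200, 50.0,  10.0, 1.0e5],   # web-browsing-like flow
        ])

        # Standardize features so no single scale dominates the distance metric.
        scaled = StandardScaler().fit_transform(flows)

        # Group flows into traffic classes; a QoS manager could then map each
        # class to a treatment (e.g. a DiffServ class).
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
        print("cluster label per flow:", labels)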

    Estimating packet loss rate in the access through application-level measurements

    End-user monitoring of quality of experience is one of the necessary steps to achieve effective control over network neutrality. Involving the end user, however, requires the development of lightweight, user-friendly tools that can easily be run at the application level with limited effort and network resource usage. In this paper, we propose a simple model to estimate the packet loss rate perceived by a connection from round-trip time and TCP goodput samples collected at the application level. The model is derived from the well-known Mathis equation, which predicts the bandwidth of a steady-state TCP connection under random losses and delayed ACKs; it is evaluated in a testbed environment under a wide range of conditions, and experiments are also run on real access networks. We plan to use the model to analyze the results collected by the "network neutrality bot" (Neubot), a research tool that performs application-level network-performance measurements. The methodology, however, is easily portable and is of interest to essentially any user application that performs large downloads or uploads and needs to estimate access-network quality and its variation.
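    A minimal sketch of the idea, assuming the standard Mathis form goodput ≈ (MSS/RTT) * sqrt(3/(2*b*p)) with b = 2 packets acknowledged per delayed ACK; the paper's exact model may differ. Solving for the loss probability gives p = (3/(2*b)) * (MSS/(RTT*goodput))^2:

        # Invert the Mathis throughput model to estimate the loss rate from
        # application-level RTT and goodput samples (sketch, assumptions above).

        def estimate_loss_rate(goodput_bps: float, rtt_s: float,
                               mss_bytes: int = 1460, b: int = 2) -> float:
            """Return the loss probability p implied by the Mathis equation."""
            mss_bits = mss_bytes * 8
            p = (3.0 / (2.0 * b)) * (mss_bits / (rtt_s * goodput_bps)) ** 2
            return min(p, 1.0)  # clamp: the model breaks down at very high loss

        # Example: 10 Mbit/s goodput at 50 ms RTT implies roughly p ≈ 4e-4.
        print(estimate_loss_rate(goodput_bps=10e6, rtt_s=0.05))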

    Smartphone-based crowdsourcing for estimating the bottleneck capacity in wireless networks

    Crowdsourcing enables the fine-grained characterization and performance evaluation of today's large-scale networks using the power of the masses and distributed intelligence. This paper presents SmartProbe, a system that assesses the bottleneck capacity of Internet paths using smartphones, from a mobile crowdsourcing perspective. With SmartProbe, measurement activities are more bandwidth-efficient than in similar systems, so a larger number of users can be supported. An application built on SmartProbe is also presented: georeferenced measurements are mapped and used to compare the performance of mobile broadband operators over wide areas. Results from one year of operation are included.
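    The abstract does not specify SmartProbe's measurement algorithm; the sketch below shows the classic packet-pair dispersion technique commonly used for bottleneck capacity estimation, as an assumed stand-in: two back-to-back packets spread out at the bottleneck link, so capacity ≈ packet size / arrival dispersion.

        # Packet-pair capacity estimation sketch (not SmartProbe's actual code).
        import statistics

        def capacity_from_pairs(packet_size_bytes: int,
                                dispersions_s: list[float]) -> float:
            """Estimate bottleneck capacity (bit/s) from packet-pair dispersions.

            The median dispersion filters out pairs distorted by cross traffic.
            """
            dispersion = statistics.median(dispersions_s)
            return packet_size_bytes * 8 / dispersion

        # Example: 1500-byte pairs arriving ~1.2 ms apart suggest ~10 Mbit/s.
        samples = [0.00121, 0.00118, 0.00350, 0.00119, 0.00122]  # seconds
        print(f"{capacity_from_pairs(1500, samples) / 1e6:.1f} Mbit/s")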

    Service-centric networking for distributed heterogeneous clouds

    Optimal placement and selection of service instances in a distributed heterogeneous cloud involves a complex trade-off between application requirements and resource capabilities, and requires detailed information on the service, infrastructure constraints, and the underlying IP network. In this article, we first argue, from an analysis of a snapshot of today's centralized and regional data center infrastructure, that there is a sufficient number of candidate sites for deploying many services while meeting latency and bandwidth constraints. We then provide quantitative arguments for why both network and hardware performance need to be taken into account when selecting candidate sites for deploying a given service. Finally, we propose a novel architectural solution for service-centric networking. The resulting system exploits the availability of fine-grained execution nodes across the Internet and uses knowledge of available computational and network resources to deploy, replicate, and select instances so as to optimize quality of experience for a wide range of services.
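    To make the trade-off concrete, here is a minimal sketch (an illustration, not the article's architecture): filter candidate sites by a service's latency and bandwidth requirements, then pick the feasible site with the most spare compute. All fields, names, and thresholds are assumed.

        # Candidate-site selection sketch for a latency-sensitive service.
        from dataclasses import dataclass

        @dataclass
        class Site:
            name: str
            rtt_ms: float          # network RTT from the client region
            bandwidth_mbps: float  # available path bandwidth
            free_cpus: int         # spare compute at the site

        def select_site(sites: list[Site], max_rtt_ms: float,
                        min_bw_mbps: float) -> Site | None:
            """Return the feasible site with the most spare compute, if any."""
            feasible = [s for s in sites
                        if s.rtt_ms <= max_rtt_ms and s.bandwidth_mbps >= min_bw_mbps]
            return max(feasible, key=lambda s: s.free_cpus, default=None)

        sites = [
            Site("regional-dc", rtt_ms=12, bandwidth_mbps=900, free_cpus=64),
            Site("central-dc", rtt_ms=45, bandwidth_mbps=2000, free_cpus=512),
            Site("edge-node", rtt_ms=3, bandwidth_mbps=100, free_cpus=4),
        ]
        # Require RTT under 20 ms and at least 200 Mbit/s: picks "regional-dc".
        print(select_site(sites, max_rtt_ms=20, min_bw_mbps=200))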