
    Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support.

    A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of cloud computing in such networks does not provide an adequate Quality of Experience (QoE) in areas with high demand for multimedia content. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be triggered by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. We also evaluate such a migration for video services. Finally, we present potential research challenges and trends.

    Understanding Internet topology: principles, models, and validation

    Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that an improved understanding of the Internet's physical infrastructure is possible by viewing its physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers in the TCP/IP protocol stack, subject to practical constraints (e.g., router technology) and economic considerations (e.g., link costs). More importantly, by relying on data from Abilene, a Tier-1 ISP, and the Rocketfuel project, we provide empirical evidence in support of the proposed approach and its consistency with networking reality. To illustrate its utility, we: 1) show that our approach provides insight into the origin of high variability in measured or inferred router-level maps; 2) demonstrate that it easily accommodates the incorporation of additional objectives of network design (e.g., robustness to router failure); and 3) discuss how it complements ongoing community efforts to reverse-engineer the Internet.
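    To make the "annotated graph" view concrete, the following is a minimal sketch under assumptions of our own (toy routers, made-up bandwidth and cost annotations, and a fictitious aggregate-switching limit standing in for real router-technology feasibility constraints); it is not the paper's model, only an illustration of annotating links and checking a per-router constraint.

```python
# Illustrative only: a router-level topology as an annotated graph.
# Links carry bandwidth and cost annotations; routers are checked against a
# (made-up) technology constraint on total terminated bandwidth.

links = {  # (router_a, router_b) -> annotations
    ("r1", "r2"): {"bandwidth_gbps": 10, "cost": 4.0},
    ("r2", "r3"): {"bandwidth_gbps": 40, "cost": 9.5},
    ("r2", "r4"): {"bandwidth_gbps": 10, "cost": 3.0},
}

def router_load(router):
    """Aggregate bandwidth terminating at a router."""
    return sum(ann["bandwidth_gbps"] for ends, ann in links.items() if router in ends)

def within_technology_limit(router, max_switching_gbps=60):
    """Toy stand-in for a router-technology (capacity) constraint."""
    return router_load(router) <= max_switching_gbps

print(router_load("r2"), within_technology_limit("r2"))  # 60 True
```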

    Antitrust Analysis for the Internet Upstream Market: a BGP Approach

    In this paper we study concentration in the European Internet upstream access market. Measuring market concentration depends on correctly defining the market, but this is not always possible, as antitrust authorities often lack reliable pricing and traffic data. We present an alternative approach based on inferring Internet operators' interconnection policies from micro-data sourced from their Border Gateway Protocol tables. First, we propose a price-independent algorithm for defining both the vertical and geographical boundaries of the relevant market; we then calculate market concentration indexes using two novel metrics, which assess, for each undertaking, both its role as an essential network facility and its wholesale market dominance. The results, applied to four leading Internet Exchange Points in London, Amsterdam, Frankfurt and Milan, show that some vertical segments of these markets are extremely competitive, while others are highly concentrated, putting them within the special attention category of the Merger Guidelines.
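    As an illustration of the general idea (not the paper's two metrics, which are not reproduced here), a standard concentration index such as the Herfindahl-Hirschman Index can be computed from per-operator shares inferred from BGP data. In the sketch below, `transit_prefixes_by_as` is a hypothetical mapping from an upstream AS to the number of customer prefixes it announces, used as a stand-in for the pricing and traffic data that are typically unavailable.

```python
# Hypothetical sketch: estimating upstream-market concentration from
# BGP-derived shares. Prefix counts per upstream AS are a proxy input,
# not data from the paper.

def market_shares(transit_prefixes_by_as):
    """Share of the relevant market attributed to each upstream AS."""
    total = sum(transit_prefixes_by_as.values())
    return {asn: n / total for asn, n in transit_prefixes_by_as.items()}

def hhi(shares):
    """Herfindahl-Hirschman Index on the conventional 0-10,000 scale."""
    return sum((100 * s) ** 2 for s in shares.values())

# Example with made-up figures for three upstream providers at one exchange point.
prefixes = {"AS65001": 5200, "AS65002": 3100, "AS65003": 900}
print(f"HHI = {hhi(market_shares(prefixes)):.0f}")
# Roughly 4400 here; values above about 2000-2500 are commonly treated as
# highly concentrated in merger-review practice.
```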

    An overview of ADSL homed nepenthes honeypots in Western Australia

    This paper outlines initial analysis from research in progress into ADSL-homed Nepenthes honeypots. One of the prime objectives of the Nepenthes honeypots in this research was the collection of malware for analysis and dissection. A further objective is the analysis of risks circulating within ISP networks in Western Australia. What differentiates Nepenthes from many traditional honeypot designs is that it has been engineered from a distributed network philosophy. The program allows results to be distributed across a network of sensors, and malware statistics to be readily aggregated within a large network environment.

    Cost-aware caching: optimizing cache provisioning and object placement in ICN

    Caching is frequently used by Internet Service Providers as a viable technique to reduce the latency perceived by end users while also offloading network traffic. While the cache hit-ratio is generally considered in the literature as the dominant performance metric for such systems, in this paper we argue that a critical missing piece has so far been neglected. Adopting a radically different perspective, we explicitly account for the cost of content retrieval, i.e., the cost associated with the external bandwidth needed by an ISP to retrieve the content requested by its customers. Interestingly, we discover that classical cache provisioning techniques that maximize cache efficiency (i.e., the hit-ratio) lead to suboptimal solutions with higher overall cost. To show this mismatch, we propose two optimization models that either minimize the overall cost or maximize the hit-ratio, jointly providing cache sizing, object placement and path selection. We formulate a polynomial-time greedy algorithm to solve the two problems and analytically prove its optimality. We provide numerical results and show that significant cost savings are attainable via a cost-aware design.
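    To illustrate the cost-aware intuition (a minimal sketch under simplified assumptions: a single cache of fixed capacity and per-object request rates, sizes, and upstream retrieval costs, all made up), the snippet below greedily caches the objects that avoid the most external-bandwidth cost per unit of cache space. This knapsack-style density heuristic is only an illustration and is not optimal in general, unlike the jointly formulated algorithm analyzed in the paper.

```python
# Illustrative cost-aware object placement at a single cache.
# objects: list of (name, size, request_rate, retrieval_cost) tuples.

def greedy_cost_aware_placement(objects, capacity):
    """Greedily cache objects with the highest avoided cost per unit of space."""
    ranked = sorted(
        objects,
        key=lambda o: (o[2] * o[3]) / o[1],  # avoided cost rate per unit size
        reverse=True,
    )
    placed, used = [], 0
    for name, size, rate, cost in ranked:
        if used + size <= capacity:
            placed.append(name)
            used += size
    return placed

catalog = [
    ("video_a", 4, 10.0, 0.5),  # large, popular, cheap to fetch upstream
    ("video_b", 1, 3.0, 2.0),   # small, moderately popular, expensive to fetch
    ("video_c", 2, 1.0, 0.2),   # rarely requested, cheap to fetch
]
print(greedy_cost_aware_placement(catalog, capacity=5))  # -> ['video_b', 'video_a']
```

    Note that a purely hit-ratio-driven policy would favor the most popular objects regardless of retrieval cost, which is exactly the mismatch the abstract points out.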

    Characterizing and Improving the Reliability of Broadband Internet Access

    In this paper, we empirically demonstrate the growing importance of reliability by measuring its effect on user behavior. We present an approach for broadband reliability characterization using data collected by the many emerging national initiatives to study broadband, and apply it to the data gathered by the Federal Communications Commission's Measuring Broadband America project. Motivated by our findings, we present the design, implementation, and evaluation of a practical approach for improving the reliability of broadband Internet access with multihoming.
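    As a back-of-the-envelope illustration of why multihoming improves reliability (assuming independent link failures and made-up availability figures, not results from the paper), access is lost only when every upstream link is down at the same time:

```python
# Illustrative only: availability of a multihomed connection under an
# independence assumption; the input availabilities are hypothetical.

def multihomed_availability(availabilities):
    """Probability that at least one upstream link is up."""
    p_all_down = 1.0
    for a in availabilities:
        p_all_down *= (1.0 - a)
    return 1.0 - p_all_down

# Two links at 99% availability each give roughly 99.99% combined.
print(multihomed_availability([0.99, 0.99]))  # 0.9999
```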