60 research outputs found

    The effect of (non-)competing brokers on the quality and price of differentiated internet services

    Price wars, as an important means of undercutting competitors and attracting customers, have spurred considerable work analyzing such conflict situations. However, most of these studies neglect quality of service (QoS), an important decision-making criterion. Furthermore, with the rise of service-oriented architectures, where players may offer different levels of QoS at different prices, more studies are needed to examine the interaction among players within the service hierarchy. In this paper, we present a new approach to modeling price competition in (virtualized) service-oriented architectures with multiple service levels. In our model, brokers, as intermediaries between end-users and service providers, offer different QoS by adapting the service they obtain from lower-level providers, matching the demands of their clients to the services of providers. To maximize profit, players, i.e., providers and brokers, at each level compete in a Bertrand game while offering different QoS. To maintain an oligopoly market, we then describe underlying dynamics that lead to a Bertrand game with price constraints at the providers' level. We also study cooperation among a subset of brokers. Numerical simulations demonstrate the behavior of brokers and providers and the effect of price competition on their market shares.
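The flavor of the Bertrand competition described above can be sketched numerically: two brokers with different QoS levels repeatedly best-respond to each other's price until prices settle. The demand function and all parameter values below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of Bertrand price competition between two brokers
# offering different QoS. Demand is linear in own and rival price; each
# broker iterates a best response until prices converge. All parameters
# (alpha, beta, gamma, costs, QoS levels) are illustrative assumptions.

def demand(own_price, rival_price, qos, alpha=10.0, beta=2.0, gamma=1.0):
    """Linear demand: higher QoS attracts users, higher own price repels them."""
    return max(0.0, alpha + qos - beta * own_price + gamma * rival_price)

def best_response(rival_price, qos, cost):
    """Profit-maximizing price against the rival's current price (grid search)."""
    grid = [i / 100 for i in range(0, 1001)]
    return max(grid, key=lambda p: (p - cost) * demand(p, rival_price, qos))

p1, p2 = 5.0, 5.0                      # initial prices
for _ in range(50):                    # tatonnement toward equilibrium
    p1 = best_response(p2, qos=2.0, cost=1.0)   # high-QoS, higher-cost broker
    p2 = best_response(p1, qos=1.0, cost=0.8)   # low-QoS, lower-cost broker

print(p1, p2)
```

Under these assumptions the high-QoS broker sustains a higher equilibrium price, which is the qualitative behavior the paper's simulations examine.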

    Subsidization Competition: Vitalizing the Neutral Internet

    Unlike telephone operators, which pay termination fees to reach the users of another network, Internet Content Providers (CPs) do not pay the Internet Service Providers (ISPs) of the users they reach. While the consequent cross-subsidization to CPs has nurtured content innovation at the edge of the Internet, it reduces the investment incentives for access ISPs to expand capacity. As potential charges for terminating CPs' traffic are criticized under the net neutrality debate, we propose to allow CPs to voluntarily subsidize the usage-based fees induced by their content traffic for end-users. We model the regulated subsidization competition among CPs under a neutral network and show how deregulation of subsidization could increase an access ISP's utilization and revenue, strengthening its investment incentives. Although the competition might harm certain CPs, we find that the main cause comes from high access prices rather than the existence of subsidization. Our results suggest that subsidization competition will increase the competitiveness and welfare of the Internet content market; however, regulators might need to regulate access prices if the access ISP market is not competitive enough. We envision that subsidization competition could become a viable model for the future Internet.
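A toy version of the voluntary-subsidization idea can illustrate why a CP might cover part of the user's usage-based fee. Here the CP chooses what fraction s of the ISP's per-unit fee to absorb: subsidizing raises demand (and thus ad revenue) but costs the CP the subsidy itself. This simple linear-demand model and all parameter values are assumptions for illustration, not the paper's model.

```python
# Hypothetical sketch: the access ISP charges users a usage-based fee r per
# unit of traffic; the CP may cover a fraction s of that fee for its own
# traffic. Demand falls linearly in the effective user price (1 - s) * r;
# the CP earns ad revenue a per unit. All parameters are illustrative.

def demand(effective_price, d0=100.0, slope=40.0):
    return max(0.0, d0 - slope * effective_price)

def cp_profit(s, r, a=2.0):
    d = demand((1 - s) * r)
    return (a - s * r) * d          # ad revenue minus the subsidy paid

r = 1.0                              # ISP's usage-based fee
best_s = max((i / 100 for i in range(101)), key=lambda s: cp_profit(s, r))
print(best_s, demand((1 - best_s) * r), demand(r))
```

With these numbers the CP's optimal subsidy is strictly positive: demand (and hence the ISP's utilization) rises relative to the no-subsidy case, which is the mechanism the paper argues strengthens the ISP's investment incentives.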

    Network Non-neutrality Debate: An Economic Analysis

    This paper studies the economic utilities and the quality of service (QoS) in a two-sided non-neutral market where Internet service providers (ISPs) charge content providers (CPs) for content delivery. We propose new models of a two-sided market involving a CP, an ISP, end users, and advertisers. The CP may have either a subscription revenue model (charging end users) or an advertisement revenue model (charging advertisers). We formulate the interactions between the ISP and the CP as a noncooperative game problem for the former and an optimization problem for the latter. Our analysis shows that the revenue model of the CP plays a significant role in a non-neutral Internet. With the subscription model, the ISP and the CP simultaneously receive better (or worse) utilities, as well as QoS, in the presence of the side payment. With the advertisement model, however, the side payment impedes the CP from investing in its content.

    Latency-Sensitive Web Service Workflows: A Case for a Software-Defined Internet

    The Internet, at large, remains under the control of service providers and autonomous systems. The Internet of Things (IoT) and edge computing create an increasing demand, and potential, for more user control over web service workflows. Network softwarization is transforming the network landscape at every stage: building, incrementally deploying, and maintaining the environment. Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) are two core tenets of network softwarization. SDN offers a logically centralized control plane by abstracting away the control of the network devices in the data plane. NFV virtualizes dedicated hardware middleboxes and deploys them on top of servers and data centers as network functions. Thus, network softwarization enables efficient management of the system by enhancing its control and improving the reusability of network services. In this work, we propose our vision for a Software-Defined Internet (SDI) for latency-sensitive web service workflows. SDI extends network softwarization to Internet scale, to enable latency-aware user workflow execution on the Internet. (Accepted for publication at The Seventh International Conference on Software Defined Systems, SDS-2020.)

    Efficient Methods on Reducing Data Redundancy in the Internet

    The transformation of the Internet from a client-server paradigm to a content-based one has left many of the fundamental network designs outdated. The increase in user-generated content, instant sharing, flash popularity, etc., brings forward the need for an Internet design that is ready for these changes and can handle the needs of small-scale content providers. The Internet, as of today, carries and stores a large amount of duplicate, redundant data, primarily due to a lack of duplication-detection mechanisms and caching principles. This redundancy costs the network in different ways: it consumes energy in the network elements that must process the extra data; it makes network caches store duplicate data, causing the tail of the data distribution to be swapped out of the caches; and it increases the load on content servers, which must always serve the less popular content.  In this dissertation, we have analyzed the aforementioned phenomena and proposed several methods to reduce the redundancy of the network at low cost. The proposals take different approaches, including data chunk-level redundancy detection and elimination, rerouting-based caching mechanisms in information-centric networks, and energy-aware content distribution techniques. Using these approaches, we have demonstrated how redundancy elimination can be performed with low overhead and low processing power. We have also demonstrated that, using local or global cooperation methods, the storage efficiency of existing caches can be increased many-fold. In addition, this work shows that a sizable amount of traffic can be removed from the core network using collaborative content download mechanisms, while simultaneously reducing client devices' energy consumption.
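The chunk-level redundancy detection mentioned above is commonly built from two pieces: content-defined chunking (cutting a byte stream where a running hash matches a pattern, so boundaries survive insertions) and fingerprint caching (transmitting a chunk only the first time its fingerprint is seen). The sketch below is a generic illustration of that technique, not the dissertation's exact algorithm; the hash, mask, and size parameters are assumptions.

```python
# Illustrative sketch of chunk-level redundancy elimination: split a byte
# stream into content-defined chunks with a simple running hash, fingerprint
# each chunk with SHA-256, and cache fingerprints so duplicate chunks are
# counted (and would be transmitted) only once. Parameters are illustrative.

import hashlib

def chunks(data, mask=0x3F, min_size=32):
    """Cut after positions where the running hash's low bits match the mask
    (expected chunk size ~ min_size + mask + 1 bytes)."""
    start, h = 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF
        if i - start + 1 >= min_size and (h & mask) == mask:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]          # trailing chunk

def deduplicate(stream, cache):
    """Return the number of bytes actually sent; cached chunks cost nothing."""
    sent = 0
    for chunk in chunks(stream):
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in cache:
            cache[fp] = True
            sent += len(chunk)
    return sent

cache = {}
payload = b"hello redundant world " * 200
first = deduplicate(payload, cache)
second = deduplicate(payload, cache)   # every chunk already fingerprinted
print(first, second)
```

On the second pass the same stream yields identical chunks, so every fingerprint hits the cache and nothing is re-sent; this is the low-overhead saving the dissertation quantifies at the network scale.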

    Rethinking Routing and Peering in the era of Vertical Integration of Network Functions

    Content providers typically control digital content consumption services and earn most of their revenue through an all-you-can-eat model, via subscriptions or hyper-targeted advertisements. Revamping the existing Internet architecture and design, a vertical integration in which a content provider and an access ISP act as a single body, in a "sugarcane" form, appears to be the recent trend. As this vertical-integration trend emerges in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. Current routing is expected to need careful modifications and smart innovations to ensure effective and reliable end-to-end packet delivery. This involves developing new features for handling traffic with reduced latency, tackling routing scalability issues more securely, and offering new services at lower cost. Considering that prices of DRAM and TCAM in legacy routers are not necessarily decreasing at the desired pace, cloud computing can be a good solution for managing the increasing computation and memory complexity of routing functions in a centralized manner with optimized expenses. Focusing on the attributes of existing routing cost models, and exploring a hybrid approach to SDN, we also compare recent trends in cloud pricing (for both storage and service) to evaluate whether integrating cloud services with legacy routing would be economically beneficial for improved cost-efficiency. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers to explore the viability of peering between the emerging content-dominated sugarcane ISPs and the health of Internet economics.
    To this end, we introduce meta-peering, a term that encompasses automation efforts related to peering: from identifying a list of ISPs likely to peer, to injecting control-plane rules, to continuously monitoring and flagging any violation. Meta-peering is one of the many outcroppings of the vertical-integration procedure and could be offered to ISPs as a standalone service.