
    Optimal Pricing and Capacity Allocation in Vertically Differentiated Web Caching Services

    Internet infrastructure is a key enabler of e-business. The infrastructure consists of backbone networks (such as UUNET and AT&T), access networks (such as AOL and Earthlink), content delivery networks (CDNs, such as Akamai) and other caching service providers. Together, all of these players make up the digital supply chain for information goods. Caches provisioned by CDNs and other entities are the storage centers, the digital equivalent of warehouses. These caches store and deliver information from the edge of the network and serve to stabilize and add efficiency to content delivery. While the benefits of caching to content providers with regard to scaling content delivery globally, reducing bandwidth costs and response times are well recognized, caching has not become pervasive. This is largely due to misaligned incentives in the delivery chain. Much of the work done to date on Web caching has focused on the technology to provision quality of service and has not dealt with issues of fundamental importance to the business of provisioning caching services, specifically the design of incentive-compatible services, appropriate pricing schemes, and the associated resource allocation issues that arise in operating a caching service. We discuss the design of incentive-compatible caching services that we refer to as quality-of-service caching. Pricing plays an important role in aligning the incentives. We develop an analytic model to study the IAP's optimal pricing and capacity allocation policies.
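
    The abstract does not reproduce the analytic model itself, so the following is only a toy sketch of the price-and-capacity trade-off it describes. The uniform publisher valuations, the capacity cap on served demand, the linear capacity cost, and all parameter values are our own illustrative assumptions, not the paper's.

        # Toy sketch only -- NOT the paper's model. An access provider prices a caching
        # service for content publishers and provisions cache capacity. Publisher
        # valuations are assumed Uniform(0, 1); served demand is capped by capacity;
        # capacity has a linear cost.
        import numpy as np

        def profit(price, capacity, capacity_cost=0.2):
            subscribers = max(0.0, 1.0 - price)   # fraction of publishers buying at this price
            served = min(subscribers, capacity)   # capacity limits how much demand is served
            return price * served - capacity_cost * capacity

        prices = np.linspace(0.0, 1.0, 101)
        capacities = np.linspace(0.0, 1.0, 101)
        best = max(((profit(p, c), p, c) for p in prices for c in capacities), key=lambda t: t[0])
        print(f"toy optimum: profit={best[0]:.3f} at price={best[1]:.2f}, capacity={best[2]:.2f}")

    Under these assumptions the grid search lands at an interior price with capacity provisioned exactly to the subscribing demand, which is the flavor of trade-off the analytic model formalizes.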

    Pricing and Resource Allocation in Caching Services With Multiple Levels of Quality of Service

    Network caches are the storage centers in the supply chain for content delivery—the digital equivalent of warehouses. Operated by access networks and other operators, they provide benefits to content publishers in the form of bandwidth cost reduction, response time improvement, and handling of flash crowds. Yet, caching has not been fully embraced by publishers, because its use can interfere with site personalization strategies and/or collection of visitor information for business intelligence purposes. While recent work has focused on technological solutions to these issues, this paper provides the first study of the managerial issues related to the design and provisioning of incentive-compatible caching services. Starting with a single class of caching service, we find conditions under which the profit-maximizing cache operator should offer the service for free. This occurs when the access networks’ bandwidth costs are high and a large fraction of content publishers value personalization and business intelligence. Some publishers will still opt out of the service, i.e., cache bust, as observed in practice. We next derive the conditions under which the profit-maximizing cache operator should provision two vertically differentiated service classes, namely, premium and best effort. Interestingly, caching service differentiation is different from traditional vertical differentiation models, in that the premium and best-effort market segments do not abut. Thus, optimal prices for the two service classes can be set independently and cannibalization does not occur. It is possible for the cache operator to continue to offer the best-effort service for free while charging for the premium service. Furthermore, consumers are better off because more content is cached and delivered faster to them. Finally, we find that declining bandwidth costs will put negative pressure on cache operator profits, unless consumer adoption of broadband connectivity and the availability of multimedia content provide the necessary increase in traffic volume for the caches.
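
    Purely as an illustration of the non-abutting-segments observation, the sketch below prices a premium and a best-effort class over two disjoint pools of publishers. The disjoint pools, uniform valuations, and parameter values are assumptions of ours rather than the paper's; under them, optimizing each price separately coincides with a joint search, so no cannibalization arises.

        # Illustrative sketch, not the paper's model: two service classes sold to
        # disjoint publisher pools with Uniform(0, v_max) valuations in each pool.
        import numpy as np

        def segment_profit(price, pool_size, v_max):
            buying_fraction = max(0.0, 1.0 - price / v_max)   # buyers have value >= price
            return price * pool_size * buying_fraction

        prices = np.linspace(0.0, 2.0, 201)
        # Hypothetical disjoint pools: freshness-sensitive publishers value premium caching more.
        premium_pool, premium_vmax = 0.3, 2.0
        best_pool, best_vmax = 0.7, 0.8

        p_premium = prices[np.argmax([segment_profit(p, premium_pool, premium_vmax) for p in prices])]
        p_best = prices[np.argmax([segment_profit(p, best_pool, best_vmax) for p in prices])]

        # A joint grid search over both prices recovers the same pair: no cannibalization here.
        joint = max(((segment_profit(a, premium_pool, premium_vmax)
                      + segment_profit(b, best_pool, best_vmax), a, b)
                     for a in prices for b in prices), key=lambda t: t[0])
        print(f"separate: premium={p_premium:.2f}, best-effort={p_best:.2f}; "
              f"joint: {joint[1]:.2f}, {joint[2]:.2f}")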

    A Stock Options Metaphor for Content Delivery Networks

    The concept of Stock Options is used to address the scarcity of resources that is not adequately handled by the previous tools of our Prediction Mechanism. Using a Predictive Reservation Scheme, network and disk resources are monitored over a given time frame through well-established techniques (Kernel Regression Estimators). Next, a Secondary Market mechanism significantly improves the efficiency and robustness of our Predictive Reservation Scheme by allowing the fast exchange of unused (remaining) resources between the Origin Servers (CDN Clients). This exchange can happen either by implementing socially optimal practices or by allowing automatic electronic auctions at the end of the day or at shorter time intervals. Finally, we further enhance our Prediction Mechanism: Stock Options are obtained and exercised depending on the lack of resources at the end of the day. As a result, Origin Servers may acquire resources (if required) at a normal price, and the effectiveness of our mechanism further improves. (Comment: 35 pages, 13 figures)
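
    The abstract names Kernel Regression Estimators as the forecasting tool; the sketch below shows a minimal Nadaraya-Watson estimator of the kind it alludes to. The hourly demand series, the Gaussian kernel bandwidth, and the 10% reservation headroom are hypothetical choices of ours, not values from the paper.

        # Minimal Nadaraya-Watson kernel regression sketch for forecasting resource
        # demand; data and parameters are invented for illustration.
        import numpy as np

        def nadaraya_watson(x_train, y_train, x_query, bandwidth=1.0):
            """Predict y at x_query as a Gaussian-kernel-weighted average of y_train."""
            weights = np.exp(-0.5 * ((x_query - x_train) / bandwidth) ** 2)
            return np.sum(weights * y_train) / np.sum(weights)

        # Hypothetical hourly bandwidth demand (GB) observed for one origin server.
        hours = np.arange(24, dtype=float)
        demand = 50 + 30 * np.sin(hours / 24 * 2 * np.pi) + np.random.default_rng(0).normal(0, 3, 24)

        forecast = nadaraya_watson(hours, demand, x_query=24.0, bandwidth=2.0)
        reservation = 1.1 * forecast   # reserve 10% headroom; a shortfall would be covered on the secondary market
        print(f"forecast for the next hour: {forecast:.1f} GB, reservation: {reservation:.1f} GB")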

    The growing complexity of content delivery networks: Challenges and implications for the Internet ecosystem

    Since the commercialization of the Internet, content and related applications, including video streaming, news, advertisements, and social interaction, have moved online. It is broadly recognized that the rise of all of these different types of content (static and dynamic, and increasingly multimedia) has been one of the main forces behind the phenomenal growth of the Internet and its emergence as essential infrastructure for how individuals across the globe gain access to the content sources they want. To accelerate the delivery of diverse content on the Internet and to provide commercial-grade performance for video delivery and the Web, Content Delivery Networks (CDNs) were introduced. This paper describes the current CDN ecosystem and the forces that have driven its evolution. We outline the different CDN architectures and consider their relative strengths and weaknesses. Our analysis highlights the role of location, the growing complexity of the CDN ecosystem, and their relationship to and implications for interconnection markets. (Funding: EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNet)

    Linear Scalability of Distributed Applications

    The explosion of social applications such as Facebook, LinkedIn and Twitter, of electronic commerce with companies like Amazon.com and Ebay.com, and of Internet search has created the need for new technologies and appropriate systems to effectively manage a considerable amount of data and users. These applications must run continuously every day of the year and must be capable of surviving sudden and abrupt load increases as well as all kinds of software, hardware, human and organizational failures. Increasing (or decreasing) the allocated resources of a distributed application in an elastic and scalable manner, while satisfying requirements on availability and performance in a cost-effective way, is essential for commercial viability, but it poses great challenges in today's infrastructures. Indeed, Cloud Computing can provide resources on demand: it is now easy to start dozens of servers in parallel (computational resources) or to store a huge amount of data (storage resources), even for a very limited period, paying only for the resources consumed. However, these complex infrastructures consisting of heterogeneous and low-cost resources are failure-prone. Also, although cloud resources are deemed to be virtually unlimited, only adequate resource management and demand multiplexing can meet customer requirements and avoid performance deterioration. In this thesis, we deal with adaptive management of cloud resources under specific application requirements. First, in the intra-cloud environment, we address the problem of cloud storage resource management with availability guarantees and find the optimal resource allocation in a decentralized way by means of a virtual economy. Data replicas migrate, replicate or delete themselves according to their economic fitness. Our approach responds effectively to sudden load increases or failures and makes best use of the geographical distance between nodes to improve application-specific data availability. We then propose a decentralized approach for adaptive management of computational resources for applications requiring high availability and performance guarantees under load spikes, sudden failures or cloud resource updates. Our approach involves a virtual economy among service components (similar to the one among data replicas) and an innovative cascading scheme for setting up the performance goals of individual components so as to meet the overall application requirements. Our approach manages to meet application requirements with the minimum resources, by allocating new ones or releasing redundant ones. Finally, as cloud storage vendors offer online services at different rates, which can vary widely due to second-degree price discrimination, we present an inter-cloud storage resource allocation method to aggregate resources from different storage vendors and provide to the user a system which guarantees the best rate to host and serve its data, while satisfying the user requirements on availability, durability, latency, etc. Our system continuously optimizes the placement of data according to its type and usage pattern, and minimizes migration costs from one provider to another, thereby avoiding vendor lock-in.
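
    The thesis summary describes replicas that "migrate, replicate or delete themselves according to their economic fitness" without giving the rule; the sketch below is one possible reading of such a rule, with the revenue, rent, thresholds, and minimum replica count all invented for illustration rather than taken from the thesis.

        # Hypothetical "virtual economy" decision rule for a single data replica: it
        # earns virtual revenue from the requests it serves, pays virtual rent for the
        # storage it occupies, and acts on its resulting fitness.
        from dataclasses import dataclass

        @dataclass
        class Replica:
            requests_served: float      # requests per epoch routed to this replica
            revenue_per_request: float  # virtual revenue per served request
            storage_rent: float         # virtual cost of staying on the current node
            replica_count: int          # live replicas of the same object
            min_replicas: int = 2       # availability floor (assumed)

            def fitness(self) -> float:
                return self.requests_served * self.revenue_per_request - self.storage_rent

            def decide(self, replicate_threshold=10.0, cheaper_node_rent=None) -> str:
                f = self.fitness()
                if f > replicate_threshold:
                    return "replicate"   # demand justifies another copy
                if f < 0 and self.replica_count > self.min_replicas:
                    return "delete"      # unprofitable and redundant
                if cheaper_node_rent is not None and cheaper_node_rent < self.storage_rent:
                    return "migrate"     # same demand, lower rent elsewhere
                return "stay"

        print(Replica(requests_served=120, revenue_per_request=0.1,
                      storage_rent=1.5, replica_count=3).decide())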

    A Broadband Access Market Framework: Towards Consumer Service Level Agreements

    Ubiquitous broadband access is considered by many to be necessary for the Internet to realize its full potential. But there is no generally accepted definition of what constitutes broadband access. Furthermore, there is only limited understanding of how the quality of end-to-end broadband Internet services might be assured in today's nascent multi-service, multi-provider environment. The absence of generally accepted and standardized service definitions and mechanisms for assuring service quality is a significant barrier to competitive broadband access markets. In the business data services market and in the core of the Internet, this problem has been addressed, in part, by increased reliance on Service Level Agreements (SLAs). These SLAs provide a mechanism for service providers and customers to flexibly specify the quality of service (QoS) that will be delivered. When used in conjunction with the new standards-based technical solutions for implementing QoS, these SLAs are helping to facilitate the development of robust wholesale markets for backbone transport services and content delivery services for commercial customers. The emergence of bandwidth traders, brokers, and exchanges provides an institutional and market-based framework to support effective competition.

    I own the pipes, you call the tune? The net neutrality debate and its (ir)relevance for Europe

    The debate over so-called “net neutrality” has been under the spotlight in the US for many years, whereas many believed it would not become an issue in Europe. However, over the past few months the need to revise the current regulatory framework to encourage investment in all-IP networks has led to greater attention to net neutrality and its consequences for investment and competition. After the Commission adopted a “light-touch” approach to the issue at the end of 2007, the European Parliament has started to reconsider the issue, and it is reportedly considering a move towards more pro-neutrality rules. This paper summarises the main issues at hand in the net neutrality debate and the views expressed by advocates and opponents of the neutrality principle. The problem is described from a multi-sided market perspective, stressing the role of network operators as intermediaries in the “layered” architecture of all-IP networks. Finally, the paper discusses whether the European regulatory framework and its interaction with ex post competition policy are likely to solve many of the concerns of net neutrality advocates without any need for ad hoc regulation; and whether currently proposed solutions are likely to prove welfare-enhancing and conducive to a better regulatory environment for future e-communications.

    Network Effects, Nonlinear Pricing and Entry Deterrence

    A number of technology products display positive network effects, and are used in variable quantities by heterogeneous customers. Examples include operating systems, infrastructure and back-end software, web services and networking equipment. This paper studies optimal nonlinear pricing for such products, under incomplete information, and with the threat of competitive entry. Both homogeneous and heterogeneous network effects are modeled. Conditions under which a fulfilled-expectations contract exists and is unique are established. While network effects generally raise price, it is shown that accompanying changes in consumption depend on the nature of the network effects: in some cases, it is optimal for the monopolist to induce no changes in usage across customers, while in other cases, network effects raise the usage of all market participants. Optimal pricing is shown to include quantity discounts that increase with usage, and may also involve a nonlinear two-part tariff. These results highlight the impact of network effects on trade-offs between price discrimination and value creation, and have important managerial implications for pricing policy in technology markets. The need to deter competitive entry generally lowers profits for the monopolist, and increases customer surplus. When network effects are homogeneous across customers, the resulting entry-deterring monopoly contract is a fixed fee and results in the socially optimal outcome. However, when the magnitude of heterogeneous network effects is relatively high, there are no changes in total surplus induced by the entry threat, and the price changes merely cause a transfer of value from the seller to its customers. The presence of network effects, and of a credible entry threat, is also shown to increase distributional efficiency by reducing the disparity in relative value captured by different customer types. Regulatory and policy implications of these results are discussed. (Information Systems Working Papers Series)
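
    To make the notion of a fulfilled-expectations outcome concrete, the sketch below iterates customer usage decisions under an assumed quadratic utility with a homogeneous network effect and a linear price until expected and realized total usage coincide. The functional form and all parameter values are our assumptions and are not the paper's nonlinear-pricing model.

        # Back-of-the-envelope fulfilled-expectations fixed point, not the paper's model:
        # each customer of type theta chooses q to maximize
        # theta*q + a*q*Q_expected - p*q - 0.5*q**2, so q = max(0, theta + a*Q_expected - p).
        import numpy as np

        thetas = np.linspace(0.5, 1.5, 50)   # heterogeneous standalone valuations (assumed)
        a, p = 0.01, 0.6                      # network-effect strength and a linear price (assumed)

        def realized_total_usage(expected_total):
            q = np.maximum(0.0, thetas + a * expected_total - p)
            return q.sum()

        Q = 0.0
        for _ in range(500):                  # iterate expectations until they are fulfilled
            Q = realized_total_usage(Q)
        print(f"fulfilled-expectations total usage: {Q:.2f}")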
