9 research outputs found

    On the Design and Analysis of Incentive Mechanisms in Network Science

    With the rapid development of communication, computing, and signal processing technologies, the last decade has witnessed a proliferation of emerging networks and systems, examples of which can be found in a wide range of domains: from online social networks like Facebook or Twitter, to crowdsourcing sites like Amazon Mechanical Turk or Topcoder, to online question-and-answer (Q&A) sites like Quora or Stack Overflow, all the way to new paradigms of traditional systems like cooperative communication networks and the smart grid. Unlike traditional networks and systems, where users are governed by fixed and predetermined rules, users in these emerging networks have the ability to make intelligent decisions, and their interactions are self-enforcing. Therefore, to achieve better system-wide performance, it is important to design effective incentive mechanisms to stimulate desired user behaviors. This dissertation contributes to the study of incentive mechanisms by developing game-theoretic frameworks to formally analyze strategic user behaviors in a network and to systematically design incentive mechanisms that achieve a wide range of system objectives. In this dissertation, we first consider cooperative communication networks and propose a reputation-based incentive mechanism to enforce cooperation among self-interested users. We analyze the proposed mechanism as an indirect reciprocity game and theoretically demonstrate the effectiveness of reputation in stimulating cooperation. Second, we propose a contract-based mechanism to incentivize a large group of self-interested electric vehicles with diverse preferences to act in a coordinated way to provide ancillary services to the power grid. We derive the optimal contract that maximizes the system designer's profit and propose an online learning algorithm to learn the optimal contract effectively. Third, we study the quality control problem for microtask crowdsourcing from the perspective of incentives. After analyzing two widely adopted incentive mechanisms and showing their limitations, we propose a cost-effective incentive mechanism that obtains high-quality solutions from self-interested workers while respecting the requester's budget constraint. Finally, we consider social computing systems, where value is created by voluntary user contributions and understanding how users participate is of key importance. We develop a game-theoretic framework to formally analyze the sequential decision making of strategic users in the presence of complex externalities. Our analysis is consistent with observations from real-world user behavior data and can be applied to guide the design of incentive mechanisms in practice.
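
    To make the first contribution concrete, here is a minimal Python sketch of cooperation stimulation through reputation in an indirect reciprocity game. It is an illustrative toy under assumed parameters (binary reputations, a simple social norm, and a reputation-conditioned strategy), not the dissertation's actual model.

    import random

    GOOD, BAD = 1, 0

    def assign_reputation(action, partner_reputation):
        # Assumed social norm: cooperating with a GOOD partner earns a GOOD
        # reputation; defecting against a GOOD partner earns a BAD one.
        # Defection against a BAD partner is treated as justified punishment.
        if partner_reputation == GOOD:
            return GOOD if action == "C" else BAD
        return GOOD

    def discriminator(partner_reputation):
        # Reputation-conditioned strategy: cooperate only with GOOD partners.
        return "C" if partner_reputation == GOOD else "D"

    def simulate(rounds=10_000, benefit=3.0, cost=1.0, population=100):
        reputations = [GOOD] * population
        surplus = 0.0
        for _ in range(rounds):
            donor, recipient = random.sample(range(population), 2)
            action = discriminator(reputations[recipient])
            if action == "C":
                surplus += benefit - cost  # recipient gains, donor pays
            reputations[donor] = assign_reputation(action, reputations[recipient])
        return surplus / rounds

    print(f"average social surplus per interaction: {simulate():.2f}")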

    Telecommunication Economics

    This book constitutes a collaborative and selected documentation of the scientific outcome of the European COST Action IS0605 Econ@Tel "A Telecommunications Economics COST Network", which ran from October 2007 to October 2011. Involving experts from around 20 European countries, the goal of Econ@Tel was to develop a strategic research and training network among key people and organizations in order to enhance Europe's competence in the field of telecommunications economics. Reflecting the organization of the COST Action IS0605 Econ@Tel into working groups, the following four major research areas are addressed: evolution and regulation of communication ecosystems; social and policy implications of communication technologies; economics and governance of future networks; and future network management architectures and mechanisms.

    In Pursuit of Desirable Equilibria in Large Scale Networked Systems

    This thesis addresses an interdisciplinary problem at the intersection of engineering, computer science, and economics: in a large scale networked system, how can we achieve a desirable equilibrium that benefits the system as a whole? We approach this question from two perspectives. On the one hand, given a system architecture that imposes certain constraints, a system designer must propose efficient algorithms to optimally allocate resources to the agents that desire them. On the other hand, given algorithms that are used in practice, a performance analyst must come up with tools that can characterize these algorithms and determine when they can be optimally applied. Ideally, the two viewpoints must be integrated to obtain a simple system design with efficient algorithms that apply to it. We study the design of incentives and algorithms in such large scale networked systems under three application settings, referred to herein via the subheadings: Incentivizing Sharing in Realtime D2D Networks: A Mean Field Games Perspective; Energy Coupon: A Mean Field Game Perspective on Demand Response in Smart Grids; and, for the caching setting, Dynamic Adaptability Properties of Caching Algorithms together with Accuracy vs. Learning Rate of Multi-level Caching Algorithms. Our application scenarios all entail an asymptotic system scaling, and an equilibrium is defined in terms of a probability distribution over system states. The question in each case is to determine how to attain a probability distribution that possesses certain desirable properties. For the first two applications, we consider the design of specific mechanisms to steer the system toward a desirable equilibrium under self-interested decision making. The environments in these problems are such that there is a set of shared resources, and a mechanism is used during each time step to allocate resources to agents that are selfish and interact via a repeated game. These models are motivated by resource sharing systems in the context of data communication, transportation, and power transmission networks. The objective is to ensure that the achieved equilibria are socially desirable. Formally, we show that a Mean Field Game can be used to accurately approximate these repeated game frameworks, and we describe mechanisms under which socially desirable Mean Field Equilibria exist. For the third application, we focus on performance analysis via new metrics to determine the value of the attained equilibrium distribution of cache contents when using different replacement algorithms in cache networks. The work is motivated by the fact that typical performance analysis of caching algorithms consists of determining the hit probability under a fixed arrival process of requests, which does not account for dynamic variability of request arrivals. Our main contribution is to define a function which accounts for both the error due to the time lag of learning the items' popularity and the error due to the inaccuracy of learning, and to characterize the tradeoff between the two that conventional algorithms achieve. We then use the insights gained in this exercise to design new algorithms that are demonstrably superior.
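
    As a toy illustration of the learning lag vs. accuracy tradeoff discussed for the third application (a simplified stand-in, not the thesis's formal metric), the following Python simulation contrasts two classical replacement policies under a popularity shift: LRU forgets quickly and so adapts fast after the shift, while LFU estimates popularity accurately from long histories but lags once the popularity changes.

    import random
    from collections import Counter, OrderedDict

    def zipf_request(n_items, shift):
        # Zipf-like popularity; `shift` rotates which items are popular.
        weights = [1.0 / (r + 1) for r in range(n_items)]
        idx = random.choices(range(n_items), weights=weights)[0]
        return (idx + shift) % n_items

    def run(policy, capacity=20, n_items=200, horizon=50_000, shift_at=25_000):
        lru, freq, cache = OrderedDict(), Counter(), set()
        hits = [0, 0]  # hits before / after the popularity shift
        for t in range(horizon):
            phase = 0 if t < shift_at else 1
            x = zipf_request(n_items, shift=0 if phase == 0 else n_items // 2)
            freq[x] += 1  # LFU uses full-history counts, which makes it lag
            if x in cache:
                hits[phase] += 1
                lru.move_to_end(x)          # refresh recency on a hit
            else:
                if len(cache) >= capacity:  # evict according to the policy
                    victim = (next(iter(lru)) if policy == "LRU"
                              else min(cache, key=lambda y: freq[y]))
                    cache.discard(victim)
                    lru.pop(victim, None)
                cache.add(x)
                lru[x] = True
        return [h / shift_at for h in hits]

    for policy in ("LRU", "LFU"):
        before, after = run(policy)
        print(f"{policy}: hit rate before shift {before:.3f}, after {after:.3f}")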

    The economic effects of network neutrality: a policy perspective

    Network neutrality - regulation of Internet service providers (ISPs) to ensure equal treatment of all traffic - has become a concept many people have heard about. While the context is technical, network neutrality ultimately boils down to economics. The political weight of the subject is heavy, and the international debate is fierce. Still, surprisingly little rigorous research appears to be behind it. In this paper, I review the economic literature on network neutrality and ISP regulation, covering both practical and theoretical implications for the broadband market. I define the degrees of network neutrality with more granularity than previous papers, evaluate the qualitative economic effects of regulation, and describe the broadband market, frameworks for modeling it, and its peculiar economic characteristics. In particular, I review and compare different theoretical modeling approaches and the models' predictions of the welfare effects of different regulatory regimes. Throughout the paper, I incorporate economic literature from relevant areas into the analysis. I do not make definite policy recommendations, but I draw conclusions that are potentially of interest from a policy point of view. My analysis indicates that the complexity of the Internet ecosystem and the interrelations between market participants make effective regulation difficult. There is no economic evidence that network neutrality generally increases total welfare. In fact, from a well-rounded economic perspective, strong network neutrality appears in most cases to be detrimental to both consumer surplus and total welfare. In certain scenarios, however, models predict that neutrality can increase static and dynamic efficiency. The results depend crucially on model specifications and parameters, which differ significantly across the literature. So far, there is no consensus among economists on the optimal level of ISP regulation. Market-driven solutions such as dynamic pricing might provide a way to circumvent the neutrality question.

    SOFTWARE INTEROPERABILITY: Issues at the Intersection between Intellectual Property and Competition Policy

    The dissertation project proceeds through three papers, analyzing issues related to software interoperability, each pertaining to one of the following three interdependent levels of analysis. The first level addresses the legal status of software interoperability information under current intellectual property law (focusing on copyright law, the main legal tool for the protection of these pieces of code), trying to clarify if, how, and to what extent these pieces of code (and the associated pieces of information) are protected erga omnes by the law. The second level complements the first, analyzing legal and economic issues related to the technical possibility of actually accessing this interoperability information through reverse engineering (and software decompilation in particular). Once a de facto standard gains the favor of the market, reverse engineering is the main self-help tool available to competitors in order to achieve interoperability and compete “inside this standard”. The third step consists in recognizing that – in a limited number of cases, but ones potentially of great economic relevance – market failures could arise despite any care taken in devising checks and balances in the legal setting concerning both the legal status of interoperability information and the legal rules governing software reverse engineering. When this is the case, some undertakings may durably hold a dominant position in software markets, and possibly abuse it. Hence, at this level of analysis, competition policy intervention is taken into account. The first paper of the present dissertation shows that interoperability specifications are not protected by copyright. In the paper, I argue that existing doubts and uncertainty are typically related to a poor understanding of the technical nature of software interfaces. To remedy such misunderstanding, the paper focuses on the distinction between interface specifications and implementations, and stresses the difference between the steps needed to access the ideas and principles constituting an interface specification and the re-implementation of a functionally equivalent interface through new software code. At the normative level, the paper shows that no major modifications to the existing model of legal protection of software (and software interfaces) are needed; however, it suggests that policymakers could reduce the Fear of legal actions, other forms of legal Uncertainty, and several residual Doubts (FUD) by explicitly stating that interface specifications are unprotectable and freely appropriable. In the second paper, I offer a critique of legal restraints on software reverse engineering, focusing in particular on Europe, but also considering similar restraints in the US, in particular in the context of the Digital Millennium Copyright Act. Through an analysis of entry conditions for latecomers and of the comparative costs of developing programs in the first place or reverse engineering them, the paper shows that the limitations on decompilation imposed by Article 6 of the Software Directive were mostly superfluous and basically non-binding at the time of drafting. What is more, the paper shows that nowadays new – and largely unanticipated – developments in software development models (e.g. open source) make these restraints an obstacle to competition against dominant incumbents controlling software platforms. In fact, limitations on the freedom to decompile obstruct major reverse engineering projects performed in a decentralized way, as in the context of an open source community. Hence, since open source projects are the most credible tools to recreate some competitive pressure in a number of crucial software markets, the paper recommends creating a simple and clear-cut safe harbor for software reverse engineering. The third paper claims that, in software markets, refusal-to-deal (or “information-withholding”) strategies are normally complementary with tying (or “predatory-innovation”) strategies, and that this complementarity is so relevant that dominant platform controllers need to couple both in order to create significant anti-competitive effects. Hence, the paper argues that mandatory unbundling (i.e. mandating a certain degree of modularity in software development) could be an appropriate – and frequently preferable – alternative to mandatory disclosure of interoperability information. However, considering the critiques leveled by part of the literature at the Commission’s Decision in the recent European Microsoft antitrust case, an objection to the previous argument could be that – also in the case of mandatory unbundling – one should still determine the minimum price for the unbundled product. The last part of the paper applies some intuitions from the literature on complementary oligopoly to demonstrate that this objection is not well grounded and that – in software markets – mandatory unbundling (modularity) may be a useful policy even if the only constraint on the price of the unbundled good is that of non-negativity.

    User-centric power-friendly quality-based network selection strategy for heterogeneous wireless environments

    The ‘Always Best Connected’ vision is built around the scenario of a mobile user seamlessly roaming within a multi-operator, multi-technology, multi-terminal, multi-application, multi-user environment supported by the next generation of wireless networks. In this heterogeneous environment, users equipped with multi-mode wireless mobile devices will access rich media services via one or more access networks. These access networks may differ in terms of technology, coverage range, available bandwidth, operator, monetary cost, energy usage, etc. In this context, a smart network selection decision needs to be made, choosing the best available network for the user’s current application and requirements. The decision is a difficult one, especially given the number and dynamics of the possible input parameters: which parameters are used, and how they model the application requirements and user needs, matters. Game theory approaches can also be used to model and analyze the cooperative or competitive interaction between the rational decision makers involved: users seeking good service quality at good-value prices, and/or network operators trying to increase their revenue. This thesis presents a roadmap towards an ‘Always Best Connected’ environment. The proposed solution includes an Adapt-or-Handover mechanism which makes use of a Signal Strength-based Adaptive Multimedia Delivery mechanism (SAMMy) and a Power-Friendly Access Network Selection Strategy (PoFANS) to help the user make decisions and to improve the energy efficiency of the end-user mobile device. A Reputation-based System is proposed, which models the user-network interaction as a repeated game following the repeated Prisoner’s Dilemma from game theory. It combines reputation-based systems, game theory, and a network selection mechanism to create a reputation-based heterogeneous environment. In this environment, users keep track of their individual history with the visited networks. Every time a user connects to a network, the user-network interaction game is played. The outcome of the game is a network reputation factor which reflects the network’s previous behavior in assuring service guarantees to the user. The network reputation factor then influences the user’s decision the next time he or she must decide whether or not to connect to that specific network. The performance of the proposed solutions was evaluated through in-depth analysis and both simulation-based and experiment-oriented testing. The results clearly show improved performance of the proposed solutions in comparison with similar state-of-the-art solutions. An energy consumption study of a Google Nexus One streaming adaptive multimedia was performed, and a comprehensive survey of related game theory research is provided as part of the work.
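
    A minimal Python sketch of the reputation-based selection idea follows (the update rule, weights, and network names are illustrative assumptions, not the thesis’s exact formulation): each user-network interaction is a round of a repeated game, and its outcome updates a per-network reputation factor that biases the next selection.

    import random

    # Candidate networks with a hidden probability of honoring QoS promises.
    networks = {
        "WLAN-A": {"honor_prob": 0.9, "reputation": 0.5},
        "Cell-B": {"honor_prob": 0.6, "reputation": 0.5},
        "WLAN-C": {"honor_prob": 0.3, "reputation": 0.5},
    }

    ALPHA = 0.1  # smoothing weight for reputation updates

    def select_network():
        # Choose a network with probability proportional to its reputation.
        names = list(networks)
        weights = [networks[n]["reputation"] for n in names]
        return random.choices(names, weights=weights)[0]

    def play_round():
        name = select_network()
        net = networks[name]
        honored = random.random() < net["honor_prob"]  # the network's move
        # Reputation factor: smoothed history of honored service guarantees.
        net["reputation"] = ((1 - ALPHA) * net["reputation"]
                             + ALPHA * (1.0 if honored else 0.0))
        return name, honored

    for _ in range(5_000):
        play_round()
    for name, net in networks.items():
        print(f"{name}: learned reputation {net['reputation']:.2f}")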