
    Local Broadband Access: Primum Non Nocere or Primum Processi - A Property Rights Approach

    High-speed or "broadband" Internet access currently is provided, at the local level, chiefly by cable television and telephone companies, often in competition with each other. Wireless and satellite providers have a small but growing share of this business. An influential coalition of economic interests and academics has proposed that local broadband Internet access providers be prohibited from restricting access to their systems by upstream suppliers of Internet services. A recent term for this proposal is "net neutrality." We examine the potential costs and benefits of such a policy from an economic welfare perspective. Using a property rights approach, we ask whether transactions costs in the market for access rights are likely to be significant, and if so, whether owners of physical local broadband platforms are likely to be more or less efficient holders of access rights than Internet content providers. We conclude that transactions costs are likely to be lower if access rights are assigned initially to platform owners rather than content providers. In addition, platform hardware owners are likely to be more efficient holders of these rights because they can internalize demand-side interactions among content products. Further, failure to permit platform owners to control access threatens to result in inadequate incentives to invest in, to maintain, and to upgrade local broadband platforms. Inefficiently denying platform owners the ability to own access rights implies a need for price regulation; otherwise, there will be incentives to use pricing to circumvent the constraint on rights ownership. Price regulation is itself known to induce welfare losses through adaptive behavior of the constrained firm. The impact on welfare might produce a worse result than the initial problem, assuming one existed.
Much of the academic interest in net neutrality arises from the belief that the open architecture of the Internet under current standards has been responsible for its remarkable success, and a wish to preserve this openness. We point out that the openness of the Internet was an unintended consequence of its military origins, and that other, less open, architectures might have been even more successful. A policy of denying platform owners the ability to own access rights could freeze the architecture of the Internet, preventing it from adapting to future technological and economic developments. Finally, we examine the net neutrality issue from the perspective of the "essential facility doctrine," a tool of the common law of antitrust. The doctrine establishes conditions under which federal courts will mandate access by competitors to the monopoly platform of a vertically integrated firm. Because local broadband Internet access is not today a bottleneck monopoly (there are several competitors and the market is at an early stage of development), the essential facilities doctrine would not permit reassignment of access rights from platform owners to competitors. We conclude that "net neutrality" is a welfare-reducing policy proposal.
Technology and Industry, Regulatory Reform
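The abstract's core transaction-cost argument can be illustrated with a toy Coasean calculation. A minimal sketch, with entirely hypothetical numbers: a single platform owner is assumed to bargain with content providers in a few consolidated negotiations, while dispersed content providers holding the rights must each bargain separately, so the same gross surplus is eroded by more rounds of costly bargaining.

```python
# Toy Coasean comparison of two initial assignments of access rights.
# All figures are hypothetical illustrations, not empirical estimates.

def net_welfare(gross_surplus, transaction_cost, bargaining_rounds):
    """Surplus remaining after rights are bargained to their efficient holder."""
    return gross_surplus - transaction_cost * bargaining_rounds

SURPLUS = 100.0  # assumed gross surplus from the platform-content relationship

# Assumption: platform owner negotiates a few bundled deals; dispersed
# content providers must each negotiate separately.
platform_assignment = net_welfare(SURPLUS, transaction_cost=2.0, bargaining_rounds=3)
content_assignment = net_welfare(SURPLUS, transaction_cost=2.0, bargaining_rounds=20)

print(platform_assignment)  # 94.0
print(content_assignment)   # 60.0
```

Under these assumed costs, assigning the rights to the platform owner preserves more welfare, which is the direction of the paper's conclusion; the numbers themselves carry no empirical weight.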

    A Survey on Consensus Mechanisms and Mining Strategy Management in Blockchain Networks

    The past decade has witnessed the rapid evolution of blockchain technologies, which have attracted tremendous interest from both the research community and industry. The blockchain network originated in the Internet financial sector as a decentralized, immutable ledger system for ordering transactional data. Nowadays, it is envisioned as a powerful backbone/framework for decentralized data processing and data-driven self-organization in flat, open-access networks. In particular, the desirable characteristics of decentralization, immutability, and self-organization are primarily owing to the unique decentralized consensus mechanisms introduced by blockchain networks. This survey is motivated by the lack of a comprehensive literature review on the development of decentralized consensus mechanisms in blockchain networks. In this paper, we provide a systematic vision of the organization of blockchain networks. By emphasizing the unique characteristics of decentralized consensus in blockchain networks, our in-depth review of the state-of-the-art consensus protocols focuses on both the perspective of distributed consensus system design and the perspective of incentive mechanism design. From a game-theoretic point of view, we also provide a thorough review of the strategies adopted for self-organization by the individual nodes in the blockchain backbone networks. We then provide a comprehensive survey of the emerging applications of blockchain networks in a broad area of telecommunication, highlighting how the consensus mechanisms impact these applications. Finally, we discuss several open issues in the protocol design for blockchain consensus and the related potential research directions.
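The decentralized consensus the survey centers on can be made concrete with the canonical example, Nakamoto-style proof of work: nodes agree on the next block by racing to find a nonce whose hash meets a difficulty target, and any node can verify the winner cheaply. A minimal sketch (the block payload and difficulty are illustrative, not any particular protocol's parameters):

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 digest has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash, regardless of how costly mining was."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine(b"block: tx1,tx2", 3)
assert verify(b"block: tx1,tx2", nonce, 3)
```

The asymmetry between expensive mining and cheap verification is what lets mutually untrusting nodes agree on a single ledger without a coordinator, and is also what motivates the incentive-mechanism and mining-strategy analysis the survey reviews.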

    Virtual sensor networks: collaboration and resource sharing

    This thesis contributes to the advancement of Sensing as a Service (Se-aaS), based on cloud infrastructures, through the development of models and algorithms that make efficient use of both sensor and cloud resources while reducing the delay associated with the data flow between cloud and client sides, which results in a better quality of experience for users. The first models and algorithms developed are suitable for the case of mashups being managed at the client side; models and algorithms considering mashups managed at the cloud were then developed. This requires solving multiple problems: i) clustering of compatible mashup elements; ii) allocation of devices to clusters, meaning that a device will serve multiple applications/mashups; iii) reduction of the amount of data flow between workplaces, and the associated delay, which depends on clustering, device allocation, and placement of workplaces. The developed strategies can be adopted by cloud service providers wishing to improve the performance of their clouds. Several steps towards an efficient Se-aaS business model were performed. A mathematical model was developed to assess the impact of resource allocations on scalability, QoE, and elasticity. Regarding the clustering of mashup elements, a first mathematical model was developed for the selection of the best pre-calculated clusters of mashup elements (virtual Things), and then a second model is proposed for the best virtual Things to be built (non pre-calculated clusters). Both are evaluated through heuristic algorithms that take these models as a basis. Such models and algorithms were first developed for the case of mashups managed at the client side, and afterwards they were extended to the case of mashups being managed at the cloud. To improve these last results, a mathematical programming optimization model was developed that allows optimal clustering and resource allocation solutions to be obtained.
Although this is a computationally difficult approach, the added value of this process is that the problem is rigorously outlined, and such knowledge is used as a guide in the development of a better heuristic algorithm.
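The first of the three subproblems, clustering compatible mashup elements so one physical device can serve several mashups, can be sketched with a toy grouping rule. A minimal illustration under an assumed compatibility criterion (same sensor type and same region; the element names and attributes are hypothetical, not the thesis's actual model):

```python
from collections import defaultdict

# Hypothetical mashup elements: (element_id, sensor_type, region).
elements = [
    ("e1", "temperature", "lab-A"),
    ("e2", "temperature", "lab-A"),
    ("e3", "humidity", "lab-A"),
    ("e4", "temperature", "lab-B"),
]

def cluster_compatible(elements):
    """Group elements that could share one physical device (same type and region)."""
    clusters = defaultdict(list)
    for eid, sensor_type, region in elements:
        clusters[(sensor_type, region)].append(eid)
    return dict(clusters)

clusters = cluster_compatible(elements)
# Four element requests collapse onto three device allocations,
# because e1 and e2 are compatible and can share a device.
print(len(clusters))  # 3
```

A real Se-aaS allocator would weigh sampling rates, device capacity, and data-flow placement jointly, which is precisely why the thesis resorts to mathematical programming and heuristics rather than a single-pass grouping like this.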


    Transparent Spectrum Co-Access in Cognitive Radio Networks

    The licensed wireless spectrum is currently under-utilized by as much as 85%. Cognitive radio networks have been proposed to employ dynamic spectrum access to share this under-utilized spectrum between licensed primary user transmissions and unlicensed secondary user transmissions. Current secondary user opportunistic spectrum access methods, however, remain limited in their ability to provide enough incentive to convince primary users to share the licensed spectrum, and they rely on primary user absence to guarantee secondary user performance. These challenges are addressed by developing a Dynamic Spectrum Co-Access Architecture (DSCA) that allows secondary user transmissions to co-access transparently and concurrently with primary user transmissions. This work exploits dirty paper coding to precode the cognitive radio channel, utilizing the redundant information found in primary user relay networks. Subsequently, the secondary user is able to provide incentive to the primary user through increased SINR to encourage licensed spectrum sharing. Then a region of co-access is formulated within which any secondary user can co-access the licensed channel transparently to the primary user. In addition, a Spectrum Co-Access Protocol (SCAP) is developed to provide secondary users with guaranteed channel capacity while minimizing channel access times. The numerical results show that the SCAP protocol built on the DSCA architecture is able to reduce secondary user channel access times compared with opportunistic spectrum access and increase secondary user network throughput. Finally, we present a novel method for increasing the secondary user channel capacity through sequential dirty paper coding. By exploiting similar redundancy in secondary user multi-hop networks as in primary user relay networks, the secondary user channel capacity can be increased.
As a result of our work in overlay spectrum sharing through secondary user channel precoding, we provide a compelling argument that the current trend towards opportunistic spectrum sharing needs to be reconsidered. This work asserts that the limitations of opportunistic spectrum access in transparently providing primary users with incentive, and its detrimental effect on secondary user performance due to primary user activity, are enough to motivate further study into utilizing channel precoding schemes. The success of cognitive radios and their adoption into federal regulatory policy will rely on providing just this type of incentive.
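The incentive mechanism described above, cooperation raising the primary user's SINR instead of degrading it, can be shown with a toy link-budget comparison. A minimal sketch in which all power levels are hypothetical and dirty paper coding is abstracted as redirecting part of the secondary power from interference into coherent reinforcement of the primary signal:

```python
import math

def sinr_db(signal_w, interference_w, noise_w=1e-9):
    """Signal-to-interference-plus-noise ratio in dB (powers in watts)."""
    return 10 * math.log10(signal_w / (interference_w + noise_w))

# Baseline: the secondary transmission is pure interference to the primary link.
primary_alone = sinr_db(signal_w=1e-6, interference_w=2e-7)

# With (assumed) precoding, half the secondary power coherently reinforces the
# primary signal and only the remainder appears as interference.
primary_coop = sinr_db(signal_w=1e-6 + 1e-7, interference_w=1e-7)

assert primary_coop > primary_alone  # cooperation leaves the primary better off
```

The gap between the two SINR values is the "payment" the secondary user offers for co-access; the actual region of co-access in the work is derived from the dirty-paper-coded channel model, not from a power split like this.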
