
    NFV Based Gateways for Virtualized Wireless Sensors Networks: A Case Study

    Virtualization enables the sharing of the same wireless sensor network (WSN) by multiple applications. However, in heterogeneous environments, virtualized wireless sensor networks (VWSNs) raise new challenges, such as the need for on-the-fly, dynamic, elastic and scalable provisioning of gateways. Network Functions Virtualization (NFV) is an emerging paradigm that can certainly aid in tackling these new challenges. It leverages standard virtualization technology to consolidate special-purpose network elements on top of commodity hardware. This article presents a case study on NFV based gateways for VWSNs. In the study, a VWSN gateway provider operates and manages an NFV based infrastructure. We use two different brands of wireless sensors. The NFV infrastructure makes possible the dynamic, elastic and scalable deployment of gateway modules in this heterogeneous VWSN environment. The prototype, built with OpenStack as the platform, is described.
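    As a rough illustration of the kind of on-demand gateway provisioning the case study describes, the sketch below boots one gateway module per sensor brand on an OpenStack infrastructure using the openstacksdk client. The image, flavor, network and cloud names are hypothetical placeholders, not those of the actual prototype.

        import openstack

        def deploy_gateway_vnf(conn, sensor_brand):
            """Boot one gateway VM specialised for a given sensor brand (names are illustrative)."""
            image = conn.compute.find_image("vwsn-gateway-" + sensor_brand)  # hypothetical image name
            flavor = conn.compute.find_flavor("m1.small")
            network = conn.network.find_network("vwsn-mgmt")                 # hypothetical network name
            server = conn.compute.create_server(
                name="gw-" + sensor_brand,
                image_id=image.id,
                flavor_id=flavor.id,
                networks=[{"uuid": network.id}],
            )
            # Wait until the VM is ACTIVE; it can then start bridging sensor traffic to applications.
            return conn.compute.wait_for_server(server)

        if __name__ == "__main__":
            conn = openstack.connect(cloud="vwsn-provider")   # credentials taken from clouds.yaml
            for brand in ("brand-a", "brand-b"):              # two sensor brands, as in the case study
                deploy_gateway_vnf(conn, brand)

    In the same spirit, gateway instances could be scaled out or retired as applications join and leave the shared WSN, which is the elasticity the NFV infrastructure is meant to provide.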

    All-photonic multiplexed quantum repeaters based on concatenated bosonic and discrete-variable quantum codes

    Long distance quantum communication will require the use of quantum repeaters to overcome the exponential attenuation of signal with distance. One class of such repeaters utilizes quantum error correction to overcome losses in the communication channel. Here we propose a novel strategy of using the bosonic Gottesman-Kitaev-Preskill (GKP) code in a two-way repeater architecture with multiplexing. The crucial feature of the GKP code that we make use of is the fact that GKP qubits easily admit deterministic two-qubit gates, hence allowing for multiplexing without the need for generating large cluster states as required in previous all-photonic architectures based on discrete-variable codes. Moreover, alleviating the need for such clique-clusters entails that we are no longer limited to the extraction of at most one end-to-end entangled pair from a single protocol run. In fact, thanks to the availability of the analog information generated during the measurements of the GKP qubits, we can design better entanglement swapping procedures in which we connect links based on their estimated quality. This enables us to use all the multiplexed links, so that a large number of links from a single protocol run can contribute to the generation of the end-to-end entanglement. We find that our architecture allows for high-rate end-to-end entanglement generation and is resilient to imperfections arising from finite squeezing in the GKP state preparation and homodyne detection inefficiency. In particular, we show that long-distance quantum communication over more than 1000 km is possible even with less than 13 dB of GKP squeezing. We also quantify the number of GKP qubits needed for the implementation of our scheme and find that, for good hardware parameters, our scheme requires around 10^3-10^4 GKP qubits per repeater per protocol run. Comment: 31 + 25 pages, 40 figures.
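    The following toy sketch (not the paper's estimator) illustrates the general idea of analog-information-assisted link selection: homodyne outcomes far from the GKP lattice of spacing sqrt(pi) are more likely to carry a logical shift error, so multiplexed links are ranked by their accumulated deviation and the best ones are connected in the entanglement swap.

        import math

        SQRT_PI = math.sqrt(math.pi)

        def shift_from_lattice(q):
            """Deviation of a homodyne outcome from the nearest GKP lattice point (spacing sqrt(pi))."""
            return abs((q + SQRT_PI / 2) % SQRT_PI - SQRT_PI / 2)

        def link_quality(outcomes):
            """Heuristic score: a smaller accumulated deviation means a lower shift-error likelihood."""
            return -sum(shift_from_lattice(q) for q in outcomes)

        def pick_links_to_swap(links, n):
            """Select the n multiplexed links with the best analog-information score."""
            return sorted(links, key=lambda name: link_quality(links[name]), reverse=True)[:n]

        # Three multiplexed links, each with two homodyne outcomes (toy values).
        links = {
            "link-0": [0.02, 1.75],
            "link-1": [0.88, 0.91],   # outcomes near the mid-point between lattice points are the riskiest
            "link-2": [1.79, 3.51],
        }
        print(pick_links_to_swap(links, 2))   # -> ['link-0', 'link-2']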

    Wireless sensor network virtualization: Early architecture and research perspectives

    © 2015 IEEE. WSNs have become pervasive and are used in many applications and services. Usually, deployments of WSNs are task-oriented and domain-specific, thereby precluding reuse when other applications and services are contemplated. This inevitably leads to the proliferation of redundant WSN deployments. Virtualization is a technology that can aid in tackling this issue, as it enables the sharing of resources/infrastructure by multiple independent entities. In this article we critically review the state of the art and propose a novel architecture for WSN virtualization. The proposed architecture has four layers (physical layer, virtual sensor layer, virtual sensor access layer, and overlay layer) and relies on the Constrained Application Protocol (CoAP). We illustrate its potential by using it in a scenario where a single WSN is shared by multiple applications, one of which is a fire monitoring application. We present the proof-of-concept prototype we have built along with the performance measurements, and discuss future research directions.
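    As a minimal, self-contained sketch of the virtual sensor layer idea (not the paper's implementation), the code below exposes one physical node as two virtual sensors bound to independent applications; the virtual sensor access layer, which in the architecture would mediate such calls (e.g. over CoAP), is reduced here to a direct function call.

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class PhysicalSensor:
            node_id: str
            def read(self):
                return 21.5   # placeholder for an actual read over the sensor's radio/serial interface

        @dataclass
        class VirtualSensor:
            owner_app: str
            backend: PhysicalSensor
            on_sample: Callable[[float], None]
            def sample(self):
                # The virtual sensor access layer would mediate this call in the proposed architecture;
                # here the reading is delivered directly to the owning application.
                self.on_sample(self.backend.read())

        # One physical node shared by a fire-monitoring application and a generic logger.
        node = PhysicalSensor("node-17")
        virtual_sensors = [
            VirtualSensor("fire-monitoring", node, lambda t: print("ALARM" if t > 60 else "ok", t)),
            VirtualSensor("temperature-logger", node, lambda t: print("log", t)),
        ]
        for vs in virtual_sensors:
            vs.sample()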

    IoT end-user applications provisioning in the cloud: State of the art

    © 2016 IEEE. The Internet of Things (IoT) is expected to enable a myriad of end-user applications by interconnecting physical objects. Cloud computing is a promising paradigm for provisioning IoT end-user applications in a cost-efficient manner. IoT end-user applications are provisioned in cloud settings using PaaS and offered as SaaS. This paper focuses on the PaaS aspects of IoT end-user application provisioning. It critically reviews the state of the art, covering both PaaS solutions that span the whole spectrum of IoT verticals and those that deal with specific IoT verticals.

    A comprehensive survey on Fog Computing: State-of-the-art and research challenges

    Cloud computing with its three key facets (i.e., Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service) and its inherent advantages (e.g., elasticity and scalability) still faces several challenges. The distance between the cloud and the end devices might be an issue for latency-sensitive applications such as disaster management and content delivery applications. Service level agreements (SLAs) may also impose processing at locations where the cloud provider does not have data centers. Fog computing is a novel paradigm to address such issues. It enables provisioning resources and services outside the cloud, at the edge of the network, closer to end devices, or eventually, at locations stipulated by SLAs. Fog computing is not a substitute for cloud computing but a powerful complement. It enables processing at the edge while still offering the possibility to interact with the cloud. This paper presents a comprehensive survey on fog computing. It critically reviews the state of the art in the light of a concise set of evaluation criteria. We cover both the architectures and the algorithms that make up fog systems. Challenges and research directions are also introduced. In addition, the lessons learned are reviewed and the prospects are discussed in terms of the key role fog is likely to play in emerging technologies such as the tactile Internet.
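    The placement decision that motivates fog computing can be sketched as a toy policy: use the distant cloud when it meets the application's latency bound and any SLA locality constraint, otherwise fall back to a fog node at the edge. The site names and numbers below are illustrative only, not drawn from the survey.

        from dataclasses import dataclass

        @dataclass
        class Site:
            name: str
            rtt_ms: float   # round-trip time from the end device
            region: str

        def place(latency_bound_ms, sla_region, fog, cloud):
            """Prefer the cloud; fall back to the fog node for tight latency bounds or SLA locality."""
            if sla_region is not None and cloud.region != sla_region:
                return fog
            if cloud.rtt_ms > latency_bound_ms:
                return fog
            return cloud

        fog = Site("edge-pop-1", rtt_ms=5, region="eu-west")
        cloud = Site("dc-1", rtt_ms=80, region="us-east")
        print(place(20, None, fog, cloud).name)        # latency-sensitive workload -> edge-pop-1
        print(place(200, "eu-west", fog, cloud).name)  # SLA pins processing to eu-west -> edge-pop-1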

    Self-Optimization of LTE Networks Utilizing Celnet Xplorer

    In order to meet demanding performance objectives in Long Term Evolution (LTE) networks, it is mandatory to implement highly efficient, autonomic self-optimization and configuration processes. Self-optimization processes have already been studied in second generation (2G) and third generation (3G) networks, typically with the objective of improving radio coverage and channel capacity. The 3rd Generation Partnership Project (3GPP) standard for LTE self-organization of networks (SON) provides guidelines on self-configuration of the physical cell ID and neighbor relation function, and on self-optimization for mobility robustness, load balancing, and inter-cell interference reduction. While these are very important from the perspective of optimizing local phenomena (i.e., an eNodeB's interaction with its neighbors), it is also essential to architect control algorithms that optimize the network as a whole. In this paper, we propose a Celnet Xplorer-based SON architecture that allows detailed analysis of network performance combined with a SON control engine to optimize the LTE network. The network performance data is obtained in two stages. In the first stage, data is acquired through intelligent, non-intrusive monitoring of the standard interfaces of the Evolved UMTS Terrestrial Radio Access Network (E-UTRAN) and Evolved Packet Core (EPC), coupled with reports from a software client running in the eNodeBs. In the second stage, powerful data analysis is performed on this data, which is then utilized as input for the SON engine. Use cases involving tracking area optimization, dynamic bearer profile reconfiguration, and tuning of network-wide coverage and capacity parameters are presented. © 2010 Alcatel-Lucent.
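    A hedged sketch of the two-stage loop described above, with hypothetical KPI names: stage-one monitoring data is condensed into network-wide indicators, and a toy SON engine maps them to actions such as tracking area rebalancing. The real Celnet Xplorer data model, interfaces and thresholds are not reproduced.

        from dataclasses import dataclass

        @dataclass
        class CellKpi:
            cell_id: str
            load: float              # normalised resource utilisation from stage-one monitoring
            handover_failures: int

        def analyse(kpis):
            """Stage two: condense raw measurements into network-wide indicators for the SON engine."""
            return {
                "max_load": max(k.load for k in kpis),
                "handover_failures": sum(k.handover_failures for k in kpis),
            }

        def son_engine(indicators):
            """Toy network-wide decisions of the kind the use cases describe."""
            actions = []
            if indicators["max_load"] > 0.8:
                actions.append("shift tracking-area boundaries toward lightly loaded cells")
            if indicators["handover_failures"] > 100:
                actions.append("retune network-wide coverage and capacity parameters")
            return actions

        kpis = [CellKpi("cell-1", 0.91, 40), CellKpi("cell-2", 0.35, 70)]
        print(son_engine(analyse(kpis)))   # both thresholds exceeded in this toy example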

    Secure base stations

    With the introduction of the third generation (3G) Universal Mobile Telecommunications System (UMTS) base station router (BSR) and fourth generation (4G) base stations, such as the 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) Evolved Node B (eNB), it has become important to secure base stations from break-in attempts by adversaries. While previous generation base stations could be considered simple voice and Internet Protocol (IP) packet transceivers, newer generation cellular base stations need to perform more of the user and signaling functions for the cellular radio access network. If adversaries can physically break into newer base stations, they can perform a range of undesirable operations such as snooping on conversations, carrying out denial-of-service attacks on the serving area, changing the software base of the base stations, stealing authentication and encryption keys, and disrupting legitimate cellular operations. The cell-site vault is a secure processing environment designed to resist such tampering and to protect the sensitive functions associated with cellular processing. It provides an execution environment where ciphering functions, key management, and associated functions can execute without leaking sensitive information. In this paper, we present the basic principles of the cell-site vault and present an overview of the types of functions that need to be protected in future base stations for cellular networks. We address the importance of providing a trust hierarchy within the cell-site vault, we present why the vault needs to be used to establish secure and authenticated communication channels—in fact, why the vault needs to be used for most external communications—and we present why it is important to execute functions such as data re-encryption inside the vault. A femtocell or home base station is particularly vulnerable to attacks since these base stations are physically accessible by adversaries. In this paper, we focus in particular on a cell-site vault design for a femto-class base station, including its standardization efforts, as it is challenging to include both secure and nonsecure processing inside a single "system-on-a-chip."
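    The sketch below illustrates the re-encryption-inside-the-vault principle with a plain software object standing in for the hardware-isolated vault: keys are generated and kept inside, and only ciphertext crosses the boundary. It uses the Python cryptography package; the two-key split and the method names are assumptions for illustration, not the vault's actual interface.

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        class CellSiteVault:
            """Keys are generated and kept inside; only ciphertext crosses the boundary."""
            def __init__(self):
                self._air_key = AESGCM.generate_key(bit_length=128)    # e.g. air-interface ciphering key
                self._core_key = AESGCM.generate_key(bit_length=128)   # e.g. backhaul protection key

            def encrypt_over_air(self, plaintext):
                nonce = os.urandom(12)
                return nonce + AESGCM(self._air_key).encrypt(nonce, plaintext, None)

            def reencrypt_for_core(self, air_ciphertext):
                # Decrypt-then-encrypt stays inside the vault: the plaintext never leaves this method.
                nonce, ciphertext = air_ciphertext[:12], air_ciphertext[12:]
                plaintext = AESGCM(self._air_key).decrypt(nonce, ciphertext, None)
                new_nonce = os.urandom(12)
                return new_nonce + AESGCM(self._core_key).encrypt(new_nonce, plaintext, None)

        vault = CellSiteVault()
        uplink = vault.encrypt_over_air(b"user traffic")
        backhaul = vault.reencrypt_for_core(uplink)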

    Tradeoff between coverage and capacity in dynamic optimization of 3G cellular networks

    For 3G cellular networks, capacity is an important objective, along with coverage, when characterizing the performance of high-data-rate services. In live networks, the effective network capacity heavily depends on the degree to which the traffic load is balanced over all cells, so changing traffic patterns demand dynamic network reconfiguration to maintain good performance. Using a four-cell sample network, and antenna tilt, cell power level, and pilot fraction as adjustment variables, we study the competitive character of network coverage and capacity in such a network optimization process, and how it compares to the CDMA-intrinsic coverage-capacity tradeoff driven by interference. We find that each set of variables provides its own distinct coverage-capacity tradeoff behavior, with widely varying and application-dependent performance gains. The study shows that the impact of dynamic load balancing highly depends on the choice of the tuning variable as well as the particular tradeoff range of operation.
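    A sweep of the kind this study suggests can be sketched as follows: enumerate combinations of the three tuning variables named above, evaluate coverage and capacity with a placeholder model (the paper's system-level evaluation is not reproduced), and keep the Pareto-optimal configurations that trace the tradeoff. All coefficients and grid values below are hypothetical.

        import itertools

        def evaluate(tilt_deg, power_dbm, pilot_frac):
            """Hypothetical stand-in for a network evaluation returning (coverage, capacity)."""
            coverage = 0.90 - 0.02 * tilt_deg + 0.004 * (power_dbm - 40) + 0.30 * pilot_frac
            capacity = 1.00 + 0.05 * tilt_deg - 0.002 * (power_dbm - 40) - 0.80 * pilot_frac
            return coverage, capacity

        def pareto_front(points):
            """Keep configurations that are not strictly dominated in both coverage and capacity."""
            return [p for p in points
                    if not any(q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1])
                               for q in points)]

        grid = itertools.product((0, 2, 4, 6), (37, 40, 43), (0.05, 0.10, 0.15))
        results = [evaluate(*cfg) + (cfg,) for cfg in grid]
        for coverage, capacity, cfg in sorted(pareto_front(results), reverse=True):
            print("tilt=%d deg, power=%d dBm, pilot=%.2f -> coverage=%.2f, capacity=%.2f"
                  % (cfg[0], cfg[1], cfg[2], coverage, capacity))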