    Design and Implementation of a Distributed Mobility Management Entity (MME) on OpenStack

    Network Functions Virtualisation (NFV) involves implementing network functions, such as firewalls and routers, as software applications that can run on general-purpose servers. In present-day networks, each network function is typically implemented on dedicated and proprietary hardware. By utilising virtualisation technologies, NFV enables network functions to be deployed on cloud computing infrastructure in data centers. This thesis discusses the application of NFV to the Evolved Packet Core (EPC) in Long Term Evolution (LTE) networks, specifically to the Mobility Management Entity (MME), a control plane entity in the EPC. With the convergence of cloud computing and mobile networks, conventional architectures of network elements need to be redesigned to fully harness benefits such as scalability and elasticity. To this end, we design and implement a distributed MME with a three-tier architecture common to web applications. We highlight design considerations for moving MME functionality to the cloud and compare our new distributed design to that of a standalone MME. We deploy and test the distributed MME on two separate OpenStack clouds. Our results indicate that the benefits of scalability and resilience can outweigh the marginal increase in latency for EPC procedures. We find that the latency depends on the actual placement of MME components within the data center. We also believe that extensions to the OpenStack platform are required before it can meet the performance and availability requirements of telecommunication applications.
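    The three-tier design referenced above mirrors a common web application pattern: a front end balances signaling messages across stateless workers, which keep all UE (user equipment) context in a shared state tier so that any worker can serve any request. Below is a minimal Python sketch of that stateless-worker pattern, with our own hypothetical names; it illustrates the general idea, not the thesis implementation.

```python
# Minimal sketch of the stateless-worker pattern behind a three-tier MME,
# assuming a front end that balances messages across workers and an external
# store holding UE context. All class and function names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UEContext:
    imsi: str
    state: str = "DEREGISTERED"
    bearers: dict = field(default_factory=dict)

class StateStore:
    """Stand-in for the shared state tier (e.g., a replicated datastore)."""
    def __init__(self):
        self._db = {}
    def load(self, imsi: str) -> UEContext:
        return self._db.get(imsi, UEContext(imsi))
    def save(self, ctx: UEContext) -> None:
        self._db[ctx.imsi] = ctx

def handle_attach(store: StateStore, imsi: str) -> UEContext:
    # A worker holds no UE state between messages: it loads the context,
    # runs the procedure, and writes the context back, so any worker can
    # serve the next message for the same UE (enabling scale-out).
    ctx = store.load(imsi)
    ctx.state = "REGISTERED"
    store.save(ctx)
    return ctx

store = StateStore()
print(handle_attach(store, "001010123456789").state)  # REGISTERED
```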

    Energy-Efficient Service Placement for Latency-Sensitive Applications in Edge Computing

    Edge computing is a promising solution to host artificial intelligence (AI) applications that enable real-time insights on user-generated and device-generated data. This requires edge computing resources (storage and compute) to be widely deployed close to end devices. Such edge deployments require a large amount of energy to run, as edge resources are typically overprovisioned to flexibly meet the needs of time-varying user demand with low latency. Moreover, AI applications rely on deep neural network (DNN) models that are increasingly large in size to support high accuracy. These DNN models must be efficiently stored and transferred so as to minimize their energy consumption. In this article, we model the problem of energy-efficient placement of services (namely, DNN models) for AI applications as a multiperiod optimization problem. The formulation jointly places services and schedules requests such that the overall energy consumption is minimized and latency is low. We propose a heuristic that efficiently solves the problem while taking into account the impact of placing services across time periods. We assess the quality of the proposed heuristic by comparing its solution to a lower bound of the problem, obtained by formulating and solving a Lagrangian relaxation of the original problem. Extensive simulations show that our proposed heuristic outperforms baseline approaches in achieving low energy consumption by packing services on a minimal number of edge nodes, while at the same time keeping the average latency of served requests below a configured threshold in nearly all time periods.
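    The abstract does not spell out the heuristic, but its stated objective, packing services on as few edge nodes as possible while respecting a latency threshold, can be illustrated with a simple greedy sketch. Everything below (node capacities, service sizes, the greedy order) is our own illustrative assumption, not the authors' algorithm.

```python
# Illustrative greedy packing in the spirit of the stated goal: consolidate
# services on few edge nodes (to save energy) subject to a latency threshold.
# Mutates the `nodes` capacity dict in place; fine for a one-shot sketch.

def place_services(services, nodes, latency, max_latency):
    """services: {name: size}; nodes: {name: capacity};
    latency[node][service]: expected serving latency (same unit as max_latency)."""
    placement = {}
    active = []  # nodes already powered on
    for svc, size in sorted(services.items(), key=lambda kv: -kv[1]):
        # Prefer an already-active node with room and acceptable latency.
        candidates = [n for n in active
                      if nodes[n] >= size and latency[n][svc] <= max_latency]
        if not candidates:  # otherwise power on a new node
            candidates = [n for n in nodes
                          if n not in active and nodes[n] >= size
                          and latency[n][svc] <= max_latency]
            if not candidates:
                raise ValueError(f"no feasible node for {svc}")
            active.append(candidates[0])
        node = candidates[0]
        nodes[node] -= size
        placement[svc] = node
    return placement, active

services = {"model_a": 100, "model_b": 250, "model_c": 400}  # sizes (MB), invented
nodes = {"edge1": 512, "edge2": 512}                         # capacities (MB), invented
latency = {n: {s: 20 for s in services} for n in nodes}      # flat 20 ms, illustrative
print(place_services(services, nodes, latency, max_latency=50))
```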

    Here To Stay: A Quantitative Comparison of Virtual Object Stability in Markerless Mobile AR

    Mobile augmented reality (AR) has the potential to enable immersive, natural interactions between humans and cyber-physical systems. In particular, markerless AR, by not relying on fiducial markers or predefined images, provides great convenience and flexibility for users. However, unwanted virtual object movement frequently occurs in markerless smartphone AR due to inaccurate scene understanding and the resulting errors in device pose tracking. We examine the factors that may affect virtual object stability, design experiments to measure it, and conduct systematic quantitative characterizations across six different user actions and five different smartphone configurations. Our study demonstrates noticeable instances of spatial instability of virtual objects in all but the simplest settings (with position errors of greater than 10 cm even on the best-performing smartphones), and underscores the need for further enhancements to pose tracking algorithms for smartphone-based markerless AR.
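    As a rough illustration of the kind of stability metric such a study quantifies, the sketch below computes the positional drift of a virtual object from its initial placement. The pose trace is invented; only the 10 cm figure comes from the abstract.

```python
# Positional drift of a virtual object relative to its initial placement.
# The trace below is made-up data for illustration.
import numpy as np

def position_error_cm(positions):
    """positions: (T, 3) array of the object's world-frame position (metres)
    over time. Returns per-frame Euclidean drift from frame 0, in cm."""
    p = np.asarray(positions, dtype=float)
    return np.linalg.norm(p - p[0], axis=1) * 100.0  # metres -> cm

trace = [[0.0, 0.0, 0.0], [0.02, 0.01, 0.0], [0.11, 0.05, 0.02]]
err = position_error_cm(trace)
print(err.max() > 10)  # True: this trace exceeds the 10 cm error noted above
```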

    Energy Efficient Techniques For Algorithmic Analog-To-Digital Converters

    Analog-to-digital converters (ADCs) are key design blocks in state-of-the-art image, capacitive, and biomedical sensing applications. In these sensing applications, algorithmic ADCs are the preferred choice due to their high resolution and low area. Algorithmic ADCs are based on the same operating principle as pipelined ADCs. Unlike pipelined ADCs, where the residue is transferred to the next stage, an N-bit algorithmic ADC reuses the same hardware N times, once for each bit of resolution. Due to the cyclic nature of algorithmic ADCs, many of the low-power techniques applicable to pipelined ADCs cannot be directly applied to them. Consequently, traditional implementations of algorithmic ADCs are power inefficient compared to pipelined ADCs. This thesis presents two novel energy-efficient techniques for algorithmic ADCs. The first technique modifies the capacitor arrangement of the conventional flip-around configuration and the amplifier sharing technique, resulting in a low-power and low-area design. The second technique is based on the unit multiplying digital-to-analog converter approach and exploits the power-saving advantages of the capacitor-shared and capacitor-scaled techniques. It is shown that, compared to conventional techniques, the proposed techniques reduce the power consumption of algorithmic ADCs by more than 85%. To verify the effectiveness of these approaches, two prototype chips, a 10-bit 5 MS/s ADC and a 12-bit 10 MS/s ADC, are implemented in a 130-nm CMOS process. Detailed design considerations are discussed, along with simulation and measurement results. According to the simulation results, both designs achieve figures of merit of approximately 60 fJ/step, making them some of the most power-efficient ADCs to date.
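    The quoted figure of merit can be sanity-checked with the standard Walden formula FoM = P / (2^N * f_s), where P is power, N the resolution, and f_s the sampling rate. Using the nominal resolution in place of the effective number of bits is a simplifying assumption, so the powers computed below are implied estimates, not measured values from the thesis.

```python
# Back-of-the-envelope check of the quoted ~60 fJ/step figure of merit,
# using the Walden FoM = P / (2^N * fs) with nominal resolution N.
def walden_fom_power(fom_fj, bits, fs_hz):
    """Power (W) implied by a Walden FoM (fJ/step) at a given resolution/rate."""
    return fom_fj * 1e-15 * (2 ** bits) * fs_hz

print(walden_fom_power(60, 10, 5e6))   # ~3.1e-4 W: roughly 0.31 mW for the 10-bit ADC
print(walden_fom_power(60, 12, 10e6))  # ~2.5e-3 W: roughly 2.5 mW for the 12-bit ADC
```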

    Video Caching, Analytics and Delivery at the Wireless Edge: A Survey and Future Directions

    Future wireless networks will provide high-bandwidth, low-latency, and ultra-reliable Internet connectivity to meet the requirements of different applications, ranging from mobile broadband to the Internet of Things. To this aim, mobile edge caching, computing, and communication (edge-C3) have emerged to bring network resources (i.e., bandwidth, storage, and computing) closer to end users. Edge-C3 improves network resource utilization as well as the quality of experience (QoE) of end users. Recently, several video-oriented mobile applications (e.g., live content sharing, gaming, and augmented reality) have leveraged edge-C3 in diverse scenarios involving video streaming in both the downlink and the uplink. Hence, a large number of recent works have studied the implications of video analysis and streaming through edge-C3. This article presents an in-depth survey of video edge-C3 challenges and state-of-the-art solutions in next-generation wireless and mobile networks. Specifically, it includes: a tutorial on video streaming in mobile networks (e.g., video encoding and adaptive bitrate streaming); an overview of mobile network architectures, enabling technologies, and applications for video edge-C3; video edge computing and analytics in uplink scenarios (e.g., architectures, analytics, and applications); and video edge caching, computing, and communication methods in downlink scenarios (e.g., collaborative, popularity-based, and context-aware). A new taxonomy for video edge-C3 is proposed, and the major contributions of recent studies are first highlighted and then systematically compared. Finally, several open problems and key challenges for future research are outlined.

    Sound Event Detection with Binary Neural Networks on Tightly Power-Constrained IoT Devices

    Sound event detection (SED) is a hot topic in consumer and smart city applications. Existing approaches based on deep neural networks are very effective, but highly demanding in terms of memory, power, and throughput when targeting ultra-low-power always-on devices. Latency, availability, cost, and privacy requirements are pushing recent IoT systems to process the data on the node, close to the sensor, with a very limited energy supply and tight constraints on memory size and processing capability that preclude running state-of-the-art DNNs. In this paper, we explore the combination of extreme quantization, in the form of a small-footprint binary neural network (BNN), with the highly energy-efficient, RISC-V-based (8+1)-core GAP8 microcontroller. Starting from an existing CNN for SED whose footprint (815 kB) exceeds the 512 kB of memory available on our platform, we retrain the network using binary filters and activations to match these memory constraints. Fully binary neural networks come with a natural drop in accuracy of 12-18% on the challenging ImageNet object recognition challenge compared to their full-precision baselines. Our BNN reaches 77.9% accuracy, just 7% lower than the full-precision version, with 58 kB (7.2 times less) for the weights and 262 kB (2.4 times less) memory in total. With our BNN implementation, we reach a peak throughput of 4.6 GMAC/s and 1.5 GMAC/s over the full network, including preprocessing with Mel bins, which corresponds to efficiencies of 67.1 GMAC/s/W and 31.3 GMAC/s/W, respectively. Compared to an ARM Cortex-M4 implementation, our system is 10.3 times faster and 51.1 times more energy-efficient.
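    The memory and throughput gains stem from a standard BNN property: with weights and activations constrained to {-1, +1}, each weight fits in one bit and a dot product reduces to an XNOR followed by a popcount. The sketch below demonstrates that equivalence in plain NumPy; it illustrates the general technique, not the paper's optimized GAP8 kernels.

```python
# With a, w over {-1, +1}: dot(a, w) = (#agreements) - (#disagreements)
#                                    = 2 * popcount(XNOR(a, w)) - n.
import numpy as np

def binarize(x):
    """Quantize a real vector to {-1, +1}."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dot(a_bits, w_bits):
    """a_bits, w_bits: uint8 vectors over {0, 1} encoding -1 as 0 and +1 as 1.
    Computes the {-1, +1} dot product using only bitwise operations."""
    n = a_bits.size
    agree = np.count_nonzero(~np.bitwise_xor(a_bits, w_bits) & 1)  # XNOR, then count
    return 2 * agree - n

a = binarize(np.random.randn(64))
w = binarize(np.random.randn(64))
a_bits = (a > 0).astype(np.uint8)
w_bits = (w > 0).astype(np.uint8)
assert binary_dot(a_bits, w_bits) == int(a @ w)  # same result, bitwise ops only
```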

    Scalable networked systems: analysis and optimization

    The communication networks of today are evolving to support a large number of heterogeneous Internet-connected devices. Several emerging applications and services rely on the growing amount of device-generated data; however, such applications place diverse requirements on the network. For instance, interactive applications such as augmented reality demand very low latency for a satisfactory user experience. On the other hand, automotive applications require very reliable transmission and processing of data to ensure that accidents do not occur. To this end, new technologies have been proposed in communication networks to address these diverse connectivity and computational requirements. In radio access networks, secondary access technologies comprising WiFi and white space are used to supplement cellular connectivity. New classes of radio technologies, such as long range (LoRa), have emerged for connecting resource-constrained devices. Additionally, processing and storage resources are being placed closer to end devices to efficiently process their data under the edge computing paradigm. This dissertation investigates the scalability of communication networks through intelligent network design, analysis, and management. In particular, it proposes novel solutions to manage different components in a network. The overall goal is to ensure that communication networks efficiently support both the connectivity and the computing requirements of a large number of heterogeneous devices. First, we investigate the role of secondary access networks in providing scalable connectivity to devices. Specifically, we propose new algorithms to maximize the traffic that is offloaded to white space and WiFi, thereby freeing significantly more capacity in the cellular spectrum. Next, we investigate LoRa communications to enable large-scale connectivity for resource-constrained and battery-powered devices. In particular, we propose novel optimization models to manage LoRa communication parameters so as to support reliable communications from massive densities of such devices in urban areas. Finally, we investigate the deployment of a communication infrastructure with edge computing capabilities to efficiently process large volumes of device-generated data. In particular, we experimentally characterize the impact of edge computing in supporting data-intensive applications. Additionally, we present a novel approach to optimally place edge devices in an urban environment to support both the connectivity of cars and reliable processing of their data.

    Advances in cloud computing, wireless communications and the internet of things

    There is a growing amount of data generated by a variety of devices in the Internet of Things (IoT). Sharing economy applications can leverage such data to provide solutions of high societal impact. Several technologies together enable the collaborative use of data through software services. This chapter describes the key developments in these technological areas. In particular, it describes advances in cloud computing that have resulted in new software architectures and deployment practices; such improvements enable the rapid creation and deployment of new services on the cloud. Next, it highlights recent developments in wireless networks that allow heterogeneous devices to connect and share information. Furthermore, this chapter describes how IoT platforms are becoming interoperable, thus fostering collaborative access to data from diverse devices. Finally, it elaborates on how the described technologies jointly enable new sharing economy solutions through a case study on car sharing.

    Adaptive configuration of LoRa networks for dense IoT deployments

    Large-scale Internet of Things (IoT) deployments demand long-range wireless communications, especially in urban and metropolitan areas. LoRa is one of the most promising technologies in this context due to its simplicity and flexibility. However, deploying LoRa networks in dense IoT scenarios must achieve two main goals: efficient communications among a large number of devices and resilience against dynamic channel conditions due to demanding environmental settings (e.g., the presence of many buildings). This work investigates adaptive mechanisms to configure the communication parameters of LoRa networks in dense IoT scenarios. To this end, we develop FLoRa, an open-source framework for end-to-end LoRa simulations in OMNeT++. We then implement and evaluate the Adaptive Data Rate (ADR) mechanism built into LoRa to dynamically manage link parameters for scalable and efficient network operation. Extensive simulations show that ADR is effective in increasing the network delivery ratio under stable channel conditions while keeping energy consumption low. Our results also show that the performance of ADR is severely affected by a highly varying wireless channel. We therefore propose an improved version of the original ADR mechanism to cope with variable channel conditions. Our proposed solution significantly increases both the reliability and the energy efficiency of communications over a noisy channel, almost irrespective of the network size. Finally, we show that the delivery ratio of very dense networks can be further improved by using a network-aware approach, wherein the link parameters are configured based on global knowledge of the network.
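    For reference, the network-side ADR mechanism that this work evaluates and extends follows a well-known scheme from the LoRaWAN specification: estimate the SNR margin from recent uplinks and spend it in 3 dB steps, first by lowering the spreading factor and then by lowering transmit power. A simplified sketch is given below, with our own variable names and the usual per-SF demodulation floors; it is not the FLoRa source code.

```python
# Simplified network-side ADR in the style of the LoRaWAN specification.
# Per-SF demodulation SNR floors (dB), 3 dB step size, and the 20-frame
# history follow the standard ADR convention.
REQUIRED_SNR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

def adr_update(snr_history_db, sf, tx_power_dbm, margin_db=10,
               min_tx_power=2, tx_step=3):
    """Return a new (sf, tx_power_dbm) from the last up-to-20 uplink SNRs."""
    snr = max(snr_history_db[-20:])  # the original ADR uses the maximum SNR
    steps = int((snr - REQUIRED_SNR[sf] - margin_db) // 3)
    while steps > 0 and sf > 7:      # first, speed up the data rate
        sf -= 1
        steps -= 1
    while steps > 0 and tx_power_dbm > min_tx_power:  # then lower TX power
        tx_power_dbm -= tx_step
        steps -= 1
    # Negative steps (insufficient margin) are left to the device's own
    # backoff in this sketch.
    return sf, tx_power_dbm

print(adr_update([-2.0, 1.5, 0.0], sf=12, tx_power_dbm=14))  # -> (9, 14)
```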