Coverage and Deployment Analysis of Narrowband Internet of Things in the Wild
Narrowband Internet of Things (NB-IoT) is gaining momentum as a promising
technology for massive Machine Type Communication (mMTC). Given that its
deployment is rapidly progressing worldwide, measurement campaigns and
performance analyses are needed to better understand the system and move toward
its enhancement. With this aim, this paper presents a large-scale measurement
campaign and empirical analysis of NB-IoT on operational networks, and
discloses valuable insights in terms of deployment strategies and radio
coverage performance. The reported results also serve as examples showing the
potential usage of the collected dataset, which we make open-source along with
a lightweight data visualization platform.
Comment: Accepted for publication in IEEE Communications Magazine (Internet of Things and Sensor Networks Series).
Learning-based tracking area list management in 4G and 5G networks
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Mobility management in 5G networks is a challenging issue. It requires novel ideas and improved management so that signaling is minimized and kept far from congesting the network. Mobile networks have become massive generators of data, and in the forthcoming years this volume is expected to increase drastically. Intelligence and analytics based on big data are a good ally for operators to enhance operational efficiency and provide individualized services. This work proposes to exploit User Equipment (UE) patterns and hidden relationships in geo-spatial time series to minimize signaling due to idle-mode mobility. We propose a holistic methodology to generate optimized Tracking Area Lists (TALs) on a per-UE basis, considering each UE's learned individual behavior. The k-means algorithm is used to allocate cells into tracking areas. This serves as the basis for the TAL optimization itself, which follows a combined multi-objective and single-objective approach depending on the UE behavior. The last stage identifies UE profiles and allocates the TAL using a neural network. Each technique has been evaluated individually and jointly under realistic conditions and in different situations. Results demonstrate significant signaling reductions and good sensitivity to changing conditions.

This work was supported by the Spanish National Science Council and ERDF funds under projects TEC2014-60258-C2-2-R and RTI2018-099880-B-C32.
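The cell-to-tracking-area allocation step described above can be sketched with a plain k-means clustering of cell-site coordinates. This is a minimal illustration, not the paper's implementation; the cell positions and the number of tracking areas below are invented for the example.

```python
# Minimal sketch: grouping cell sites into tracking areas with k-means.
# Coordinates and the number of tracking areas are illustrative assumptions.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # random initial centroids
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each cell to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute centroids; keep the old one if a cluster emptied.
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters

# Ten cell sites grouped into three tracking areas.
cells = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5),
         (10, 0), (10, 1), (11, 0), (11, 1)]
centers, tracking_areas = kmeans(cells, k=3)
```

In the paper's pipeline the resulting clusters would only be the starting point; the TALs themselves are then optimized per UE.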
NVSwap Latency-Aware Paging Using Non-Volatile Main Memory
Page relocation (paging) from DRAM to swap devices is an important task of the virtual memory system in operating systems. Existing Linux paging mechanisms have two main deficiencies: (1) they may incur high I/O latency due to write interference on solid-state disks and an aggressive page-reclaiming rate under high memory pressure, and (2) they do not provide predictable latency bounds for latency-sensitive applications because they cannot control the allocation of system resources among concurrent processes sharing swap devices. In this thesis, we present the design and implementation of a latency-aware paging mechanism called NVSwap. It supports a hybrid swap space using both regular secondary storage devices (e.g., solid-state disks) and non-volatile main memory (NVMM); this design is more cost-effective than using only NVMM as swap space. Furthermore, NVSwap uses NVMM as a persistent paging buffer to serve page-out requests and hide the latency of paging between the regular swap device and DRAM. It supports in-situ paging for pages in the persistent paging buffer, avoiding the slow I/O path. Finally, NVSwap allows users to specify latency bounds for individual processes or groups of related processes, and enforces the bounds by dynamically controlling the allocation of NVMM and the page-reclaiming rate among scheduling units. We have implemented a prototype of NVSwap in the Linux kernel 3.16.74. Our results demonstrate that NVSwap reduces paging latency by up to 99% and provides performance guarantees and isolation among concurrent applications sharing swap devices.
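The hybrid page-out path described above (NVMM as a fast persistent paging buffer in front of a slow swap device, with in-situ paging for buffered pages) can be sketched as a toy user-space model. This is purely illustrative: NVSwap itself is a Linux kernel mechanism, and the buffer size and page contents here are invented.

```python
# Toy model of a hybrid swap space: a small fast NVMM buffer absorbs
# page-outs, and only overflow goes to the slow swap device.
class HybridSwap:
    def __init__(self, nvmm_slots):
        self.nvmm = {}                 # persistent paging buffer (fast)
        self.disk = {}                 # regular swap device (slow)
        self.nvmm_slots = nvmm_slots   # illustrative capacity limit

    def page_out(self, page_id, data):
        # Prefer the NVMM buffer so slow-device I/O latency is hidden.
        if len(self.nvmm) < self.nvmm_slots:
            self.nvmm[page_id] = data
            return "nvmm"
        self.disk[page_id] = data
        return "disk"

    def page_in(self, page_id):
        # In-situ paging: pages still in the NVMM buffer skip the I/O path.
        if page_id in self.nvmm:
            return self.nvmm.pop(page_id)
        return self.disk.pop(page_id)

swap = HybridSwap(nvmm_slots=2)
places = [swap.page_out(i, f"page-{i}") for i in range(3)]
# places == ["nvmm", "nvmm", "disk"]: the third page overflows to disk.
```

The real mechanism additionally migrates pages between NVMM and the swap device in the background and throttles reclaiming per scheduling unit; none of that is modeled here.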
Utilizing Massive Spatiotemporal Samples for Efficient and Accurate Trajectory Prediction
Trajectory prediction is widespread in mobile computing; it helps support wireless network operation, location-based services, and applications in pervasive computing. However, most prediction methods are based on very coarse geometric information, such as visited base transceiver stations, which cover tens of kilometers. These approaches undermine prediction accuracy and thus restrict the variety of applications. Recently, owing to advances in and the dissemination of mobile positioning technology, accurate location tracking has become prevalent, making prediction methods based on precise spatiotemporal information possible. Although prediction accuracy can be raised, a massive amount of data is involved, which places a heavy load on network bandwidth. Therefore, employing fine spatiotemporal information for accurate prediction must be done efficiently, a problem many prediction methods do not address. Consequently, this paper proposes a novel prediction framework that utilizes massive spatiotemporal samples efficiently by identifying and extracting, from the samples, the information that is beneficial to accurate prediction. The proposed framework circumvents high bandwidth consumption while maintaining high accuracy and remaining feasible. The experiments in this study examine the performance of the proposed framework; the results show that it outperforms other popular approaches.
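As a point of reference for what prediction from visited locations looks like, the sketch below implements a common order-1 Markov baseline over symbolic locations. It is not the framework proposed in the paper, and the trajectory is invented for illustration.

```python
# Order-1 Markov next-location predictor: count observed transitions and
# predict the most frequent successor of the current location.
from collections import Counter, defaultdict

class MarkovPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)   # location -> successor counts

    def train(self, trajectory):
        for here, nxt in zip(trajectory, trajectory[1:]):
            self.transitions[here][nxt] += 1

    def predict(self, here):
        counts = self.transitions.get(here)
        if not counts:
            return None                            # unseen location
        return counts.most_common(1)[0][0]

p = MarkovPredictor()
p.train(["home", "cafe", "office", "cafe", "home", "cafe", "office"])
p.predict("cafe")   # "office": the most frequent successor of "cafe"
```

Baselines like this operate on coarse symbolic locations; the paper's point is precisely that exploiting fine-grained spatiotemporal samples can do better, provided the data volume is handled efficiently.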
Intra-domain mobility management
Mobility-supporting protocols are designed to provide connectivity for mobile nodes from any point of attachment to the Internet. Fast handoff, low signaling overhead, and low packet loss are the key factors to address when designing a mobility management protocol. This work proposes the Intra-Domain Mobility Management (IDMM) protocol, based on the micro-mobility concept. The protocol implements an efficient tracking mechanism for locating mobile nodes and ensures that their movements remain transparent to communicating nodes. The protocol is designed with a hierarchical tree topology in mind, which allows for a low-cost solution and efficient management. Optimized routing enables fast delivery of packets to the mobile node within the micro-mobility domain. IDMM is implemented using the Network Simulator (ns-2). Packet loss, throughput, network delay, and traffic overhead due to location management are studied. IDMM is compared with major mobility protocols such as Mobile IP and Cellular IP to demonstrate its performance under a high frequency of roaming.
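The tree-based tracking idea (per-node downlink routing entries, refreshed on handoff only up to the crossover node) can be sketched as follows. This is a generic micro-mobility illustration in the spirit of the description above, not IDMM's actual implementation; the topology and node names are invented.

```python
# Micro-mobility routing in a hierarchical tree: each node on the path from
# the gateway to a mobile keeps a downlink entry, and a handoff refreshes
# entries only up to the crossover node (where the old and new paths meet).
class TreeDomain:
    def __init__(self, parent):
        self.parent = parent   # node -> parent node; the root has no entry
        self.route = {}        # (node, mobile) -> child hop toward the mobile

    def path_to_root(self, node):
        path = [node]
        while path[-1] in self.parent:
            path.append(self.parent[path[-1]])
        return path

    def handoff(self, mobile, new_ap):
        path = self.path_to_root(new_ap)
        updated = []
        for child, node in zip(path, path[1:]):
            if self.route.get((node, mobile)) == child:
                break                          # crossover node reached
            self.route[(node, mobile)] = child
            updated.append(node)
        return updated                         # nodes whose entries changed

# Two routers under one gateway; three access points.
tree = TreeDomain({"ap1": "r1", "ap2": "r1", "ap3": "r2",
                   "r1": "gw", "r2": "gw"})
attach = tree.handoff("ue1", "ap1")   # initial attach touches r1 and gw
local = tree.handoff("ue1", "ap2")    # intra-r1 handoff touches only r1
```

The locality is the point: a handoff between access points under the same router never generates signaling above that router, which is how tree-structured schemes keep signaling overhead low.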
Contextual Bandit Modeling for Dynamic Runtime Control in Computer Systems
Modern operating systems and microarchitectures provide a myriad of mechanisms for monitoring and affecting system operation and resource utilization at runtime. Dynamic runtime control of these mechanisms can tailor system operation to the characteristics and behavior of the current workload, resulting in improved performance. However, developing effective models for system control can be challenging. Existing methods often require extensive manual effort, computation time, and domain knowledge to identify relevant low-level performance metrics, to relate those metrics and high-level control decisions to workload performance, and to evaluate the resulting control models.
This dissertation develops a general framework, based on the contextual bandit, for describing and learning effective models for runtime system control. Random profiling is used to characterize the relationship between workload behavior, system configuration, and performance. The framework is evaluated in the context of two applications of progressive complexity: first, the selection of paging modes (Shadow Paging, Hardware-Assisted Paging) in the Xen virtual machine memory manager; second, the utilization of hardware memory prefetching for multi-core, multi-tenant workloads with cross-core contention for shared memory resources, such as the last-level cache and memory bandwidth. The resulting models for both applications are competitive with existing runtime control approaches. For paging mode selection, the resulting model provides performance equivalent to the state of the art while substantially reducing the computation required for profiling. For hardware memory prefetcher utilization, the resulting models are the first to provide dynamic control of hardware prefetchers using workload statistics. Finally, a correlation-based feature selection method is evaluated for identifying relevant low-level performance metrics related to hardware memory prefetching.
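The contextual-bandit formulation can be sketched as follows: the context is a workload descriptor, each arm is a system configuration (here, a paging mode), and the reward is measured performance. The epsilon-greedy policy, discrete contexts, and reward values below are illustrative assumptions, not the dissertation's exact algorithm.

```python
# Contextual bandit with an epsilon-greedy policy: per (context, arm) pair,
# track a running mean of observed rewards; mostly exploit, sometimes explore.
import random

class ContextualEpsilonGreedy:
    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.stats = {}   # (context, arm) -> (pull count, mean reward)

    def choose(self, context):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)                     # explore
        return max(self.arms,
                   key=lambda a: self.stats.get((context, a), (0, 0.0))[1])

    def update(self, context, arm, reward):
        n, mean = self.stats.get((context, arm), (0, 0.0))
        n += 1
        mean += (reward - mean) / n      # incremental mean of rewards
        self.stats[(context, arm)] = (n, mean)

# Stand-in reward model: pretend each workload type favors one paging mode.
def measure(context, arm):
    best = {"io_heavy": "shadow_paging", "cpu_heavy": "hap"}[context]
    return 0.9 if arm == best else 0.5

bandit = ContextualEpsilonGreedy(["shadow_paging", "hap"])
# Warm start: observe every arm once in every context.
for ctx in ("io_heavy", "cpu_heavy"):
    for a in bandit.arms:
        bandit.update(ctx, a, measure(ctx, a))
# Online control loop: alternate workload types, choose, measure, update.
for step in range(200):
    ctx = "io_heavy" if step % 2 == 0 else "cpu_heavy"
    arm = bandit.choose(ctx)
    bandit.update(ctx, arm, measure(ctx, arm))
```

After training, the greedy arm differs by context, which is the essence of contextual (rather than plain) bandits: the best configuration depends on the observed workload.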