Algorithms and Experimentation for Future Wireless Networks: From Internet-of-Things to Full-Duplex
Future and next-generation wireless networks are driven by the rapidly growing wireless traffic stemming from diverse services and applications, such as the Internet-of-Things (IoT), virtual reality, autonomous vehicles, and smart intersections. Many of these applications require massive connectivity between IoT devices as well as wireless access links with ultra-high bandwidth (Gbps or above) and ultra-low latency (10ms or less). Therefore, realizing the vision of future wireless networks requires significant research efforts across all layers of the network stack. In this thesis, we use a cross-layer approach and focus on several critical components of future wireless networks including IoT systems and full-duplex (FD) wireless, and on experimentation with advanced wireless technologies in the NSF PAWR COSMOS testbed.
First, we study tracking and monitoring applications in the IoT and focus on ultra-low-power energy harvesting networks. Based on realistic hardware characteristics, we design and optimize Panda, a centralized probabilistic protocol for maximizing the neighbor discovery rate between energy harvesting nodes under a power budget. Via testbed evaluation using commercial off-the-shelf energy harvesting nodes, we show that Panda outperforms existing protocols by up to 3x in terms of the neighbor discovery rate. We then generalize this problem and consider throughput maximization among a set of heterogeneous energy-constrained ultra-low-power nodes. We analytically identify the fundamental limits on the rate at which data can be exchanged between these nodes, and design EconCast, a distributed probabilistic protocol that asymptotically approaches the maximum throughput. Performance evaluations of EconCast using both simulations and real-world experiments show that it achieves up to an order of magnitude higher throughput than Panda and other known protocols.
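The core idea behind such probabilistic discovery protocols can be illustrated with a toy slotted simulation: in each slot, a node independently transmits, listens, or sleeps according to fixed probabilities, and a discovery occurs when exactly one node transmits while others listen. This is only a minimal sketch of the general mechanism, not the actual Panda or EconCast protocols; the node count and probabilities below are hypothetical.

```python
import random

def simulate_discovery(n_nodes=10, p_tx=0.05, p_listen=0.15, slots=20000, seed=1):
    """Toy slotted simulation of probabilistic neighbor discovery.

    In each slot a node transmits w.p. p_tx, listens w.p. p_listen, and
    sleeps otherwise (saving energy). Two simultaneous transmitters
    collide and nothing is discovered.
    """
    rng = random.Random(seed)
    discoveries = 0
    for _ in range(slots):
        states = []
        for _ in range(n_nodes):
            r = rng.random()
            if r < p_tx:
                states.append("tx")
            elif r < p_tx + p_listen:
                states.append("rx")
            else:
                states.append("sleep")
        if states.count("tx") == 1 and states.count("rx") >= 1:
            # every listener hears the lone transmitter
            discoveries += states.count("rx")
    return discoveries / slots  # discovery rate per slot

rate = simulate_discovery()
```

Tuning the sleep probability trades discovery rate against energy consumption, which is the optimization at the heart of operating under a power budget.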
We then study FD wireless, i.e., simultaneous transmission and reception at the same frequency, a key technology that can significantly improve data rates and reduce communication latency by employing self-interference cancellation (SIC). In particular, we focus on enabling FD on small-form-factor devices by leveraging frequency-domain equalization (FDE). We design, model, and optimize an FDE-based RF canceller that achieves >50dB RF SIC across 20MHz bandwidth, and experimentally show that our prototyped FD radios achieve a link-level throughput gain of 1.85-1.91x. We also combine FD with phased arrays, employing optimized transmit and receive beamforming in which the spatial degrees of freedom of multi-antenna systems are repurposed to achieve wideband RF SIC. Moving up the network stack, we study heterogeneous networks with half-duplex and FD users, and develop the novel Hybrid-Greedy Maximum Scheduling (H-GMS) algorithm, which achieves throughput optimality in a distributed manner. Analytical and simulation results show that H-GMS achieves 5-10x better delay performance and improved fairness compared with state-of-the-art approaches.
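The principle of frequency-domain SIC can be sketched numerically: since the transmitted signal is known, the self-interference channel can be estimated per frequency bin from a pilot, the interference reconstructed, and then subtracted. The channel taps and noise level below are hypothetical, and this baseband toy ignores the analog RF-canceller circuit design that the thesis actually addresses.

```python
import numpy as np

def fde_sic_db(n=1024, noise_db=-60.0, seed=0):
    """Toy frequency-domain self-interference cancellation.

    Estimates the per-subcarrier SI channel from a known pilot,
    reconstructs the interference for a data signal, and subtracts it.
    Returns the achieved suppression in dB.
    """
    rng = np.random.default_rng(seed)

    def chan(x):
        # Hypothetical frequency-selective SI channel (3 taps, circular)
        # plus receiver noise at `noise_db` relative to unit signal power.
        H = np.fft.fft(np.array([0.8, 0.3, 0.1]), n)
        w = 10 ** (noise_db / 20) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        return np.fft.ifft(H * np.fft.fft(x)) + w

    pilot = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    H_hat = np.fft.fft(chan(pilot)) / np.fft.fft(pilot)   # per-bin channel estimate

    data = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    y = chan(data)                                        # self-interference to cancel
    residual = y - np.fft.ifft(H_hat * np.fft.fft(data))
    return float(10 * np.log10(np.mean(np.abs(y) ** 2) / np.mean(np.abs(residual) ** 2)))

sic = fde_sic_db()  # achieved suppression (dB)
```

In this idealized digital model the residual is limited only by the noise floor; the practical challenge addressed in the thesis is achieving comparable wideband suppression in the analog RF domain on a small-form-factor device.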
Finally, we describe experimentation and measurements in the city-scale COSMOS testbed being deployed in West Harlem, New York City. COSMOS' key building blocks include software-defined radios, millimeter-wave radios, a programmable optical network, and an edge cloud, and their convergence will enable researchers to remotely explore emerging technologies in a real-world environment. We provide a brief overview of the testbed and focus on experimentation with advanced technologies, including the integration of open-access FD radios in the testbed and a pilot study on converged optical-wireless x-haul networking for cloud radio access networks (C-RANs). We also present extensive 28GHz channel measurements in the testbed area, which is a representative dense urban canyon environment, and study the corresponding signal-to-noise ratio (SNR) coverage and achievable data rates. The results of this part helped drive and validate the design of the COSMOS testbed, and can inform further deployment and experimentation in the testbed.
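Mapping measured SNR coverage to achievable data rates typically uses the Shannon capacity as an upper bound. The sketch below assumes a hypothetical 800MHz mmWave channelization; the thesis's actual rate estimates depend on the measured channels and deployed hardware.

```python
import math

def achievable_rate_mbps(snr_db, bandwidth_mhz=800.0):
    """Shannon-capacity upper bound on the data rate of a link.

    bandwidth_mhz is a hypothetical 28GHz channelization; real systems
    achieve a fraction of this bound due to coding and implementation loss.
    """
    snr = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr)  # Mbps

rate = achievable_rate_mbps(20.0)  # rate bound at 20 dB SNR
```

Sweeping this mapping over a grid of measured SNR values yields the coverage-versus-rate picture used to evaluate candidate mmWave deployment sites.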
In this thesis, we make several theoretical and experimental contributions to ultra-low-power energy harvesting networks and the IoT, and to FD wireless. We also contribute to experimentation and measurements in the COSMOS advanced wireless testbed. We believe that these contributions are essential to connecting fundamental theory to practical systems, and ultimately to real-world applications, in future wireless networks.
Learning for Network Applications and Control
The emergence of new Internet applications and technologies has resulted in a need for lower latency, higher bandwidth, and increased reliability, and ultimately in increased complexity of network operation and management. Manual management is no longer sufficient to meet these requirements.
There is a need for data-driven techniques to advance from manual to autonomous management of network systems. One such technique, Machine Learning (ML), can build models from hidden patterns in data and make autonomous modifications. This approach has shown significant improvements in other domains (e.g., image recognition and natural language processing). The use of ML, along with advances in programmable control of Software-Defined Networks (SDNs), will alleviate manual network intervention and ultimately aid autonomous network operations. However, realizing a data-driven system that can not only understand what is happening in the network but also operate autonomously requires advances in the networking domain, as well as in ML algorithms.
In this thesis, we focus on developing ML-based network architectures and data-driven networking algorithms whose objective is to improve the performance and management of future networks and network applications. We focus on problems spanning the network protocol stack from the application layer to the physical layer, and design algorithms and architectures motivated by measurements and observations in real-world networks or experimental testbeds.
In Part I we focus on the challenge of monitoring and estimating user video quality of experience (QoE) of encrypted video traffic for network operators. We develop Requet, a system for REal-time QUality of experience metric detection for Encrypted Traffic. Requet uses a detection algorithm to identify video and audio chunks from the IP headers of encrypted traffic. Features extracted from the chunk statistics are used as input to a random forest ML model to predict QoE metrics. We evaluate Requet on a YouTube dataset we collected, consisting of diverse video assets delivered over various WiFi and LTE network conditions. We then extend Requet and present a study of YouTube TV live streaming traffic behavior over WiFi and cellular networks covering a 9-month period. We observe pipelined chunk requests, a reduced buffer capacity, and a more stable chunk duration across various video resolutions compared to prior studies of on-demand streaming services. We develop a YouTube TV analysis tool using chunk statistics detected from the extracted data as input to an ML model to infer user QoE metrics.
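The chunk-detection step can be sketched as follows: large uplink packets are treated as HTTP chunk requests, and the downlink bytes between successive requests are grouped into chunks whose statistics feed the ML model. This is a simplified illustration of the idea, not Requet's actual algorithm; the request-size threshold and the trace are hypothetical.

```python
def detect_chunks(packets, request_threshold=300):
    """Group downlink bytes between successive large uplink packets
    (treated as chunk requests) into chunks.

    packets: iterable of (timestamp, direction, size_bytes) tuples,
    where direction is "up" or "down". Threshold is hypothetical.
    """
    chunks, current = [], None
    for t, direction, size in packets:
        if direction == "up" and size >= request_threshold:
            if current and current["bytes"] > 0:
                chunks.append(current)       # close the previous chunk
            current = {"start": t, "bytes": 0}
        elif direction == "down" and current:
            current["bytes"] += size
            current["end"] = t
    if current and current["bytes"] > 0:
        chunks.append(current)
    return chunks

# Hypothetical trace: two requests, each followed by a downlink burst.
trace = [(0.0, "up", 400), (0.1, "down", 1500), (0.2, "down", 1500),
         (1.0, "up", 420), (1.1, "down", 1400)]
chunks = detect_chunks(trace)
```

Per-chunk size and duration statistics derived from such groupings are the kind of features a random forest model can map to QoE metrics such as resolution or stalling.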
In Part II we consider allocating end-to-end resources in cellular networks. Future cellular networks will utilize SDN and Network Function Virtualization (NFV) to offer network infrastructure operators increased flexibility in utilizing network resources. Combining these technologies with real-time network load prediction will enable efficient use of network resources. Specifically, we leverage Long Short-Term Memory (LSTM) neural networks, a type of recurrent neural network, for (i) service-specific traffic load prediction for network slicing, and (ii) Baseband Unit (BBU) pool traffic load prediction in a 5G cloud Radio Access Network (RAN). We show that more accurate prediction of service requirements results in a reduction of operation costs.
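The mechanics of an LSTM processing a load time series can be shown in a few lines of numpy: gate activations update a cell state at each step, and a linear head reads out the next-step prediction. This is a structural sketch only, with random (untrained) weights and a hypothetical hidden size; a usable predictor requires training on real traffic traces.

```python
import numpy as np

def lstm_forecast(series, hidden=8, seed=0):
    """Run a single randomly initialized LSTM cell over a scalar
    traffic-load series and read a next-step prediction from a linear
    head. Structural sketch only -- weights are untrained.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((4 * hidden, 1 + hidden)) * 0.1  # gates [i, f, g, o]
    b = np.zeros(4 * hidden)
    w_out = rng.standard_normal(hidden) * 0.1                # linear readout
    h, c = np.zeros(hidden), np.zeros(hidden)
    sig = lambda z: 1 / (1 + np.exp(-z))
    for x in series:
        z = W @ np.concatenate(([x], h)) + b
        i = sig(z[:hidden])                  # input gate
        f = sig(z[hidden:2 * hidden])        # forget gate
        g = np.tanh(z[2 * hidden:3 * hidden])  # candidate cell update
        o = sig(z[3 * hidden:])              # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return float(w_out @ h)

pred = lstm_forecast([0.2, 0.4, 0.6, 0.8])  # hypothetical normalized loads
```

The forget gate lets the model retain slowly varying daily load patterns while the input gate reacts to short-term bursts, which is why LSTMs suit traffic load prediction.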
We focus on the optical physical layer in Part III. Greater network flexibility through SDN and the growth of high-bandwidth services are motivating faster service provisioning and capacity management in the optical layer. These functionalities require increased capacity along with rapid reconfiguration of network resources. Recent advances in optical hardware can enable a dramatic reduction in wavelength provisioning times in optical circuit-switched networks. To support such operations, it is imperative to reconfigure the network without causing a drop in service quality to existing users. Therefore, we present an ML system that uses feedforward neural networks to predict the dynamic response of an optically circuit-switched 90-channel multi-hop Reconfigurable Optical Add-Drop Multiplexer (ROADM) network. We show that the trained deep neural network can recommend wavelength assignments for wavelength switching with minimal power excursions. We further improve prediction accuracy by implementing and testing a Hybrid Machine Learning (HML) model, which combines an analytical model with a neural network.
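The hybrid-model idea, i.e., letting a physics-based analytical model supply a prior and a learned model correct its residual, can be sketched with a least-squares fit standing in for the neural-network component. The feature (number of switched wavelengths), the analytical prior, and the toy data are all hypothetical, not the thesis's actual ROADM model.

```python
import numpy as np

def hybrid_predict(configs, excursions, query):
    """Hybrid-model sketch: analytical prior + learned residual.

    configs:    number of wavelengths switched in each observation
    excursions: measured power excursions (dB) for those observations
    query:      configuration to predict for
    The linear residual fit stands in for the neural network.
    """
    X = np.asarray(configs, dtype=float)
    y = np.asarray(excursions, dtype=float)
    analytical = 0.1 * X                       # hypothetical physics-based prior
    A = np.vstack([X, np.ones_like(X)]).T
    coef, *_ = np.linalg.lstsq(A, y - analytical, rcond=None)  # fit the residual
    return 0.1 * query + coef[0] * query + coef[1]

# Toy data generated by a "true" response of 0.15*k + 0.05 dB.
pred = hybrid_predict([1, 2, 3, 4], [0.20, 0.35, 0.50, 0.65], 5)
```

The prior keeps predictions sane outside the training distribution while the learned residual captures effects (e.g., amplifier gain dynamics) that the analytical model misses, which is the rationale for the HML design.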
In Part IV, we use a data-driven approach to address the challenge of wireless content delivery in crowded areas. We present the Adaptive Multicast Services (AMuSe) system, whose objective is to enable scalable and adaptive WiFi multicast. Specifically, we develop an algorithm for dynamic selection of a subset of the multicast receivers as feedback nodes. Further, we describe the Multicast Dynamic Rate Adaptation (MuDRA) algorithm that utilizes AMuSe’s feedback to optimally tune the physical layer multicast rate. Our experimental evaluation of MuDRA on the ORBIT testbed shows that MuDRA outperforms other schemes and supports high-throughput multicast flows to hundreds of nodes while meeting quality requirements. We leverage the lessons learned from AMuSe for WiFi and use order statistics to address the performance issues with LTE evolved Multimedia Broadcast/Multicast Service (eMBMS). We present the Dynamic Monitoring (DyMo) system, which provides low-overhead and real-time feedback about eMBMS performance to be used for network optimization. We focus on the Quality of Service (QoS) Evaluation module and develop a Two-step estimation algorithm that can efficiently identify the SNR threshold in a single estimation round. DyMo significantly outperforms alternative schemes based on the order-statistics estimation method, which rely on random or periodic sampling.
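The order-statistics core of such threshold estimation can be sketched simply: given sampled receiver SNRs and a target outage fraction, the threshold estimate is the corresponding order statistic of the sorted sample. This shows only the basic estimator, not DyMo's actual Two-step algorithm, which additionally refines where feedback is collected; the sample values below are hypothetical.

```python
def snr_threshold(samples, outage=0.05):
    """Order-statistics estimate of the SNR threshold: the SNR value
    such that roughly an `outage` fraction of sampled receivers fall
    below it. samples: per-receiver SNR measurements in dB.
    """
    s = sorted(samples)
    k = int(outage * len(s))   # index of the outage-quantile order statistic
    return s[k]

# Hypothetical per-receiver SNR samples (dB) from 20 feedback nodes.
snrs = [5, 7, 8, 9, 10, 11, 12, 13, 14, 15,
        16, 17, 18, 19, 20, 21, 22, 23, 24, 25]
th = snr_threshold(snrs, outage=0.05)
```

Setting the eMBMS modulation and coding scheme just below this threshold keeps the outage fraction of receivers at the target level while maximizing the multicast rate for everyone else.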