
    Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability

    Internet-of-Things (IoT) envisions an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. The unique features of IoT include extreme heterogeneity, a massive number of devices, and unpredictable dynamics, partially due to human interaction. These call for foundational innovations in network design and management. Ideally, the design should allow efficient adaptation to changing environments and low-cost implementation that scales to a massive number of devices, subject to stringent latency constraints. To this end, the overarching goal of this paper is to outline a unified framework for online learning and management policies in IoT through joint advances in communication, networking, learning, and optimization. From the network-architecture vantage point, the unified framework leverages a promising fog architecture that enables smart devices to have proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations target online approaches adaptive to different degrees of nonstationarity in IoT dynamics, and their scalable, model-free implementation under limited feedback, which motivates blind or bandit approaches. The proposed framework aspires to offer a stepping stone toward systematic design and analysis of task-specific learning and management schemes for IoT, along with a host of new research directions to build on. Comment: Submitted on June 15 to the Proceedings of the IEEE Special Issue on Adaptive and Scalable Communication Network
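    The bandit-style, limited-feedback management loop alluded to above can be sketched concretely. The following minimal example is not taken from the paper: it treats fog-node selection as a multi-armed bandit and uses UCB1 as a stand-in policy; the latency model and all parameter values are illustrative assumptions.
```python
# Minimal sketch (assumptions, not the paper's method): a device picks one of
# several fog nodes per round and only observes the latency of the chosen node
# (bandit feedback). UCB1 is used as a stand-in policy.
import math
import random

def ucb1_offload(latency_fn, num_nodes=4, rounds=1000):
    counts = [0] * num_nodes          # times each fog node was selected
    mean_reward = [0.0] * num_nodes   # running average of observed rewards

    for t in range(1, rounds + 1):
        if t <= num_nodes:
            node = t - 1              # play every arm once to initialize
        else:
            # UCB1 index: exploitation term plus exploration bonus
            node = max(range(num_nodes),
                       key=lambda i: mean_reward[i] + math.sqrt(2 * math.log(t) / counts[i]))
        reward = -latency_fn(node)    # lower latency = higher reward
        counts[node] += 1
        mean_reward[node] += (reward - mean_reward[node]) / counts[node]
    return counts

# Toy latency model: node 2 has the lowest average latency.
random.seed(0)
selections = ucb1_offload(lambda i: random.gauss([5, 4, 2, 6][i], 1.0))
print(selections)  # most pulls should concentrate on node 2
```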

    DESIGN AND IMPLEMENTATION OF INFORMATION PATHS IN DENSE WIRELESS SENSOR NETWORKS

    In large-scale sensor networks with monitoring applications, sensor nodes are responsible for sending periodic reports to a destination located far away from the area to be monitored. We model this area (referred to as the distributed source) with a positive load density function which determines the total rate of traffic generated inside any closed contour within the area. Given the tight limitations on the energy consumption of wireless sensors and the many-to-one nature of communications in wireless sensor networks, the traditional definition of connectivity in graph theory does not seem sufficient to satisfy the requirements of sensor networks. In this work, a new notion of connectivity (called implementability) is defined, which represents the ability of sensor nodes to relay traffic along a given direction field, referred to as the information flow vector field $\vec{D}$. The magnitude of the information flow is proportional to the traffic flux (per unit length) passing through any point in the network, and its direction is toward the flow of traffic. The flow field may be obtained from engineering knowledge or as the solution to an optimization problem. In either case, the flux lines of the information flow represent a set of abstract paths (not constrained by the actual location of sensor nodes) which can be used for data transmission to the destination. In this work, we present conditions to be placed on $\vec{D}$ such that the resulting optimal vector field generates a desirable set of paths. In a sensor network with a given irrotational flow field $\vec{D}(x,y)$, we show that a density of $n(x,y)=O(|\vec{D}(x,y)|^2)$ sensor nodes is not sufficient to implement the flow field as $|\vec{D}|$ scales linearly to infinity. On the other hand, by increasing the density of wireless nodes to $n(x,y)=O(|\vec{D}(x,y)|^2 \log |\vec{D}(x,y)|)$, the flow field becomes implementable. Implementability requires more nodes than simple connectivity; however, results on connectivity are based on the implicit assumption of exhaustively searching all possible routes, which contradicts the tight energy limitations of sensor networks. We propose a joint MAC and routing protocol to forward traffic along the flow field. The proposed tier-based scheme can be further exploited to build lightweight protocol stacks which meet the specific requirements of dense sensor networks. We also investigate the buffer scalability of sensor nodes routing along the flux lines of a given irrotational vector field, and show that nodes distributed according to the sufficient bound provided above can relay traffic from the source to the destination with limited buffer space per node. This is particularly interesting for dense wireless sensor networks, where nodes are assumed to have very limited resources.
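    The density scaling stated in this abstract can be illustrated numerically. The sketch below evaluates the insufficient $O(|\vec{D}|^2)$ and sufficient $O(|\vec{D}|^2 \log |\vec{D}|)$ densities over a grid for a hypothetical irrotational field; only the scaling laws come from the abstract, while the example field and constants are assumptions.
```python
# Illustrative sketch of the node-density scaling from the abstract: for a given
# irrotational flow field D(x, y), a density on the order of |D|^2 is not
# sufficient, while |D|^2 * log|D| is. The field below is a hypothetical example.
import numpy as np

def flow_magnitude(x, y):
    # Example irrotational field D = grad(phi) with phi = x^2 + y^2 (assumed).
    return np.hypot(2 * x, 2 * y)

x, y = np.meshgrid(np.linspace(0.1, 10, 50), np.linspace(0.1, 10, 50))
D = flow_magnitude(x, y)

insufficient_density = D ** 2                                # n = O(|D|^2)
sufficient_density = D ** 2 * np.log(np.maximum(D, np.e))    # n = O(|D|^2 log|D|)

# Total node counts implied by each scaling over the monitored region (unitless toy numbers).
print(insufficient_density.sum(), sufficient_density.sum())
```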

    Reconfigurable Intelligent Surface Aided Cellular Networks With Device-to-Device Users


    An efficient genetic algorithm for large-scale transmit power control of dense and robust wireless networks in harsh industrial environments

    The industrial wireless local area network (IWLAN) is increasingly dense, due not only to the penetration of wireless applications into shop floors and warehouses, but also to the rising need for redundancy to ensure robust wireless coverage. Instead of simply powering on all access points (APs), there is an unavoidable need to dynamically control the transmit power of APs on a large scale, in order to minimize interference and adapt the coverage to the latest shadowing effects of dominant obstacles in an industrial indoor environment. To fulfill this need, this paper formulates a transmit power control (TPC) model that enables both powering APs on/off and calibrating the transmit power of each AP that is powered on. This TPC model uses an empirical one-slope path loss model that accounts for three-dimensional obstacle shadowing effects, to enable accurate yet simple coverage prediction. An efficient genetic algorithm (GA), named GATPC, is designed to solve this TPC model even on a large scale. To this end, it leverages repair-mechanism-based population initialization, crossover and mutation, parallelism, as well as dedicated speedup measures. GATPC was experimentally validated in a small-scale IWLAN deployed in a real industrial indoor environment. It was further numerically demonstrated and benchmarked at both small and large scales, regarding the effectiveness and scalability of TPC. Moreover, a sensitivity analysis was performed to reveal the produced interference and the qualification rate of GATPC as a function of the target coverage percentage as well as the number and placement direction of dominant obstacles.
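    As a rough illustration of the kind of genetic algorithm the abstract describes, the following condensed sketch evolves one power level per AP (0 meaning powered off) under a one-slope path-loss model. The fitness weights, GA parameters, path-loss constants, and AP/client layout are assumptions, not the values or repair mechanisms used in GATPC.
```python
# Condensed GA sketch for transmit power control (illustrative assumptions only).
# Chromosome: one integer power level per AP, where 0 means the AP is powered off.
import math
import random

random.seed(1)
NUM_APS, POWER_LEVELS, POP, GENS = 8, 4, 30, 60
AP_POS = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(NUM_APS)]
CLIENTS = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(40)]
RX_THRESHOLD = -70.0  # dBm, assumed coverage requirement

def rx_power(level, ap, client):
    """Received power under a one-slope path-loss model (exponent 3, assumed)."""
    if level == 0:
        return float("-inf")          # AP powered off
    d = max(1.0, math.dist(ap, client))
    tx_dbm = 5.0 * level              # levels 1..3 map to 5/10/15 dBm (assumed)
    return tx_dbm - (40.0 + 30.0 * math.log10(d))

def fitness(chromosome):
    covered = sum(
        any(rx_power(lvl, ap, c) >= RX_THRESHOLD for lvl, ap in zip(chromosome, AP_POS))
        for c in CLIENTS)
    total_power = sum(chromosome)      # proxy for interference / energy use
    return 10 * covered - total_power  # reward coverage, penalize radiated power

def evolve():
    pop = [[random.randrange(POWER_LEVELS) for _ in range(NUM_APS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, NUM_APS)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:               # mutation
                child[random.randrange(NUM_APS)] = random.randrange(POWER_LEVELS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # best found power level per AP
```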

    Intelligent and Efficient Ultra-Dense Heterogeneous Networks for 5G and Beyond

    Ultra-dense heterogeneous networks (HetNets), in which densified small cells overlay the conventional macro-cells, are a promising technique for the fifth-generation (5G) mobile network. The dense and multi-tier network architecture is able to support extensive data traffic and diverse quality of service (QoS), but meanwhile raises several challenges, especially in interference coordination and resource management. In this thesis, three novel network schemes are proposed to achieve intelligent and efficient operation based on deep learning-enabled network awareness. Both optimization and deep learning methods are developed to achieve intelligent and efficient resource allocation in these proposed network schemes. To improve the cost and energy efficiency of ultra-dense HetNets, a hotspot-prediction-based virtual small cell (VSC) network is proposed. A VSC is formed only when the traffic volume and user density are extremely high. We leverage the feature extraction capabilities of deep learning techniques and exploit a long short-term memory (LSTM) neural network to predict potential hotspots and form VSCs. Large-scale antenna array enabled hybrid beamforming is also adaptively adjusted for highly directional transmission to cover these VSCs. Within each VSC, one user equipment (UE) is selected as a cell head (CH), which collects the intra-cell traffic using the unlicensed band and relays the aggregated traffic to the macro-cell base station (MBS) in the licensed band. The inter-cell interference can thus be reduced, and the spectrum efficiency can be improved. Numerical results show that the proposed VSCs can reduce power consumption by 55% in comparison with traditional small cells. In addition to the smart VSC deployment, a novel multi-dimensional intelligent multiple access (MD-IMA) scheme is also proposed to achieve the stringent and diverse QoS of emerging 5G applications with disparate resource constraints. Multiple access (MA) schemes in multi-dimensional resources are adaptively scheduled to accommodate dynamic QoS requirements and network states. The MD-IMA scheme learns the integrated quality of system experience (I-QoSE) by monitoring and predicting QoS through the LSTM neural network. The resource allocation in the MD-IMA scheme is formulated as an optimization problem to maximize the I-QoSE as well as minimize the non-orthogonality (NO) in view of implementation constraints. To solve this problem, both model-based optimization algorithms and model-free deep reinforcement learning (DRL) approaches are utilized. Simulation results demonstrate that the achievable I-QoSE gain of MD-IMA over traditional MA is 15% to 18%. In the final part of the thesis, a Software-Defined Networking (SDN) enabled 5G vehicular ad hoc network (VANET) is designed to support the growing vehicle-generated data traffic. In this integrated architecture, to reduce the signaling overhead, vehicles are clustered under the coordination of SDN, and one vehicle in each cluster is selected as a gateway to aggregate intra-cluster traffic. To ensure the capacity of the trunk link between the gateway and the macro base station, a Non-orthogonal Multiplexed Modulation (NOMM) scheme is proposed to split the aggregated data stream into multiple layers and use sparse spreading codes to partially superpose the modulated symbols on several resource blocks. The simulation results show that the energy efficiency of the proposed NOMM is around 1.5 to 2 times that of the typical orthogonal transmission scheme.
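    The LSTM-based hotspot prediction step can be sketched in a few lines of PyTorch. The model below forecasts the next traffic sample per candidate cell and forms a VSC only where the forecast exceeds a threshold; the architecture, toy data, and threshold are illustrative assumptions rather than the thesis implementation.
```python
# Minimal PyTorch sketch (assumptions, not the thesis code) of LSTM-based hotspot
# prediction: forecast the next traffic sample per candidate cell and form a
# virtual small cell (VSC) only where the forecast exceeds a threshold.
import torch
import torch.nn as nn

class HotspotLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, seq_len, 1) traffic history
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predicted next traffic sample

model = HotspotLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

history = torch.rand(16, 24, 1)           # 16 candidate cells, 24 past samples (toy data)
target = torch.rand(16, 1)

for _ in range(100):                      # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(history), target)
    loss.backward()
    optimizer.step()

VSC_THRESHOLD = 0.8                       # assumed traffic level above which a VSC is formed
form_vsc = model(history).detach().squeeze(1) > VSC_THRESHOLD
print(form_vsc)                           # per-cell decision to form a VSC
```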

    Power-Aware Planning and Design for Next Generation Wireless Networks

    Mobile network operators have witnessed a transition from voice-dominated to video/data-dominated traffic, which has led to dramatic traffic growth over the past decade. With 4G wireless communication systems now deployed worldwide, fifth-generation (5G) mobile and wireless communication technologies are emerging as active research fields. The fast-growing data traffic volume and dramatic expansion of network infrastructures will inevitably trigger a tremendous escalation of energy consumption in wireless networks, which will result in increased greenhouse gas emissions and pose ever-increasing urgency on environmental protection and sustainable network development. Thus, energy efficiency is one of the most important principles that 5G network planning and design should follow. This dissertation presents power-aware planning and design for next generation wireless networks. We study network planning and design problems in both offline planning and online resource allocation. We propose approximation algorithms and effective heuristics for various network design scenarios, with different wireless network setups and different power-saving optimization objectives. We aim to save power consumption on both base stations (BSs) and user equipments (UEs) by leveraging wireless relay placement, small cell deployment, device-to-device communications, and base station consolidation. We first study a joint signal-aware relay station placement and power allocation problem that accounts for multiple related physical constraints, such as channel capacity, the signal-to-noise ratio (SNR) requirement of subscribers, relay power, and network topology in multihop wireless relay networks. We present approximation schemes which first find a minimum number of relay stations, using maximum transmit power, to cover all the subscribers while meeting each SNR requirement, and then ensure communications between any subscriber and a base station by adjusting the transmit power of each relay station. In order to save power on BSs, we propose a practical solution and offer a new perspective on implementing green wireless networks by embracing small cell networks. Many existing works have proposed to put base stations to sleep to save energy. However, in reality, it is very difficult to shut down and reboot BSs frequently due to numerous technical issues and performance requirements. Instead of putting BSs to sleep, we tactically reduce the coverage of each base station and strategically place microcells to offload the traffic transmitted to/from BSs, saving total power consumption. In online resource allocation, we aim to save the transmit power of UEs by enabling device-to-device (D2D) communications in OFDMA-based wireless networks. Most existing works on D2D communications either targeted CDMA-based single-channel networks or aimed at maximizing network throughput. We formally define an optimization problem based on a practical link data rate model, whose objective is to minimize total power consumption while meeting user data rate requirements. We propose to solve it using a joint optimization approach by presenting two effective and efficient algorithms, which both jointly determine mode selection, channel allocation, and power assignment.
    In the last part of this dissertation, we propose to leverage load migration and base station consolidation for green communications and consider a power-efficient network planning problem in virtualized cognitive radio networks, with the objective of minimizing total power consumption while meeting the traffic load demand of each Mobile Virtual Network Operator (MVNO). First, we present a Mixed Integer Linear Programming (MILP) formulation to provide optimal solutions. Then we present a general optimization framework to guide algorithm design, which solves two subproblems, channel assignment and load allocation, in sequence. In addition, we present an effective heuristic algorithm that jointly solves the two subproblems. Numerical results are presented to confirm the theoretical analysis of our schemes, and to show the strong performance of our solutions compared to several baseline methods.
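    The MILP formulation mentioned above can be illustrated with a toy base-station consolidation instance. The PuLP sketch below decides which base stations stay powered on and how users are assigned so that every user is served at minimum total power; the power, capacity, and reachability data are invented for illustration, and the model omits the cognitive-radio and MVNO details of the dissertation.
```python
# Toy MILP sketch (PuLP, illustrative assumptions only) in the spirit of the
# consolidation formulation: choose which base stations stay on and assign users
# so that every user is served while total power is minimized.
import pulp

BS_POWER = {"bs1": 100, "bs2": 80, "bs3": 120}   # static power per BS (assumed)
CAPACITY = {"bs1": 2, "bs2": 2, "bs3": 3}        # users each BS can serve (assumed)
USERS = ["u1", "u2", "u3", "u4"]
REACHABLE = {"u1": ["bs1", "bs2"], "u2": ["bs1", "bs3"],
             "u3": ["bs2", "bs3"], "u4": ["bs3"]}

prob = pulp.LpProblem("bs_consolidation", pulp.LpMinimize)
on = {b: pulp.LpVariable(f"on_{b}", cat="Binary") for b in BS_POWER}
assign = {(u, b): pulp.LpVariable(f"x_{u}_{b}", cat="Binary")
          for u in USERS for b in REACHABLE[u]}

prob += pulp.lpSum(BS_POWER[b] * on[b] for b in BS_POWER)           # total power
for u in USERS:                                                      # every user served once
    prob += pulp.lpSum(assign[(u, b)] for b in REACHABLE[u]) == 1
for b in BS_POWER:                                                   # capacity only if BS is on
    prob += pulp.lpSum(assign[(u, b)] for u in USERS if b in REACHABLE[u]) <= CAPACITY[b] * on[b]

prob.solve()
print({b: int(on[b].value()) for b in BS_POWER})  # which BSs remain powered on
```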

    Sparse Signal Processing Concepts for Efficient 5G System Design

    As it becomes increasingly apparent that 4G will not be able to meet the emerging demands of future mobile communication systems, there is intensive, ongoing discussion of what could make up a 5G system, what the crucial challenges are, and what the key drivers are. Partly due to the advent of compressive sensing, methods that can optimally exploit sparsity in signals have received tremendous attention in recent years. In this paper we will describe a variety of scenarios in which signal sparsity arises naturally in 5G wireless systems. Signal sparsity and the associated rich collection of tools and algorithms will thus be a viable source of innovation in 5G wireless system design. We will describe applications of this sparse signal processing paradigm in MIMO random access, cloud radio access networks, compressive channel-source network coding, and embedded security. We will also emphasize important open problems that may arise in 5G system design, for which sparsity will potentially play a key role in the solution. Comment: 18 pages, 5 figures, accepted for publication in IEEE Access
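    A core primitive behind this sparse signal processing paradigm is recovering a sparse vector from a small number of linear measurements. The NumPy sketch below does so with iterative soft-thresholding (ISTA), used here as a generic stand-in recovery algorithm; the problem dimensions, regularization weight, and iteration count are illustrative assumptions.
```python
# Small NumPy sketch of sparse recovery: reconstruct a sparse vector x from few
# linear measurements y = A x via ISTA (iterative soft-thresholding).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                        # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

def ista(A, y, lam=0.01, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)             # gradient of the least-squares term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative recovery error
```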