
    Instantly Decodable Network Coding: From Centralized to Device-to-Device Communications

    From its introduction to its quindecennial, network coding has built a strong reputation for enhancing packet recovery and achieving maximum information flow in both wired and wireless networks. Traditional studies focused on optimizing system throughput by proposing elaborate schemes able to reach the network capacity. With the shift toward distributed computing on mobile devices, performance and complexity both become critical factors that affect the efficiency of a coding strategy. Instantly decodable network coding presents itself as a new paradigm in network coding that trades off these two aspects. This paper reviews instantly decodable network coding schemes by identifying, categorizing, and evaluating various algorithms proposed in the literature. The first part of the manuscript investigates conventional centralized systems, in which all decisions are carried out by a central unit, e.g., a base station. In particular, two successful approaches, known as strict and generalized instantly decodable network coding, are compared in terms of reliability, performance, complexity, and packet selection methodology. The second part considers the use of instantly decodable codes in a device-to-device communication network, in which devices speed up the recovery of missing packets by exchanging network-coded packets. Although performance improvements generally come at the cost of increased computational complexity, numerous schemes that are successful from both the performance and complexity viewpoints are identified.
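    The idea of instant decodability can be made concrete with a small sketch (illustrative only, not from the paper): a sender XORs source packets together, and a receiver can decode a coded packet on the spot exactly when it is missing just one of the combined packets.

```python
# Illustrative sketch: XOR-based coding and the instant-decodability test.
def xor_encode(packets):
    """XOR equal-length byte packets into one coded packet."""
    coded = bytes(len(packets[0]))
    for p in packets:
        coded = bytes(a ^ b for a, b in zip(coded, p))
    return coded

def instantly_decodable(combined_ids, received):
    """A packet coded over `combined_ids` is instantly decodable for a
    receiver holding `received` iff exactly one combined packet is missing."""
    return sum(1 for i in combined_ids if i not in received) == 1

def decode(coded, combined_ids, received):
    """Recover the single missing packet by XORing out the known ones."""
    missing = [i for i in combined_ids if i not in received]
    assert len(missing) == 1, "not instantly decodable"
    for i in combined_ids:
        if i in received:
            coded = bytes(a ^ b for a, b in zip(coded, received[i]))
    return missing[0], coded

# A receiver holding packet 2 decodes packet 1 from the coded packet 1^2.
coded = xor_encode([b"aa", b"bb"])
assert instantly_decodable([1, 2], {2: b"bb"})
assert decode(coded, [1, 2], {2: b"bb"}) == (1, b"aa")
```

    The same test drives packet selection in both centralized and device-to-device schemes: the sender chooses which packets to combine so that as many receivers as possible pass it.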

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optima that limit traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
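    As a concrete illustration of the AMC idea (not drawn from any particular paper in this series), the sketch below shows simulated annealing, a classic metaheuristic that guides a low-level neighborhood move beyond local optima by occasionally accepting worse solutions; the toy objective function is hypothetical.

```python
# Simulated annealing: accept worsening moves with a temperature-controlled
# probability so the search can escape local optima, then cool down.
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
    random.seed(0)  # deterministic run for reproducibility
    x = best = x0
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        # Always accept improvements; accept worse moves with prob e^(-delta/t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= cooling  # geometric cooling schedule
    return best

# Toy 1-D objective with many local minima (hypothetical).
f = lambda x: x * x + 10 * math.sin(3 * x)
best = simulated_annealing(f, lambda x: x + random.uniform(-0.5, 0.5), x0=5.0)
```

    The neighbor move is the problem-dependent "low-level heuristic"; the acceptance rule and cooling schedule are the problem-independent guidance layer that AMC supplies.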

    Self-Organized Coverage and Capacity Optimization for Cellular Mobile Networks

    The challenging task of cellular network planning and optimization will become more and more complex because of the expected heterogeneity and enormous number of cells required to meet the traffic demands of the coming years. Moreover, the spatio-temporal variations in the traffic patterns of cellular networks require their coverage and capacity to be adapted dynamically (coverage-capacity optimization, CCO). The current network planning and optimization procedures are highly manual, which makes them very time consuming and resource inefficient. For these reasons, there is strong interest in industry and academia alike in enhancing the degree of automation in network management. In particular, the idea of Self-Organization (SO) is seen as the key to simplified and efficient cellular network management by automating most of the current manual procedures. In this thesis, we study the self-organized coverage and capacity optimization of cellular mobile networks using antenna tilt adaptations. Although this problem is widely studied in the literature, most existing work focuses on heuristic algorithms for automating network planning tools. In our study we want to minimize the reliance on these centralized tools and empower the network elements to optimize themselves. This way we can avoid the single point of failure and the scalability issues of the emerging heterogeneous and densely deployed networks. We focus on Fuzzy Q-Learning (FQL), a machine learning technique that provides a simple learning mechanism and an effective abstraction level for continuous domain variables. We model coverage-capacity optimization as a multi-agent learning problem in which each cell tries to learn its optimal action policy, i.e., its antenna tilt adjustments. The network dynamics and the behavior of multiple learning agents make this a highly interesting problem. We look into different aspects of the problem, such as the effect of selfish versus cooperative learning, distributed versus centralized learning, as well as the effect of simultaneous parallel antenna tilt adaptations by multiple agents on the learning efficiency. We evaluate the performance of the proposed learning schemes using a system-level LTE simulator, testing them in regular hexagonal cell deployments as well as in irregular cell deployments, and we compare our results to a relevant learning scheme from the literature. The results show that the proposed learning schemes can effectively respond to the network and environmental dynamics in an autonomous way. The cells can quickly respond to cell outages and new deployments and re-adjust their antenna tilts to improve the overall network performance. Additionally, the proposed learning schemes can achieve up to 30 percent better performance than the available scheme from the literature, and these gains increase with increasing network size.
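    The flavor of the learning scheme can be sketched with plain tabular Q-learning (a simplification: the thesis uses Fuzzy Q-Learning over continuous inputs, and the single-agent environment, states, actions and reward below are hypothetical stand-ins):

```python
# Tabular Q-learning: a cell agent learns which antenna tilt adjustment to
# take in each (discretized) state. Environment and reward are hypothetical.
import random

ACTIONS = [-1.0, 0.0, +1.0]  # tilt change in degrees

def q_learning(reward, transition, states, episodes=2000,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    random.seed(0)
    q = {(s, a): 0.0 for s in states for a in ACTIONS}
    for _ in range(episodes):
        s = random.choice(states)
        for _ in range(20):  # steps per episode
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a_: q[(s, a_)])
            s2 = transition(s, a)
            r = reward(s2)
            # Q-learning update rule
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, a_)] for a_ in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q

# Toy environment: the state is the tilt itself; best performance at 6 degrees.
states = [float(t) for t in range(13)]
clip = lambda t: min(max(t, 0.0), 12.0)
reward = lambda t: -abs(t - 6.0)          # hypothetical coverage/capacity score
transition = lambda t, a: clip(t + a)
q = q_learning(reward, transition, states)
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in states}
```

    FQL replaces the discrete state table with fuzzy membership functions over continuous measurements; the multi-agent setting studied in the thesis additionally couples each cell's reward to its neighbors' tilts.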

    Data-Intensive Computing in Smart Microgrids

    Microgrids have recently emerged as the building block of a smart grid, combining distributed renewable energy sources, energy storage devices, and load management in order to improve power system reliability, enhance sustainable development, and reduce carbon emissions. At the same time, rapid advancements in sensor and metering technologies, wireless and network communication, as well as cloud and fog computing are leading to the collection and accumulation of large amounts of data (e.g., device status data, energy generation data, consumption data). The application of big data analysis techniques (e.g., forecasting, classification, clustering) on such data can optimize power generation and operation in real time by accurately predicting electricity demands, discovering electricity consumption patterns, and developing dynamic pricing mechanisms. An efficient and intelligent analysis of the data will enable smart microgrids to detect and recover from failures quickly, respond to electricity demand swiftly, supply more reliable and economical energy, and enable customers to have more control over their energy use. Overall, data-intensive analytics can provide effective and efficient decision support for producers, operators, customers, and regulators in smart microgrids, in order to achieve holistic smart energy management, including energy generation, transmission, distribution, and demand-side management. This book contains an assortment of relevant novel research contributions that provide real-world applications of data-intensive analytics in smart grids and contribute to the dissemination of new ideas in this area.
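    As a minimal example of the forecasting techniques mentioned above (illustrative, not from the book; the load data are made up), a seasonal-naive baseline predicts each hour of demand from the reading one day earlier:

```python
# Seasonal-naive demand forecasting: predict each future hour with the value
# observed one seasonal period (24 h) earlier. Data below are hypothetical.

def seasonal_naive_forecast(load, period=24, horizon=24):
    """Forecast `horizon` future values from the last observed period."""
    return [load[-period + (h % period)] for h in range(horizon)]

def mean_absolute_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Two days of hourly load (kW) with a daily pattern.
day = [30, 28, 27, 27, 28, 35, 50, 65, 70, 68, 66, 64,
       63, 62, 64, 66, 70, 78, 80, 75, 60, 48, 40, 34]
history = day + [x + 1 for x in day]      # day 2 runs 1 kW higher
forecast = seasonal_naive_forecast(history)
# Against a third day equal to day 1 plus 2 kW, the error is exactly 1 kW/hour.
actual = [x + 2 for x in day]
assert mean_absolute_error(actual, forecast) == 1.0
```

    In practice such a baseline is the yardstick against which the more elaborate forecasting, classification, and clustering models are judged.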

    Multi-Drone-Cell 3D Trajectory Planning and Resource Allocation for Drone-Assisted Radio Access Networks

    Equipped with communication modules, drones can perform as drone-cells (DCs) that provide on-demand communication services to users in various scenarios, such as traffic monitoring, Internet of things (IoT) data collection, and temporary communication provisioning. As aerial relay nodes between terrestrial users and base stations (BSs), DCs are leveraged to extend wireless connections to uncovered users of radio access networks (RANs), which forms the drone-assisted RAN (DA-RAN). In DA-RAN, communication coverage, quality-of-service (QoS) performance and deployment flexibility can be improved due to the line-of-sight DC-to-ground (D2G) wireless links and the dynamic deployment capabilities of DCs. Considering the special mobility pattern, channel model, energy consumption, and other features of DCs, it is essential yet challenging to design the flying trajectories and resource allocation schemes for DA-RAN. Specifically, given the emerging D2G communication models and the dynamic deployment capability of DCs, new DC deployment strategies are required for DA-RAN. Moreover, to exploit the fully controlled mobility of DCs and promote user fairness, the flying trajectories of DCs and the D2G communications must be jointly optimized. Further, to serve high-mobility users (e.g. vehicular users) whose mobility patterns are hard to model, both the trajectory planning and resource allocation schemes for DA-RAN should be re-designed to adapt to the variations of terrestrial traffic. To address the above challenges, in this thesis we propose a DA-RAN architecture in which multiple DCs are leveraged to relay data between BSs and terrestrial users. Based on theoretical analyses of the D2G communication, DC energy consumption, and DC mobility features, the deployment, trajectory planning and communication resource allocation of multiple DCs are jointly investigated for both quasi-static and high-mobility users.
    We first analyze the communication coverage, drone-to-BS (D2B) backhaul link quality, and optimal flying height of the DC according to the state-of-the-art drone-to-user (D2U) and D2B channel models. We then formulate the multi-DC three-dimensional (3D) deployment problem with the objective of maximizing the ratio of effectively covered users while guaranteeing D2B link qualities. To solve the problem, a per-drone iterated particle swarm optimization (DI-PSO) algorithm is proposed, which avoids the large particle search space and the high constraint-violation probability of the pure PSO-based algorithm. Simulations show that the DI-PSO algorithm can achieve a higher coverage ratio with lower complexity compared with the pure PSO-based algorithm. Secondly, to improve overall network performance and the fairness between edge and central users, we design 3D trajectories for multiple DCs in DA-RAN. The multi-DC 3D trajectory planning and scheduling is formulated as a mixed integer non-linear programming (MINLP) problem with the objective of maximizing the average D2U throughput. To address the non-convexity and NP-hardness of the MINLP problem due to the 3D trajectory, we first decouple the MINLP problem into multiple integer linear programming and quasi-convex sub-problems in which user association, D2U communication scheduling, and the horizontal trajectories and flying heights of DCs are respectively optimized. Then, we design a multi-DC 3D trajectory planning and scheduling algorithm to solve the sub-problems iteratively based on the block coordinate descent (BCD) method. A k-means-based initial trajectory generation scheme and a search-based start slot scheduling scheme are also designed to improve network performance and control mutual interference between DCs, respectively.
    Compared with static DC deployment, the proposed trajectory planning scheme achieves a much lower average value and standard deviation of D2U pathloss, which indicates improved network throughput and user fairness. Thirdly, considering the highly dynamic and uncertain environment created by high-mobility users, we propose a hierarchical deep reinforcement learning (DRL) based multi-DC trajectory planning and resource allocation (HDRLTPRA) scheme for high-mobility users. The objective is to maximize the accumulative network throughput while satisfying user fairness, DC power consumption, and DC-to-ground link quality constraints. To address the high uncertainty of the environment, we decouple the multi-DC TPRA problem into two hierarchical sub-problems, i.e., the higher-level global trajectory planning sub-problem and the lower-level local TPRA sub-problem. First, the global trajectory planning sub-problem addresses trajectory planning for multiple DCs in the RAN over a long time period. To solve it, we propose a multi-agent DRL based global trajectory planning (MARL-GTP) algorithm in which the non-stationary state space caused by the multi-DC environment is addressed by the multi-agent fingerprint technique. Second, based on the global trajectory planning results, the local TPRA (LTPRA) sub-problem is investigated independently for each DC to control the movement and transmit power allocation based on real-time user traffic variations. A deep deterministic policy gradient based LTPRA (DDPG-LTPRA) algorithm is then proposed to solve the LTPRA sub-problem. With the two algorithms addressing the sub-problems at different decision granularities, the multi-DC TPRA problem can be resolved by the HDRLTPRA scheme. Simulation results show that a 40% network throughput improvement can be achieved by the proposed HDRLTPRA scheme over the non-learning-based TPRA scheme.
    In summary, we have investigated the multi-DC 3D deployment, trajectory planning and communication resource allocation in DA-RAN, considering different user mobility patterns. The proposed schemes and theoretical results should provide useful guidelines for future research on DC trajectory planning and resource allocation, as well as for the real deployment of DCs in complex environments with diversified users.
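    The PSO core that DI-PSO iterates per drone can be sketched as follows (a generic textbook PSO, not the DI-PSO variant itself; the sphere objective stands in for the coverage-ratio objective):

```python
# Standard particle swarm optimization: each particle is pulled toward its own
# best position (cognitive term) and the swarm's best (social term).
import random

def pso(cost, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # per-particle best position
    gbest = min(pbest, key=cost)[:]      # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive + social velocity update
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy stand-in objective: sphere function, minimum at the origin.
sphere = lambda x: sum(v * v for v in x)
best = pso(sphere, dim=2)
```

    The DI-PSO idea of iterating per drone shrinks the search space: rather than one particle encoding all DC positions at once, each drone's placement is optimized in turn against the others held fixed.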

    A Mechanism Design Approach to Bandwidth Allocation in Tactical Data Networks

    The defense sector is undergoing a phase of rapid technological advancement in pursuit of its goal of information superiority. This goal depends on a large network of complex interconnected systems - sensors, weapons, soldiers - linked through a maze of heterogeneous networks. The sheer scale and size of these networks prompt behaviors that go beyond conglomerations of systems or 'systems-of-systems'. The lack of a central locus and the disjointed, competing interests among large clusters of systems make this characteristic of an Ultra Large Scale (ULS) system. These traits of ULS systems challenge and undermine the fundamental assumptions of today's software and system engineering approaches. In the absence of a centralized controller, it is likely that system users may behave opportunistically to meet their local mission requirements rather than the objectives of the system as a whole. In these settings, methods and tools based on economics and game theory (like Mechanism Design) are likely to play an important role in achieving globally optimal behavior when the participants behave selfishly. Against this background, this thesis explores the potential of using computational mechanisms to govern the behavior of ultra-large-scale systems and achieve an optimal allocation of constrained computational resources. Our research focuses on improving the quality and accuracy of the common operating picture through the efficient allocation of bandwidth in tactical data networks among self-interested actors, who may resort to strategic behavior dictated by self-interest. This research problem presents the kind of challenges we anticipate when dealing with ULS systems and, by addressing it, we hope to develop a methodology that will be applicable to ULS systems of the future.
    We build upon previous works that investigate the application of auction-based mechanism design to dynamic, performance-critical and resource-constrained systems of interest to the defense community. In this thesis, we consider a scenario where a number of military platforms have been tasked with the goal of detecting and tracking targets. The sensors onboard a military platform have a partial and inaccurate view of the operating picture and need to make use of data transmitted from neighboring sensors in order to improve the accuracy of their own measurements. The communication takes place over tactical data networks with scarce bandwidth. The problem is compounded by the possibility that the local goals of military platforms might not be aligned with the global system goal. Such a scenario might occur in multi-flag, multi-platform military exercises, where the military commanders of each platform are more concerned with the well-being of their own platform than of others. Therefore there is a need to design a mechanism that efficiently allocates the flow of data within the network to ensure that the resulting global performance maximizes the information gain of the entire system, despite the self-interested actions of the individual actors. We propose a two-stage mechanism based on modified strictly proper scoring rules with unknown costs, whereby multiple sensor platforms can provide estimates of limited precision and the center does not have to rely on knowledge of the actual outcome when calculating payments. In particular, our work emphasizes the importance of applying robust optimization techniques to deal with the uncertainty in the operating environment. We apply our robust-optimization-based scoring rules algorithm to an agent-based model of the combat tactical data network and analyze the results obtained.
    Through this work we hope to demonstrate how mechanism design, perched at the intersection of game theory and microeconomics, is aptly suited to address one set of challenges of the ULS system paradigm - challenges not amenable to traditional system engineering approaches.
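    The truth-inducing property of strictly proper scoring rules, on which the proposed mechanism builds, can be illustrated with the classic Brier (quadratic) score (a generic example, not the thesis's modified two-stage rule):

```python
# Under a strictly proper scoring rule, an agent's expected payment is
# maximized by reporting its true belief, so truthful reporting is optimal.

def brier_score(report, outcome):
    """Quadratic scoring rule for a binary event: higher is better.
    `report` is the reported probability that the event occurs (outcome=1)."""
    return 1.0 - (outcome - report) ** 2

def expected_score(true_belief, report):
    """Expected payment when the event truly occurs with prob `true_belief`."""
    return (true_belief * brier_score(report, 1)
            + (1 - true_belief) * brier_score(report, 0))

# Sweep all reports on a grid: the maximizer is the true belief itself.
true_p = 0.7
reports = [i / 100 for i in range(101)]
best_report = max(reports, key=lambda r: expected_score(true_p, r))
assert best_report == 0.7          # truthful report maximizes expected score
assert expected_score(true_p, 0.7) > expected_score(true_p, 0.9)
```

    The thesis's harder setting removes the center's access to the actual outcome, which is why the rule must be modified; the incentive logic above is the starting point.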

    Novel User-Centric Architectures for Future Generation Cellular Networks: Design, Analysis and Performance Optimization

    Ambitious targets for aggregate throughput, energy efficiency (EE) and ubiquitous user experience are propelling the advent of ultra-dense networks. Inter-cell interference and high energy consumption in an ultra-dense network are the prime hindering factors in pursuit of these goals. To address this challenge, we investigate the idea of transforming network design from being base station-centric to user-centric. To this end, we develop a mathematical framework and analyze multiple variants of user-centric networks with the help of advanced scientific tools such as stochastic geometry, game theory, optimization theory and deep neural networks. We first present a user-centric radio access network (RAN) design and then propose novel base station association mechanisms that form virtual dedicated cells around users scheduled for downlink. The design question that arises is: what should the ideal size of the dedicated regions around scheduled users be? To answer this question, we follow a stochastic geometry based approach to quantify the area spectral efficiency (ASE) and energy efficiency (EE) of a user-centric Cloud RAN architecture. Observing that the two efficiency metrics have conflicting optimal user-centric cell sizes, we propose a game-theoretic self-organizing network (GT-SON) framework that can orchestrate the network between ASE- and EE-focused operational modes in real time, in response to changes in network conditions and the operator's revenue model, to achieve a Pareto optimal solution. The designed model is shown to outperform a base station-centric design in terms of both ASE and EE in dense deployment scenarios. Taking this user-centric approach as a baseline, we improve the ASE and EE performance by introducing flexibility in the dimensions of the user-centric regions as a function of the data requirement of each device.
    So instead of optimizing the network-wide ASE or EE, each user device competes for a user-centric region based on its data requirements. This competition is modeled via an evolutionary game and a Vickrey-Clarke-Groves auction. The data-requirement-based flexibility in the user-centric RAN architecture not only improves the ASE and EE, but also reduces the scheduling wait time per user. Offloading dense user hotspots to short-range mmWave cells promises to meet the enhanced mobile broadband requirements of 5G and beyond. To investigate this, we integrate the three key enablers, i.e. user-centric virtual cell design, ultra-dense deployment and mmWave communication, into a multi-tier Stienen-geometry-based user-centric architecture. Taking into account the characteristics of the mmWave propagation channel, such as blockage and fading, we develop a statistical framework for deriving the coverage probability of an arbitrary user equipment scheduled within the proposed architecture. A key advantage observed through this architecture is a significant reduction in scheduling latency as compared to the baseline user-centric model. Furthermore, the interplay between certain system design parameters was found to orchestrate the ASE-EE tradeoff within the proposed network design. We extend this work by framing a stochastic optimization problem over the design parameters for a Pareto optimal ASE-EE tradeoff with random placements of mobile users, macro base stations and mmWave cells within the network. To solve this optimization problem, we follow a deep learning approach to estimate optimal design parameters in real-time complexity. Our results show that if the deep learning model is trained with sufficient data and tuned appropriately, it yields near-optimal performance while eliminating the long processing times needed for system-wide optimization.
    The contributions of this dissertation have the potential to cause a paradigm shift from the reactive cell-centric network design to an agile user-centric design that enables real-time optimization capabilities, ubiquitous user experience, higher system capacity and improved network-wide energy efficiency.
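    A minimal Monte Carlo version of such a stochastic geometry coverage analysis might look as follows (an illustrative sketch under standard assumptions of nearest-BS association, Rayleigh fading and an interference-limited network, not the dissertation's actual model):

```python
# Monte Carlo coverage probability: base stations scattered uniformly in a
# disc around the user (a PPP conditioned on its count), nearest-BS
# association, Rayleigh fading, power-law path loss.
import math
import random

def coverage_probability(n_bs=50, radius=1000.0, alpha=4.0,
                         theta=1.0, trials=4000):
    """Estimate P[SIR > theta] for a user at the centre of the disc."""
    random.seed(0)
    covered = 0
    for _ in range(trials):
        # Uniform positions in a disc: distance r = R * sqrt(U).
        dists = sorted(math.sqrt(random.random()) * radius for _ in range(n_bs))
        # Rayleigh fading: exponentially distributed channel power gains.
        gains = [random.expovariate(1.0) for _ in dists]
        signal = gains[0] * dists[0] ** (-alpha)          # nearest BS serves
        interference = sum(g * d ** (-alpha)
                           for g, d in zip(gains[1:], dists[1:]))
        if signal > theta * interference:
            covered += 1
    return covered / trials

p = coverage_probability()   # roughly 0.56 for theta = 0 dB, alpha = 4
```

    The ASE then follows as (cell density) x (coverage probability) x log2(1 + theta); sweeping the user-centric region size through such a model is what exposes the conflicting ASE and EE optima described above.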

    Mathematical optimization techniques for demand management in smart grids

    The electricity supply industry has been facing significant challenges in terms of meeting the projected demand for energy, environmental issues, security, reliability and the integration of renewable energy. Currently, most power grids are based on decades-old vertical hierarchical infrastructures, where electric power flows in one direction from the generators to the consumer side and grid monitoring information is handled only at the operation side. It is generally believed that a fundamental evolution of the electric power generation and supply system is required to make grids more reliable, secure and efficient. This is generally recognised as the development of smart grids. Demand management is the key to the operational efficiency and reliability of smart grids. Facilitated by two-way information flow and various optimization mechanisms, operators benefit from real-time dynamic load monitoring and control, while consumers benefit from optimised use of energy. In this thesis, various mathematical optimization techniques and game-theoretic frameworks have been proposed for demand management, in order to achieve efficient home energy consumption scheduling and optimal electric vehicle (EV) charging. A consumption scheduling technique is proposed to minimise the peak consumption load. The proposed technique is able to schedule the optimal operation time of appliances according to the power consumption patterns of the individual appliances. A game-theoretic consumption optimization framework is proposed to manage the scheduling of appliances of multiple residential consumers in a decentralised manner, with the aim of achieving the minimum cost of energy for consumers. The optimization incorporates the integration of locally generated and stored renewable energy in order to minimise dependency on conventional energy.
    In addition to appliance scheduling, a mean field game theoretic optimization framework is proposed for electric vehicles to manage their charging. In particular, the optimization considers a charging station where a large number of EVs are charged simultaneously during a flexible period of time. The proposed technique provides the EVs with an optimal charging strategy in order to minimise the cost of charging. The performance of all the proposed techniques has been demonstrated using MATLAB-based simulation studies.
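    A minimal sketch of peak-load-minimizing appliance scheduling (a greedy illustration, not the thesis's technique; the appliances and power profiles are hypothetical):

```python
# Greedy peak-shaving scheduler: place each shiftable appliance at the start
# hour that keeps the resulting aggregate peak load as low as possible.

def schedule_appliances(appliances, horizon=24):
    """appliances: list of (name, hourly_kw_profile) to place in a day."""
    load = [0.0] * horizon
    schedule = {}
    # Place larger appliances first: they constrain the peak the most.
    for name, profile in sorted(appliances, key=lambda a: -sum(a[1])):
        best_start, best_peak = None, float("inf")
        for start in range(horizon - len(profile) + 1):
            trial = load[:]
            for i, p in enumerate(profile):
                trial[start + i] += p
            if max(trial) < best_peak:
                best_start, best_peak = start, max(trial)
        for i, p in enumerate(profile):
            load[best_start + i] += p
        schedule[name] = best_start
    return schedule, max(load)

# Hypothetical shiftable loads: (name, hourly power profile in kW).
appliances = [("washer", [2.0, 2.0]),
              ("dryer", [3.0]),
              ("ev_charger", [3.0, 3.0, 3.0])]
schedule, peak = schedule_appliances(appliances)
assert peak == 3.0   # greedy placement avoids any overlap here
```

    The thesis's game-theoretic framework decentralizes this kind of decision: each consumer schedules against a price signal that reflects the aggregate load, rather than a single planner placing every appliance.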
