
    Towards Fast-Convergence, Low-Delay and Low-Complexity Network Optimization

    Distributed network optimization has been studied for well over a decade. However, we still do not have a good idea of how to design schemes that simultaneously provide good performance across the dimensions of utility optimality, convergence speed, and delay. To address these challenges, in this paper we propose a new algorithmic framework in which all of these metrics approach optimality. The salient features of our new algorithm are three-fold: (i) fast convergence: it converges in only $O(\log(1/\epsilon))$ iterations, the fastest rate among existing algorithms; (ii) low delay: it guarantees optimal utility with finite queue length; (iii) simple implementation: the control variables of the algorithm are based on virtual queues that do not require maintaining per-flow information. The new technique builds on an inexact Uzawa method within the Alternating Direction Method of Multipliers (ADMM), and provides a new theoretical path to proving a global and linear convergence rate for such a method without requiring the full-rank assumption on the constraint matrix.
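
    For orientation only, and not as the paper's actual scheme, the classical scaled-form ADMM iteration that inexact-Uzawa variants build on looks as follows for $\min_{x,z} f(x)+g(z)$ subject to $Ax+Bz=c$, with penalty parameter $\rho>0$ and scaled dual variable $u$:

        $x^{k+1} = \arg\min_x \; f(x) + \tfrac{\rho}{2}\|Ax + Bz^k - c + u^k\|^2$
        $z^{k+1} = \arg\min_z \; g(z) + \tfrac{\rho}{2}\|Ax^{k+1} + Bz - c + u^k\|^2$
        $u^{k+1} = u^k + Ax^{k+1} + Bz^{k+1} - c$

    An inexact Uzawa variant replaces the exact $x$-minimization with a single linearized (proximal-gradient) step, which keeps the per-iteration cost low; the contribution claimed in the abstract is a proof of global linear convergence for such a scheme without a full-rank constraint matrix.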

    Learning Aided Optimization for Energy Harvesting Devices with Outdated State Information

    This paper considers utility-optimal power control for energy harvesting wireless devices with a finite-capacity battery. The distribution of the underlying wireless environment and harvestable energy is unknown, and only outdated system state information is available at the device controller. This scenario shares similarities with Lyapunov opportunistic optimization and online learning but differs from both. By a novel combination of Zinkevich's online gradient learning technique and the drift-plus-penalty technique from Lyapunov opportunistic optimization, this paper proposes a learning-aided algorithm that achieves utility within $O(\epsilon)$ of optimal, for any desired $\epsilon>0$, using a battery with $O(1/\epsilon)$ capacity. The proposed algorithm has low complexity and makes power investment decisions based on system history, without requiring knowledge of the system state or its probability distribution. Comment: This version extends v1 (our INFOCOM 2018 paper): (1) adds a new section (Section V) studying the case where utility functions are non-i.i.d. and arbitrarily varying; (2) adds more simulation experiments. The current version is published in IEEE/ACM Transactions on Networking.
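
    The abstract names two ingredients: Zinkevich-style online gradient steps and a Lyapunov drift-plus-penalty correction. The toy sketch below combines them for a made-up single-device setting (concave utility $w_t\log(1+p)$, uniform energy arrivals, and all constants are illustrative placeholders); it is not the paper's algorithm or its parameter choices.

        import numpy as np

        rng = np.random.default_rng(0)

        T = 10_000          # time slots
        B_MAX = 50.0        # battery capacity (illustrative)
        P_MAX = 2.0         # per-slot power limit (illustrative)
        V = 20.0            # drift-plus-penalty trade-off weight (illustrative)
        ETA = 0.05          # online-gradient step size (illustrative)

        battery = B_MAX / 2.0   # current stored energy
        p = P_MAX / 2.0         # running power decision updated by online gradient
        total_utility = 0.0

        for t in range(T):
            # Outdated information: only the previous slot's utility weight is known.
            w_prev = rng.uniform(0.5, 1.5)

            # Online gradient (Zinkevich) step on the previous concave utility
            # u_prev(p) = w_prev * log(1 + p), projected back onto [0, P_MAX].
            grad = w_prev / (1.0 + p)
            p = np.clip(p + ETA * grad, 0.0, P_MAX)

            # Drift-plus-penalty style correction: spend less when the battery is low,
            # more when it is full (a crude stand-in for the paper's virtual queue).
            q = B_MAX / 2.0 - battery
            p_t = np.clip(p - q / V, 0.0, min(P_MAX, battery))

            # The environment reveals this slot's weight and energy arrival afterwards.
            w_t = rng.uniform(0.5, 1.5)
            e_t = rng.uniform(0.0, 1.0)

            total_utility += w_t * np.log1p(p_t)
            battery = min(B_MAX, battery - p_t + e_t)

        print("average utility per slot:", total_utility / T)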

    Timely-Throughput Optimal Scheduling with Prediction

    Motivated by the increasing importance of providing delay-guaranteed services in general computing and communication systems, and the recent wide adoption of learning and prediction in network control, in this work we consider a general stochastic single-server multi-user system and investigate the fundamental benefit of predictive scheduling in improving timely-throughput, i.e., the rate of packets that are delivered to their destinations before their deadlines. By adopting an error-rate-based prediction model, we first derive a Markov decision process (MDP) solution to optimize the timely-throughput objective subject to an average resource consumption constraint. Based on a packet-level decomposition of the MDP, we explicitly characterize the optimal scheduling policy and rigorously quantify the timely-throughput improvement due to predictive service, which scales as $\Theta\left(p\left[C_{1}\frac{a-a_{\max}q}{p-q}\rho^{\tau}+C_{2}\left(1-\frac{1}{p}\right)\right]\left(1-\rho^{D}\right)\right)$, where $a, a_{\max}, \rho\in(0,1)$, $C_1>0$, $C_2\ge 0$ are constants, $p$ is the true-positive rate in prediction, $q$ is the false-negative rate, $\tau$ is the packet deadline, and $D$ is the prediction window size. We also conduct extensive simulations to validate our theoretical findings. Our results provide novel insights into how prediction and system parameters impact performance and provide useful guidelines for designing predictive low-latency control algorithms. Comment: 14 pages, 7 figures.
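
    To get a feel for the reported scaling law, a purely illustrative evaluation of the $\Theta(\cdot)$ expression could look like the snippet below (the constants $a$, $a_{\max}$, $\rho$, $C_1$, $C_2$ are made-up placeholders, not values from the paper; only $p$, $q$, $\tau$, $D$ have the meanings given above).

        import numpy as np

        def improvement_scaling(p, q, D, tau, a=0.5, a_max=0.9, rho=0.8, C1=1.0, C2=0.5):
            """Evaluate the Theta(.) expression from the abstract for illustrative
            placeholder constants; only the qualitative trend is meaningful."""
            return p * (C1 * (a - a_max * q) / (p - q) * rho**tau
                        + C2 * (1.0 - 1.0 / p)) * (1.0 - rho**D)

        # Better prediction (higher true-positive rate p, larger window D) should help:
        for p, D in [(0.6, 1), (0.8, 2), (0.95, 5)]:
            print(f"p={p:.2f}, D={D}: scaling ~ {improvement_scaling(p, q=0.1, D=D, tau=3):.3f}")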

    Resource Management and Backhaul Routing in Millimeter-Wave IAB Networks Using Deep Reinforcement Learning

    Thesis (PhD (Electronic Engineering)), University of Pretoria, 2023. The increased densification of wireless networks has led to the development of integrated access and backhaul (IAB) networks. In this thesis, deep reinforcement learning was applied to solve resource management and backhaul routing problems in millimeter-wave IAB networks. First, a resource management solution that aims to avoid congestion for access users in an IAB network was proposed and implemented; it applies deep reinforcement learning to learn a policy that achieves effective resource allocation while minimizing congestion and satisfying user requirements. In addition, a deep reinforcement learning-based backhaul adaptation strategy that leverages a recursive discrete choice model was implemented in simulation. Simulation results comparing the proposed algorithms with two baseline methods show that the proposed scheme provides better throughput and delay performance. Sentech Chair in Broadband Wireless Multimedia Communications. Electrical, Electronic and Computer Engineering.
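
    The abstract does not specify the state, action, or reward design, so the following is only a generic toy sketch of reinforcement-learning-based resource allocation (tabular Q-learning on discretized user loads), not the deep-RL agent or the millimeter-wave IAB simulator used in the thesis.

        import numpy as np

        rng = np.random.default_rng(1)

        N_USERS = 3          # toy setting: one resource block per step goes to one user
        N_LOAD_LEVELS = 4    # each user's queue load discretized into levels 0..3
        GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1

        def state_index(loads):
            # Encode the tuple of discretized loads as a single table index.
            idx = 0
            for level in loads:
                idx = idx * N_LOAD_LEVELS + level
            return idx

        Q = np.zeros((N_LOAD_LEVELS ** N_USERS, N_USERS))
        loads = np.array([1, 1, 1])

        for step in range(50_000):
            s = state_index(loads)
            # Epsilon-greedy action: which user gets the resource block.
            a = rng.integers(N_USERS) if rng.random() < EPS else int(np.argmax(Q[s]))

            # Toy dynamics: serving a user lowers its load; others may receive traffic.
            new_loads = loads.copy()
            new_loads[a] = max(0, new_loads[a] - 1)
            arrivals = rng.random(N_USERS) < 0.3
            new_loads = np.minimum(N_LOAD_LEVELS - 1, new_loads + arrivals)

            # Reward: negative total load, so the agent learns to avoid congestion.
            r = -float(new_loads.sum())
            s2 = state_index(new_loads)
            Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s2]) - Q[s, a])
            loads = new_loads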

    Resource management for cost-effective cloud and edge systems

    With the boom in Internet-based and cloud/edge computing applications and services, datacenters hosting these services have become ubiquitous in every sector of our economy, creating tremendous research opportunities. Specifically, in cloud computing, all data are gathered and processed in centralized cloud datacenters, whereas in edge computing, the frontier of data and services is pushed away from the centralized cloud to the edge of the network. By fusing edge computing with cloud computing, Internet companies and end users can benefit from their respective merits: the abundant computation and storage resources of cloud computing and the data-gathering potential of edge computing. However, resource management in cloud and edge systems is complicated and challenging due to the large scale of cloud datacenters, diverse interconnected resource types, unpredictable workloads, and a range of performance objectives. This necessitates the systematic modeling of cloud and edge systems to achieve the desired performance objectives.
    This dissertation presents holistic system modeling and a novel solution methodology to effectively solve the optimization problems formulated in three cloud and edge architectures: 1) cloud computing in colocation datacenters; 2) cloud computing in geographically distributed datacenters; and 3) UAV-enabled mobile edge computing. First, we study resource management with the goal of overall cost minimization in the context of cloud computing systems. A cooperative game is formulated to model the scenario where a multi-tenant colocation datacenter collectively procures electricity in the wholesale electricity market. Then, a two-stage stochastic program is formulated to model the scenario where geographically distributed datacenters dispatch workload and procure electricity in multi-timescale electricity markets. Last, we extend our focus to joint task offloading and resource management with the goal of overall cost minimization in the context of edge computing systems, where edge nodes with computing capabilities are deployed in proximity to end users. A nonconvex optimization problem is formulated for the UAV-enabled mobile edge computing system with the goal of minimizing both the energy consumption for computation and task offloading and the system response delay. Furthermore, a novel hybrid algorithm that unifies differential evolution and successive convex approximation is proposed to efficiently solve the problem with improved performance.
    This dissertation addresses several fundamental issues related to resource management in cloud and edge computing systems and lays the groundwork for further in-depth investigations into improving cost-effectiveness. The advanced modeling and efficient algorithms developed in this research enable system operators to make optimal and strategic decisions in resource allocation and task offloading for cost savings.
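
    As a rough, self-contained illustration of the differential-evolution ingredient mentioned above, the sketch below runs a standard DE/rand/1 loop on a toy nonconvex objective; it is not the dissertation's UAV-MEC cost model and omits the successive-convex-approximation refinement step.

        import numpy as np

        rng = np.random.default_rng(2)

        def cost(x):
            # Toy nonconvex stand-in for a weighted energy + delay objective.
            return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

        DIM, POP, GENS = 6, 30, 300
        F, CR = 0.7, 0.9                      # mutation factor and crossover rate
        LOW, HIGH = -5.0, 5.0

        pop = rng.uniform(LOW, HIGH, size=(POP, DIM))
        fit = np.array([cost(x) for x in pop])

        for _ in range(GENS):
            for i in range(POP):
                # DE/rand/1 mutation: combine three distinct other population members.
                a, b, c = pop[rng.choice([j for j in range(POP) if j != i], 3, replace=False)]
                mutant = np.clip(a + F * (b - c), LOW, HIGH)
                # Binomial crossover, keeping at least one mutant coordinate.
                mask = rng.random(DIM) < CR
                mask[rng.integers(DIM)] = True
                trial = np.where(mask, mutant, pop[i])
                # Greedy selection.
                f_trial = cost(trial)
                if f_trial < fit[i]:
                    pop[i], fit[i] = trial, f_trial

        print("best toy cost found:", fit.min())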