
    Learning Optimal Resource Allocations In Wireless Systems

    The goal of this thesis is to develop a learning framework for solving resource allocation problems in wireless systems. Resource allocation problems are as widespread as they are challenging to solve, in part due to the difficulty of finding accurate models for these complex systems. While both exact and heuristic approaches have been developed for select problems of interest, as these systems grow in complexity to support applications in the Internet of Things (IoT) and autonomous behavior, it becomes necessary to have a more generic solution framework. The use of statistical machine learning is a natural choice, not only for its ability to develop solutions without relying on models, but also because a resource allocation problem takes the form of a statistical regression problem. The second and third chapters of this thesis begin by presenting initial applications of machine learning ideas to solve problems in wireless control systems. Wireless control systems are a particular class of resource allocation problems that are a fundamental element of IoT applications. In Chapter 2, we consider the setting of controlling plants over non-stationary wireless channels. We draw a connection between the resource allocation problem and empirical risk minimization to develop convex optimization algorithms that can adapt to non-stationarities in the wireless channel. In Chapter 3, we consider the setting of controlling plants over a latency-constrained wireless channel. For this application, we utilize ideas of control-awareness in wireless scheduling to derive an assignment problem that determines optimal, latency-aware schedules. The core framework of the thesis is then presented in the fourth and fifth chapters. In Chapter 4, we formally draw a connection between a generic class of wireless resource allocation problems and constrained statistical learning, or regression. This connection inspires the use of machine learning models to parameterize the resource allocation policy. To train the parameters of the learning model, we first establish a bounded duality gap result for the constrained optimization problem and subsequently present a primal-dual learning algorithm. While any learning parameterization can be used, in this thesis we focus our attention on deep neural networks (DNNs). While fully connected networks can represent many functions, they are impractical to train for large-scale systems. In Chapter 5, we tackle the complementary problem in our wireless framework of developing particular learning parameterizations, or deep learning architectures, that are well suited for representing wireless resource allocation policies. Due to the graph structure inherent in wireless networks, we propose the use of graph convolutional neural networks to parameterize the resource allocation policies. Before concluding remarks and future work, in Chapter 6 we present initial results on applying the learning framework of the previous two chapters to the setting of scheduling transmissions for low-latency wireless control systems. We formulate a control-aware scheduling problem that takes the form of the constrained learning problem and apply the primal-dual learning algorithm to train the graph neural network.
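
    To make the primal-dual learning approach of Chapters 4 and 5 concrete, the sketch below trains a small fully connected policy (standing in for the thesis's DNN and graph neural network parameterizations) on a toy constrained allocation problem. The log-rate utility, the power budget P_max, the uniform channel model, and all hyperparameters are illustrative assumptions rather than the thesis setup; PyTorch is used only for automatic differentiation.

```python
# Hedged sketch of a primal-dual learning loop for a constrained allocation
# problem: maximize E[ sum_i log(1 + h_i * p_i(h)) ] subject to
# E[ sum_i p_i(h) ] <= P_max, with the policy p(h; theta) parameterized by a
# small neural network. Problem data and the utility are illustrative.
import torch
import torch.nn as nn

n_users, P_max = 8, 4.0
policy = nn.Sequential(nn.Linear(n_users, 32), nn.ReLU(),
                       nn.Linear(32, n_users), nn.Softplus())
theta_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
lam = torch.tensor(0.0)                   # dual variable for the power constraint
lr_dual = 1e-2

for it in range(2000):
    h = torch.rand(128, n_users)          # fading realizations (illustrative model)
    p = policy(h)                         # allocated powers
    utility = torch.log1p(h * p).sum(dim=1).mean()
    power_slack = P_max - p.sum(dim=1).mean()
    lagrangian = utility + lam.detach() * power_slack

    theta_opt.zero_grad()                 # primal step: ascend the Lagrangian in theta
    (-lagrangian).backward()
    theta_opt.step()

    with torch.no_grad():                 # dual step: descend in lambda, project onto R_+
        lam = torch.clamp(lam - lr_dual * power_slack, min=0.0)
```

    The multiplier grows when the expected power budget is violated and the policy parameters follow the Lagrangian gradient, which is the alternating primal-dual structure described in the abstract.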

    Decentralized Gradient-Free Methods for Stochastic Non-Smooth Non-Convex Optimization

    We consider decentralized gradient-free optimization for minimizing Lipschitz continuous functions that satisfy neither smoothness nor convexity assumptions. We propose two novel gradient-free algorithms, the Decentralized Gradient-Free Method (DGFM) and its variant, the Decentralized Gradient-Free Method$^+$ (DGFM$^+$). Based on the techniques of randomized smoothing and gradient tracking, DGFM requires the computation of the zeroth-order oracle of a single sample in each iteration, making it less demanding in terms of computational resources for individual computing nodes. Theoretically, DGFM achieves a complexity of $\mathcal{O}(d^{3/2}\delta^{-1}\varepsilon^{-4})$ for obtaining a $(\delta,\varepsilon)$-Goldstein stationary point. DGFM$^+$, an advanced version of DGFM, incorporates variance reduction to further improve the convergence behavior. It samples a mini-batch at each iteration and periodically draws a larger batch of data, which improves the complexity to $\mathcal{O}(d^{3/2}\delta^{-1}\varepsilon^{-3})$. Moreover, experimental results underscore the empirical advantages of our proposed algorithms when applied to real-world datasets.
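
    A minimal numpy sketch of the two ingredients named above, randomized smoothing and gradient tracking, is given below; it is not the paper's reference implementation. The two-point zeroth-order estimate along a single random direction, the three-node topology with a doubly stochastic mixing matrix, and the non-smooth least-absolute-deviation objective are all illustrative assumptions.

```python
# Hedged sketch: a randomized-smoothing zeroth-order gradient estimate combined
# with a gradient-tracking consensus update over a doubly stochastic matrix W.
import numpy as np

def zo_grad(f, x, delta, rng):
    """Two-point zeroth-order estimate of the randomized-smoothing gradient."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                      # direction uniform on the unit sphere
    return (x.size / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

def dgfm_like(local_fs, W, x0, delta=1e-2, step=1e-2, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    n = len(local_fs)
    x = np.tile(x0, (n, 1))                     # one iterate per node
    g = np.array([zo_grad(f, x[i], delta, rng) for i, f in enumerate(local_fs)])
    y = g.copy()                                # gradient-tracking variable
    for _ in range(iters):
        x = W @ x - step * y                    # mix with neighbors, then step
        g_new = np.array([zo_grad(f, x[i], delta, rng)
                          for i, f in enumerate(local_fs)])
        y = W @ y + g_new - g                   # track the average zeroth-order gradient
        g = g_new
    return x.mean(axis=0)

# Example: three nodes minimizing a non-smooth sum of |<a_i, x> - b_i| terms.
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
rng = np.random.default_rng(1)
A, b = rng.standard_normal((3, 4)), rng.standard_normal(3)
fs = [lambda x, i=i: abs(A[i] @ x - b[i]) for i in range(3)]
print(dgfm_like(fs, W, np.zeros(4)))
```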

    Distributed Algorithms for the Optimal Design of Wireless Networks

    This thesis studies the problem of optimal design of wireless networks whose operating points, such as powers, routes and channel capacities, are solutions of an optimization problem. Unlike existing work that relies on global channel state information (CSI), we focus on distributed algorithms for the optimal design of wireless networks in which terminals only have access to locally available CSI. To begin with, we study random access channels where terminals acquire instantaneous local CSI but do not know the probability distribution of the channel. We develop adaptive scheduling and power control algorithms and show that the proposed algorithm almost surely maximizes a proportional fair utility while adhering to instantaneous and average power constraints. These results are then extended to random access multihop wireless networks. In this case, the associated optimization problem is neither convex nor amenable to distributed implementation, so a problem approximation is introduced which allows us to decompose it into local subproblems in the dual domain. The solution method based on stochastic subgradient descent leads to an architecture composed of layers and layer interfaces. With a limited amount of message passing among terminals and a small computational cost, the proposed algorithm converges almost surely in an ergodic sense. Next, we study optimal transmission over wireless channels with imperfect CSI available at the transmitter side. To reduce the likelihood of packet losses due to the mismatch between channel estimates and actual channel values, a backoff function is introduced to enforce the selection of more conservative coding modes. Joint determination of optimal power allocations and backoff functions is a nonconvex stochastic optimization problem with infinitely many variables. Exploiting the resulting equivalence between primal and dual problems, we show that optimal power allocations and channel backoff functions are uniquely determined by optimal dual variables and develop algorithms to find the optimal solution. Finally, we study the optimal design of wireless networks from a game-theoretic perspective. In particular, we formulate the problem as a Bayesian game in which each terminal maximizes its expected utility based on its belief about the network state. We show that optimal solutions for two special cases, namely FDMA and RA, are equilibrium points of the game. Therefore, the proposed game-theoretic formulation can be regarded as a general framework for the optimal design of wireless networks. Furthermore, cognitive access algorithms are developed to approximately find solutions to the game.
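
    The dual-decomposition pattern described above can be illustrated with a short sketch (not the thesis algorithm): a shared average-power budget couples otherwise independent terminals, each terminal solves a closed-form local subproblem using only its instantaneous local CSI, and the coupling price is updated by a stochastic dual subgradient step. The log(1 + h*p) utility, the exponential fading model, the peak power cap, and the step-size rule are illustrative assumptions.

```python
# Hedged sketch of stochastic dual decomposition with local CSI only.
import numpy as np

n_terminals, P_avg, p_max, lr = 5, 1.0, 10.0, 0.05
lam = 1.0                                   # dual price on the average-power budget
rng = np.random.default_rng(0)

def local_power(h, lam):
    # argmax_{0 <= p <= p_max} log(1 + h*p) - lam*p, solved in closed form locally
    if lam <= 0.0:
        return p_max
    return float(np.clip(1.0 / lam - 1.0 / h, 0.0, p_max))

for t in range(1, 5001):
    h = rng.exponential(1.0, n_terminals)   # instantaneous local CSI at each terminal
    p = np.array([local_power(h_i, lam) for h_i in h])
    # stochastic dual subgradient: power actually used minus the shared budget
    lam = max(0.0, lam + (lr / np.sqrt(t)) * (p.mean() - P_avg))

print("dual price:", round(lam, 3), "average power in last slot:", round(p.mean(), 3))
```

    Only the scalar price needs to be exchanged, which mirrors the limited message passing emphasized in the abstract.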

    A Bandit Learning Method for Continuous Games under Feedback Delays with Residual Pseudo-Gradient Estimate

    Learning in multi-player games can model a large variety of practical scenarios in which each player seeks to optimize its own local objective function, which in turn depends on the actions taken by the other players. Motivated by the frequent absence of first-order information, such as partial gradients, in solving local optimization problems, and by the prevalence of asynchronicity and feedback delays in multi-agent systems, we introduce a bandit learning algorithm that integrates mirror descent, residual pseudo-gradient estimates, and a priority-based feedback utilization strategy to contend with these challenges. We establish that for pseudo-monotone plus games, the actual sequences of play generated by the proposed algorithm converge almost surely to critical points. Compared with the existing method, the proposed algorithm yields more consistent estimates with less variation and allows for more aggressive choices of parameters. Finally, we illustrate the validity of the proposed algorithm through a thermal load management problem for building complexes.
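
    As a heavily simplified illustration of the payoff-only feedback loop described above (and not the paper's algorithm), the sketch below has each player in a two-player quadratic game perturb its own action, observe only its realized cost, form a residual-style one-point gradient estimate from the difference of consecutive observations, and take a projected Euclidean mirror-descent step. The game, the shrinking exploration radius, the step sizes, and the box projection are illustrative assumptions, and the paper's handling of delayed and asynchronous feedback is omitted.

```python
# Hedged sketch of bandit play with residual one-point gradient estimates.
import numpy as np

def costs(x):
    # A strongly monotone two-player game with Nash equilibrium at the origin.
    return np.array([x[0]**2 + x[0]*x[1], x[1]**2 - x[0]*x[1]])

rng = np.random.default_rng(0)
x = np.array([1.0, -1.0])                    # players' actions
prev_obs = costs(x)                          # previous payoffs (for the residual)
radius = 2.0                                 # players project onto [-radius, radius]

for t in range(1, 20001):
    delta = 0.1 / t**0.25                    # shrinking exploration radius
    step = 0.5 / t**0.75                     # mirror-descent step size
    u = rng.choice([-1.0, 1.0], size=2)      # per-player perturbation directions
    obs = costs(x + delta * u)               # each player sees only its own cost
    grad_est = (obs - prev_obs) / delta * u  # residual-style pseudo-gradient estimate
    prev_obs = obs
    x = np.clip(x - step * grad_est, -radius, radius)  # projected (Euclidean) mirror step

print("final actions (the Nash equilibrium is at the origin):", np.round(x, 3))
```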

    Fast, Distributed Optimization Strategies for Resource Allocation in Networks

    Many challenges in network science and engineering today arise from systems composed of many individual agents interacting over a network. Such problems range from humans interacting with each other in social networks to computers processing and exchanging information over wired or wireless networks. In any application where information is spread out spatially, solutions must address information aggregation in addition to the decision process itself. Intelligently addressing the trade-off between information aggregation and decision accuracy is fundamental to finding solutions quickly and accurately. Network optimization challenges such as these have generated considerable interest in distributed optimization methods. The field of distributed optimization deals with iterative methods that perform calculations using locally available information. Early methods such as subgradient descent suffer from very slow convergence rates because the underlying optimization method is a first-order method. My work addresses problems in the area of network optimization and control with an emphasis on accelerating the rate of convergence by using a faster underlying optimization method. In the case of convex network flow optimization, the problem is transformed to the dual domain, moving the equality constraints that guarantee flow conservation into the objective. The Newton direction can be computed locally by using a consensus iteration to solve a Poisson equation, but this requires a lot of communication between neighboring nodes. Accelerated Dual Descent (ADD) is an approximate Newton method which significantly reduces the communication requirement. Defining a stochastic version of the convex network flow problem with edge capacities yields a problem equivalent to the queue stability problem studied in the backpressure literature. Accelerated Backpressure (ABP) is developed to solve the queue stabilization problem. A queue reduction method is introduced by merging ideas from integral control and momentum-based optimization.
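
    The dual Newton idea behind ADD admits a compact sketch. The snippet below (an illustration, not the thesis implementation) solves a quadratic-cost flow problem on a 4-node line graph: the dual Hessian is the graph Laplacian, the Newton direction is approximated by an N-term Neumann series in which each additional term costs one more round of neighbor-to-neighbor communication, and a damped step stands in for a step-size rule. The graph, the small regularization that makes the Laplacian invertible, N = 2, and the damping factor are illustrative assumptions.

```python
# Hedged sketch of an approximate-Newton (ADD-style) dual update for the flow
# problem min 0.5*||x||^2 subject to A x = b on a 4-node line graph.
import numpy as np

A = np.array([[ 1,  0,  0],                 # node-arc incidence matrix, 3 directed edges
              [-1,  1,  0],
              [ 0, -1,  1],
              [ 0,  0, -1]], dtype=float)
b = np.array([1.0, 0.0, 0.0, -1.0])         # inject one unit at node 0, drain it at node 3

eps = 1e-3                                  # small regularization so the Laplacian is invertible
L = A @ A.T + eps * np.eye(4)               # dual Hessian for quadratic edge costs
D = np.diag(np.diag(L))                     # splitting L = D - B used by the Neumann series
B = D - L
Dinv = np.diag(1.0 / np.diag(L))

def approx_newton_direction(grad, N=2):
    """N-term Neumann-series approximation of -L^{-1} @ grad (N communication rounds)."""
    d, term = np.zeros_like(grad), Dinv @ grad
    for _ in range(N + 1):
        d += term
        term = Dinv @ (B @ term)            # each extra term is one more neighbor exchange
    return -d

alpha = 0.7                                 # damped step in place of a step-size rule
lam = np.zeros(4)                           # dual prices, one per node
for _ in range(30):
    grad = L @ lam - b                      # dual gradient: flow-conservation violation
    lam += alpha * approx_newton_direction(grad, N=2)

x = A.T @ lam                               # primal flows recovered from the dual prices
print("flows:", np.round(x, 3))             # the unique feasible flow on the line is (1, 1, 1)
```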