
    A Survey on Delay-Aware Resource Control for Wireless Systems --- Large Deviation Theory, Stochastic Lyapunov Drift and Distributed Stochastic Learning

    In this tutorial paper, a comprehensive survey is given of several major systematic approaches to delay-aware control problems, namely the equivalent rate constraint approach, the Lyapunov stability drift approach and the approximate Markov Decision Process (MDP) approach using stochastic learning. These approaches cover most of the existing literature on delay-aware resource control in wireless systems, and each has its relative pros and cons in terms of performance, complexity and implementation. For each approach, the problem setup, the general solution and the design methodology are discussed. Applications of these approaches to delay-aware resource allocation are illustrated with examples in single-hop wireless networks. Furthermore, recent results on delay-aware multi-hop routing designs in general multi-hop networks are elaborated. Finally, the delay performance of the various approaches is compared through simulations using an example of an uplink OFDMA system. Comment: 58 pages, 8 figures; IEEE Transactions on Information Theory, 201
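    As an illustration of the Lyapunov stability drift approach surveyed above, the Python sketch below implements a generic drift-plus-penalty power allocation for a single-hop system; the Shannon-type rate model, the candidate power levels and the trade-off weight V are assumptions chosen for the example, not the paper's exact formulation.

```python
import numpy as np

def drift_plus_penalty_allocation(queues, channel_gains, power_levels, V=10.0):
    """One slot of a drift-plus-penalty power allocation (illustrative sketch).

    For each user, pick the transmit power p that maximizes
    Q_i * rate_i(p) - V * p: queue-weighted throughput minus a scaled
    power penalty. Larger V favors power savings at the cost of delay.
    """
    decisions = []
    for q, g in zip(queues, channel_gains):
        # Shannon-type rate used as a stand-in for the true rate function.
        scores = [q * np.log2(1.0 + g * p) - V * p for p in power_levels]
        decisions.append(power_levels[int(np.argmax(scores))])
    return decisions

# Example: 3 users with current backlogs, channel gains and candidate powers.
print(drift_plus_penalty_allocation(queues=[5.0, 1.0, 12.0],
                                    channel_gains=[0.8, 2.0, 0.3],
                                    power_levels=[0.0, 0.5, 1.0, 2.0]))
```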

    Distributed stochastic optimization with large delays

    One of the most widely used methods for solving large-scale stochastic optimization problems is distributed asynchronous stochastic gradient descent (DASGD), a family of algorithms that result from parallelizing stochastic gradient descent on distributed computing architectures, possibly asynchronously. However, a key obstacle to the efficient implementation of DASGD is the issue of delays: by the time a computing node contributes a gradient update, the global model parameter may already have been updated by other nodes several times over, rendering this gradient information stale. These delays can quickly add up if the computational throughput of a node is saturated, so the convergence of DASGD may be compromised in the presence of large delays. Our first contribution is to show that, by carefully tuning the algorithm's step-size, convergence to the critical set is still achieved in mean square, even if the delays grow unbounded at a polynomial rate. We also establish finer results for a broad class of structured optimization problems (called variationally coherent), where we show that DASGD converges to a global optimum with probability 1 under the same delay assumptions. Together, these results contribute to the broad landscape of large-scale non-convex stochastic optimization by offering state-of-the-art theoretical guarantees and providing insights for algorithm design. Comment: 41 pages, 8 figures; to be published in Mathematics of Operations Research
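    To make the delay issue concrete, here is a minimal single-process simulation of delayed stochastic gradient descent in Python; the polynomial delay schedule, the step-size decay and the toy quadratic objective are illustrative assumptions, not the schedules analyzed in the paper.

```python
import numpy as np

def dasgd_with_delays(grad, x0, n_iters=5000, delay_fn=lambda t: int(t ** 0.5),
                      step_fn=lambda t: 1.0 / (1 + t) ** 0.75, noise=0.1, seed=0):
    """Toy serial simulation of delayed stochastic gradient descent.

    At iteration t the update uses a stochastic gradient evaluated at the
    iterate from s = max(0, t - delay_fn(t)), mimicking a stale gradient
    contributed by a slow worker. The step-size decays fast enough to
    absorb polynomially growing delays (an illustrative choice).
    """
    rng = np.random.default_rng(seed)
    history = [np.array(x0, dtype=float)]
    x = history[0].copy()
    for t in range(n_iters):
        s = max(0, t - delay_fn(t))          # index of the stale iterate
        g = grad(history[s]) + noise * rng.standard_normal(x.shape)
        x = x - step_fn(t) * g
        history.append(x.copy())
    return x

# Example: minimize ||x||^2 despite delays growing like sqrt(t).
print(dasgd_with_delays(lambda x: 2 * x, x0=[5.0, -3.0]))
```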

    Distributed stochastic optimization via matrix exponential learning

    In this paper, we investigate a distributed learning scheme for a broad class of stochastic optimization problems and games that arise in signal processing and wireless communications. The proposed algorithm relies on the method of matrix exponential learning (MXL) and only requires locally computable gradient observations that may be imperfect and/or obsolete. To analyze it, we introduce the notion of a stable Nash equilibrium and show that the algorithm converges globally to such equilibria, or locally when an equilibrium is only locally stable. We also derive an explicit linear bound on the algorithm's convergence speed, which remains valid under measurement errors and uncertainty of arbitrarily high variance. To validate our theoretical analysis, we test the algorithm in realistic multi-carrier/multiple-antenna wireless scenarios where several users seek to maximize their energy efficiency. Our results show that learning allows users to attain a net increase of between 100% and 500% in energy efficiency, even under very high uncertainty. Comment: 31 pages, 3 figures
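    The sketch below shows one common form of a matrix exponential learning update for a trace-constrained covariance matrix; the single-user log-det objective, the step-size schedule and the trace normalization are assumptions chosen for the example, not necessarily the exact variant analyzed in the paper.

```python
import numpy as np

def mxl_update(Y, V, step, power_budget=1.0):
    """One matrix exponential learning (MXL) step, illustrative variant.

    Y accumulates (possibly imperfect) gradient feedback V; the primal
    covariance matrix X is recovered by exponentiating Y and rescaling
    so that trace(X) equals the power budget.
    """
    Y = Y + step * V
    w, U = np.linalg.eigh(0.5 * (Y + Y.T))   # symmetrize for safety
    w -= w.max()                              # avoid overflow in exp
    e = np.exp(w)
    X = power_budget * (U * e) @ U.T / e.sum()
    return Y, X

# Example: single-user MIMO rate maximization, gradient of log det(I + H X H^T).
rng = np.random.default_rng(1)
H = rng.standard_normal((2, 2))
Y = np.zeros((2, 2))
X = np.eye(2) / 2.0                           # feasible starting point, trace = 1
for k in range(300):
    V = H.T @ np.linalg.inv(np.eye(2) + H @ X @ H.T) @ H
    Y, X = mxl_update(Y, V, step=1.0 / (1 + k) ** 0.5)
print(np.round(X, 3), np.trace(X))
```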

    Power Aware Wireless File Downloading: A Constrained Restless Bandit Approach

    This paper treats power-aware throughput maximization in a multi-user file downloading system. Each user can receive a new file only after its previous file is finished. The file state processes of the users act as coupled Markov chains that form a generalized restless bandit system. First, an optimal algorithm is derived for the case of one user; the algorithm maximizes throughput subject to an average power constraint. Next, the one-user algorithm is extended to a low-complexity heuristic for the multi-user problem. The heuristic uses a simple online index policy, and its effectiveness is shown via simulation. For simple 3-user cases where the optimal solution can be computed offline, the heuristic is shown to be near-optimal for a wide range of parameters.
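    To show the general shape of such an online index policy, here is a schematic slot-by-slot rule in Python; the index formula, the virtual-queue update and the per-slot power budget are made up for illustration and differ from the paper's actual heuristic.

```python
import numpy as np

def index_policy_slot(active, success_prob, power_cost, Z, V=5.0):
    """Pick at most one user to serve this slot via a made-up index rule.

    'active' flags users that still have a file in progress. The index
    trades the chance of finishing the current file against a power price
    Z, which acts as a virtual queue enforcing an average power budget.
    """
    active = np.asarray(active, dtype=bool)
    success_prob = np.asarray(success_prob, dtype=float)
    power_cost = np.asarray(power_cost, dtype=float)
    indices = np.where(active, V * success_prob - Z * power_cost, -np.inf)
    chosen = int(np.argmax(indices))
    if not np.isfinite(indices[chosen]) or indices[chosen] <= 0:
        chosen = None                       # idle this slot
    spent = power_cost[chosen] if chosen is not None else 0.0
    Z = max(Z + spent - 0.5, 0.0)           # 0.5 = assumed per-slot power budget
    return chosen, Z

# Example: three users, two with files in progress, different finish probabilities.
user, Z = index_policy_slot([True, False, True], [0.3, 0.9, 0.6], [1.0, 1.0, 2.0], Z=1.0)
print(user, Z)
```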

    Distributed Online Modified Greedy Algorithm for Networked Storage Operation under Uncertainty

    The integration of intermittent and stochastic renewable energy resources requires increased flexibility in the operation of the electric grid. Storage, broadly speaking, provides the flexibility of shifting energy over time; the network, on the other hand, provides the flexibility of shifting energy across geographical locations. The optimal control of storage networks in stochastic environments is an important open problem. The key challenge is that, even in small networks, the corresponding constrained stochastic control problems on continuous spaces suffer from the curse of dimensionality and are intractable in general settings. For large networks, no efficient algorithm is known to give optimal or provably near-optimal performance for this problem. This paper provides an efficient algorithm to solve this problem with performance guarantees. We study the operation of storage networks, i.e., storage systems interconnected via a power network. An online algorithm, termed the Online Modified Greedy algorithm, is developed for the corresponding constrained stochastic control problem. A sub-optimality bound for the algorithm is derived, and a semidefinite program is constructed to minimize the bound. In many cases, the bound approaches zero, so the algorithm is near-optimal. A task-based distributed implementation of the online algorithm, relying only on local information and neighbor communication, is then developed based on the alternating direction method of multipliers. Numerical examples verify the established theoretical performance bounds and demonstrate the scalability of the algorithm. Comment: arXiv admin note: text overlap with arXiv:1405.778
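    As a rough illustration of an online greedy storage rule of this flavor, the Python sketch below picks a charge/discharge action that trades immediate cost against a fixed linear penalty on the resulting state of charge; the penalty weight, cost model and limits are assumptions for the example, whereas the paper tunes its penalty via a semidefinite program.

```python
import numpy as np

def omg_step(soc, price, capacity, rate_limit, theta=0.5):
    """One slot of an illustrative modified-greedy storage rule.

    Choose the charge/discharge amount u minimizing the immediate cost
    price * u plus a linear penalty theta * (soc + u) on the resulting
    state of charge, subject to capacity and rate limits. theta stands
    in for the tuning parameter that the paper optimizes via an SDP.
    """
    lo = max(-rate_limit, -soc)             # cannot discharge below empty
    hi = min(rate_limit, capacity - soc)    # cannot charge above capacity
    candidates = np.linspace(lo, hi, 101)
    costs = price * candidates + theta * (soc + candidates)
    u = float(candidates[int(np.argmin(costs))])
    return u, soc + u

# Example: react to a short random price sequence with a 10-unit battery.
rng = np.random.default_rng(0)
soc = 5.0
for price in rng.uniform(-1.0, 1.0, size=5):
    u, soc = omg_step(soc, price, capacity=10.0, rate_limit=2.0)
    print(f"price={price:+.2f}  action={u:+.2f}  soc={soc:.2f}")
```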