A Survey on Delay-Aware Resource Control for Wireless Systems --- Large Deviation Theory, Stochastic Lyapunov Drift and Distributed Stochastic Learning
In this tutorial paper, a comprehensive survey is given on several major
systematic approaches in dealing with delay-aware control problems, namely the
equivalent rate constraint approach, the Lyapunov stability drift approach and
the approximate Markov Decision Process (MDP) approach using stochastic
learning. These approaches essentially embrace most of the existing literature
regarding delay-aware resource control in wireless systems. They have their
relative pros and cons in terms of performance, complexity and implementation
issues. For each of the approaches, the problem setup, the general solution and
the design methodology are discussed. Applications of these approaches to
delay-aware resource allocation are illustrated with examples in single-hop
wireless networks. Furthermore, recent results regarding delay-aware multi-hop
routing designs in general multi-hop networks are elaborated. Finally, the
delay performance of the various approaches is compared through simulations
using an example of an uplink OFDMA system.
Comment: 58 pages, 8 figures; IEEE Transactions on Information Theory, 201
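The Lyapunov drift approach surveyed above can be illustrated with the classic max-weight (backpressure) scheduler, which serves the user maximizing the queue-backlog-times-rate product each slot. The sketch below is a toy single-hop illustration of that idea, not the paper's algorithms; the Bernoulli arrivals and three-level rate model are assumptions for the example.

```python
import random

def max_weight_schedule(queues, rates):
    """Pick the user maximizing backlog x rate (the max-weight rule)."""
    return max(range(len(queues)), key=lambda i: queues[i] * rates[i])

def simulate(num_users=3, slots=1000, arrival_prob=0.3, seed=0):
    """Toy single-hop system: Bernoulli arrivals, random per-slot rates."""
    rng = random.Random(seed)
    queues = [0] * num_users
    for _ in range(slots):
        # hypothetical fading model: each user sees rate 1, 2, or 3 this slot
        rates = [rng.choice([1, 2, 3]) for _ in range(num_users)]
        served = max_weight_schedule(queues, rates)
        queues[served] = max(0, queues[served] - rates[served])
        for i in range(num_users):
            queues[i] += 1 if rng.random() < arrival_prob else 0
    return queues
```

Minimizing the one-slot Lyapunov drift of the quadratic backlog function yields exactly this queue-weighted rate maximization, which is why the rule stabilizes the queues whenever the arrival rates are inside the capacity region.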
Distributed Model Predictive Control Using a Chain of Tubes
A new distributed MPC algorithm for the regulation of dynamically coupled
subsystems is presented in this paper. The current control action is computed
via two robust controllers working in a nested fashion. The inner controller
builds a nominal reference trajectory from a decentralized perspective. The
outer controller uses this information to take into account the effects of the
coupling and generate a distributed control action. The tube-based approach to
robustness is employed. A supplementary constraint is included in the outer
optimization problem to provide recursive feasibility of the overall controller.
Comment: Accepted for presentation at the UKACC CONTROL 2016 conference
(Belfast, UK)
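The core of the tube-based approach is the ancillary control law u = v + K(x - z): the true state x is steered toward a disturbance-free nominal trajectory z, so the deviation stays inside a bounded "tube". The scalar simulation below is a minimal sketch of that mechanism only, with an assumed single-integrator plant and a hypothetical proportional nominal controller; it is not the paper's nested distributed scheme.

```python
import random

def tube_control(x, z, v, K=-0.5):
    """Ancillary tube law: nominal input plus feedback on the deviation x - z."""
    return v + K * (x - z)

def simulate(steps=50, w_max=0.1, seed=1):
    """Scalar plant x+ = x + u + w with |w| <= w_max; nominal z+ = z + v."""
    rng = random.Random(seed)
    x, z = 1.0, 1.0
    for _ in range(steps):
        v = -0.5 * z                   # hypothetical nominal controller driving z to 0
        u = tube_control(x, z, v)
        w = rng.uniform(-w_max, w_max)
        x = x + u + w                  # true dynamics, disturbed
        z = z + v                      # nominal dynamics, disturbance-free
    return abs(x - z)                  # deviation from the tube center
```

With these gains the deviation obeys e+ = 0.5 e + w, so it remains bounded by w_max / (1 - 0.5) = 0.2 regardless of the disturbance realization, which is the robustness property the tube construction provides.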
Distributive Network Utility Maximization (NUM) over Time-Varying Fading Channels
Distributed network utility maximization (NUM) has received increasing
interest in recent years. Distributed solutions (e.g., the
primal-dual gradient method) have been intensively investigated under fading
channels. As such distributed solutions involve iterative updating and explicit
message passing, it is unrealistic to assume that the wireless channel remains
unchanged during the iterations. Unfortunately, the behavior of those
distributed solutions under time-varying channels is in general unknown. In
this paper, we shall investigate the convergence behavior and tracking errors
of the iterative primal-dual scaled gradient algorithm (PDSGA) with dynamic
scaling matrices (DSC) for solving distributive NUM problems under time-varying
fading channels. We shall also study a specific application example, namely the
multi-commodity flow control and multi-carrier power allocation problem in
multi-hop ad hoc networks. Our analysis shows that the PDSGA converges to a
limit region rather than a single point under the finite state Markov chain
(FSMC) fading channels. We also show that the order of growth of the tracking
errors is given by O(T/N), where T and N are the update interval and the
average sojourn time of the FSMC, respectively. Based on this analysis, we
derive a low complexity distributive adaptation algorithm for determining the
adaptive scaling matrices, which can be implemented distributively at each
transmitter. The numerical results show the superior performance of the
proposed dynamic scaling matrix algorithm over several baseline schemes, such
as the regular primal-dual gradient algorithm.
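The regular primal-dual gradient iteration that serves as the baseline above can be sketched on a static toy NUM problem: maximize the sum of log-utilities subject to a single capacity constraint. The problem instance and step size below are assumptions for illustration; the paper's PDSGA additionally applies adaptive scaling matrices and operates under fading channels.

```python
def primal_dual_num(n=3, c=3.0, step=0.05, iters=5000):
    """Primal-dual gradient for: max sum(log x_i) s.t. sum(x_i) <= c.

    Optimum: x_i = c/n and dual price lam = n/c (here both equal 1).
    """
    x = [0.5] * n
    lam = 1.0
    for _ in range(iters):
        # primal ascent on the Lagrangian: gradient is 1/x_i - lam
        x = [max(1e-6, xi + step * (1.0 / xi - lam)) for xi in x]
        # dual ascent (projected to lam >= 0) on the constraint violation
        lam = max(0.0, lam + step * (sum(x) - c))
    return x, lam
```

Each source updates its own rate x_i using only the price lam, and the link updates lam from the aggregate rate, which is the message-passing structure whose behavior under time-varying channels the paper analyzes.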
On feasibility, stability and performance in distributed model predictive control
In distributed model predictive control (DMPC), where a centralized
optimization problem is solved in distributed fashion using dual decomposition,
it is important to keep the number of iterations in the solution algorithm,
i.e. the amount of communication between subsystems, as small as possible. At
the same time, the number of iterations must be enough to give a feasible
solution to the optimization problem and to guarantee stability of the
closed-loop system. In this paper, a stopping condition for the distributed
optimization algorithm that guarantees these properties is presented. The
stopping condition is based on two theoretical contributions. First, since the
optimization problem is solved using dual decomposition, standard techniques to
prove stability in model predictive control (MPC), i.e. with a terminal cost
and a terminal constraint set that involve all state variables, do not apply.
For the case without a terminal cost or a terminal constraint set, we present a
new method to quantify the control horizon needed to ensure stability and a
prespecified performance. Second, the stopping condition is based on a novel
adaptive constraint tightening approach. Using this adaptive constraint
tightening approach, we guarantee that a primal feasible solution to the
optimization problem is found and that closed-loop stability and performance are
obtained. Numerical examples show that the number of iterations needed to
guarantee feasibility of the optimization problem, stability and a prespecified
performance of the closed-loop system can be reduced significantly using the
proposed stopping condition.
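The dual-decomposition iteration with an early-stopping test can be sketched on a two-subsystem toy problem: each subsystem minimizes its local Lagrangian independently, a coordinator updates the price, and iteration stops once the coupling constraint is (approximately) primal feasible. The quadratic costs and the residual-based test below are assumptions for illustration; the paper's actual stopping condition uses adaptive constraint tightening.

```python
def dual_decomposition(d=2.0, step=0.2, tol=1e-4, max_iters=1000):
    """Dual decomposition for: min u1^2 + 2*u2^2  s.t.  u1 + u2 = d.

    Optimum for d = 2: u1 = 4/3, u2 = 2/3.
    """
    lam = 0.0
    u1 = u2 = 0.0
    for k in range(max_iters):
        # each subsystem solves its local problem in closed form
        u1 = -lam / 2.0            # argmin over u1 of u1^2 + lam*u1
        u2 = -lam / 4.0            # argmin over u2 of 2*u2^2 + lam*u2
        residual = u1 + u2 - d
        if abs(residual) < tol:    # stopping condition: primal feasibility
            return u1, u2, k
        lam += step * residual     # dual gradient step on the price
    return u1, u2, max_iters
```

The returned iteration count is exactly the quantity the paper seeks to minimize: each dual step corresponds to one round of communication between subsystems, so a stopping condition that certifies feasibility early directly reduces communication.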