17,739 research outputs found
Control of a lane-drop bottleneck through variable speed limits
In this study, we formulate the VSL control problem for the traffic system in
a zone upstream to a lane-drop bottleneck based on two traffic flow models: the
Lighthill-Whitham-Richards (LWR) model, which is an infinite-dimensional
partial differential equation, and the link queue model, which is a
finite-dimensional ordinary differential equation. In both models, the
discharging flow-rate is determined by a recently developed model of capacity
drop, and the upstream in-flux is regulated by the speed limit in the VSL zone.
Since the link queue model approximates the LWR model and is much simpler, we
first analyze the control problem and develop effective VSL strategies based on
the former. First, for an open-loop control system with a constant speed limit,
we prove that a constant speed limit can introduce an uncongested equilibrium
state, in addition to a congested one with capacity drop, but the congested
equilibrium state is always exponentially stable. Then we apply a feedback
proportional-integral (PI) controller to form a closed-loop control system, in
which the congested equilibrium state and, therefore, capacity drop can be
removed by the I-controller. Both analytical and numerical results show that,
with appropriately chosen controller parameters, the closed-loop control system
is stable, effective, and robust. Finally, we show that the VSL strategies based
on I- and PI-controllers are also stable, effective, and robust for the LWR
model. Since the properties of the control system are transferable between the
two models, we establish a dual approach for studying the control problems of
nonlinear traffic flow systems. We also confirm that the VSL strategy is
effective only if capacity drop occurs. The obtained method and insights can be
useful for future studies on other traffic control methods and implementations
of VSL strategies.
Comment: 31 pages, 14 figures
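As a rough illustration of the closed-loop mechanism described above, the following toy simulation applies a PI speed-limit controller to a scalar link-queue-style model with a capacity drop. Every number here (critical density, drop magnitude, gains, target, baseline speed limit) is an illustrative assumption, not a value from the paper.

```python
# Toy closed-loop VSL sketch: a scalar link-queue model dk/dt = inflow - outflow
# with a capacity drop past the critical density, where a PI controller sets
# the speed limit from the density-tracking error. All parameter values are
# illustrative assumptions, not taken from the paper.

def simulate_pi_vsl(k0=0.8, k_target=0.4, kp=0.5, ki=0.5, dt=0.1, steps=1000):
    """Drive the normalized density k toward k_target; return the final k."""
    k = k0          # normalized density in the VSL zone (congested start)
    integral = 0.0  # integral of the tracking error (removes steady-state error)
    for _ in range(steps):
        error = k_target - k
        integral += error * dt
        # speed limit in [0, 1] set by the PI law around an assumed baseline 0.5
        u = min(1.0, max(0.0, 0.5 + kp * error + ki * integral))
        inflow = u * (1.0 - k)  # in-flux regulated by the speed limit
        # discharging flow: free flow below the critical density 0.5,
        # reduced discharge (capacity drop) above it
        outflow = k if k <= 0.5 else 0.5 - 0.2 * (k - 0.5)
        k = min(1.0, max(0.0, k + dt * (inflow - outflow)))
    return k
```

Starting on the congested branch (k0 = 0.8), the integral term keeps adjusting the speed limit until the density settles at the uncongested target, mirroring the abstract's point that the I-term removes the congested equilibrium.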
Robust measurement-based buffer overflow probability estimators for QoS provisioning and traffic anomaly prediction applications
Suitable estimators for a class of Large Deviation approximations of rare event probabilities based on sample realizations of random processes have been proposed in our earlier work. These estimators are expressed as non-linear multi-dimensional optimization problems of a special structure. In this paper, we develop an algorithm to solve these optimization problems very efficiently based on their characteristic structure. After discussing the nature of the objective function and constraint set and their peculiarities, we provide a formal proof that the developed algorithm is guaranteed to always converge. The existence of efficient and provably convergent algorithms for solving these problems is a prerequisite for using the proposed estimators in real time problems such as call admission control, adaptive modulation and coding with QoS constraints, and traffic anomaly detection in high data rate communication networks.
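The paper's own estimators solve special-structure optimization problems; as background only, here is a minimal sketch of the textbook large-deviation buffer asymptotic P(Q > b) ≈ exp(−θ*b), where θ* is found by bisection on an empirical log-MGF of the arrivals. The arrival model and all parameters are assumptions for illustration, not the paper's formulation.

```python
# Minimal sketch (a textbook approximation, not the paper's estimator):
# for a stable queue served at rate c, large-deviation theory gives
# P(Q > b) ~ exp(-theta* b), where theta* > 0 solves Lambda(theta) = theta*c
# and Lambda is the log-MGF of per-slot arrivals, estimated from samples.
import math
import random

def empirical_log_mgf(samples, theta):
    """Estimate Lambda(theta) = log E[exp(theta * A)] from i.i.d. samples."""
    return math.log(sum(math.exp(theta * a) for a in samples) / len(samples))

def overflow_decay_rate(samples, c, theta_max=20.0, iters=60):
    """Bisect for the positive root theta* of Lambda(theta) = theta * c."""
    lo, hi = 1e-6, theta_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if empirical_log_mgf(samples, mid) < mid * c:
            lo = mid   # Lambda still below the line: root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
arrivals = [random.random() for _ in range(10000)]  # uniform work in [0, 1], mean 0.5
theta_star = overflow_decay_rate(arrivals, c=0.8)
overflow_prob = math.exp(-theta_star * 1.0)  # approximation for buffer level b = 1
```

Because Lambda is convex and starts below the line theta*c for a stable queue, the crossing point is unique, which is what makes the bisection valid.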
A Survey on Delay-Aware Resource Control for Wireless Systems --- Large Deviation Theory, Stochastic Lyapunov Drift and Distributed Stochastic Learning
In this tutorial paper, a comprehensive survey is given on several major
systematic approaches in dealing with delay-aware control problems, namely the
equivalent rate constraint approach, the Lyapunov stability drift approach and
the approximate Markov Decision Process (MDP) approach using stochastic
learning. These approaches essentially embrace most of the existing literature
regarding delay-aware resource control in wireless systems. They have their
relative pros and cons in terms of performance, complexity and implementation
issues. For each of the approaches, the problem setup, the general solution and
the design methodology are discussed. Applications of these approaches to
delay-aware resource allocation are illustrated with examples in single-hop
wireless networks. Furthermore, recent results regarding delay-aware multi-hop
routing designs in general multi-hop networks are elaborated. Finally, the
delay performance of the various approaches is compared through simulations
using an example of an uplink OFDMA system.
Comment: 58 pages, 8 figures; IEEE Transactions on Information Theory, 201
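Of the surveyed approaches, the Lyapunov-drift idea has a particularly compact classic instance, max-weight scheduling, which serves the queue maximizing the backlog-rate product and thereby minimizes a bound on the drift of the quadratic Lyapunov function V(Q) = Σ Q_i². The following toy single-hop sketch (arrival and channel models are assumptions, not from the survey) illustrates it:

```python
# Toy max-weight scheduler for a single-hop multi-queue downlink: each slot,
# serve the queue with the largest queue_length * channel_rate product.
# Arrival probability and the {0, 1, 2} rate model are illustrative only.
import random

def max_weight_schedule(queues, rates):
    """Pick the queue index with the largest backlog * rate product."""
    return max(range(len(queues)), key=lambda i: queues[i] * rates[i])

def simulate(num_queues=3, slots=20000, arrival_prob=0.2, seed=1):
    """Bernoulli arrivals to every queue; one queue served per slot."""
    random.seed(seed)
    queues = [0] * num_queues
    for _ in range(slots):
        rates = [random.choice([0, 1, 2]) for _ in range(num_queues)]  # toy fading
        i = max_weight_schedule(queues, rates)
        queues[i] = max(0, queues[i] - rates[i])  # serve the max-weight queue
        for j in range(num_queues):
            if random.random() < arrival_prob:
                queues[j] += 1
    return queues
```

With total arrival rate 0.6 per slot against an achievable service rate well above that, the queues stay small, which is the throughput-optimality property the drift argument guarantees inside the capacity region.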
Delay-Optimal User Scheduling and Inter-Cell Interference Management in Cellular Network via Distributive Stochastic Learning
In this paper, we propose a distributive queue-aware intra-cell user
scheduling and inter-cell interference (ICI) management control design for a
delay-optimal cellular downlink system with M base stations (BSs) and K users
in each cell. Each BS has K downlink queues for K users respectively with
heterogeneous arrivals and delay requirements. The ICI management control is
adaptive to joint queue state information (QSI) over a slow time scale, while
the user scheduling control is adaptive to both the joint QSI and the joint
channel state information (CSI) over a faster time scale. We show that the
problem can be modeled as an infinite horizon average cost Partially Observed
Markov Decision Problem (POMDP), which is NP-hard in general. By exploiting the
special structure of the problem, we shall derive an equivalent Bellman
equation to solve the POMDP problem. To address the distributive requirement
and the issue of dimensionality and computational complexity, we derive a
distributive online stochastic learning algorithm, which only requires local
QSI and local CSI at each of the M BSs. We show that the proposed learning
algorithm converges almost surely (with probability 1) and has significant gain
compared with various baselines. The proposed solution has only linear
complexity of order O(MK).
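As a hypothetical toy of the per-BS local learning idea (not the paper's actual algorithm), each BS can keep one scalar estimate per local user and update it with a Robbins-Monro stochastic-approximation step from purely local observations, so total state and per-slot work stay O(MK) rather than growing with the joint system state:

```python
# Hypothetical toy of distributive stochastic learning: each of M BSs keeps
# one value estimate per local user, updated from local observations only,
# so total state is O(M*K). Names and the cost model are assumptions.
def local_update(values, bs, user, observed_cost, step):
    """Robbins-Monro update of one BS's per-user value estimate."""
    values[bs][user] += step * (observed_cost - values[bs][user])

M, K = 2, 3
values = [[0.0] * K for _ in range(M)]
# e.g. BS 0 observes a delay cost of 4.0 for user 1 with step size 0.5
local_update(values, bs=0, user=1, observed_cost=4.0, step=0.5)
```

In practice such schemes use a decaying step size so the iterates converge almost surely, which is the kind of convergence-with-probability-1 guarantee the abstract refers to.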