COOPERATIVE NETWORKING AND RELATED ISSUES: STABILITY, ENERGY HARVESTING, AND NEIGHBOR DISCOVERY
This dissertation addresses several newly emerging topics in cooperative networking. The first part concerns cognitive radio. To guarantee the performance of high-priority users, the activity of the high-priority communication system must be known, but this knowledge is usually imperfect due to randomness in the observed signal. In this context, we study the stability of cognitive radio systems in the presence of sensing errors and give general guidelines for choosing the operating point of the sensing device on its receiver operating characteristic curve. We then consider a hybrid of different modes of operation for cognitive radio systems with time-varying connectivity, where the random connectivity gives the low-priority communication system additional transmission opportunities.
The second part of this dissertation is about random access. We are specifically interested in the scenario in which nodes harvest energy from the environment. For such a system, we accurately assess the effect of limited but renewable energy availability on the stability region, and also study the effect of finite-capacity batteries. We next consider the exploitation of diversity among users in the random access framework: each user adapts its transmission probability in a decentralized manner based on local channel state information. The impact of imperfect channel state information on the stability region is investigated and compared against the class of stationary scheduling policies that make centralized decisions based on channel state feedback.
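The energy-harvesting random access setting can be illustrated with a toy slotted-ALOHA simulation. The transmission and harvesting probabilities, battery capacity, and collision-channel model below are illustrative assumptions, not the dissertation's model parameters.

```python
import random

def slot(batteries, p_tx, p_harvest, cap, rng):
    # Harvest: each node gains one energy unit w.p. p_harvest, up to capacity.
    for i in range(len(batteries)):
        if rng.random() < p_harvest:
            batteries[i] = min(cap, batteries[i] + 1)
    # Only nodes with non-empty batteries may attempt transmission, w.p. p_tx.
    attempts = [i for i in range(len(batteries))
                if batteries[i] > 0 and rng.random() < p_tx]
    for i in attempts:
        batteries[i] -= 1  # each attempt spends one energy unit
    # Collision channel: a slot succeeds iff exactly one node transmits.
    return attempts[0] if len(attempts) == 1 else None

rng = random.Random(1)
batteries = [2, 2]
successes = sum(slot(batteries, 0.4, 0.3, 3, rng) is not None
                for _ in range(1000))
print(successes)
```

Because transmissions are gated by battery state, the effective attempt rate (and hence the stability region) is coupled to the harvesting process, which is the coupling the abstract's stability analysis quantifies.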
The backpressure policy for cross-layer control of wireless multi-hop networks is known to be throughput-optimal for i.i.d. arrivals. The third part of this dissertation is about backpressure-based control for networks with time-correlated arrivals that may exhibit long-range dependence. It is shown that the original backpressure policy is still throughput-optimal, but with increased average network delay. The case in which the arrival rate vector possibly lies outside the stability region is also studied by augmenting the backpressure policy with a flow control mechanism.
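The per-slot backpressure decision can be sketched in a few lines: each link serves the commodity with the largest positive backlog differential. This is a generic illustration of the policy, not the dissertation's implementation, and the queue values are made up.

```python
def backpressure_schedule(Q, links):
    """For each link (u, v), pick the commodity c maximizing the backlog
    differential Q[u][c] - Q[v][c]; serve it only if the differential is
    strictly positive (otherwise the link stays idle)."""
    schedule = {}
    for (u, v) in links:
        best_c, best_w = None, 0
        for c in Q[u]:
            w = Q[u][c] - Q[v].get(c, 0)
            if w > best_w:
                best_c, best_w = c, w
        if best_c is not None:
            schedule[(u, v)] = (best_c, best_w)
    return schedule

Q = {"a": {"f1": 5, "f2": 1}, "b": {"f1": 2, "f2": 3}}
links = [("a", "b")]
print(backpressure_schedule(Q, links))  # {('a', 'b'): ('f1', 3)}
```

Serving the largest differential is what makes the policy throughput-optimal via a Lyapunov drift argument; the delay cost noted in the abstract arises because queues must build up before differentials direct traffic.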
Lastly, the problem of neighbor discovery in a wireless sensor network is addressed. We first introduce the realistic effect of physical layer considerations into the evaluation of logical discovery algorithms by incorporating physical layer parameters. Secondly, given the lack of knowledge of the number of neighbors and of the individual signal parameters, we adopt the viewpoint of random set theory for the problem of detecting the transmitting neighbors. Random set theory generalizes standard probability theory by assigning sets, rather than values, to random outcomes, and it has been applied to the multi-user detection problem when the set of transmitters is unknown and dynamically changing.
Learning-aided Stochastic Network Optimization with Imperfect State Prediction
We investigate the problem of stochastic network optimization in the presence of imperfect state prediction and non-stationarity. Based on a novel distribution-accuracy curve prediction model, we develop the predictive learning-aided control (PLC) algorithm, which jointly utilizes historic and predicted network state information for decision making. PLC is an online algorithm that requires zero a priori system statistical information, and consists of three key components: sequential distribution estimation and change detection, dual learning, and online queue-based control. Specifically, we show that PLC simultaneously achieves good long-term performance, short-term queue size reduction, accurate change detection, and fast algorithm convergence. In particular, for stationary networks, PLC achieves a near-optimal utility-delay tradeoff. For non-stationary networks, PLC obtains a utility-backlog tradeoff, with bounds depending on the prediction accuracy, for distributions that last sufficiently long (the Backpressure algorithm \cite{neelynowbook} requires a longer interval for the same utility performance, with a larger backlog). Moreover, PLC detects distribution changes faster with high probability, by a margin that grows with the prediction size, and converges quickly. Our results demonstrate that state prediction, even when imperfect, can help (i) achieve faster detection and convergence, and (ii) obtain better utility-delay tradeoffs.
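The first of PLC's three components, sequential distribution estimation and change detection, can be illustrated with a sliding-window sketch: compare the empirical distributions of two adjacent windows and flag a change when they drift apart. The total-variation test, window size, and threshold here are illustrative assumptions, not the paper's actual estimator.

```python
from collections import Counter

def tv_distance(a, b):
    """Total-variation distance between two empirical distributions
    given as Counters (missing keys count as zero)."""
    keys = set(a) | set(b)
    na, nb = sum(a.values()), sum(b.values())
    return 0.5 * sum(abs(a[k] / na - b[k] / nb) for k in keys)

def detect_change(samples, w=50, thresh=0.4):
    """Return the first index at which the last two length-w windows
    differ by more than thresh in total variation, or None."""
    for t in range(2 * w, len(samples) + 1):
        old = Counter(samples[t - 2 * w:t - w])
        new = Counter(samples[t - w:t])
        if tv_distance(old, new) > thresh:
            return t
    return None

stream = [0] * 100 + [1] * 100  # distribution shifts at index 100
print(detect_change(stream))  # 121: flagged once 21 post-change samples arrive
```

The detection lag (21 slots here) is exactly the kind of delay that, per the abstract, prediction can shrink: predicted samples let the new-window estimate fill up before the change fully materializes.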
Resource allocation in large-scale multi-server systems
The focus of this dissertation is the task of resource allocation in multi-server systems arising from two applications: multi-channel wireless communication networks and large-scale content delivery networks. The unifying theme behind all the problems studied in this dissertation is the large-scale nature of the underlying networks, which necessitates the design of algorithms that are simple/greedy and therefore scalable, and yet have good performance guarantees. For the multi-channel multi-hop wireless communication networks we consider, the goal is to design scalable routing and scheduling policies which stabilize the system and perform well from a queue-length and end-to-end delay perspective. We first focus on relay-assisted downlink networks, where it is well understood that the BackPressure algorithm is stabilizing but its delay performance can be poor. We propose an alternative algorithm, an iterative MaxWeight algorithm, and show that it stabilizes the system and outperforms the BackPressure algorithm. Next, we focus on wireless networks which serve mobile users via a wide-area base-station and multiple densely deployed short-range access nodes (e.g., small cells). We show that traditional algorithms that forward each packet at most once, either to a single access node or a mobile user, do not have good delay performance, and propose an algorithm (a distributed scheduler, DIST) and show that it can stabilize the system and performs well from a queue-length/delay perspective. In content delivery networks, each arriving job can only be served by servers storing the requested content piece. Motivated by this, we consider two settings. In the first setting, each job, on arrival, reveals a deadline and a subset of servers that can serve it, and the goal is to maximize the fraction of jobs that are served before their deadlines. We propose an online load-balancing algorithm which uses correlated randomness and prove its optimality.
In the second setting, we study content placement in a content delivery network where a large number of servers serve a correspondingly large volume of content requests arriving according to an unknown stochastic process. The main takeaway from our results for this setting is that separating the estimation of demands from the subsequent use of the estimates to design optimal content placement policies (the learn-and-optimize approach) is suboptimal. In addition, we study two simple adaptive content replication policies and show that they outperform all learning-based static storage policies.
Quantifying the Cost of Learning in Queueing Systems
Queueing systems are widely applicable stochastic models with use cases in
communication networks, healthcare, service systems, etc. Although their
optimal control has been extensively studied, most existing approaches assume
perfect knowledge of system parameters. Of course, this assumption rarely holds
in practice where there is parameter uncertainty, thus motivating a recent line
of work on bandit learning for queueing systems. This nascent stream of
research focuses on the asymptotic performance of the proposed algorithms.
In this paper, we argue that an asymptotic metric, which focuses on
late-stage performance, is insufficient to capture the intrinsic statistical
complexity of learning in queueing systems which typically occurs in the early
stage. Instead, we propose the Cost of Learning in Queueing (CLQ), a new metric
that quantifies the maximum increase in time-averaged queue length caused by
parameter uncertainty. We characterize the CLQ of a single-queue multi-server
system, and then extend these results to multi-queue multi-server systems and
networks of queues. In establishing our results, we propose a unified analysis
framework for CLQ that bridges Lyapunov and bandit analysis, which could be of
independent interest.
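The cost that CLQ captures can be illustrated with a toy simulation: a discrete-time queue whose controller must learn which of two servers is faster, compared against an oracle that always uses the better server. The explore-then-commit learner, arrival and service probabilities, and horizon are all hypothetical; the paper's CLQ is a worst-case gap over problem instances, which this sketch does not compute.

```python
import random

def run(T, lam, mus, pick, seed=0):
    """Simulate T slots of a single queue: one Bernoulli(lam) arrival per
    slot, and a service success w.p. mus[i] for the chosen server i.
    Returns the time-averaged queue length."""
    rng = random.Random(seed)
    q, total = 0, 0
    for t in range(T):
        if rng.random() < lam:
            q += 1
        i = pick(t)
        if q > 0 and rng.random() < mus[i]:
            q -= 1
        total += q
    return total / T

mus = [0.3, 0.7]            # server 1 is better, but the learner must find out
oracle = run(10_000, 0.5, mus, pick=lambda t: 1)
# Explore-then-commit: alternate servers for 2,000 slots, then commit
# (here we assume, for illustration, that exploration identifies server 1).
learner = run(10_000, 0.5, mus, pick=lambda t: t % 2 if t < 2_000 else 1)
print(round(oracle, 2), round(learner, 2))
```

During exploration the effective service rate is the average of the two servers, so the queue builds up early and only drains after commitment; this early-stage backlog is the transient that an asymptotic metric misses and CLQ is designed to quantify.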
Self-Similarity in a multi-stage queueing ATM switch fabric
Recent studies of digital network traffic have shown that arrival processes in such an environment are more accurately modeled as a statistically self-similar process, rather than as a Poisson-based one. We present a simulation of a combination shared-output queueing ATM switch fabric, sourced by two models of self-similar input. The effect of self-similarity on the average queue length and cell loss probability for this multi-stage queue is examined for varying load, buffer size, and internal speedup. The results using two self-similar input models, Pareto-distributed interarrival times and a Poisson-Zeta ON-OFF model, are compared with each other and with results using Poisson interarrival times and an ON-OFF bursty traffic source with geometrically distributed burst lengths. The results show that at high utilization and a high degree of self-similarity, switch performance improves slowly with increasing buffer size and speedup, as compared to the improvement using Poisson-based traffic.
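The first of the two self-similar input models, Pareto-distributed interarrival times, can be sketched with inverse-CDF sampling. The shape parameter and scale below are illustrative, not the values used in the study.

```python
import random

def pareto_interarrivals(n, alpha=1.5, x_m=1.0, seed=42):
    """Draw n interarrival times from a Pareto(alpha, x_m) distribution via
    inverse-CDF sampling. A shape 1 < alpha < 2 gives finite mean but
    infinite variance, the heavy-tail regime associated with self-similar
    aggregate traffic."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = 1.0 - rng.random()        # u in (0, 1], avoids division by zero
        out.append(x_m / u ** (1.0 / alpha))
    return out

times = pareto_interarrivals(5)
print(all(t >= 1.0 for t in times))   # True: no sample falls below x_m
```

Feeding a queue simulator with such heavy-tailed gaps, instead of exponential ones, reproduces the burst clustering across time scales whose effect on buffer sizing the paper measures.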
Scheduling algorithms for throughput maximization in data networks
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 215-226). This thesis considers the performance implications of throughput optimal scheduling in physically and computationally constrained data networks. We study optical networks, packet switches, and wireless networks, each of which has an assortment of features and constraints that challenge the design decisions of network architects. In this work, each of these network settings is subsumed under a canonical model and scheduling framework. Tools of queueing analysis are used to evaluate network throughput properties, and demonstrate throughput optimality of scheduling and routing algorithms under stochastic traffic. Techniques of graph theory are used to study network topologies having desirable throughput properties. Combinatorial algorithms are proposed for efficient resource allocation. In the optical network setting, the key enabling technology is wavelength division multiplexing (WDM), which allows each optical fiber link to simultaneously carry a large number of independent data streams at high rate. To take advantage of this high data processing potential, engineers and physicists have developed numerous technologies, including wavelength converters, optical switches, and tunable transceivers. While the functionality provided by these devices is of great importance in capitalizing upon the WDM resources, a major challenge exists in determining how to configure these devices to operate efficiently under time-varying data traffic. In the WDM setting, we make two main contributions. First, we develop throughput optimal joint WDM reconfiguration and electronic-layer routing algorithms, based on maxweight scheduling. To mitigate the service disruption associated with WDM reconfiguration, our algorithms make decisions at frame intervals.
Second, we develop analytic tools to quantify the maximum throughput achievable in general network settings. Our approach is to characterize several geometric features of the maximum region of arrival rates that can be supported in the network. In the packet switch setting, we observe through numerical simulation the attractive throughput properties of a simple maximal weight scheduler. Subsequently, we consider small switches, and analytically demonstrate the attractive throughput properties achievable using maximal weight scheduling. We demonstrate that such throughput properties may not be sustained in larger switches. In the wireless network setting, mesh networking is a promising technology for achieving connectivity in local and metropolitan area networks. Wireless access points and base stations adhering to the IEEE 802.11 wireless networking standard can be bought off the shelf at little cost, and can be configured to access the Internet in minutes. With ubiquitous low-cost Internet access perceived to be of tremendous societal value, such technology is naturally garnering strong interest. Enabling such wireless technology is thus of great importance. An important challenge in enabling mesh networks, and many other wireless network applications, results from the fact that wireless transmission is achieved by broadcasting signals through the air, which has the potential for interfering with other parts of the network. Furthermore, the scarcity of wireless transmission resources implies that link activation and packet routing should be effected using simple distributed algorithms. We make three main contributions in the wireless setting. First, we determine graph classes under which simple, distributed, maximal weight schedulers achieve throughput optimality.
Second, we use this acquired knowledge of graph classes to develop combinatorial algorithms, based on matroids, for allocating channels to wireless links, such that each channel can achieve maximum throughput using simple distributed schedulers. Third, we determine new conditions under which distributed algorithms for joint link activation and routing achieve throughput optimality. By Andrew Brzezinski. Ph.D.
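The maximal weight scheduling idea analyzed in the packet switch setting can be illustrated with a brute-force MaxWeight matching for a small input-queued switch, using queue lengths as weights. This is a generic sketch of the policy, not the thesis's algorithms; brute force over permutations is only sensible for small N.

```python
from itertools import permutations

def maxweight_matching(Q):
    """Q[i][j] is the backlog of input i destined for output j. Each slot,
    choose the input-output matching (a permutation) that maximizes the
    total queued weight it serves."""
    n = len(Q)
    best, best_w = None, -1
    for perm in permutations(range(n)):  # perm[i] = output matched to input i
        w = sum(Q[i][perm[i]] for i in range(n))
        if w > best_w:
            best, best_w = perm, w
    return best, best_w

Q = [[3, 1], [0, 5]]
print(maxweight_matching(Q))  # ((0, 1), 8)
```

Exact MaxWeight is expensive at scale, which is why the thesis studies cheaper maximal weight schedulers and the graph classes on which they retain throughput optimality.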
EUROPEAN CONFERENCE ON QUEUEING THEORY 2016
This booklet contains the proceedings of the second European Conference in Queueing Theory (ECQT) that was held from the 18th to the 20th of July 2016 at the engineering school ENSEEIHT, Toulouse, France. ECQT is a biannual event where scientists and technicians in queueing theory and related areas get together to promote research, encourage interaction and exchange ideas. The spirit of the conference is to be a queueing event organized from within Europe, but open to participants from all over the world. The technical program of the 2016 edition consisted of 112 presentations organized in 29 sessions covering all trends in queueing theory, including the development of the theory, methodology advances, computational aspects and applications. Another exciting feature of ECQT2016 was the institution of the Takács Award for outstanding PhD thesis on "Queueing Theory and its Applications".