124 research outputs found

    Max-min Fairness in 802.11 Mesh Networks

    In this paper we build upon the recent observation that the 802.11 rate region is log-convex and, for the first time, characterise max-min fair rate allocations for a large class of 802.11 wireless mesh networks. By exploiting features of the 802.11e/n MAC, in particular TXOP packet bursting, we are able to use this characterisation to establish a straightforward, practically implementable approach for achieving max-min throughput fairness. We demonstrate that this approach can be readily extended to encompass time-based fairness in multi-rate 802.11 mesh networks.
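
    A max-min fair operating point can be computed by the classic progressive-filling procedure: raise all flow rates in lockstep and freeze a flow whenever a link (or, in the 802.11 setting, a contention region) that it crosses saturates. The sketch below illustrates this on generic link capacities; the flows, routes, and capacities are invented, and it does not reproduce the paper's log-convexity-based characterisation or its TXOP-based implementation.

```python
# A minimal progressive-filling sketch of max-min fair rate allocation.
# Generic illustration only: the flows, routes, and link capacities are
# invented, and this does not reproduce the paper's log-convexity-based
# characterisation or the TXOP mechanism.

def max_min_fair(routes, capacity):
    """routes: flow -> set of links it crosses; capacity: link -> capacity."""
    rate = {f: 0.0 for f in routes}
    frozen = set()
    residual = dict(capacity)
    while len(frozen) < len(routes):
        # Count still-active flows crossing each link.
        active = {l: sum(1 for f in routes if f not in frozen and l in routes[f])
                  for l in residual}
        # Largest equal increment before some link saturates.
        inc = min(residual[l] / active[l] for l in residual if active[l] > 0)
        for f in routes:
            if f not in frozen:
                rate[f] += inc
        for l in residual:
            residual[l] -= inc * active[l]
        # Freeze every flow that crosses a newly saturated link.
        for l in residual:
            if active[l] > 0 and residual[l] < 1e-9:
                frozen.update(f for f in routes if l in routes[f])
    return rate

if __name__ == "__main__":
    routes = {"f1": {"a", "b"}, "f2": {"b"}, "f3": {"a"}}
    capacity = {"a": 10.0, "b": 6.0}
    print(max_min_fair(routes, capacity))   # f1 and f2 get 3, f3 gets 7
```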

    Fault-Aware Resource Allocation for Heterogeneous Data Sources with Multipath Routing

    Non-convex resource allocation in communication networks

    The continuously growing number of applications competing for resources in current communication networks highlights the need for efficient resource allocation mechanisms that maximize user satisfaction. Optimization Theory provides the tools to develop such mechanisms, which allocate network resources optimally and fairly among users. However, the resource allocation problem in current networks has characteristics that turn the respective optimization problem into a non-convex one. First, current networks very often contain wireless links whose capacity is not constant but follows the Shannon capacity formula, which is a non-convex function. Second, the majority of the traffic in current networks is generated by multimedia applications, whose utilities are non-concave functions of rate. Third, current resource allocation methods follow the (bandwidth) proportional fairness policy, which, when applied to networks shared by both concave and non-concave utilities, leads to unfair resource allocations. These characteristics make current convex optimization frameworks inefficient in several respects. This work aims to develop a non-convex optimization framework able to allocate resources efficiently under non-convex resource allocation formulations. Towards this goal, a necessary and sufficient condition for the convergence of any primal-dual optimization algorithm to the optimal solution is proven. The wide applicability of this condition makes it a fundamental contribution to Optimization Theory in general. A number of optimization formulations are proposed, cases where this condition is not met are analysed, and efficient alternative heuristics are provided to handle these cases. Furthermore, a novel multi-sigmoidal utility shape is proposed to model user satisfaction for multi-tiered multimedia applications more accurately. The advantages of such non-convex utilities and their effect on the optimization process are thoroughly examined. Alternative allocation policies are also investigated with respect to their ability to allocate resources fairly and to deal with the non-convexity of the resource allocation problem. Specifically, the advantages of using Utility Proportional Fairness as an allocation policy are examined with respect to the development of distributed algorithms, their convergence to the optimal solution, and their ability to adapt to the Quality of Service requirements of each application.
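
    To make the non-convexity concrete, the sketch below runs a toy dual (sub)gradient iteration on a single link shared by users with sigmoidal utilities: each user best-responds to a link price, and the price adjusts toward the capacity constraint. All parameters (utility steepness, inflection points, capacity, step size) are assumptions made for illustration; with non-concave utilities the best response can jump discontinuously, which is precisely why a convergence condition of the kind proven in the thesis matters.

```python
# A toy dual-decomposition sketch for a single link shared by users with
# sigmoidal (non-concave) utilities. All parameters are assumptions made
# for illustration; this is not the thesis's algorithm.

import numpy as np

def sigmoid_utility(x, a, b):
    """S-shaped satisfaction: steepness a, inflection (tier threshold) b."""
    return 1.0 / (1.0 + np.exp(-a * (x - b)))

def best_response(price, a, b, grid):
    """Each user maximises U(x) - price * x over a rate grid. With a
    non-concave U this maximiser can jump discontinuously in the price,
    which is the root of the convergence difficulties discussed above."""
    return grid[np.argmax(sigmoid_utility(grid, a, b) - price * grid)]

capacity = 10.0
users = [(2.0, 2.0), (1.0, 4.0), (3.0, 3.0)]    # (steepness, inflection)
grid = np.linspace(0.0, capacity, 2001)
price, step = 0.1, 0.01

for _ in range(2000):                            # dual (sub)gradient ascent
    rates = [best_response(price, a, b, grid) for a, b in users]
    price = max(0.0, price + step * (sum(rates) - capacity))

print("rates:", [float(r) for r in rates], "link price:", round(price, 3))
```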

    In Pursuit of Desirable Equilibria in Large Scale Networked Systems

    This thesis addresses an interdisciplinary problem at the intersection of engineering, computer science and economics: in a large scale networked system, how can we achieve a desirable equilibrium that benefits the system as a whole? We approach this question from two perspectives. On the one hand, given a system architecture that imposes certain constraints, a system designer must propose efficient algorithms to optimally allocate resources to the agents that desire them. On the other hand, given algorithms that are used in practice, a performance analyst must come up with tools that can characterize these algorithms and determine when they can be optimally applied. Ideally, the two viewpoints should be integrated to obtain a simple system design with efficient algorithms that apply to it. We study the design of incentives and algorithms in such large scale networked systems under three application settings, referred to herein via the subheadings: Incentivizing Sharing in Realtime D2D Networks: A Mean Field Games Perspective; Energy Coupon: A Mean Field Game Perspective on Demand Response in Smart Grids; and, for caching, Dynamic Adaptability Properties of Caching Algorithms together with Accuracy vs. Learning Rate of Multi-level Caching Algorithms. Our application scenarios all entail an asymptotic system scaling, and an equilibrium is defined in terms of a probability distribution over system states. The question in each case is how to attain a probability distribution that possesses certain desirable properties. For the first two applications, we consider the design of specific mechanisms to steer the system toward a desirable equilibrium under self-interested decision making. The environments in these problems are such that there is a set of shared resources, and a mechanism is used during each time step to allocate resources to agents that are selfish and interact via a repeated game. These models are motivated by resource sharing systems in the context of data communication, transportation, and power transmission networks. The objective is to ensure that the achieved equilibria are socially desirable. Formally, we show that a Mean Field Game can be used to accurately approximate these repeated game frameworks, and we describe mechanisms under which socially desirable Mean Field Equilibria exist. For the third application, we focus on performance analysis via new metrics that determine the value of the attained equilibrium distribution of cache contents when different replacement algorithms are used in cache networks. The work is motivated by the fact that typical performance analysis of caching algorithms consists of determining the hit probability under a fixed arrival process of requests, which does not account for dynamic variability of request arrivals. Our main contribution is to define a function that accounts for both the error due to the time lag in learning the items' popularity and the error due to the inaccuracy of that learning, and to characterize the tradeoff between the two that conventional algorithms achieve. We then use the insights gained in this exercise to design new algorithms that are demonstrably superior.
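
    The learning-lag effect on caching is easy to reproduce in simulation. The hedged sketch below measures windowed hit rates of an LRU cache under a Zipf request stream whose popularity ranking is reshuffled halfway through: the hit rate dips after the change and then recovers as the cache relearns. Catalogue size, cache size, horizon, and the Zipf exponent are invented parameters, and LRU is only a stand-in for the algorithms analysed in the thesis.

```python
# A hedged simulation sketch of the learning-lag effect: windowed hit
# rates of an LRU cache under a Zipf request stream whose popularity
# ranking is reshuffled halfway through. All parameters are invented.

import random
from collections import OrderedDict
from itertools import accumulate

rng = random.Random(0)
n_items, cache_size, horizon = 500, 50, 100_000
items = list(range(n_items))
cum = list(accumulate(1.0 / (k ** 0.8) for k in range(1, n_items + 1)))

cache = OrderedDict()              # LRU: most recently used at the end
win_hits = win_total = 0
for t in range(horizon):
    if t == horizon // 2:          # abrupt popularity change:
        rng.shuffle(items)         # remap popularity ranks to new items
    x = rng.choices(items, cum_weights=cum)[0]
    win_total += 1
    if x in cache:
        win_hits += 1
        cache.move_to_end(x)
    else:
        cache[x] = True
        if len(cache) > cache_size:
            cache.popitem(last=False)   # evict the least recently used
    if (t + 1) % 10_000 == 0:           # hit rate dips after t = 50_000,
        print(f"t={t+1:6d}  window hit rate = {win_hits/win_total:.3f}")
        win_hits = win_total = 0        # then recovers as LRU relearns
```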

    Cognitive Communications in White Space: Opportunistic Scheduling, Spectrum Shaping and Delay Analysis

    A unique feature, yet a challenge, in cognitive radio (CR) networks is the user hierarchy: secondary users (SUs) wishing to transmit data must defer in the presence of active primary users (PUs), whose priority in channel access is strictly higher. Under a common thread of characterizing and improving Quality of Service (QoS) for the SUs, this dissertation is organized under two main thrusts: the first thrust focuses on SU throughput, exploiting the underlying properties of the PU spectrum to construct effective scheduling algorithms; the second thrust addresses another important QoS metric for the SUs, namely delay, subject to the impact of the PUs' activities, and proposes enhancement and control mechanisms. More specifically, in the first thrust, opportunistic spectrum scheduling for the SU is first considered by jointly exploiting the memory in the PU's occupancy and channel fading. In particular, the underexplored scenario where PU occupancy exhibits a long temporal memory is taken into consideration. By casting the problem as a partially observable Markov decision process, a set of multi-tier tradeoffs is quantified and illustrated. Next, a spectrum shaping framework is proposed that leverages network coding as a "spectrum shaper" of the PU's traffic. The shaping effect makes the primary spectrum more predictable, which the SUs exploit to carry out adaptive channel sensing by prioritizing the channel access order, significantly improving their throughput. On the other hand, such predictability can make wireless channels more susceptible to jamming attacks; as a result, care must be taken in designing wireless systems to balance throughput against jamming resistance. The second thrust turns to an equally important performance metric, namely delay. Specifically, queueing delay analysis is conducted for SUs employing random access over the PU channels. A fluid approximation is adopted, and Poisson-driven stochastic differential equations are applied to characterize the moments of the SUs' steady-state queueing delay. Dynamic packet generation control mechanisms are then developed to meet given delay requirements for the SUs.
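
    A minimal version of the belief tracking that underlies such a POMDP formulation is sketched below: each PU channel's idle probability is propagated through an assumed two-state Markov chain and updated by Bayes' rule after each noisy sensing outcome, and the SU senses the channel it currently believes most likely idle. The transition and sensing probabilities are invented, and the myopic channel choice is only a stand-in for the dissertation's full scheduling policy.

```python
# A minimal belief-tracking sketch for opportunistic sensing of PU
# channels whose occupancy follows a two-state Markov chain. The
# probabilities and the myopic channel choice are assumptions.

import random

P_II, P_BI = 0.9, 0.2            # P(idle -> idle), P(busy -> idle)
P_DET, P_FA = 0.9, 0.1           # P(sense busy | busy), P(sense busy | idle)

def predict(b):
    """Propagate the idle-probability belief one step through the chain."""
    return b * P_II + (1.0 - b) * P_BI

def update(b, sensed_busy):
    """Bayes-update the idle belief from one noisy sensing outcome."""
    li = P_FA if sensed_busy else 1.0 - P_FA      # likelihood if idle
    lb = P_DET if sensed_busy else 1.0 - P_DET    # likelihood if busy
    return li * b / (li * b + lb * (1.0 - b))

rng = random.Random(1)
beliefs = [0.5, 0.5, 0.5]        # three candidate PU channels
idle = [True, False, True]       # hidden true states

for slot in range(10):
    beliefs = [predict(b) for b in beliefs]
    k = max(range(3), key=lambda i: beliefs[i])   # sense most promising channel
    sensed_busy = rng.random() < (P_FA if idle[k] else P_DET)
    beliefs[k] = update(beliefs[k], sensed_busy)
    print(f"slot {slot}: ch{k} sensed {'busy' if sensed_busy else 'idle'}, "
          f"beliefs = {[round(b, 2) for b in beliefs]}")
    idle = [rng.random() < (P_II if s else P_BI) for s in idle]  # evolve truth
```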

    Layering as Optimization Decomposition: A Mathematical Theory of Network Architectures

    Design of Network Coding Schemes and RF Energy Transfer in Wireless Communication Networks

    This thesis focuses on the design of network coding schemes and radio frequency (RF) energy transfer in wireless communication networks. During the past few years, network coding has attracted significant attention because of its capability to transmit the maximum possible information in a network from multiple sources to multiple destinations via a relay. Ordinarily, the destinations are able to decode the information only with sufficient prior knowledge. To enable the destinations to decode in cases with less or no prior knowledge, a family of nested codes with multiple interpretations, built from binary convolutional codes, is constructed in a multi-source multi-destination wireless relay network. I then reconstruct nested codes with convolutional codes and lattice codes in multi-way relay channels to improve spectrum efficiency. Moreover, to reduce the high decoding complexity caused by the adopted convolutional codes, a network-coded non-binary low-density generator matrix (LDGM) code structure is proposed for a multi-access relay system. The other focus of this thesis is the design of RF-enabled wireless energy transfer (WET) schemes. RF-enabled WET has attracted much attention because it enables wireless devices to harvest energy from wireless signals for their intended applications. I first consider a power beacon (PB)-assisted wireless-powered communication network (PB-WPCN), which consists of a set of hybrid access point (AP)-source pairs and a PB. Both cooperative and non-cooperative scenarios are considered, depending on whether the PB cooperates with the APs. Finally, I develop a new distributed power control scheme for a power-splitting-based interference channel (IFC) with simultaneous wireless information and power transfer (SWIPT), where the considered IFC consists of multiple source-destination pairs.
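
    The role of prior knowledge in decoding is easiest to see in the textbook two-way relay example, sketched below: the relay broadcasts the XOR of two packets, and each destination strips off the packet it already knows. This is only the simplest instance of the idea; it does not reproduce the nested convolutional or lattice constructions developed in the thesis.

```python
# The textbook two-way relay illustration of decoding with prior
# knowledge: the relay broadcasts the XOR of the two packets and each
# destination removes the packet it already knows.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

msg_a = b"hello"                     # known a priori at source A
msg_b = b"world"                     # known a priori at source B

coded = xor_bytes(msg_a, msg_b)      # one relay broadcast serves both sides

assert xor_bytes(coded, msg_b) == msg_a   # B recovers A's packet
assert xor_bytes(coded, msg_a) == msg_b   # A recovers B's packet
print("relay sends", coded, "and both sides decode correctly")
```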

    Efficient decentralized communications in sensor networks

    This thesis is concerned with problems of decentralized communication in large networks, namely joint rate allocation and transmission of data sources measured at the nodes, and control of the multiple access of sources to a shared medium. In our study, we consider in particular the important case of a sensor network measuring correlated data. In the first part of this thesis, we consider the problem of correlated data gathering by a network with a sink node and a tree communication structure, where the goal is to minimize the total cost of transporting the information collected by the nodes to the sink. Two coding strategies are analyzed: a Slepian-Wolf model, where optimal coding is complex and transmission optimization is simple, and a joint entropy coding model with explicit communication, where coding is simple and transmission optimization is difficult. This problem requires a joint optimization of the rate allocation at the nodes and of the transmission structure. For the Slepian-Wolf setting, we derive a closed-form solution and an efficient distributed approximation algorithm with good performance, and we generalize our results to the case of multiple sinks. For the explicit communication case, we prove that building an optimal data gathering tree is NP-complete, and we propose various distributed approximation algorithms. We compare asymptotically, for dense networks, the total costs associated with Slepian-Wolf coding and explicit communication, by finding their corresponding scaling laws and analyzing the ratio of their respective costs. We argue that, for large networks and under certain conditions on the correlation structure, "intelligent" but more complex Slepian-Wolf coding provides unbounded gains over the widely used straightforward approach of opportunistic aggregation and compression by explicit communication. In the second part of this thesis, we consider a queueing problem in which the service rate of a queue is a function of a partially observed Markov chain, and in which the arrivals are controlled based on those partial observations so as to keep the system in a desirable, mildly unstable regime. The optimal controller for this problem satisfies a separation property: we first compute a probability measure on the state space of the chain, namely the information state, and then use this measure as the new state on which control decisions are based. We give a formal description of the system and its dynamics, formalize and solve an optimal control problem, and show numerical simulations illustrating properties of the optimal control law through concrete examples. We show how the ergodic behavior of our queueing model is characterized by an invariant measure over all possible information states, and we construct that measure. Our results may be applied to the design of efficient and stable algorithms for medium access control in multiple-access systems, in particular sensor networks.
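
    The closed-form Slepian-Wolf allocation has a simple greedy structure: visiting nodes in order of increasing path cost to the sink, each node codes at the entropy of its source conditioned on all closer nodes, and ships those bits along its shortest path. The sketch below illustrates this with a made-up joint-entropy model and topology; note that the per-node rates sum to the joint entropy of all sources, as Slepian-Wolf requires.

```python
# A sketch of the greedy structure of the closed-form Slepian-Wolf
# allocation. The joint-entropy model and topology are invented; in
# practice they come from the measured correlation structure.

from math import log2

def joint_entropy(nodes):
    """Toy correlated-source model: each additional source contributes
    progressively less new information (4 * log2(1 + n) bits total)."""
    return 4.0 * log2(1 + len(nodes))

dist = {"n1": 1.0, "n2": 2.0, "n3": 3.5}   # shortest-path cost to the sink

closer, total_cost = [], 0.0
for node in sorted(dist, key=dist.get):    # nearest to the sink first
    rate = joint_entropy(closer + [node]) - joint_entropy(closer)
    total_cost += rate * dist[node]        # bits times per-bit path cost
    closer.append(node)
    print(f"{node}: rate = {rate:.2f} bits, path cost = {dist[node]}")

# The rates sum to the joint entropy of all sources, as SW requires.
print(f"total transmission cost = {total_cost:.2f}")
```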

    Using hypergraph theory to model coexistence management and coordinated spectrum allocation for heterogeneous wireless networks operating in shared spectrum

    Electromagnetic waves in the Radio Frequency (RF) spectrum are used to convey wireless transmissions from one radio antenna to another. The spectrum utilisation factor, which refers to how readily a given spectrum can be reused across space and time while maintaining an acceptable level of transmission errors, measures how efficiently a unit of frequency spectrum can be allocated to a specified number of users. The demand for wireless applications is increasing exponentially, hence the need for efficient management of the RF spectrum; yet spectrum usage studies have shown that the spectrum is under-utilised in space and time. A regulatory shift from static spectrum assignment to Dynamic Spectrum Access (DSA) is one way of addressing this. Licence-exemption policy has also been advanced in DSA systems to spur wireless innovation and universal access to the internet. Furthermore, there is a shift from homogeneous to heterogeneous radio access and usage of the same spectrum band. These three shifts from traditional spectrum management have led to the challenge of coexistence among heterogeneous wireless networks that access the spectrum using DSA techniques. Cognitive radios are capable of spectrum agility based on spectrum conditions; however, in the presence of multiple heterogeneous networks and without spectrum coordination, switching between available channels so as to minimise interference and maximise spectrum allocation is a challenge. This thesis therefore focuses on the design of a framework for coexistence management and spectrum coordination, with the objective of maximising spectrum utilisation across geographical space and across time. The geographical coverage in which a frequency can be used is optimised through frequency reuse while ensuring that harmful interference is minimised. The time during which spectrum is occupied is increased through time-sharing of the same spectrum by two or more networks, while ensuring that spectrum is shared by networks that can coexist in the same spectrum and that the total channel load is not excessive, to prevent spectrum starvation. Conventionally, a graph is used to model relationships between entities, such as interference relationships among networks. However, the concept of an edge in a graph cannot capture relationships that involve more than two entities, such as more than two networks that are able to share the same channel in the time domain, because an edge can only connect two entities. A hypergraph, on the other hand, is a generalisation of an undirected graph in which a hyperedge can connect more than two entities. This thesis therefore investigates the use of hypergraph theory to model the RF environment and the spectrum allocation scheme. The hypergraph model was applied to an algorithm for spectrum sharing among 100 heterogeneous wireless networks whose geo-locations were randomly and independently generated in a 50 km by 50 km area. Simulation results show that the hypergraph-based model allocated channels, on average, to 8% more networks than the graph-based model, and that, for the same RF environment, the hypergraph model requires up to 36% fewer channels to achieve, on average, 100% operational networks. The running time of the hypergraph-based algorithm grows quadratically with the input size, like that of the graph-based algorithm; thus, the model achieved better performance at no additional time complexity.
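
    The advantage of hyperedges over pairwise edges can be illustrated with a small greedy channel-assignment sketch: under a standard hypergraph-colouring model, a channel is forbidden to a network only when the assignment would leave an entire hyperedge on that channel, whereas a pairwise graph forbids a channel as soon as any single neighbour uses it, so the hypergraph needs fewer channels. The toy topology below is invented, and this generic colouring model is only a stand-in for the thesis's richer scheme, in which hyperedges also capture sets of networks that can time-share a channel.

```python
# A generic greedy hypergraph-colouring sketch: a channel is forbidden
# to a network only if assigning it would leave an entire hyperedge on
# that channel; pairwise (size-2) edges forbid a channel as soon as one
# neighbour holds it. Topology invented for illustration.

def greedy_colouring(nodes, hyperedges):
    channel = {}
    for v in nodes:
        c = 0
        while any(v in e and all(channel.get(u) == c for u in e if u != v)
                  for e in hyperedges):
            c += 1          # channel c would make some hyperedge monochromatic
        channel[v] = c
    return channel

nets = ["A", "B", "C", "D"]
graph_edges = [{"A", "B"}, {"B", "C"}, {"A", "C"}, {"C", "D"}]  # pairwise
hyper_edges = [{"A", "B", "C"}, {"C", "D"}]   # only the full trio conflicts

print("graph model     :", greedy_colouring(nets, graph_edges))   # 3 channels
print("hypergraph model:", greedy_colouring(nets, hyper_edges))   # 2 channels
```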