
    Applications of Repeated Games in Wireless Networks: A Survey

    A repeated game is an effective tool to model interactions and conflicts among players aiming to achieve their objectives on a long-term basis. Contrary to static noncooperative games, which model an interaction among players in only one period, in repeated games the interactions repeat over multiple periods; the players thus become aware of other players' past behaviors and of their own future benefits, and adapt their behavior accordingly. In wireless networks, conflicts among wireless nodes can lead to selfish behaviors, resulting in poor network performance and detrimental individual payoffs. In this paper, we survey the applications of repeated games in different wireless networks. The main goal is to demonstrate the use of repeated games to encourage wireless nodes to cooperate, thereby improving network performance and avoiding network disruption due to selfish behaviors. Furthermore, various problems in wireless networks and variations of repeated game models, together with the corresponding solutions, are discussed in this survey. Finally, we outline some open issues and future research directions.
    Comment: 32 pages, 15 figures, 5 tables, 168 references
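
    As a minimal illustration of the cooperation mechanism the survey describes (an editorial sketch, not taken from the paper), the snippet below simulates two nodes playing a repeated packet-forwarding game under a grim-trigger strategy; the payoff values and the deviation round are assumptions chosen purely for illustration.

```python
# Illustrative sketch: a repeated packet-forwarding game between two wireless
# nodes. Forwarding a neighbor's packet costs c, having one's own packet
# forwarded yields benefit b > c. Under grim trigger, a node cooperates until
# it observes a defection and then defects forever, so a deviation forfeits
# all future cooperation gains. All numbers are assumed for illustration only.

def stage_payoff(i_forward: bool, peer_forwards: bool, b: float = 1.0, c: float = 0.4) -> float:
    """One-shot payoff: benefit if the peer forwards for me, cost if I forward."""
    return (b if peer_forwards else 0.0) - (c if i_forward else 0.0)

def play_repeated_game(rounds: int, a_defects_from=None):
    """Both nodes use grim trigger; node A may unilaterally defect from a given round."""
    total_a = total_b = 0.0
    a_cooperates = b_cooperates = True
    for t in range(rounds):
        a_action = a_cooperates and not (a_defects_from is not None and t >= a_defects_from)
        b_action = b_cooperates
        total_a += stage_payoff(a_action, b_action)
        total_b += stage_payoff(b_action, a_action)
        # Grim trigger: stop cooperating after observing a defection.
        a_cooperates = a_cooperates and b_action
        b_cooperates = b_cooperates and a_action
    return total_a, total_b

print(play_repeated_game(20))                    # sustained cooperation: (12.0, 12.0)
print(play_repeated_game(20, a_defects_from=5))  # defection: A ends up with far less
```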

    Demystifying the Scaling Laws of Dense Wireless Networks: No Linear Scaling in Practice

    We optimize the hierarchical cooperation protocol of Ozgur, Leveque and Tse, which is supposed to yield almost linear scaling of the capacity of a dense wireless network with the number of users $n$. Exploiting recent results on the optimality of "treating interference as noise" in Gaussian interference channels, we are able to optimize the achievable average per-link rate and not just its scaling law. Our optimized hierarchical cooperation protocol significantly outperforms the originally proposed scheme. On the negative side, we show that even for very large $n$, the rate scaling is far from linear, and the optimal number of stages $t$ is less than 4, instead of $t \rightarrow \infty$ as required for almost linear scaling. Combining our results with the fact that, beyond a certain user density, the network capacity is fundamentally limited by Maxwell's laws, as shown by Franceschetti, Migliore and Minero, we argue that there is indeed no intermediate regime of linear scaling for dense networks in practice.
    Comment: 5 pages, 6 figures, ISIT 2014. arXiv admin note: substantial text overlap with arXiv:1402.181
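
    For context (an editorial aside relying on the standard Ozgur-Leveque-Tse analysis, not a result stated in this abstract), the aggregate throughput of a $t$-stage hierarchical cooperation scheme scales, up to polylogarithmic factors, as sketched below, which is why a bounded optimal $t$ keeps the scaling well short of linear.

```latex
% Assumed background scaling for t-stage hierarchical cooperation
% (Ozgur-Leveque-Tse), stated up to polylogarithmic factors:
\[
  T(n) = \Theta\!\left(n^{\frac{t}{t+1}}\right)
  \qquad\Longrightarrow\qquad
  \text{per-user rate} \sim n^{-\frac{1}{t+1}} .
\]
% Linear aggregate scaling T(n) = \Theta(n) would require t -> \infty;
% with an optimal t of at most 3, the exponent is at most 3/4.
```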

    The Balanced Unicast and Multicast Capacity Regions of Large Wireless Networks

    We consider the question of determining the scaling of the $n^2$-dimensional balanced unicast and the $n 2^n$-dimensional balanced multicast capacity regions of a wireless network with $n$ nodes placed uniformly at random in a square region of area $n$ and communicating over Gaussian fading channels. We identify this scaling of both the balanced unicast and multicast capacity regions in terms of $\Theta(n)$, out of $2^n$ total possible, cuts. These cuts depend only on the geometry of the locations of the source nodes and their destination nodes and on the traffic demands between them, and thus can be readily evaluated. Our results are constructive and provide optimal (in the scaling sense) communication schemes.
    Comment: 37 pages, 7 figures, to appear in IEEE Transactions on Information Theory

    Optimal Resource Allocation and Relay Selection in Bandwidth Exchange Based Cooperative Forwarding

    In this paper, we investigate joint optimal relay selection and resource allocation under bandwidth exchange (BE) enabled incentivized cooperative forwarding in wireless networks. We consider an autonomous network where N nodes transmit data in the uplink to an access point (AP) / base station (BS). We consider the scenario where each node gets an initial amount of bandwidth (equal, optimal based on the direct path, or arbitrary) and uses this bandwidth as a flexible incentive for two-hop relaying. We focus on alpha-fair network utility maximization (NUM) and outage reduction in this environment. Our contribution is two-fold. First, we propose an incentivized forwarding based resource allocation algorithm which maximizes the global utility while preserving the initial utility of each cooperative node. Second, defining the link weight of each relay pair as the utility gain due to cooperation (over noncooperation), we show that the optimal relay selection in alpha-fair NUM reduces to the maximum weighted matching (MWM) problem in a non-bipartite graph. Numerical results show that the proposed algorithms provide 20-25% gain in spectral efficiency and 90-98% reduction in outage probability.
    Comment: 8 pages, 7 figures
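
    The relay-pairing step lends itself to a short sketch. The following is an editorial illustration, not the authors' implementation: the node names, rates and the alpha parameter are assumptions; edge weights are the utility gains from cooperating, and the pairing is obtained with networkx's general-graph maximum weighted matching.

```python
# Illustrative sketch of MWM-based relay pairing (all rates and names assumed).
import math
import networkx as nx

def alpha_fair_utility(rate: float, alpha: float = 2.0) -> float:
    """Standard alpha-fair utility of a rate (log utility when alpha == 1)."""
    return math.log(rate) if alpha == 1.0 else rate ** (1.0 - alpha) / (1.0 - alpha)

# Hypothetical per-node rates without cooperation and with a given partner.
direct_rate = {"a": 1.0, "b": 0.3, "c": 2.0, "d": 0.5}
coop_rate = {("a", "b"): (1.1, 0.9), ("a", "d"): (1.05, 1.0),
             ("c", "b"): (2.1, 1.2), ("c", "d"): (2.2, 0.9)}

# Edge weight = total utility gain of a pair from cooperating over not cooperating.
G = nx.Graph()
for (u, v), (ru, rv) in coop_rate.items():
    gain = (alpha_fair_utility(ru) + alpha_fair_utility(rv)
            - alpha_fair_utility(direct_rate[u]) - alpha_fair_utility(direct_rate[v]))
    if gain > 0:  # keep only pairs that actually benefit from cooperation
        G.add_edge(u, v, weight=gain)

# Optimal relay pairing = maximum weighted matching on the non-bipartite graph.
print(nx.max_weight_matching(G))
```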

    Throughput-Delay Trade-off for Hierarchical Cooperation in Ad Hoc Wireless Networks

    Hierarchical cooperation has recently been shown to achieve better throughput scaling than classical multihop schemes under certain assumptions on the channel model in static wireless networks. However, the end-to-end delay of this scheme turns out to be significantly larger than those of multihop schemes. A modification of the scheme is proposed here that achieves a throughput-delay trade-off $D(n) = (\log n)^2 T(n)$ for $T(n)$ between $\Theta(\sqrt{n}/\log n)$ and $\Theta(n/\log n)$, where $D(n)$ and $T(n)$ are respectively the average delay per bit and the aggregate throughput in a network of $n$ nodes. This trade-off complements the previous results of El Gamal et al., which show that the throughput-delay trade-off for multihop schemes is given by $D(n) = T(n)$, where $T(n)$ lies between $\Theta(1)$ and $\Theta(\sqrt{n})$. Meanwhile, the present paper considers the network multiple-access problem, which may be of interest in its own right.
    Comment: 9 pages, 6 figures, to appear in IEEE Transactions on Information Theory, submitted Dec 200
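
    A quick worked comparison of the two trade-offs at their maximum-throughput endpoints (an editorial illustration, simply plugging the stated expressions into each other):

```latex
% Hierarchical cooperation at its maximum-throughput endpoint:
\[
  T(n) = \Theta\!\left(\frac{n}{\log n}\right)
  \;\Longrightarrow\;
  D(n) = (\log n)^2\, T(n) = \Theta(n \log n),
\]
% versus classical multihop at its maximum-throughput endpoint:
\[
  T(n) = \Theta(\sqrt{n})
  \;\Longrightarrow\;
  D(n) = T(n) = \Theta(\sqrt{n}).
\]
% The higher aggregate throughput of hierarchical cooperation thus comes at the
% price of a delay larger by roughly a factor of \sqrt{n}\,\log n.
```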

    Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation

    Sensor networks potentially feature large numbers of nodes that can sense their environment over time, communicate with each other over a wireless network, and process information. They differ from data networks in that the network as a whole may be designed for a specific application. We study the theoretical foundations of such large-scale sensor networks, addressing four fundamental issues: connectivity, capacity, clocks, and function computation. To begin with, a sensor network must be connected so that information can indeed be exchanged between nodes. The connectivity graph of an ad-hoc network is modeled as a random graph, and the critical range for asymptotic connectivity is determined, as well as the critical number of neighbors that a node needs to connect to. Next, given connectivity, we address the issue of how much data can be transported over the sensor network. We present fundamental bounds on capacity under several models, as well as architectural implications for how wireless communication should be organized. Temporal information is important both for the applications of sensor networks and for their operation. We present fundamental bounds on the synchronizability of clocks in networks, and also present and analyze algorithms for clock synchronization. Finally, we turn to the task that sensor networks are designed for: gathering relevant information. One needs to study optimal strategies for in-network aggregation of data, in order to reliably compute a composite function of sensor measurements, as well as the complexity of doing so. For some classes of functions, we address how such computation can be performed efficiently in a sensor network and present algorithms for doing so.
    Comment: 10 pages, 3 figures, submitted to the Proceedings of the IEEE
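
    As an illustrative companion to the connectivity result (an editorial sketch under assumptions, not the paper's construction): for $n$ nodes placed uniformly in a unit square, the critical connectivity radius is commonly written as $r(n) = \sqrt{(\log n + c)/(\pi n)}$, with connectivity holding with high probability exactly when $c \to \infty$. The snippet below probes that threshold empirically with networkx.

```python
# Empirical probe of the critical connectivity radius of a random geometric
# graph on the unit square (the threshold form is the standard Gupta-Kumar
# expression, assumed here for illustration rather than taken from the paper).
import math
import random
import networkx as nx

def connectivity_rate(n: int, c: float, trials: int = 50) -> float:
    """Fraction of trials in which n uniform nodes with radius
    r = sqrt((log n + c) / (pi * n)) form a connected graph."""
    r = math.sqrt((math.log(n) + c) / (math.pi * n))
    connected = 0
    for _ in range(trials):
        pos = {i: (random.random(), random.random()) for i in range(n)}
        connected += nx.is_connected(nx.random_geometric_graph(n, r, pos=pos))
    return connected / trials

for c in (-2.0, 0.0, 4.0):
    print(f"c = {c:+.1f}: connected in {connectivity_rate(500, c):.0%} of trials")
```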

    Relays for Interference Mitigation in Wireless Networks

    Wireless links play an important role in last-mile network connectivity. In contrast to the strictly centralized approach of today's wireless systems, the future promises decentralization of network management. Nodes potentially engage in localized grouping and organization based on their neighborhood to carry out complex goals such as end-to-end communication. The quadratic energy dissipation of the wireless medium necessitates the presence of certain relay nodes in the network. Conventionally, the role of such relays is limited to passing messages along a chain in a point-to-point hopping architecture. With decentralization, multiple nodes could potentially interfere with each other. This work proposes a technique to exploit the presence of relays in a way that mitigates interference between the network nodes. Optimal spatial locations and transmission schemes which enhance this gain are identified.
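
    The "quadratic energy dissipation" point admits a one-line worked example (an editorial illustration assuming a free-space path-loss exponent of 2 and ignoring circuit and reception energy):

```latex
% Transmitting directly over distance d radiates energy proportional to d^2,
% whereas relaying through a midpoint node radiates
\[
  2\left(\frac{d}{2}\right)^{2} = \frac{d^{2}}{2},
\]
% i.e. half as much, which is why relay nodes become attractive and why, as the
% abstract notes, their spatial placement and transmission schemes matter.
```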

    Opportunistic Relaying in Wireless Networks

    Relay networks having $n$ source-to-destination pairs and $m$ half-duplex relays, all operating in the same frequency band in the presence of block fading, are analyzed. This setup has attracted significant attention and several relaying protocols have been reported in the literature. However, most of the proposed solutions require either centrally coordinated scheduling or detailed channel state information (CSI) at the transmitter side. Here, an opportunistic relaying scheme is proposed, which alleviates these limitations. The scheme entails a two-hop communication protocol, in which sources communicate with destinations only through half-duplex relays. The key idea is to schedule at each hop only a subset of nodes that can benefit from multiuser diversity. To select the source and destination nodes for each hop, it requires only CSI at the receivers (relays for the first hop, and destination nodes for the second hop) and an integer-valued CSI feedback to the transmitters. For the case when $n$ is large and $m$ is fixed, it is shown that the proposed scheme achieves a system throughput of $m/2$ bits/s/Hz. In contrast, the information-theoretic upper bound of $(m/2) \log\log n$ bits/s/Hz is achievable only with more demanding CSI assumptions and cooperation between the relays. Furthermore, it is shown that, under the condition that the product of block duration and system bandwidth scales faster than $\log n$, the achievable throughput of the proposed scheme scales as $\Theta(\log n)$. Notably, this is proven to be the optimal throughput scaling even if centralized scheduling is allowed, thus proving the optimality of the proposed scheme in the scaling law sense.
    Comment: 17 pages, 8 figures, to appear in IEEE Transactions on Information Theory
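
    The multiuser-diversity effect behind the $\log\log n$ factor can be sketched numerically (an editorial illustration assuming i.i.d. Rayleigh fading, not the paper's protocol): the largest of $n$ independent exponential power gains grows like $\log n$, so the rate of the selected link grows roughly like $\log\log n$.

```python
# Editorial sketch of multiuser diversity under i.i.d. Rayleigh fading (assumed
# model, not the paper's scheme): picking the user with the strongest of n
# exponential power gains makes the selected link's rate grow like log log n.
import math
import random

def best_user_rate(n: int, snr: float = 1.0, trials: int = 2000) -> float:
    """Average rate (bits/s/Hz) of the strongest of n i.i.d. exponential gains."""
    total = 0.0
    for _ in range(trials):
        best_gain = max(random.expovariate(1.0) for _ in range(n))
        total += math.log2(1.0 + snr * best_gain)
    return total / trials

for n in (1, 10, 100, 1000):
    print(f"n = {n:4d}: average selected-link rate ~ {best_user_rate(n):.2f} bits/s/Hz")
```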