4,155 research outputs found

    Enhancing IEEE 802.11 MAC in congested environments

    IEEE 802.11 is currently the most widely deployed wireless local area networking standard. It uses carrier sense multiple access with collision avoidance (CSMA/CA) to resolve contention between nodes. Contention windows (CW) change dynamically to adapt to the contention level: upon each collision, a node doubles its CW to reduce the risk of further collisions. Upon a successful transmission, the CW is reset, on the assumption that the contention level has dropped. However, the contention level is more likely to change slowly, and resetting the CW causes new collisions and retransmissions before the CW reaches the optimal value again. This wastes bandwidth and increases delays. In this paper we analyze simple slow CW decrease functions and compare their performance to that of the legacy standard. We use simulations and mathematical modeling to show that they yield considerable improvements at all contention levels and in transient phases, especially in highly congested environments.
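    The reset-versus-slow-decrease behaviour described above can be captured in a few lines. The sketch below is illustrative only: the multiplicative decrease factor and the CW bounds are assumptions, not values from the paper, which studies a family of slow-decrease functions.

```python
# Minimal sketch of contention-window (CW) adaptation in CSMA/CA.
# Legacy 802.11 resets CW to CW_MIN after a success; a slow-decrease
# variant (one example of the class studied in the paper) shrinks it
# gradually instead. The factor 0.5 and the bounds are assumptions.

CW_MIN, CW_MAX = 16, 1024

def on_collision(cw):
    """Binary exponential backoff: double CW, capped at CW_MAX."""
    return min(2 * cw, CW_MAX)

def on_success_legacy(cw):
    """Legacy 802.11: reset CW to the minimum after a success."""
    return CW_MIN

def on_success_slow(cw, factor=0.5):
    """Slow decrease: shrink CW gradually, keeping memory of recent contention."""
    return max(int(cw * factor), CW_MIN)

cw = CW_MIN
for _ in range(3):
    cw = on_collision(cw)                # 16 -> 32 -> 64 -> 128
print(on_success_legacy(cw), on_success_slow(cw))  # 16 vs. 64
```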

    Performance analysis of contention based bandwidth request mechanisms in WiMAX networks

    WiMAX networks have received wide attention as they support high data rate access and ubiquitous connectivity with quality-of-service (QoS) capabilities. In order to support QoS, bandwidth request (BW-REQ) mechanisms are specified in the WiMAX standard for resource reservation, in which subscriber stations send BW-REQs to a base station, which can grant or reject the requests according to the available radio resources. In this paper we propose a new analytical model for the performance analysis of various contention-based bandwidth request mechanisms, including the grouping and no-grouping schemes suggested in the WiMAX standard. Our analytical model covers both unsaturated and saturated traffic load conditions in both error-free and error-prone wireless channels. The accuracy of this model is verified by various simulation results. Our results show that the grouping mechanism outperforms the no-grouping mechanism when the system load is high, but it is not preferable when the system load is light. Channel noise degrades both throughput and delay performance. This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) under Grant EP/G070350/1 and by Brunel University's BRIEF Award.
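    As a rough illustration of why grouping pays off only under heavy load, the toy model below lets one group contend per frame and counts BW-REQs that land alone in a slot. The station, slot, and group counts are made-up parameters, and the function is a sketch, not the paper's analytical model or the standard's exact procedure.

```python
import random

def bwreq_successes_per_frame(num_stations, num_slots, num_groups=1, frames=2000):
    """Toy model of contention-based BW-REQ. Stations are split into groups and,
    with grouping, only one group (rotating per frame) is allowed to contend.
    A BW-REQ succeeds when exactly one station picks a given slot.
    All parameter values are illustrative, not taken from the standard."""
    total = 0
    for f in range(frames):
        active = [s for s in range(num_stations) if s % num_groups == f % num_groups]
        picks = [random.randrange(num_slots) for _ in active]
        total += sum(1 for p in picks if picks.count(p) == 1)
    return total / frames

# Grouping helps when the load is high but wastes opportunities when it is light,
# matching the qualitative finding stated in the abstract:
print(bwreq_successes_per_frame(60, 12, num_groups=1))  # heavy load, no grouping
print(bwreq_successes_per_frame(60, 12, num_groups=3))  # heavy load, grouping
```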

    Dynamic Packet Scheduling in Wireless Networks

    We consider protocols that serve communication requests arising over time in a wireless network that is subject to interference. Unlike previous approaches, we take the geometry of the network and power control into account, both of which can significantly increase the network's performance. We introduce a stochastic and an adversarial model to bound the packet injection. Although it is our primary motivation, this approach is not only suitable for models based on the signal-to-interference-plus-noise ratio (SINR). It also covers virtually all other common interference models, for example the multiple-access channel, the radio-network model, the protocol model, and distance-2 matching. Packet-routing networks that allow each edge or each node to transmit or receive one packet at a time can be modeled as well. Starting from algorithms for the respective scheduling problem with static transmission requests, we build distributed stable protocols. This is more involved than in previous, similar approaches because the algorithms we consider do not necessarily scale linearly when scaling the input instance. We can guarantee a throughput as large as that of the original static algorithm. In particular, for SINR models the competitive ratios of the protocol, in comparison to optimal ones in the respective model, are between constant and O(log^2 m) for a network of size m. Comment: 23 pages.
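    The scheduling results above build on a static feasibility test in the geometric SINR model; a minimal version of that test is sketched below. The path-loss exponent, SINR threshold, and noise level are placeholder values, not parameters from the paper.

```python
import math

def sinr_feasible(links, alpha=3.0, beta=1.0, noise=1e-9):
    """Check whether a set of simultaneous transmissions is feasible in the
    geometric SINR model: each link needs received power over noise plus
    interference to reach the threshold beta. `links` is a list of
    (sender_xy, receiver_xy, power) tuples; alpha, beta, and noise are
    illustrative defaults."""
    def gain(p, q):
        return math.dist(p, q) ** (-alpha)   # path-loss gain between two points

    for i, (s_i, r_i, p_i) in enumerate(links):
        signal = p_i * gain(s_i, r_i)
        interference = sum(p_j * gain(s_j, r_i)
                           for j, (s_j, _r_j, p_j) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True

# Two well-separated links typically fit into one slot; a scheduler built on such
# a static feasibility test is the kind of starting point the abstract refers to.
print(sinr_feasible([((0, 0), (1, 0), 1.0),
                     ((50, 0), (51, 0), 1.0)]))
```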

    On the Reliability of LTE Random Access: Performance Bounds for Machine-to-Machine Burst Resolution Time

    The Random Access Channel (RACH) has been identified as one of the major bottlenecks for accommodating a massive number of machine-to-machine (M2M) users in LTE networks, especially in the case of a burst arrival of connection requests. As a consequence, the burst resolution problem has sparked a large number of works in the area, analyzing and optimizing the average performance of RACH. However, an understanding of the probabilistic performance limits of RACH is still missing. To address this limitation, in this paper we investigate the reliability of RACH with access class barring (ACB). We model RACH as a queuing system and apply stochastic network calculus to derive probabilistic performance bounds for the burst resolution time, i.e., the worst-case time it takes to connect a burst of M2M devices to the base station. We illustrate the accuracy of the proposed methodology and its potential applications in performance assessment and system dimensioning. Comment: Presented at IEEE International Conference on Communications (ICC), 201
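    For a rough picture of the quantity being bounded, the toy simulation below resolves a burst of M2M devices through ACB-gated preamble contention and reports the burst resolution time empirically; the paper instead derives probabilistic bounds via stochastic network calculus. The preamble count and barring factor are illustrative assumptions, not the paper's configuration.

```python
import random

def burst_resolution_time(num_devices, num_preambles=54, acb_factor=0.5, seed=None):
    """Toy model of LTE RACH with access class barring (ACB): in each RACH
    opportunity a backlogged device first passes the ACB check with probability
    acb_factor, then picks one of num_preambles preambles; it connects if no
    other device chose the same preamble. Returns the number of RACH
    opportunities needed to resolve the whole burst."""
    rng = random.Random(seed)
    backlog = set(range(num_devices))
    slots = 0
    while backlog:
        slots += 1
        contenders = [d for d in backlog if rng.random() < acb_factor]
        choices = {}
        for d in contenders:
            choices.setdefault(rng.randrange(num_preambles), []).append(d)
        for devs in choices.values():
            if len(devs) == 1:              # unique preamble -> successful access
                backlog.discard(devs[0])
    return slots

# Empirical tail of the burst resolution time (the metric the paper bounds):
samples = sorted(burst_resolution_time(300) for _ in range(200))
print("median:", samples[len(samples) // 2],
      "~95th percentile:", samples[int(0.95 * len(samples))])
```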

    Is Our Model for Contention Resolution Wrong?

    Randomized binary exponential backoff (BEB) is a popular algorithm for coordinating access to a shared channel. With an operational history exceeding four decades, BEB is currently an important component of several wireless standards. Despite this track record, prior theoretical results indicate that under bursty traffic (1) BEB yields poor makespan and (2) superior algorithms are possible. To date, the degree to which these findings manifest in practice has not been resolved. To address this issue, we examine one of the strongest cases against BEB: n packets that simultaneously begin contending for the wireless channel. Using Network Simulator 3, we compare BEB against more recent algorithms that are inspired by it but whose makespan guarantees are superior. Surprisingly, we discover that these newer algorithms significantly underperform. Through further investigation, we identify as the culprit a flawed but common abstraction regarding the cost of collisions. Our experimental results are complemented by analytical arguments that the number of collisions -- and not solely makespan -- is an important metric to optimize. We believe that these findings have implications for the design of contention-resolution algorithms. Comment: Accepted to the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2017).
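    For intuition about why collision counts matter alongside makespan, here is a toy slotted model of BEB for a burst of n packets that tracks both metrics. It is a simplified abstraction, not the ns-3 experiments of the paper, and all constants are assumptions.

```python
import random

def beb_run(n, w_min=1, w_max=1024, seed=None):
    """Toy slotted model of binary exponential backoff for n packets that all
    start contending at once: each backlogged station transmits in a slot with
    probability 1/w; a lone transmission succeeds, otherwise every transmitter
    doubles its window. Returns (makespan in slots, number of collision slots)."""
    rng = random.Random(seed)
    windows = [w_min] * n
    makespan = collisions = 0
    while windows:
        makespan += 1
        transmitters = [i for i, w in enumerate(windows) if rng.random() < 1.0 / w]
        if len(transmitters) == 1:
            windows.pop(transmitters[0])       # success: packet leaves the system
        elif len(transmitters) > 1:
            collisions += 1
            for i in transmitters:             # collision: everyone involved backs off
                windows[i] = min(2 * windows[i], w_max)
    return makespan, collisions

# Both metrics grow with the burst size; optimizing makespan alone ignores the
# per-collision cost that the paper argues is significant in practice.
print(beb_run(100, seed=1))
```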

    General distributions in process algebra
