
    Is Our Model for Contention Resolution Wrong?

    Randomized binary exponential backoff (BEB) is a popular algorithm for coordinating access to a shared channel. With an operational history exceeding four decades, BEB is currently an important component of several wireless standards. Despite this track record, prior theoretical results indicate that under bursty traffic (1) BEB yields poor makespan and (2) superior algorithms are possible. To date, the degree to which these findings manifest in practice has not been resolved. To address this issue, we examine one of the strongest cases against BEB: $n$ packets that simultaneously begin contending for the wireless channel. Using Network Simulator 3, we compare against more recent algorithms that are inspired by BEB, but whose makespan guarantees are superior. Surprisingly, we discover that these newer algorithms significantly underperform. Through further investigation, we identify as the culprit a flawed but common abstraction regarding the cost of collisions. Our experimental results are complemented by analytical arguments that the number of collisions -- and not solely makespan -- is an important metric to optimize. We believe that these findings have implications for the design of contention-resolution algorithms. Comment: Accepted to the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2017)
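
    To make the collision-cost question concrete, here is a minimal slotted-time sketch (ours, not the paper's NS-3 setup) of BEB under the bursty workload described above: all stations start backlogged in slot 0, and every collision doubles a station's contention window. The window cap max_exp and the seed are illustrative choices. It reports both makespan (slots until everyone has transmitted) and the collision count the authors argue should also be optimized.

    import random

    def beb(n, max_exp=10, seed=0):
        """Slotted-time sketch of binary exponential backoff (BEB).

        All n stations start backlogged in slot 0. After each collision a
        station doubles its contention window (capped at 2**max_exp) and
        redraws a uniform backoff. Returns (makespan, collisions).
        """
        rng = random.Random(seed)
        cw = [0] * n          # window exponent per station
        next_slot = [0] * n   # slot of each station's next attempt
        remaining = set(range(n))
        collisions = 0
        slot = 0
        while remaining:
            attempters = [i for i in remaining if next_slot[i] == slot]
            if len(attempters) == 1:
                remaining.discard(attempters[0])   # lone transmitter succeeds
            elif len(attempters) > 1:
                collisions += 1                    # collision: all double and back off
                for i in attempters:
                    cw[i] = min(cw[i] + 1, max_exp)
                    next_slot[i] = slot + 1 + rng.randrange(2 ** cw[i])
            slot += 1
        return slot, collisions

    print(beb(100))  # makespan and collision count for n = 100 simultaneous packets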

    Random Access Protocol for Massive MIMO: Strongest-User Collision Resolution (SUCR)

    Wireless networks with many antennas at the base stations and multiplexing of many users, known as Massive MIMO systems, are key to handling the rapid growth of data traffic. As the number of users increases, the random access in contemporary networks will be flooded by user collisions. In this paper, we propose a reengineered random access protocol, coined strongest-user collision resolution (SUCR). It exploits the channel hardening feature of Massive MIMO channels to enable each user to detect collisions, determine how strong the contenders' channels are, and only keep transmitting if it has the strongest channel gain. The proposed SUCR protocol can quickly and distributively resolve the vast majority of all pilot collisions. Comment: Published at the IEEE International Conference on Communications (ICC), 2016, 6 pages, 6 figures. (c) 2016 IEEE. Personal use of this material is permitted.
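
    The core decision rule can be illustrated with a toy Monte Carlo (ours; the gain distribution is an arbitrary stand-in, and estimation is assumed perfect thanks to channel hardening): each colliding user knows its own gain and the sum of all contenders' gains, and it keeps transmitting only if its own gain exceeds half of that sum, a condition at most one user can satisfy.

    import random

    def sucr_trial(k, rng):
        """One pilot collision among k contenders.

        Each UE learns its own gain g and the sum s of all contenders'
        gains, and retransmits iff g > s / 2 (at most one UE can). The
        collision is resolved when exactly one UE retransmits.
        """
        gains = [rng.lognormvariate(0, 2) for _ in range(k)]  # illustrative fading
        s = sum(gains)
        return sum(1 for g in gains if g > s / 2) == 1

    rng = random.Random(1)
    for k in (2, 3, 5):
        resolved = sum(sucr_trial(k, rng) for _ in range(10_000)) / 10_000
        print(f"{k} contenders: resolved {resolved:.2%}")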

    A Random Access Protocol for Pilot Allocation in Crowded Massive MIMO Systems

    The Massive MIMO (multiple-input multiple-output) technology has great potential to manage the rapid growth of wireless data traffic. Massive MIMO achieves tremendous spectral efficiency by spatial multiplexing of many tens of user equipments (UEs). These gains are only achieved in practice if many more UEs can connect efficiently to the network than today. As the number of UEs increases, while each UE accesses the network intermittently, the random access functionality becomes essential to share the limited number of pilots among the UEs. In this paper, we revisit the random access problem in the Massive MIMO context and develop a reengineered protocol, termed strongest-user collision resolution (SUCRe). An accessing UE asks for a dedicated pilot by sending an uncoordinated random access pilot, at the risk that other UEs send the same pilot. The favorable propagation of Massive MIMO channels is utilized to enable distributed collision detection at each UE: each UE determines the strength of the contenders' signals and repeats its pilot only if it judges that its own signal at the receiver is the strongest. The SUCRe protocol resolves the vast majority of all pilot collisions in crowded urban scenarios and continues to admit UEs efficiently in overloaded networks. Comment: To appear in IEEE Transactions on Wireless Communications, 16 pages, 10 figures. This is reproducible research with simulation code available at https://github.com/emilbjornson/sucre-protocol
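
    In symbols, the retransmission rule sketched above reads as follows (the notation is ours, not the paper's): with $S$ the set of colliding UEs, $\rho_i \beta_i$ the effective uplink signal gain of UE $i$, and $\hat{\alpha}$ the estimate of the total gain that UE $k$ forms from the precoded downlink response,

    % Our paraphrase of the decision rule; S, rho, beta, alpha are our symbols.
    \[
      \text{UE } k \text{ repeats its pilot} \iff
      \rho_k \beta_k > \frac{\hat{\alpha}}{2},
      \qquad
      \hat{\alpha} \approx \alpha = \sum_{i \in S} \rho_i \beta_i .
    \]

    No two UEs can each hold more than half of the same sum, so whenever $\hat{\alpha}$ is accurate the rule admits at most one winner, which then obtains the dedicated pilot uncontested.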

    Characterization of Coded Random Access with Compressive Sensing based Multi-User Detection

    The emergence of Machine-to-Machine (M2M) communication requires new Medium Access Control (MAC) schemes and physical (PHY) layer concepts to support a massive number of access requests. The concept of coded random access, introduced recently, greatly outperforms other random access methods and is inherently capable of taking advantage of the capture effect from the PHY layer. Furthermore, at the PHY layer, compressive sensing based multi-user detection (CS-MUD) is a novel technique that exploits sparsity in multi-user detection to achieve joint activity and data detection. In this paper, we combine coded random access with CS-MUD on the PHY layer and show very promising results for the resulting protocol. Comment: Submitted to Globecom 201
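
    The coded random access side can be sketched independently of the PHY details: users repeat their packets in several slots, and the receiver iteratively decodes singleton slots and cancels the decoded users' replicas elsewhere (successive interference cancellation). The toy below is ours and deliberately omits the capture effect and CS-MUD; all parameters are illustrative.

    import random

    def cra_decode(n_users, n_slots, reps=2, seed=0):
        """Repetition slotted ALOHA with successive interference cancellation.

        Each user sends `reps` replicas in distinct slots. Decoding loops:
        any slot with exactly one undecoded packet is a singleton; decode
        that user and cancel its replicas in every other slot.
        """
        rng = random.Random(seed)
        slots = [set() for _ in range(n_slots)]
        for u in range(n_users):
            for s in rng.sample(range(n_slots), reps):
                slots[s].add(u)
        decoded = set()
        progress = True
        while progress:
            progress = False
            for slot in slots:
                live = slot - decoded
                if len(live) == 1:       # singleton slot: decodable
                    decoded |= live      # cancels its other replica(s)
                    progress = True
        return len(decoded)

    print(cra_decode(n_users=50, n_slots=100))  # users recovered by iterative SIC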

    The complexity of resolving conflicts on MAC

    We consider the fundamental problem of multiple stations competing to transmit on a multiple access channel (MAC). We are given $n$ stations, out of which at most $d$ are active and intend to transmit a message to the other stations using the MAC. All stations are assumed to be synchronized according to a time clock. If $l$ stations transmit in the same round, then the MAC provides the feedback whether $l = 0$, $l \geq 2$ (a collision occurred), or $l = 1$. When $l = 1$, a single station is indeed able to successfully transmit a message, which is received by all other stations. The active stations have to schedule their transmissions so that each can transmit its message alone on the MAC, based only on the feedback received from the MAC in previous rounds. For this problem it was shown in [Greenberg, Winograd, {\em A Lower Bound on the Time Needed in the Worst Case to Resolve Conflicts Deterministically in Multiple Access Channels}, Journal of the ACM, 1985] that every deterministic adaptive algorithm takes $\Omega(d (\lg n)/(\lg d))$ rounds in the worst case. The fastest known deterministic adaptive algorithm requires $O(d \lg n)$ rounds. The gap between the upper and lower bounds is a factor of $O(\lg d)$. It is substantial for most values of $d$: when $d$ is a constant and when $d \in O(n^{\epsilon})$ (for any constant $\epsilon \leq 1$), the lower bound is respectively $O(\lg n)$ and $O(n)$, which is trivial in both cases. Nevertheless, the lower bound is indeed interesting when $d \in \mathrm{poly}(\lg n)$. In this work, we present a novel counting argument to prove a tight lower bound of $\Omega(d \lg n)$ rounds for all deterministic adaptive algorithms, closing this long-standing open question. Comment: Xerox internal report, 27th July; 7 pages
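
    For intuition about the $O(d \lg n)$ upper bound mentioned above, here is one standard strategy (ours for illustration; it is not the algorithm from the paper): binary-search the ID space using the ternary feedback, isolating one active station in $O(\lg n)$ rounds and repeating until all $d$ have transmitted.

    def resolve_all(active, n):
        """Resolve conflicts with 0 / 1 / collision feedback by binary search.

        Stations have IDs 0..n-1; `active` is the hidden active set. Each
        probe is one round in which the active stations with IDs in [lo, hi)
        transmit. Isolating one station costs O(lg n) rounds, so all d
        stations cost O(d lg n). Returns (transmission order, rounds used).
        """
        pending = set(active)
        order, rounds = [], 0

        def probe(lo, hi):
            nonlocal rounds
            rounds += 1
            return sum(1 for s in pending if lo <= s < hi)  # 0, 1, or >= 2

        while pending:
            lo, hi = 0, n
            while True:
                k = probe(lo, hi)
                if k == 1:                        # success: lone transmitter
                    w = next(s for s in pending if lo <= s < hi)
                    pending.remove(w)
                    order.append(w)
                    break
                mid = (lo + hi) // 2              # k >= 2: collision, so split
                if probe(lo, mid) >= 1:
                    hi = mid                      # someone in the left half
                else:
                    lo = mid                      # all contenders on the right
        return order, rounds

    print(resolve_all({3, 17, 42, 99}, n=128))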