
    Performance analysis of feedback-free collision resolution NDMA protocol

    To support communications of a large number of deployed devices while guaranteeing limited signaling load, low energy consumption, and high reliability, future cellular systems require efficient random access protocols. However, collision resolution at the receiver remains the main bottleneck of these protocols. The network-assisted diversity multiple access (NDMA) protocol solves this issue and attains the highest potential throughput, at the cost of keeping devices active to acquire feedback and repeating transmissions until successful decoding. A contrasting approach is the feedback-free NDMA (FF-NDMA) protocol, in which devices repeat packets in a pre-defined number of consecutive time slots without waiting for feedback associated with the repetitions. Here, we investigate the FF-NDMA protocol from a cellular network perspective in order to elucidate under what circumstances this scheme is more energy efficient than NDMA. We characterize the FF-NDMA protocol analytically using a multipacket reception model and a finite Markov chain. Analytical expressions for throughput, delay, capture probability, energy, and energy efficiency are derived, and clues for system design are established according to the different trade-offs studied. Simulation results show that FF-NDMA is more energy efficient than classical NDMA and HARQ-NDMA at low signal-to-noise ratio (SNR), and at medium SNR when the load increases.
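    The energy trade-off described in this abstract can be made concrete with a toy model: feedback-based NDMA retransmits a geometric number of times, paying a feedback-listening cost each round, while FF-NDMA blindly spends a fixed number of repetitions with no feedback cost. The sketch below is a minimal illustration under assumed independence of slots; the function names, capture probability `p`, and energy parameters are illustrative assumptions, not quantities from the paper.

```python
# Toy energy comparison: feedback-based NDMA vs feedback-free FF-NDMA.
# Assumes each (re)transmission is captured independently with probability p.

def ndma_energy(p, e_tx, e_rx_feedback):
    """Expected energy per delivered packet for feedback-based NDMA:
    a geometric number of rounds with success probability p, each round
    costing one transmission plus one feedback-listening period."""
    expected_rounds = 1.0 / p
    return expected_rounds * (e_tx + e_rx_feedback)

def ff_ndma_energy(k, e_tx):
    """Energy per delivery attempt for FF-NDMA: exactly k blind
    repetitions, with no feedback listening at all."""
    return k * e_tx

def ff_ndma_success(p, k):
    """Probability that at least one of k independent repetitions
    is captured at the receiver."""
    return 1.0 - (1.0 - p) ** k
```

At low per-slot capture probability (low SNR), the saved feedback-listening energy can outweigh the cost of the blind repetitions, which is consistent with the regime the abstract identifies for FF-NDMA.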

    Ultra reliable communication via optimum power allocation for repetition and parallel coding in finite block-length

    Abstract. In this thesis we evaluate the performance of several retransmission mechanisms under ultra-reliability constraints. First, we show that achieving a very low packet outage probability with an open-loop setup is a difficult task. Thus, we resort to retransmission schemes as a solution for achieving the low outage probabilities required for ultra-reliable communication. We analyze three retransmission protocols, namely Type-1 Automatic Repeat Request (ARQ), Chase Combining Hybrid ARQ (CC-HARQ), and Incremental Redundancy (IR) HARQ. For these protocols, we develop optimal power allocation algorithms that allow us to reach any outage probability target in the finite block-length regime. We formulate the power allocation problem as minimization of the average transmitted power under a given outage probability and a maximum transmit power constraint. By utilizing the Karush-Kuhn-Tucker (KKT) conditions, we solve the optimal power allocation problem and provide closed-form solutions. Next, we analyze the effect of implementing these protocols on the throughput of the system. We show that by using the proposed power allocation scheme we can minimize the throughput loss caused by retransmissions. Furthermore, we analyze the effect of the feedback delay in our protocols.
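    The shape of this power allocation problem can be sketched numerically. The snippet below is a minimal illustration, not the thesis's KKT-based closed form: it assumes Rayleigh fading with the asymptotic (infinite-blocklength) outage expression, a two-round Type-1 ARQ scheme, and a brute-force grid search; the threshold `theta`, power cap `p_max`, and grid resolution are arbitrary assumptions.

```python
import math

def outage(p, theta=1.0):
    """Per-round outage at transmit power p under Rayleigh fading:
    Pr[p * |h|^2 < theta] with |h|^2 ~ Exp(1). This is the asymptotic
    expression, used here purely for illustration."""
    return 1.0 - math.exp(-theta / p)

def best_two_round_powers(eps_target, p_max=100.0, steps=400):
    """Grid search for (p1, p2) minimizing the average transmitted power
    p1 + outage(p1) * p2  subject to  outage(p1) * outage(p2) <= eps_target,
    i.e. the second round is sent (and its power spent) only on failure."""
    best = (float("inf"), None, None)
    for i in range(1, steps + 1):
        p1 = p_max * i / steps
        e1 = outage(p1)
        if e1 <= eps_target:
            cand = (p1, p1, 0.0)          # one round already suffices
        else:
            # smallest p2 meeting the joint outage constraint (theta = 1)
            need = eps_target / e1
            p2 = -1.0 / math.log(1.0 - need)
            if p2 > p_max:
                continue
            cand = (p1 + e1 * p2, p1, p2)
        if cand[0] < best[0]:
            best = cand
    return best  # (average power, p1, p2)
```

The search reproduces the qualitative structure of the optimal solution: a cheap first round, with a much stronger second round that is only paid for with the small probability that the first round fails.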

    Ultra-Reliable Short Message Cooperative Relaying Protocols under Nakagami-m Fading

    In the next few years, the development of wireless communication systems will propel the world into a fully connected society, in which Machine-Type Communications (MTC) plays a substantial role as a key enabler of future cellular systems. MTC is categorized into mMTC and uMTC, where mMTC provides connectivity to a massive number of devices while uMTC targets low-latency, ultra-high-reliability wireless communications. This paper studies uMTC with an incremental relaying technique, where the source and relay collaborate to transfer a message to a destination. We compare the performance of two distinct cooperative relaying protocols with direct transmission under the finite blocklength (FB) regime. We define the overall outage probability in each relaying scenario, assuming Nakagami-m fading. We show that cooperative communication outperforms direct transmission under the FB regime. In addition, we examine the impact of fading severity and of the power allocation factor on the outage probability and on the minimum delay required to meet the ultra-reliable communication requirements. Moreover, we provide the outage probability in closed form.
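    The relaying gain claimed in the abstract can be checked with a quick Monte Carlo sketch. This is a deliberately simplified model, not the paper's analysis: it uses a basic incremental decode-and-forward rule (the relay helps only when the direct link fails), the asymptotic capacity `log2(1 + SNR*g)` in place of the finite-blocklength rate, i.i.d. links, and assumed parameter values throughout.

```python
import math
import random

def nakagami_gain(m, omega=1.0):
    """Squared-envelope (power) gain of Nakagami-m fading:
    Gamma-distributed with shape m and mean omega."""
    return random.gammavariate(m, omega / m)

def outage_mc(snr, rate, m, relaying, trials=20000, seed=1):
    """Monte Carlo outage estimate. With relaying, delivery succeeds if
    the direct link supports the target rate, or else if both the
    source-relay and relay-destination links do (incremental DF)."""
    random.seed(seed)
    fails = 0
    for _ in range(trials):
        ok = math.log2(1 + snr * nakagami_gain(m)) >= rate
        if relaying and not ok:
            ok_sr = math.log2(1 + snr * nakagami_gain(m)) >= rate
            ok_rd = math.log2(1 + snr * nakagami_gain(m)) >= rate
            ok = ok_sr and ok_rd
        fails += 0 if ok else 1
    return fails / trials
```

Even this crude model shows the diversity effect: the relayed outage is roughly the direct outage multiplied by the (small) probability that the relay path also fails, and increasing the fading parameter m shrinks both.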

    Enabling Technologies for Ultra-Reliable and Low Latency Communications: From PHY and MAC Layer Perspectives

    Future 5th generation (5G) networks are expected to enable three key services: enhanced mobile broadband, massive machine type communications, and ultra-reliable and low latency communications (URLLC). As per the 3rd Generation Partnership Project (3GPP) URLLC requirements, the reliability of one transmission of a 32-byte packet is expected to be at least 99.999% and the latency at most 1 ms. This unprecedented level of reliability and latency will enable various new applications, such as smart grids, industrial automation, and intelligent transport systems. In this survey we present potential future URLLC applications and summarize the corresponding reliability and latency requirements. We provide a comprehensive discussion of physical (PHY) and medium access control (MAC) layer techniques that enable URLLC, addressing both licensed and unlicensed bands, and evaluate these techniques for their ability to improve reliability and reduce latency. We identify that enabling Long-Term Evolution (LTE) to coexist in the unlicensed spectrum is also a potential enabler of URLLC in the unlicensed band, and provide numerical evaluations. Lastly, this paper discusses potential future research directions and challenges in achieving the URLLC requirements.
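    A back-of-envelope calculation shows why these targets push designs toward repetition and diversity. If each independent transmission attempt succeeds with probability p, the number of attempts needed for five-nines reliability follows directly; the 99% per-attempt figure below is an assumed example, not a 3GPP number.

```python
import math

def attempts_needed(per_tx_success, target=0.99999):
    """Minimum number of independent attempts k such that the overall
    success probability 1 - (1 - p)^k meets the reliability target."""
    fail_budget = 1.0 - target
    per_tx_fail = 1.0 - per_tx_success
    return math.ceil(math.log(fail_budget) / math.log(per_tx_fail))

# With an assumed 99% per-transmission reliability, three attempts reach
# five nines; whether they fit the 1 ms budget then depends on the
# per-attempt air-time (hence the push for short TTIs and mini-slots).
k = attempts_needed(0.99)
```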

    Information-Theoretic Aspects of Low-Latency Communications


    Downlink Transmission of Short Packets: Framing and Control Information Revisited

    Cellular wireless systems rely on frame-based transmissions. The frame design is conventionally based on heuristics, consisting of a frame header and a data part. The frame header contains control information that provides pointers to the messages within the data part. In this paper, we revisit the principles of frame design and show the impact of the new design in scenarios featuring short data packets, which are central to various 5G and Internet of Things applications. We treat framing for downlink transmission in an AWGN broadcast channel with K users, where the sizes of the messages to the users are random variables. Using approximations from finite blocklength information theory, we establish a framework in which a message to a given user is not necessarily encoded as a single packet, but may be grouped with the messages to other users and thereby benefit from the improved efficiency of longer codes. This requires changes in the way control information is sent, and it requires users to spend power decoding other messages, thereby increasing the average power consumption. We show that the common heuristic design is only one point on a curve that represents the trade-off between latency and power consumption.
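    The "improved efficiency of longer codes" can be quantified with the normal approximation b ≈ nC − sqrt(nV) Q⁻¹(ε), which is the standard finite-blocklength tool the abstract refers to. The sketch below solves this for the required blocklength n (a quadratic in sqrt(n)) and compares two separate short packets against one grouped packet; the SNR, message sizes, and error target are assumed example values, and the control-information overhead the paper models is deliberately ignored.

```python
import math
from statistics import NormalDist

def blocklength_needed(bits, snr, eps):
    """Channel uses n needed to carry `bits` information bits over an
    AWGN channel at the given SNR and error probability eps, from the
    normal approximation  bits ≈ n*C - sqrt(n*V) * Qinv(eps)."""
    C = math.log2(1 + snr)                                   # capacity
    V = (snr * (snr + 2) / (2 * (snr + 1) ** 2)) * math.log2(math.e) ** 2
    a = NormalDist().inv_cdf(1 - eps) * math.sqrt(V)          # Qinv(eps)*sqrt(V)
    # Solve C*x^2 - a*x - bits = 0 for x = sqrt(n), positive root.
    sqrt_n = (a + math.sqrt(a * a + 4 * C * bits)) / (2 * C)
    return math.ceil(sqrt_n ** 2)

# Two users with 256-bit messages each: separate short packets vs one
# grouped 512-bit packet (control overhead ignored in this sketch).
separate = 2 * blocklength_needed(256, snr=1.0, eps=1e-3)
grouped = blocklength_needed(512, snr=1.0, eps=1e-3)
```

The grouped packet needs fewer total channel uses because the sqrt(nV) backoff is paid once on a longer code rather than twice on short ones, which is exactly the efficiency the framing framework exploits, at the price of users decoding each other's messages.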