
    Low-Power Random Access for Timely Status Update: Packet-based or Connection-based?

    This paper investigates low-power random access protocols for timely status update systems with age of information (AoI) requirements. AoI characterizes information freshness and is formally defined as the time elapsed since the generation of the last successfully received update. In a large-scale network, a fundamental problem is how to schedule a massive number of transmitters' access to the wireless channel so as to achieve low network-wide AoI and high energy efficiency. In conventional packet-based random access protocols, transmitters contend for the channel by sending the whole data packet. When the packet duration is long, the time and transmit power wasted due to packet collisions are considerable. In contrast, connection-based random access protocols first establish a connection with the receiver before the data packet is transmitted. Intuitively, from an information freshness perspective, there should be conditions favoring either side. This paper presents a comparative study of the average AoI of packet-based and connection-based random access protocols, given an average transmit power budget. Specifically, we consider slotted Aloha (SA) and frame slotted Aloha (FSA) as representatives of packet-based random access and design a request-then-access (RTA) protocol to study the AoI of connection-based random access. We derive closed-form formulas for the average AoI and average transmit power consumption of the different protocols. Our analyses indicate that the choice between packet-based and connection-based protocols depends mainly on the payload size of update packets and the transmit power budget. In particular, RTA saves power and reduces AoI significantly, especially when the payload size is large. Overall, our investigation provides insights into the practical design of random access protocols for low-power timely status update systems.
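    As a concrete illustration of the packet-based baseline, the following Python sketch estimates the network-average AoI of plain slotted Aloha by Monte Carlo simulation. It assumes generate-at-will sources and a pure collision channel; the node count, transmit probability and slot horizon are illustrative assumptions, not parameters taken from the paper.

```python
import random

def average_aoi_slotted_aloha(num_nodes=50, tx_prob=0.02,
                              num_slots=100_000, seed=0):
    """Monte Carlo estimate of the network-average AoI under slotted Aloha.

    Assumes generate-at-will sources: a node that succeeds delivers a fresh
    update, so its age resets to 1 at the end of that slot.  A slot succeeds
    only if exactly one node transmits (pure collision channel).
    """
    rng = random.Random(seed)
    age = [1] * num_nodes          # current AoI of each node, in slots
    aoi_sum = 0                    # running sum of ages over all nodes/slots

    for _ in range(num_slots):
        senders = [i for i in range(num_nodes) if rng.random() < tx_prob]
        success = senders[0] if len(senders) == 1 else None
        for i in range(num_nodes):
            aoi_sum += age[i]
            age[i] = 1 if i == success else age[i] + 1

    return aoi_sum / (num_nodes * num_slots)

if __name__ == "__main__":
    print(f"average AoI ~ {average_aoi_slotted_aloha():.1f} slots")
```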

    Design and analysis of LTE and Wi-Fi schemes for communications of massive machine devices

    Existing communication technologies are designed with specific use cases in mind; however, extending these use cases usually raises interesting challenges. For example, extending existing cellular networks to emerging applications such as Internet of Things (IoT) devices raises the challenge of handling a massive number of devices. In this thesis, we are motivated to investigate the existing schemes used in LTE and Wi-Fi for supporting massive machine devices and to improve on observed performance gaps by designing new schemes that outperform them. This thesis investigates the existing random access protocol in LTE and proposes three schemes to combat the massive device access challenge. The first is a root index reuse and allocation scheme, which uses link budget calculations to extract a safe distance for preamble reuse under variable cell sizes and also includes an index allocation algorithm. The second is a dynamic subframe optimization scheme that tackles the challenge from an optimisation perspective. The third is the use of small cells for random access. Simulation and numerical analysis show performance improvements over existing schemes in terms of throughput, access delay and probability of collision; in some cases, an increase in performance of over 20% was observed. The proposed schemes provide quicker and more reliable opportunities for machine devices to communicate. Also, in Wi-Fi networks, adapting the transmission rate to dynamic channel conditions is a major challenge. Two algorithms were proposed to address this. The first makes use of contextual information to determine the network state and respond appropriately, whilst the second samples candidate transmission modes and uses the effective throughput to make a decision. The proposed algorithms were compared to several existing rate adaptation algorithms by simulation under various system and channel configurations. They show significant performance improvements in terms of throughput, thus confirming their suitability for dynamic channel conditions.
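    As a rough illustration of the link-budget idea behind preamble (root index) reuse, the sketch below inverts a 3GPP-style urban-macro path-loss model to find the distance beyond which a reused preamble arrives below a detection threshold. The model and all default numbers are assumptions for illustration, not the thesis' actual link budget.

```python
def prach_reuse_distance_km(tx_power_dbm=23.0, ant_gain_db=0.0,
                            detect_threshold_dbm=-120.0,
                            pl0_db=128.1, pl_slope_db=37.6):
    """Smallest distance (km) beyond which a reused PRACH preamble is received
    below the detection threshold, using the urban-macro path-loss model
    PL(d) = 128.1 + 37.6*log10(d_km).

    All default values are illustrative assumptions, not the thesis' figures.
    """
    # Path loss needed so that the received power drops below the threshold
    required_pl_db = tx_power_dbm + ant_gain_db - detect_threshold_dbm
    # Invert PL(d) = pl0 + slope * log10(d_km)
    return 10 ** ((required_pl_db - pl0_db) / pl_slope_db)

print(f"safe preamble reuse distance ~ {prach_reuse_distance_km():.1f} km")
```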

    Two-phase Unsourced Random Access in Massive MIMO: Performance Analysis and Approximate Message Passing Decoder

    In this paper, we design a novel two-phase unsourced random access (URA) scheme for massive multiple-input multiple-output (MIMO). In the first phase, we collect a sequence of information bits to jointly acquire the user channel state information (CSI) and the associated information bits. In the second phase, the residual information bits of all the users are partitioned into sub-blocks of very short length, which yields a higher spectral efficiency and a lower computational complexity than the existing transmission schemes in massive MIMO URA. Using the CSI acquired in the first phase, the sub-block recovery in the second phase is cast as a compressed sensing (CS) problem. From the perspective of statistical physics, we provide a theoretical framework for our proposed URA scheme to analyze the induced problem based on the replica method. The analytical results show that the performance metrics of our URA scheme can be linked to the system parameters by a single-valued free entropy function. An approximate message passing (AMP)-based recovery algorithm is designed to achieve the performance indicated by the proposed theoretical framework. Simulations verify that our scheme outperforms the most recent counterparts.
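    For readers unfamiliar with AMP, the following sketch shows the textbook AMP recursion with a soft-threshold denoiser applied to a generic sparse recovery problem y = Ax + w. It is only meant to convey the structure of such a CS solver; the paper's decoder handles massive MIMO channels and uses a denoiser and parameters tied to the replica analysis.

```python
import numpy as np

def soft_threshold(u, theta):
    """Element-wise soft thresholding, the denoiser used in basic AMP."""
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def amp_recover(A, y, num_iters=30, alpha=1.5):
    """Basic AMP for y = A x + noise with a sparse x (illustrative only)."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(num_iters):
        # Onsager correction keeps the effective noise approximately Gaussian
        z = y - A @ x + z * (np.count_nonzero(x) / m)
        theta = alpha * np.linalg.norm(z) / np.sqrt(m)   # adaptive threshold
        x = soft_threshold(x + A.T @ z, theta)
    return x

# Usage sketch: random Gaussian codebook, 10-sparse signal
rng = np.random.default_rng(0)
m, n, k = 120, 400, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = amp_recover(A, y)
print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```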

    Massive MIMO for Internet of Things (IoT) Connectivity

    Massive MIMO is considered to be one of the key technologies in the emerging 5G systems, but it is also a concept applicable to other wireless systems. Exploiting the large number of degrees of freedom (DoFs) of massive MIMO is essential for achieving high spectral efficiency, high data rates and extreme spatial multiplexing of densely distributed users. On the one hand, the benefits of applying massive MIMO to broadband communication are well known, and there has been a large body of research on designing communication schemes to support high rates. On the other hand, using massive MIMO for the Internet of Things (IoT) is still a developing topic, as IoT connectivity has requirements and constraints that are significantly different from broadband connections. In this paper we investigate the applicability of massive MIMO to IoT connectivity. Specifically, we treat the two generic types of IoT connections envisioned in 5G, massive machine-type communication (mMTC) and ultra-reliable low-latency communication (URLLC), and fill this important gap by identifying the opportunities and challenges in exploiting massive MIMO for IoT connectivity. We provide insights into the trade-offs that emerge when massive MIMO is applied to mMTC or URLLC and present a number of suitable communication schemes. The discussion continues to the questions of network slicing of the wireless resources and the use of massive MIMO to simultaneously support IoT connections with very heterogeneous requirements. The main conclusion is that massive MIMO can bring benefits to scenarios with IoT connectivity, but it requires tight integration of the physical-layer techniques with the protocol design.
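    To make the "large number of DoFs" argument concrete, the toy simulation below estimates the post-combining SINR when a base station with many antennas separates a handful of uplink users by maximum-ratio combining over i.i.d. Rayleigh channels. This is a generic illustration with assumed parameters, not a scheme proposed in the paper.

```python
import numpy as np

def mrc_sinr_db(num_antennas=128, num_users=8, snr_db=0.0, trials=200, seed=1):
    """Average post-MRC SINR (dB) for uplink users with i.i.d. Rayleigh fading.

    With many more antennas than users, the users' channel vectors become
    nearly orthogonal, so maximum-ratio combining alone separates them.
    """
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    sinrs = []
    for _ in range(trials):
        H = (rng.standard_normal((num_antennas, num_users)) +
             1j * rng.standard_normal((num_antennas, num_users))) / np.sqrt(2)
        for k in range(num_users):
            w = H[:, k]                                   # MRC combiner for user k
            sig = snr * np.abs(w.conj() @ H[:, k]) ** 2
            interf = snr * sum(np.abs(w.conj() @ H[:, j]) ** 2
                               for j in range(num_users) if j != k)
            noise = np.linalg.norm(w) ** 2                # unit-variance noise
            sinrs.append(sig / (interf + noise))
    return 10 * np.log10(np.mean(sinrs))

print(f"average post-MRC SINR: {mrc_sinr_db():.1f} dB")
```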

    A Study on Power Allocation for Multiple Access Systems Using Successive Interference Cancellation

    In future wireless communication networks, the number of devices is likely to increase dramatically due to the potential development of new applications such as the Internet of Things (IoT). Consequently, the radio access network is required to support multiple access by massive numbers of users and achieve high spectral efficiency. From the information-theoretic perspective, orthogonal multiple access protocols are suboptimal. To achieve the multiple access capacity, non-orthogonal multiple access protocols and multiuser detection (MUD) are required. For non-orthogonal code-division multiple access (CDMA), several MUD techniques have been proposed to improve the spectral efficiency. Successive interference cancellation (SIC) is a promising MUD technique due to its low complexity and good decoding performance. Random access protocols are designed for systems with bursty traffic to reduce delay, compared to channelized multiple access. Since the users contend for the channel instead of being assigned to it by the base station (BS), collisions happen with a certain probability. If the traffic load becomes relatively high, the throughput of these schemes falls steeply because of collisions. However, it is well recognized that more complex procedures can permit decoding of interfering signals, which is referred to as multi-packet reception (MPR). Also, an SIC decoder may decode more packets by successively subtracting the correctly decoded packets from the collision.

    Cognitive radio (CR) is an emerging technology to solve the problem of spectrum scarcity by dynamically sharing the spectrum. In CR networks, secondary users (SUs) are allowed to dynamically share frequency bands with primary users (PUs) under primary quality-of-service (QoS) protection, such as a constraint on the interference temperature at the primary base station (PBS). For uplink multiple access to the secondary base station (SBS), transmit power allocation for the SUs is critical to control the interference temperature at the PBS. Transmit power allocation has been extensively studied in various multiple access scenarios. Power allocation algorithms can be classified into two types, depending on whether the process is controlled by the base station. In centralized power allocation (CPA) algorithms, the BS allocates the transmit powers to the users through the downlink channels. For random access protocols, there are also efforts on decentralized power allocation (DPA), in which the users select transmit powers according to given distributions of power and probability, instead of being assigned a transmit power at each time slot by the BS.

    In this dissertation, DPA algorithms for random access protocols with SIC are investigated and new methods are proposed. First, a decentralized multilevel power allocation algorithm is proposed to improve the MAC throughput, for a general SIC receiver that can decode multiple packets from one collision. Then, an improved DPA algorithm is proposed to maximize the overall system sum rate, taking into account both the MAC layer and the PHY layer. Finally, a DPA algorithm for CR secondary random access is proposed, considering the interference temperature constraint and the practical assumption of imperfect cancellation. An opportunistic transmission protocol for fading environments that further reduces the interference temperature is also proposed.

    As for future work, the optimal DPA for random access with an SIC receiver is still an open problem. Besides, advanced multiple access schemes that aim to approach the multiple access capacity by combining the advantages of network-coded cooperation, repetition slotted ALOHA, and the SIC receiver are also of interest.
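    A minimal sketch of the kind of multilevel power construction that makes SIC-based random access work is given below: received-power levels are chosen so that each level meets an SINR threshold while all weaker levels are still treated as interference, so the receiver can peel packets one by one. The threshold, noise power and number of levels are illustrative assumptions; the dissertation's DPA algorithms additionally optimize the probabilities with which users pick each level.

```python
def sic_power_levels(num_levels=4, noise_power=1.0, sinr_threshold=2.0):
    """Received-power levels that let a SIC receiver peel packets successively.

    Level i is decodable (SINR >= threshold) while all lower levels are still
    uncancelled interference; after cancellation, the next level becomes
    decodable.  Standard multilevel construction, shown only for illustration.
    """
    levels = []
    for _ in range(num_levels):
        interference = noise_power + sum(levels)   # noise + all weaker levels
        levels.append(sinr_threshold * interference)
    return levels[::-1]   # strongest first, in the order SIC decodes them

# With threshold 2.0 (~3 dB) the levels grow geometrically by (1 + threshold)
print(sic_power_levels())   # e.g. [54.0, 18.0, 6.0, 2.0]
```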

    Data Aggregation and Packet Bundling of Uplink Small Packets for Monitoring Applications in LTE

    In cellular massive Machine-Type Communications (MTC), a device can transmit directly to the base station (BS) or through an aggregator (intermediate node). While direct device-BS communication has recently been the focus of 5G/3GPP research and standardization efforts, the use of aggregators remains a less explored topic. In this paper we analyze the deployment scenarios in which aggregators can perform cellular access on behalf of multiple MTC devices. We study the effect of packet bundling at the aggregator, which alleviates overhead and resource waste when sending small packets. The aggregators give rise to a tradeoff between access congestion and resource starvation, and we show that packet bundling can minimize resource starvation, especially for smaller numbers of aggregators. Under the limitations of the considered model, we investigate the optimal settings of the network parameters in terms of the number of aggregators and the packet-bundle size. Our results show that, in general, data aggregation can benefit uplink massive MTC in LTE by reducing the signalling overhead.
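    The toy model below sketches the congestion/starvation trade-off described above: each aggregator performs one random-access attempt per reporting period (more aggregators mean more RACH contention), and a successful attempt yields a fixed uplink grant (fewer aggregators mean each needs a larger grant and may be starved). All parameter values and the cost model are illustrative assumptions, not the paper's LTE system model.

```python
import math

def access_vs_starvation(num_devices=30_000, num_aggregators=100,
                         bundle_size=10, prbs_per_bundle=1,
                         num_preambles=54, grant_prbs=20):
    """Toy model of the access-congestion vs. resource-starvation trade-off."""
    # Chance that no other aggregator picks the same preamble in this period
    p_access = (1 - 1 / num_preambles) ** (num_aggregators - 1)
    devices_per_agg = math.ceil(num_devices / num_aggregators)
    prbs_needed = math.ceil(devices_per_agg / bundle_size) * prbs_per_bundle
    starved = prbs_needed > grant_prbs
    return p_access, prbs_needed, starved

for n in (10, 100, 1000):
    p, need, starved = access_vs_starvation(num_aggregators=n)
    print(f"{n:4d} aggregators: P(access) ~ {p:.2f}, "
          f"needs {need} PRBs per grant{' (starved)' if starved else ''}")
```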