
    Medium access control protocol design for wireless communications and networks: a review

    Medium access control (MAC) protocol design plays a crucial role in increasing the performance of wireless communications and networks. The MAC layer provides the channel access mechanism through which multiple stations share the medium. Different types of wireless networks have different design requirements, such as throughput, delay, power consumption, fairness, reliability, and network density; therefore, the MAC protocol for each network must satisfy its requirements. In this work, we consider two multiplexing methods for modern wireless networks: massive multiple-input multiple-output (MIMO) and power-domain non-orthogonal multiple access (PD-NOMA). The first method, massive MIMO, uses a very large number of antenna elements to improve both spectral efficiency and energy efficiency. The second, PD-NOMA, allows multiple non-orthogonal signals to share the same orthogonal resources by allocating a different power level to each station, and achieves better spectral efficiency than orthogonal multiple access methods. We classify previous work on MAC design for different wireless networks into several categories. The main contribution of this work is to show the importance of MAC designs with added optimal functionalities for improving the spectral and energy efficiency of wireless networks.
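The power-domain superposition at the heart of PD-NOMA can be sketched with a toy two-user downlink (the power split, SNR, and BPSK modulation below are illustrative assumptions, not parameters from this work):

```python
import numpy as np

def pd_noma_sic(p_near=0.2, p_far=0.8, snr_db=20, n_bits=10_000, seed=0):
    """Toy 2-user PD-NOMA downlink with BPSK and SIC at the near user.

    The far (weak-channel) user gets the larger power share; the near
    user first decodes the far user's signal, subtracts it, then
    decodes its own. Power split and SNR are illustrative assumptions.
    Returns the near user's bit error rate.
    """
    rng = np.random.default_rng(seed)
    bits_near = rng.integers(0, 2, n_bits)
    bits_far = rng.integers(0, 2, n_bits)
    s_near = 2.0 * bits_near - 1.0          # BPSK symbols
    s_far = 2.0 * bits_far - 1.0
    # Superposed transmit signal: users are non-orthogonal in the power domain
    x = np.sqrt(p_near) * s_near + np.sqrt(p_far) * s_far
    noise_std = 10 ** (-snr_db / 20)
    y = x + noise_std * rng.standard_normal(n_bits)   # near user's receive signal
    # SIC step 1: decode the strong (far-user) component first ...
    far_hat = np.sign(y)
    # ... step 2: subtract its reconstruction, then decode the own signal
    residual = y - np.sqrt(p_far) * far_hat
    near_hat = np.sign(residual)
    return float(np.mean(near_hat != s_near))

print(pd_noma_sic())
```

At this illustrative SNR the residual interference after cancellation is negligible, which is why PD-NOMA can trade orthogonality for spectral efficiency.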

    An antenna switching based NOMA scheme for IEEE 802.15.4 concurrent transmission

    This paper introduces a non-orthogonal multiple access (NOMA) scheme to support concurrent transmission of multiple IEEE 802.15.4 packets. Unlike collision-avoidance medium access control (MAC), concurrent MAC (C-MAC) allows packet collisions. C-MAC can reduce communication latency because a user can transmit immediately without waiting for other users' transmissions to complete. The main challenge of concurrent transmission is that error-free demodulation of multiple collided packets can hardly be achieved due to severe multiple access interference (MAI). To improve demodulation performance in the presence of MAI, we introduce an architecture in which multiple switching antennas share a single analog transceiver to capture the spatial characteristics of different users. A successive interference cancellation (SIC) algorithm is designed to separate collided packets by exploiting these spatial characteristics. Simulations show that at least five users can transmit concurrently to a SIC receiver equipped with eight antennas without sacrificing packet error rate.
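The idea of separating collided packets through their spatial signatures can be illustrated with a small sketch (assuming one full receive chain per antenna and perfect channel knowledge, and using a zero-forcing detector as a stand-in for the paper's switched-antenna SIC receiver):

```python
import numpy as np

def separate_collided_packets(n_users=2, n_ant=4, n_sym=200, seed=1):
    """Toy spatial separation of collided BPSK packets.

    Each user's signal arrives with a distinct spatial signature (one
    column of H); with more antennas than users, the collision can be
    undone. All parameters are illustrative assumptions, not the
    paper's setup. Returns the symbol error rate over both users.
    """
    rng = np.random.default_rng(seed)
    symbols = 2.0 * rng.integers(0, 2, (n_users, n_sym)) - 1.0
    # i.i.d. complex Gaussian channel: one spatial signature per user
    H = (rng.standard_normal((n_ant, n_users))
         + 1j * rng.standard_normal((n_ant, n_users))) / np.sqrt(2)
    noise = 0.05 * (rng.standard_normal((n_ant, n_sym))
                    + 1j * rng.standard_normal((n_ant, n_sym)))
    Y = H @ symbols + noise                      # superposed (collided) receive signal
    est = np.sign((np.linalg.pinv(H) @ Y).real)  # zero-forcing separation
    return float(np.mean(est != symbols))

print(separate_collided_packets())
```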

    Low-latency Networking: Where Latency Lurks and How to Tame It

    While the current generation of mobile and fixed communication networks has been standardized for mobile broadband services, the next generation is driven by the vision of the Internet of Things and mission-critical communication services requiring latency on the order of milliseconds or below. These stringent new requirements have a large technical impact on the design of all layers of the communication protocol stack. The cross-layer interactions are complex due to the multiple design principles and technologies that shape each layer's design and fundamental performance limitations. We will be able to develop low-latency networks only if we address these complex interactions from the new point of view of sub-millisecond latency. In this article, we propose a holistic analysis and classification of the main design principles and enabling technologies that will make it possible to deploy low-latency wireless communication networks. We argue that these design principles and enabling technologies must be carefully orchestrated to meet the stringent requirements and to manage the inherent trade-offs between low latency and traditional performance metrics. We also review ongoing standardization activities in prominent standards associations and discuss open problems for future research.

    Multiple Access for Massive Machine Type Communications

    The internet we have known thus far has been an internet of people, connecting people with one another. However, these connections are forecast to account for only a minuscule fraction of future communications. The internet of tomorrow is, indeed, the Internet of Things. The Internet of Things (IoT) promises to improve all aspects of life by connecting everything to everything. An enormous amount of effort is being exerted to turn this vision into reality. Sensors and actuators will communicate and operate in an automated fashion with no or minimal human intervention. In the literature, these sensors and actuators are referred to as machines, and the communication amongst them is referred to as machine-to-machine (M2M) communication or machine-type communication (MTC). As IoT requires a seamless mode of communication that is available anywhere and anytime, wireless communications will be one of the key enabling technologies for IoT. In existing wireless cellular networks, users with data to transmit first need to request channel access. All access requests are processed by a central unit that either grants or denies each request. Once granted access, users' data transmissions are non-overlapping and interference-free. However, as the number of IoT devices is forecast to be in the order of hundreds of millions, if not billions, in the near future, the access channels of existing cellular networks are predicted to suffer from severe congestion and thus incur unpredictable latencies. In random access, by contrast, users with data to transmit access the channel in an uncoordinated and probabilistic fashion, requiring little or no signalling overhead. However, this reduction in overhead comes at the expense of reliability and efficiency due to the interference caused by contending users.
In most existing random access schemes, packets are lost when they experience interference from other packets transmitted over the same resources. Moreover, most existing schemes are best-effort, with almost no quality-of-service (QoS) guarantees. In this thesis, we investigate the performance of different random access schemes in different settings to address the massive access of IoT devices with diverse QoS requirements. First, we take a step towards re-designing existing random access protocols to make them more practical and more efficient. For many years, researchers have adopted the collision channel model in random access schemes: a collision is the event of two or more users transmitting over the same time-frequency resources. In the event of a collision, all the involved data is lost, and users need to retransmit their information. In practice, however, data can be recovered even in the presence of interference, provided that the signal power is sufficiently larger than the combined power of the noise and the interference. Based on this, we re-define a collision as the event of the interference power exceeding a pre-determined threshold. We propose a new analytical framework, inspired by error control codes on graphs, to compute the probability of packet recovery failure, and we optimize the random access parameters using evolution strategies. Our results show a significant improvement in reliability and efficiency. Next, we focus on supporting heterogeneous IoT applications and accommodating their diverse latency and reliability requirements in a unified access scheme. We propose a multi-stage approach in which each group of applications transmits in different stages with different probabilities, and we derive the probability of packet recovery failure for each group in each stage.
We again optimize the random access parameters using evolution strategies. Our results show that the proposed scheme can outperform the coordinated access schemes of existing cellular networks when the number of users is very large. Finally, we investigate random non-orthogonal multiple access schemes, which are known to achieve higher spectrum efficiency and support higher loads. In our proposed scheme, user detection and channel estimation are carried out via pilot sequences transmitted simultaneously with the user's data. Here, a collision is defined as the event of two or more users selecting the same pilot sequence; all collisions are regarded as interference to the remaining users. We first derive the distribution of the interference power, and then use it to obtain simple yet accurate analytical bounds on the throughput and outage probability of the proposed scheme, considering both joint decoding and successive interference cancellation. We show that the proposed scheme is especially useful for short-packet transmission.
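The redefinition of a collision as the interference power exceeding a threshold can be contrasted with the classic collision model in a small Monte Carlo sketch (transmit probability, fading model, and SINR threshold are illustrative, not the thesis' optimized parameters):

```python
import numpy as np

def slot_success_prob(n_users=50, p_tx=0.05, sinr_thresh=4.0,
                      noise=0.1, n_slots=20_000, seed=2):
    """Per-slot success probability of slotted random access under two models.

    Classic collision model: a slot succeeds iff exactly one user transmits.
    SINR-threshold ("capture") model: the strongest packet is recovered
    whenever its SINR clears the threshold, even in a collision.
    Powers are i.i.d. exponential (Rayleigh fading); all values are
    illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    classic = 0
    capture = 0
    for _ in range(n_slots):
        active = rng.random(n_users) < p_tx
        k = int(active.sum())
        if k == 0:
            continue
        if k == 1:
            classic += 1                      # classic: success iff a single sender
        powers = rng.exponential(1.0, k)
        strongest = powers.max()
        interference = powers.sum() - strongest
        # capture: strongest packet decodes if its SINR exceeds the threshold
        if strongest / (interference + noise) >= sinr_thresh:
            capture += 1
    return classic / n_slots, capture / n_slots

classic, capture = slot_success_prob()
print(classic, capture)
```

Under these illustrative settings the two models give noticeably different throughputs, which is why replacing the collision channel model with an interference-power threshold changes the optimal access parameters.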

    Active Terminal Identification, Channel Estimation, and Signal Detection for Grant-Free NOMA-OTFS in LEO Satellite Internet-of-Things

    This paper investigates the massive connectivity of low Earth orbit (LEO) satellite-based Internet-of-Things (IoT) for seamless global coverage. We propose to integrate the grant-free non-orthogonal multiple access (GF-NOMA) paradigm with the emerging orthogonal time frequency space (OTFS) modulation to accommodate massive IoT access and to mitigate the long round-trip latency and severe Doppler effect of terrestrial-satellite links (TSLs). On this basis, we put forward a two-stage successive active terminal identification (ATI) and channel estimation (CE) scheme as well as a low-complexity multi-user signal detection (SD) method. Specifically, at the first stage, the proposed training sequence aided OTFS (TS-OTFS) data frame structure facilitates joint ATI and coarse CE, whereby both the traffic sparsity of terrestrial IoT terminals and the sparse channel impulse response are leveraged for enhanced performance. Moreover, based on the single-Doppler-shift property of each TSL and the sparsity of the delay-Doppler domain channel, we develop a parametric approach to further refine the CE performance. Finally, a least-squares based parallel time-domain SD method is developed to detect the OTFS signals with relatively low complexity. Simulation results demonstrate the superiority of the proposed methods over state-of-the-art solutions in terms of ATI, CE, and SD performance in the face of long round-trip latency and severe Doppler effect. Comment: 20 pages, 9 figures, accepted by IEEE Transactions on Wireless Communications.
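The delay-Doppler to time-frequency mapping underlying OTFS is the ISFFT, with the SFFT as its inverse; a minimal sketch of the transform pair, with a unitary normalization chosen for convenience and with pulse shaping, the channel, and the paper's detectors omitted:

```python
import numpy as np

def isfft(x_dd):
    """ISFFT: delay-Doppler grid (N Doppler rows x M delay columns) to
    time-frequency grid, as used on the OTFS transmit side."""
    n_dopp, m_delay = x_dd.shape
    # IDFT along the Doppler axis, DFT along the delay axis
    return np.fft.fft(np.fft.ifft(x_dd, axis=0), axis=1) * np.sqrt(n_dopp / m_delay)

def sfft(x_tf):
    """SFFT: inverse of isfft (time-frequency back to delay-Doppler),
    as used on the OTFS receive side."""
    n_dopp, m_delay = x_tf.shape
    return np.fft.fft(np.fft.ifft(x_tf, axis=1), axis=0) * np.sqrt(m_delay / n_dopp)

# Place QPSK symbols on a 16 x 32 delay-Doppler grid and verify the round trip
rng = np.random.default_rng(3)
qpsk = (rng.choice([-1, 1], (16, 32)) + 1j * rng.choice([-1, 1], (16, 32))) / np.sqrt(2)
roundtrip = sfft(isfft(qpsk))
print(np.max(np.abs(roundtrip - qpsk)))  # ~0: the two transforms are inverses
```

Spreading each symbol across the whole time-frequency grid in this way is what makes each path appear as a single delay-Doppler shift, the property the paper's parametric channel estimator relies on.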

    Fine-grained performance analysis of massive MTC networks with scheduling and data aggregation

    The Internet of Things (IoT) represents a substantial shift in wireless communication and constitutes a relevant topic of social, economic, and overall technical impact. It refers to resource-constrained devices communicating with little or no human intervention. Communication among machines, however, imposes several challenges compared to traditional human-type communication (HTC), and as the number of devices grows exponentially, new network management techniques and technologies are needed. Data aggregation is an efficient approach to handling the congestion introduced by a massive number of machine-type devices (MTDs). The aggregators not only collect data but also implement scheduling mechanisms to cope with scarce network resources. This thesis provides an overview of the most common IoT applications and the network technologies that support them, and describes the most important challenges in machine-type communication (MTC). We use a stochastic geometry (SG) tool known as the meta distribution (MD) of the signal-to-interference ratio (SIR), that is, the distribution of the conditional SIR distribution given the wireless nodes' locations, to provide a fine-grained description of per-link reliability. Specifically, we analyze the performance of two scheduling methods for data aggregation in MTC: random resource scheduling (RRS) and channel-aware resource scheduling (CRS). The results show the fraction of users in the network that achieve a target reliability, an important aspect when designing wireless systems with stringent service requirements. Finally, we investigate the impact of increasing the aggregator density on the fraction of MTDs that communicate with a target reliability.
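The meta distribution can be approximated by Monte Carlo: fix the interferer locations, compute the link's conditional success probability under Rayleigh fading, and repeat over many network realizations (the intensity, SIR threshold, and path-loss exponent below are illustrative, not the thesis' settings):

```python
import numpy as np

def reliability_distribution(theta=1.0, alpha=4.0, lam=0.1, d0=1.0,
                             area=30.0, n_real=2000, seed=4):
    """Monte Carlo sketch of the SIR meta distribution.

    For each realization of a Poisson field of unit-power interferers,
    the conditional success probability of a link of length d0 under
    Rayleigh fading is
        P(SIR > theta | locations) = prod_i 1 / (1 + theta * (d0/d_i)^alpha).
    The meta distribution is the distribution of this per-link
    probability across realizations. All parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    probs = np.empty(n_real)
    for i in range(n_real):
        # Poisson point process of interferers in a square around the receiver
        n_pts = rng.poisson(lam * area * area)
        xy = rng.uniform(-area / 2, area / 2, (n_pts, 2))
        d = np.linalg.norm(xy, axis=1)
        d = d[d > 1e-9]
        probs[i] = np.prod(1.0 / (1.0 + theta * (d0 / d) ** alpha))
    # meta distribution evaluated at reliability target 0.90:
    # the fraction of links whose conditional reliability exceeds 90%
    frac_above_090 = float(np.mean(probs > 0.90))
    return probs, frac_above_090

probs, frac = reliability_distribution()
print(frac)
```

The ordinary success probability is just the mean of `probs`; the meta distribution keeps the whole spread, which is what reveals how many individual links actually meet a stringent reliability target.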

    Internet of Things and Sensors Networks in 5G Wireless Communications

    This book is a printed edition of the Special Issue "Internet of Things and Sensors Networks in 5G Wireless Communications" that was published in Sensors.

    Internet of Things and Sensors Networks in 5G Wireless Communications

    The Internet of Things (IoT) has attracted much attention from society, industry, and academia as a promising technology that can enhance day-to-day activities, enable the creation of new business models, products, and services, and serve as a broad source of research topics and ideas. A future digital society is envisioned, composed of numerous wirelessly connected sensors and devices. Driven by huge demand, massive IoT (mIoT), or massive machine-type communication (mMTC), has been identified as one of the three main communication scenarios for 5G. In addition to connectivity, computing, storage, and data management are also long-standing issues for low-cost devices and sensors. The book is a collection of outstanding technical research and industrial papers covering new research results, with a wide range of features within the 5G-and-beyond framework. It provides a range of discussions of the major research challenges and achievements within this topic.

    A Survey on Non-Geostationary Satellite Systems: The Communication Perspective

    The next phase of satellite technology is characterized by a new evolution in non-geostationary orbit (NGSO) satellites, which brings exciting new communication capabilities to provide non-terrestrial connectivity solutions and to support a wide range of digital technologies from various industries. NGSO communication systems are known for key features such as lower propagation delay, smaller size, and lower signal losses compared to conventional geostationary orbit (GSO) satellites, which can potentially enable latency-critical applications to be served through satellites. NGSO promises a substantial boost in communication speed and energy efficiency, thus tackling the main factors inhibiting the commercialization of GSO satellites for broader utilization. These promised improvements have motivated this paper to provide a comprehensive survey of state-of-the-art NGSO research focusing on the communication prospects, including physical layer and radio access technologies along with networking aspects and overall system features and architectures. Beyond this, many NGSO deployment challenges remain to be addressed to ensure seamless integration not only with GSO systems but also with terrestrial networks. These unprecedented challenges are also discussed, including coexistence with GSO systems in terms of spectrum access and regulatory issues, satellite constellation and architecture designs, resource management problems, and user equipment requirements. Finally, we outline a set of innovative research directions and new opportunities for future NGSO research.