
    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as eMBB, mMTC and URLLC, mMTC brings the unique technical challenge of supporting a huge number of MTC devices, which is the main focus of this paper. The related challenges include QoS provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely LTE-M and NB-IoT. Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in mMTC scenarios.
Finally, we discuss some open research challenges and promising future research directions. Comment: 37 pages, 8 figures, 7 tables, submitted for possible future publication in IEEE Communications Surveys and Tutorials.
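The low-complexity Q-learning approach highlighted in this abstract is often illustrated by letting each MTC device learn a collision-free RA slot on its own. Below is a minimal sketch of that idea; the device/slot counts, learning rate, and reward scheme (+1 on a collision-free transmission, -1 on a collision) are illustrative assumptions, not values taken from the paper.

```python
import random

NUM_DEVICES, NUM_SLOTS, ALPHA = 4, 4, 0.1  # assumed toy parameters

def simulate(rounds=3000, seed=0):
    rng = random.Random(seed)
    # One Q-value per (device, RA slot) pair, all initialised to zero.
    q = [[0.0] * NUM_SLOTS for _ in range(NUM_DEVICES)]
    picks = []
    for _ in range(rounds):
        # Greedy slot choice per device, with random tie-breaking.
        picks = [max(range(NUM_SLOTS), key=lambda s: (q[d][s], rng.random()))
                 for d in range(NUM_DEVICES)]
        for d, s in enumerate(picks):
            # Reward +1 if the device was alone on its slot, -1 on collision.
            reward = 1.0 if picks.count(s) == 1 else -1.0
            q[d][s] += ALPHA * (reward - q[d][s])
    return picks

final_slots = simulate()
print(final_slots)
```

With as many slots as devices, the devices typically settle on distinct slots, i.e. a collision-free allocation emerges without any central coordination, which is the appeal of this approach for ultra-dense deployments.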

    Advanced Technologies Enabling Unlicensed Spectrum Utilization in Cellular Networks

    With the rapid progress of Internet-based services and the increasingly rich user experience they provide, there is a growing demand for high data rates in wireless communication systems. Unlicensed spectrum utilization in Long Term Evolution (LTE) networks is a promising technique to meet the massive traffic demand. There are two effective methods to use unlicensed bands for delivering LTE traffic. One is offloading LTE traffic to Wi-Fi. An alternative method is LTE-unlicensed (LTE-U), which aims to directly use LTE protocols and infrastructure over the unlicensed spectrum. It has also been pointed out that combining the two methods could further improve system performance. However, avoiding severe performance degradation of the Wi-Fi network is a challenging issue when utilizing unlicensed spectrum in LTE networks. Specifically, first, inter-system spectrum sharing, or, more specifically, the coexistence of LTE and Wi-Fi in the same unlicensed spectrum, is the major challenge of implementing LTE-U. Second, to use the LTE and Wi-Fi integration approach, mobile operators have to manage two disparate networks in licensed and unlicensed spectrum. Third, optimization for joint data offloading to Wi-Fi and LTE-U in multi-cell scenarios poses further challenges because inter-cell interference must be addressed. This thesis focuses on solving problems related to these challenges. First, the effect of bursty traffic in an LTE and Wi-Fi aggregation (LWA)-enabled network has been investigated. To enhance resource efficiency, the Wi-Fi access point (AP) is designed to operate in both the native mode and the LWA mode simultaneously. Specifically, the LWA-mode Wi-Fi AP cooperates with the LTE base station (BS) to transmit bearers to the LWA user, which aggregates packets from both LTE and Wi-Fi. The native-mode Wi-Fi AP transmits Wi-Fi packets to those native Wi-Fi users that lack LWA capability.
This thesis proposes a priority-based Wi-Fi transmission scheme with congestion control and studies the throughput of the native Wi-Fi network, as well as the LWA user delay when the native Wi-Fi user is under heavy traffic conditions. The results provide fundamental insights into the throughput and delay behavior of the considered network. Second, the above work has been extended to larger topologies. A stochastic geometry model has been used to model and analyze the performance of an MPTCP Proxy-based LWA network with intra-tier and cross-tier dependence. Under the considered network model and the activation conditions of LWA-mode Wi-Fi, this thesis has obtained three approximations for the density of active LWA-mode Wi-Fi APs through different approaches. Tractable analysis is provided for the downlink (DL) performance evaluation of large-scale LWA networks. The impact of different parameters on the network performance has been analyzed, validating the significant gain of using LWA in terms of boosted data rate and improved spectrum reuse. Third, this thesis also takes a significant step towards analyzing the joint multi-cell LTE-U and Wi-Fi network, while taking into account different LTE-U and Wi-Fi inter-working schemes. In particular, two technologies enabling data offloading from LTE to Wi-Fi are considered, namely LWA and Wi-Fi offloading in the context of the power gain-based user offloading scheme. The LTE cells in this work are subject to load coupling due to inter-cell interference. New system frameworks for maximizing the demand scaling factor for all users in both Wi-Fi and multi-cell LTE networks have been proposed. The potential of the networks to achieve optimal capacity with arbitrary topologies is explored, accounting for both resource limits and inter-cell interference. Theoretical analyses have been developed for the proposed optimization problems, resulting in algorithms that achieve global optimality.
Numerical results show the algorithms' effectiveness and the benefits of jointly using data offloading and the direct use of LTE over the unlicensed band. All the derived results in this thesis have been validated by Monte Carlo simulations in Matlab, and the conclusions drawn from the results can provide guidelines for future unlicensed spectrum utilization in LTE networks.
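The density of active LWA-mode Wi-Fi APs discussed above can be illustrated with a toy stochastic-geometry experiment. The activation condition used here (an AP is active if and only if at least one LWA user lies within radius R of it) and all numeric parameters are illustrative assumptions, not the thesis model; independent thinning of a Poisson point process then predicts an active fraction of 1 - exp(-lambda_user * pi * R^2).

```python
import math
import random

N_AP, N_USER = 400, 800     # points in the window (assumed densities)
SIDE, R = 2000.0, 40.0      # window side and activation radius, metres

def estimate_active_fraction(seed=7):
    """Monte Carlo estimate of the fraction of Wi-Fi APs that are
    LWA-active, i.e. have at least one user within distance R."""
    rng = random.Random(seed)
    aps = [(rng.uniform(0, SIDE), rng.uniform(0, SIDE)) for _ in range(N_AP)]
    users = [(rng.uniform(0, SIDE), rng.uniform(0, SIDE)) for _ in range(N_USER)]

    def torus_dist2(a, b):
        # Wrap-around (toroidal) distance to suppress edge effects.
        dx = abs(a[0] - b[0]); dx = min(dx, SIDE - dx)
        dy = abs(a[1] - b[1]); dy = min(dy, SIDE - dy)
        return dx * dx + dy * dy

    active = sum(1 for ap in aps
                 if any(torus_dist2(ap, u) <= R * R for u in users))
    return active / N_AP

# Void-probability prediction for a Poisson field of users.
analytic = 1 - math.exp(-(N_USER / SIDE ** 2) * math.pi * R ** 2)
print(estimate_active_fraction(), analytic)
```

The simulated fraction lands close to the closed-form prediction, which is the kind of agreement between analysis and Monte Carlo simulation that the thesis reports for its (far richer) activation conditions.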

    Fundamental Tradeoffs among Reliability, Latency and Throughput in Cellular Networks


    Long Term Evolution-Advanced and Future Machine-to-Machine Communication

    Long Term Evolution (LTE) has adopted Orthogonal Frequency Division Multiple Access (OFDMA) and Single Carrier Frequency Division Multiple Access (SC-FDMA) as the downlink and uplink transmission schemes, respectively. Quality of Service (QoS) provisioning is one of the primary objectives of wireless network operators. In LTE-Advanced (LTE-A), several additional new features such as Carrier Aggregation (CA) and Relay Nodes (RNs) have been introduced by the 3rd Generation Partnership Project (3GPP). These features have been designed to deal with the ever-increasing demands for higher data rates and spectral efficiency. The RN is a low-power and low-cost device designed for extending coverage and enhancing spectral efficiency, especially at the cell edge. Wireless networks are facing a new challenge emerging on the horizon: the expected surge of Machine-to-Machine (M2M) traffic in cellular and mobile networks. The costs and sizes of M2M devices with integrated sensors, network interfaces and enhanced power capabilities have decreased significantly in recent years. Therefore, it is anticipated that M2M devices might outnumber conventional mobile devices in the near future. 3GPP standards like LTE-A have primarily been developed for broadband data services with mobility support. However, M2M applications are mostly based on narrowband traffic. These standards may not achieve overall spectrum and cost efficiency if they are utilized for serving M2M applications. The main goal of this thesis is to take advantage of the low cost, low power and small size of RNs for integrating M2M traffic into LTE-A networks. A new RN design is presented for aggregating and multiplexing M2M traffic at the RN before transmission over the air interface (Un interface) to the base station, called eNodeB. The data packets of the M2M devices are sent to the RN over the Uu interface.
Packets from different devices are aggregated at the Packet Data Convergence Protocol (PDCP) layer of the Donor eNodeB (DeNB) into a single large IP packet instead of several small IP packets. Therefore, the amount of overhead data can be significantly reduced. The proposed concept has been developed in the LTE-A network simulator to illustrate the benefits and advantages of M2M traffic aggregation and multiplexing at the RN. The potential gains of RNs, such as coverage enhancement, multiplexing gain and end-to-end delay performance, are illustrated with the help of simulation results. The results indicate that the proposed concept improves the performance of the LTE-A network with M2M traffic. The adverse impact of M2M traffic on regular LTE-A traffic, such as voice and file transfer, is minimized. Furthermore, the cell edge throughput and QoS performance are enhanced. Moreover, the results are validated with the help of an analytical model.
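The overhead reduction from aggregating many small M2M packets into one large IP packet can be seen with a back-of-the-envelope calculation. The header and payload sizes below are illustrative assumptions (a plain IPv4/UDP header), not figures from the thesis:

```python
# Aggregating N small M2M payloads into one IP packet amortizes the
# per-packet header cost across all N reports.
IP_UDP_HEADER = 28   # bytes: 20 (IPv4) + 8 (UDP), assumed
PAYLOAD = 50         # bytes per M2M report, assumed
N = 20               # reports multiplexed at the relay node, assumed

unaggregated = N * (IP_UDP_HEADER + PAYLOAD)   # N headers, N payloads
aggregated = IP_UDP_HEADER + N * PAYLOAD       # one shared header

saving = 1 - aggregated / unaggregated
print(f"bytes: {unaggregated} -> {aggregated}, saving: {saving:.1%}")
```

Under these assumptions the air-interface load drops by roughly a third, and the relative saving grows as the individual payloads get smaller, which is exactly the narrowband-traffic regime the thesis targets.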

    Statistical priority-based uplink scheduling for M2M communications

    The worldwide network is currently witnessing major efforts to transform it from an Internet of humans only into an Internet of Things (IoT). It is expected that Machine Type Communication Devices (MTCDs) will overwhelm cellular networks with huge traffic of data collected from their environments, to be sent to other remote MTCDs for processing, thus forming what is known as Machine-to-Machine (M2M) communications. Long Term Evolution (LTE) and LTE-Advanced (LTE-A) appear to be the best technologies to support M2M communications due to their native IP support. LTE can provide high capacity, flexible radio resource allocation and scalability, which are the required pillars for supporting the expected large numbers of deployed MTCDs. Supporting M2M communications over LTE faces many challenges. These challenges include medium access control and the allocation of radio resources among MTCDs. The problem of radio resource allocation, or scheduling, originates from the nature of M2M traffic. This traffic consists of a large number of small data packets, with specific deadlines, generated by a potentially massive number of MTCDs. M2M traffic is therefore mostly in the uplink direction, i.e. from MTCDs to the base station (known as eNB in LTE terminology). These characteristics impose some design requirements on M2M scheduling techniques, such as the need to transmit a huge amount of traffic within certain deadlines using limited radio resources. This is the main motivation behind this thesis work. In this thesis, we introduce a novel M2M scheduling scheme that utilizes what we term the “statistical priority” in determining the importance of information carried by data packets. Statistical priority is calculated based on the statistical features of the data, such as value similarity, trend similarity and auto-correlation.
These calculations are made and then reported by the MTCDs to the serving eNBs along with other reports such as channel state. Statistical priority is then used to assign priorities to data packets so that the scarce radio resources are allocated to the MTCDs that are sending statistically important information. This helps avoid spending limited radio resources on carrying redundant or repetitive data, which is a common situation in M2M communications. In order to validate our technique, we perform a simulation-based comparison between the main scheduling techniques and our proposed statistical priority-based scheduling technique. This comparison is conducted in a network that includes different types of MTCDs, such as environmental monitoring sensors, surveillance cameras and alarms. The results show that our proposed statistical priority-based scheduler outperforms the other schedulers in terms of having the lowest loss of alarm data packets and the highest rate of sending critical data packets that carry non-redundant information, for both environmental monitoring and video traffic. This indicates that the proposed technique is the most efficient in the utilization of limited radio resources as compared to the other techniques.
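The "statistical priority" idea can be sketched in a few lines: rank a new sensor reading by how much new information it carries relative to the device's recent history. The metrics below only approximate the ones named in the abstract (value similarity and auto-correlation; trend similarity is omitted), and the equal weighting is an illustrative assumption, not the thesis's formula.

```python
def value_novelty(history, x):
    """High when the new sample differs from the recent mean."""
    if not history:
        return 1.0
    mean = sum(history) / len(history)
    spread = (max(history) - min(history)) or 1.0
    return min(1.0, abs(x - mean) / spread)

def lag1_autocorr(series):
    """Lag-1 autocorrelation; values near 1 indicate redundant data."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series)
    if var == 0:
        return 1.0
    cov = sum((series[i] - mean) * (series[i + 1] - mean)
              for i in range(n - 1))
    return cov / var

def statistical_priority(history, x):
    # Similar-valued, highly autocorrelated data gets low priority;
    # novel samples get high priority (weights are assumptions).
    return (0.5 * value_novelty(history, x)
            + 0.5 * (1 - abs(lag1_autocorr(history + [x]))))

flat = [20.0, 20.1, 20.0, 19.9, 20.0]       # e.g. a stable temperature
print(statistical_priority(flat, 20.0))     # redundant reading
print(statistical_priority(flat, 35.0))     # anomalous reading, higher
```

An eNB receiving these scores could then grant uplink resources to the devices reporting the highest values, so an alarm-like outlier wins over yet another unchanged temperature sample.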

    Random Access Analysis for Massive IoT Networks Under a New Spatio-Temporal Model: A Stochastic Geometry Approach

    Massive Internet of Things (mIoT) offers an auspicious opportunity to build powerful and ubiquitous connections, yet it faces a plethora of new challenges, and cellular networks are potential solutions due to their high scalability, reliability, and efficiency. The Random Access CHannel (RACH) procedure is the first step of connection establishment between IoT devices and Base Stations (BSs) in a cellular-based mIoT network, where modelling the interaction between the static properties of the physical-layer network and the dynamic evolution of the queue in each IoT device is challenging. To tackle this, we provide a novel traffic-aware spatio-temporal model to analyze RACH in cellular-based mIoT networks, where the physical-layer network is modelled and analyzed based on stochastic geometry in the spatial domain, and the queue evolution is analyzed based on probability theory in the time domain. For performance evaluation, we derive exact expressions for the preamble transmission success probabilities of a randomly chosen IoT device with different RACH schemes in each time slot, which offer insights into the effectiveness of each RACH scheme. Our derived analytical results are verified by realistic simulations capturing the evolution of packets in each IoT device. This mathematical model and analytical framework can be applied to evaluate the performance of other types of RACH schemes in cellular-based networks by simply integrating their preamble transmission principles.
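A stripped-down version of the preamble-collision part of such an analysis is easy to cross-check by simulation. The sketch below ignores the paper's spatial (path-loss/SINR) and queueing components entirely and keeps only the collision mechanism; the device count, activation probability, and preamble pool size are illustrative assumptions. For a device that is active and picks one of K preambles uniformly, the collision-free probability is (1 - p/K)^(N-1).

```python
import random

N_DEVICES, N_PREAMBLES, P_ACTIVE = 200, 54, 0.05  # assumed parameters

def success_prob(trials=5000, seed=1):
    """Monte Carlo estimate of the per-attempt preamble success
    probability in a single RACH slot (collisions only)."""
    rng = random.Random(seed)
    wins = attempts = 0
    for _ in range(trials):
        active = [d for d in range(N_DEVICES) if rng.random() < P_ACTIVE]
        choices = {d: rng.randrange(N_PREAMBLES) for d in active}
        for d, c in choices.items():
            attempts += 1
            # A preamble succeeds only if no other device picked it.
            if sum(1 for v in choices.values() if v == c) == 1:
                wins += 1
    return wins / attempts

analytic = (1 - P_ACTIVE / N_PREAMBLES) ** (N_DEVICES - 1)
print(success_prob(), analytic)
```

The simulation and the closed form agree closely, mirroring (in miniature) the paper's verification of its analytical expressions against packet-level simulations.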

    Energy Efficient and Cooperative Solutions for Next-Generation Wireless Networks

    Energy efficiency is increasingly important for next-generation wireless systems due to the limited battery resources of mobile clients. While fourth-generation cellular standards emphasize low client battery consumption, existing techniques do not explicitly focus on reducing the power consumed when a client is actively communicating with the network. Given the high data rate demands of modern multimedia applications, active-mode power consumption is expected to become a critical consideration for the development and deployment of future wireless technologies. Another reason for focusing more attention on energy efficiency studies is the relatively slow progress in battery technology combined with the growing quality of service requirements of multimedia applications. The disproportion between demanded and available battery capacity is becoming especially significant for small-scale mobile client devices, where wireless power consumption dominates the total device power budget. To compensate for this growing gap, aggressive improvements in all aspects of wireless system design are necessary. Recent work in this area indicates that joint link adaptation and resource allocation techniques optimizing energy efficiency metrics can provide a considerable gain in client power consumption. Consequently, it is crucial to adapt state-of-the-art energy efficient approaches for practical use, as well as to illustrate the pros and cons associated with applying power-bandwidth optimization to improve client energy efficiency and develop insights for future research in this area. This constitutes the first objective of the present research. Together with energy efficiency, next-generation cellular technologies are emphasizing stronger support for heterogeneous multimedia applications.
Since the integration of diverse services within a single radio platform is expected to result in higher operator profits and, at the same time, reduce network management expenses, intensive research efforts have been invested into the design principles of such networks. However, as wireless resources are limited and shared by clients, service integration may become challenging. A key element in such systems is the packet scheduler, which typically helps ensure that the individual quality of service requirements of wireless clients are satisfied. In contrast, in distributed wireless environments, random multiple access protocols are beginning to provide mechanisms for statistical quality of service assurance. However, there is currently a lack of comprehensive analytical frameworks which allow reliable control of the quality of service parameters for both cellular and local area networks. Providing such frameworks is therefore the second objective of this thesis. Additionally, the study addresses the simultaneous operation of a cellular and a local area network in spectrally intense metropolitan deployments and solves some related problems. To further improve the performance of battery-driven mobile clients, cooperative communications are seen as a promising and practical concept. In particular, they are capable of mitigating the negative effects of fading in a wireless channel and are thus expected to enhance next-generation cellular networks in terms of client spectral and energy efficiencies. At the cell edges or in areas lacking any supportive relaying infrastructure, client-based cooperative techniques become even more important. As such, a mobile client with poor channel quality may take advantage of neighboring clients which would relay data on its behalf. The key idea behind the concept of client relay is to provide flexible and distributed control over cooperative communications by the wireless clients themselves.
In contrast to fully centralized control, this is expected to minimize protocol signaling overhead and hence ensure simpler implementation. Compared to infrastructure relay, client relay will also be cheaper to deploy. Developing the novel concept of client relay, proposing simple and feasible cooperation protocols, and analyzing the basic trade-offs behind client relay functionality constitute the third objective of this research. Envisioning the evolution of cellular technologies beyond their fourth generation, it appears important to study a wireless network capable of supporting machine-to-machine applications. Recent standardization documents cover a plethora of machine-to-machine use cases and also outline the respective technical requirements and features according to the application or network environment. As follows from this activity, the smart grid is one of the primary machine-to-machine use cases; it involves meters autonomously reporting usage and alarm information to the grid infrastructure to help reduce operational costs, as well as regulate a customer's utility usage. The preliminary analysis of the reference smart grid scenario indicates weak system architecture components. For instance, a large population of machine-to-machine devices may connect nearly simultaneously to the wireless infrastructure and, consequently, suffer from excessive network entry delays. Another concern is the performance of cell-edge machine-to-machine devices with weak wireless links. Therefore, mitigating the above architecture vulnerabilities and improving the performance of future smart grid deployments is the fourth objective of this thesis. Summarizing, this thesis is generally aimed at improving the energy efficiency of mobile devices in next-generation wireless networks. The related research also embraces a novel cooperation technique whereby clients may assist each other to increase per-client and network-wide performance.
Applying the proposed solutions, the operation time of mobile clients without recharging may be increased dramatically. Our approach incorporates both analytical and simulation components to evaluate the complex interactions between the studied objectives. It brings important conclusions about energy-efficient and cooperative client behaviors, which is crucial for the further development of wireless communication technologies.

    A Survey of Scheduling in 5G URLLC and Outlook for Emerging 6G Systems

    Future wireless communication is expected to be a paradigm shift from the three basic service requirements of the 5th Generation (5G), namely enhanced Mobile Broadband (eMBB), Ultra-Reliable and Low-Latency Communication (URLLC) and massive Machine Type Communication (mMTC). Integrating the three heterogeneous services into a single system is a challenging task. The integration involves several design issues, including scheduling network resources across the various services. In particular, scheduling URLLC packets alongside eMBB and mMTC packets needs more attention, as URLLC is a promising service of 5G and beyond systems: it must meet stringent Quality of Service (QoS) requirements and is used in time-critical applications. Thus, a thorough understanding of packet scheduling issues in existing systems and of potential future challenges is necessary. This paper surveys recent work that addresses packet scheduling algorithms for 5G and beyond systems. It provides a state-of-the-art review covering three main perspectives: decentralised, centralised and joint scheduling techniques. The conventional decentralised algorithms are discussed first, followed by the centralised algorithms with a specific focus on single- and multi-connected network perspectives. Joint scheduling algorithms are also discussed in detail. In order to provide an in-depth understanding of the key scheduling approaches, the performance of some prominent scheduling algorithms is evaluated and analysed. This paper also provides insight into the potential challenges and future research directions from the scheduling perspective.
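One of the simplest baselines in the URLLC-scheduling design space surveyed above is strict-priority service with earliest-deadline-first (EDF) ordering inside the URLLC class. The sketch below is an illustrative toy, not an algorithm from the survey; packet names and deadlines are made up.

```python
import heapq

def schedule(urllc, embb):
    """urllc: list of (deadline, pkt_id) pairs; embb: FIFO list of pkt_id.
    Returns the transmission order under strict URLLC priority."""
    heapq.heapify(urllc)          # min-heap keyed on deadline (EDF)
    order = []
    while urllc or embb:
        if urllc:                 # URLLC always pre-empts eMBB
            order.append(heapq.heappop(urllc)[1])
        else:                     # eMBB served only when no URLLC waits
            order.append(embb.pop(0))
    return order

out = schedule([(3, "u1"), (1, "u2"), (2, "u3")], ["e1", "e2"])
print(out)
```

Strict priority guarantees URLLC latency at the cost of starving eMBB under heavy URLLC load, which is precisely the trade-off that the joint and puncturing-based schedulers discussed in such surveys try to soften.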