138 research outputs found

    Massive Non-Orthogonal Multiple Access for Cellular IoT: Potentials and Limitations

    Full text link
    The Internet of Things (IoT) promises ubiquitous connectivity of everything everywhere, representing the biggest technology trend in the years to come. It is expected that by 2020 over 25 billion devices will be connected to cellular networks, far beyond the number of devices in current wireless networks. Machine-to-Machine (M2M) communications aims at providing the communication infrastructure for enabling IoT by allowing billions of multi-role devices to communicate with each other and with the underlying data transport infrastructure without, or with little, human intervention. Providing this infrastructure will require a dramatic shift from the current protocols, which are mostly designed for human-to-human (H2H) applications. This article reviews recent 3GPP solutions for enabling massive cellular IoT and investigates the random access strategies for M2M communications, showing that cellular networks must evolve to handle the new ways in which devices will connect and communicate with the system. A massive non-orthogonal multiple access (NOMA) technique is then presented as a promising solution to support a massive number of IoT devices in cellular networks, and its practical challenges and future research directions are identified. Comment: To appear in IEEE Communications Magazine.
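
    As a rough illustration of the power-domain NOMA principle discussed above (a generic sketch with assumed power split, channel gains and noise values, not the article's specific scheme), the snippet below computes the rates of two devices superposed on one resource, where the stronger receiver applies successive interference cancellation (SIC):

        # Hypothetical parameters; power-domain NOMA rate computation with SIC.
        import math

        def noma_rates(g_weak, g_strong, p_total, alpha, noise):
            """Achievable rates (bit/s/Hz) when the weak user gets power fraction alpha."""
            p_weak, p_strong = alpha * p_total, (1 - alpha) * p_total
            # The weak user's signal is decoded treating the strong user's signal as noise.
            r_weak = math.log2(1 + (p_weak * g_weak) / (p_strong * g_weak + noise))
            # The strong user decodes after SIC has removed the weak user's signal.
            r_strong = math.log2(1 + (p_strong * g_strong) / noise)
            return r_weak, r_strong

        r1, r2 = noma_rates(g_weak=0.1, g_strong=1.0, p_total=1.0, alpha=0.8, noise=0.01)
        print(f"weak-user rate: {r1:.2f}, strong-user rate: {r2:.2f} bit/s/Hz")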

    Statistical priority-based uplink scheduling for M2M communications

    Get PDF
    Currently, the worldwide network is witnessing major efforts to transform it from being the Internet of humans only to becoming the Internet of Things (IoT). It is expected that Machine Type Communication Devices (MTCDs) will overwhelm cellular networks with huge volumes of data collected from their environments and sent to other remote MTCDs for processing, thus forming what is known as Machine-to-Machine (M2M) communications. Long Term Evolution (LTE) and LTE-Advanced (LTE-A) appear to be the best technologies to support M2M communications due to their native IP support. LTE can provide high capacity, flexible radio resource allocation and scalability, which are the required pillars for supporting the expected large numbers of deployed MTCDs. Supporting M2M communications over LTE faces many challenges, including medium access control and the allocation of radio resources among MTCDs. The problem of radio resource allocation, or scheduling, originates from the nature of M2M traffic: a large number of small data packets, with specific deadlines, generated by a potentially massive number of MTCDs. M2M traffic is therefore mostly in the uplink direction, i.e. from MTCDs to the base station (known as the eNB in LTE terminology). These characteristics impose design requirements on M2M scheduling techniques, such as the need to transmit a huge amount of traffic within certain deadlines using limited radio resources. This is the main motivation behind this thesis work. In this thesis, we introduce a novel M2M scheduling scheme that utilizes what we term the “statistical priority” in determining the importance of information carried by data packets. Statistical priority is calculated based on statistical features of the data such as value similarity, trend similarity and auto-correlation. These calculations are made by the MTCDs and reported to the serving eNBs along with other reports such as channel state. Statistical priority is then used to assign priorities to data packets so that the scarce radio resources are allocated to the MTCDs that are sending statistically important information. This helps avoid spending limited radio resources on redundant or repetitive data, a common situation in M2M communications. To validate our technique, we perform a simulation-based comparison between the main scheduling techniques and our proposed statistical priority-based scheduling technique. The comparison is conducted in a network that includes different types of MTCDs, such as environmental monitoring sensors, surveillance cameras and alarms. The results show that our proposed statistical priority-based scheduler outperforms the other schedulers in terms of having the lowest loss of alarm data packets and the highest rate of delivery of critical data packets that carry non-redundant information for both environmental monitoring and video traffic. This indicates that the proposed technique makes the most efficient use of the limited radio resources compared to the other techniques.
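
    As a minimal sketch of how a statistical priority could be computed at an MTCD (the weighting and window handling here are assumptions for illustration, not the thesis's exact formulas), the following combines value similarity, trend similarity and lag-1 auto-correlation into a single score that rises when the new data is non-redundant:

        # Assumed scoring rule: redundant, predictable windows get low priority,
        # novel windows get high priority (would be reported to the eNB with CSI).
        import numpy as np

        def statistical_priority(prev, curr):
            prev, curr = np.asarray(prev, float), np.asarray(curr, float)  # equal-length windows
            # Value similarity: closeness of the two windows' means.
            value_sim = 1.0 / (1.0 + abs(curr.mean() - prev.mean()))
            # Trend similarity: agreement of the sign of consecutive differences.
            trend_sim = float(np.mean(np.sign(np.diff(curr)) == np.sign(np.diff(prev))))
            # Lag-1 auto-correlation magnitude of the current window (high => predictable).
            d = np.corrcoef(curr[:-1], curr[1:])[0, 1]
            autocorr = 1.0 if np.isnan(d) else abs(d)
            return 1.0 - (value_sim + trend_sim + autocorr) / 3.0

        print(statistical_priority([20.1, 20.2, 20.1, 20.2], [20.1, 20.2, 20.1, 20.2]))  # redundant -> low
        print(statistical_priority([20.1, 20.2, 20.1, 20.2], [25.0, 24.3, 30.8, 36.1]))  # novel -> high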

    Radio Resource Sharing for MTC in LTE-A: An Interference-Aware Bipartite Graph Approach

    Get PDF
    Traditional cellular networks have been considered the most promising candidates to support machine-to-machine (M2M) communication, mainly due to their ubiquitous coverage. Because they were optimally designed to support human-to-human (H2H) communication, an innovative access to radio resources is required to accommodate unique M2M features such as the massive number of machine type devices (MTDs) and the limited data transmission session. In this paper, we consider simultaneous access to the spectrum in an M2M/H2H coexistence scenario. Taking advantage of the new device-to-device (D2D) communication paradigm enabled in Long Term Evolution-Advanced (LTE-A), we propose to combine M2M and D2D, exploiting the low MTD transmit power to enable efficient resource sharing. First, we formulate the resource sharing problem as a sum-rate maximization problem, for which the optimal solution has been proved to be non-deterministic polynomial-time hard (NP-hard). We then model the problem as a novel interference-aware bipartite graph to overcome the computational complexity of the optimal solution. To solve this problem, we consider a two-phase resource allocation approach. In the first phase, H2H user resource assignment is performed in a conventional way. In the second phase, we introduce two alternative algorithms, one centralized and one semi-distributed, to perform M2M resource allocation. Both algorithms solve the M2M resource allocation with polynomial complexity. Simulation results show that the semi-distributed M2M resource allocation algorithm achieves good performance in terms of network aggregate sum-rate with markedly lower communication overhead compared to the centralized one.
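
    A minimal sketch of the second-phase matching idea is given below, assuming random channel gains and a simplified SINR model rather than the paper's exact interference-aware graph construction; each M2M pair is matched to at most one H2H user's resource block so that the aggregate sum-rate is maximised, here solved centrally with the Hungarian method:

        # Assumed toy channel model; the bipartite matching maximises the sum-rate.
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(0)
        n_h2h, n_m2m, noise = 6, 4, 1e-3

        g_h2h = rng.exponential(1.0, n_h2h)                   # H2H user -> eNB gains
        g_m2m = rng.exponential(1.0, n_m2m)                   # MTD transmitter -> receiver gains
        g_h2h_to_m2m = rng.exponential(0.1, (n_h2h, n_m2m))   # cross-interference gains
        g_m2m_to_enb = rng.exponential(0.1, n_m2m)

        # Edge weight = sum-rate on the shared block if M2M pair j reuses H2H user i's block.
        rate = np.zeros((n_h2h, n_m2m))
        for i in range(n_h2h):
            for j in range(n_m2m):
                sinr_h2h = g_h2h[i] / (g_m2m_to_enb[j] + noise)
                sinr_m2m = g_m2m[j] / (g_h2h_to_m2m[i, j] + noise)
                rate[i, j] = np.log2(1 + sinr_h2h) + np.log2(1 + sinr_m2m)

        rows, cols = linear_sum_assignment(-rate)             # maximise total weight
        print("pairs:", list(zip(rows.tolist(), cols.tolist())),
              "aggregate rate:", rate[rows, cols].sum())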

    5GAuRA. D3.3: RAN Analytics Mechanisms and Performance Benchmarking of Video, Time Critical, and Social Applications

    Get PDF
    5GAuRA deliverable D3.3. This is the final deliverable of Work Package 3 (WP3) of the 5GAuRA project, providing a report on the project’s developments on the topics of Radio Access Network (RAN) analytics and application performance benchmarking. The focus of this deliverable is to extend and deepen the methods and results provided in the 5GAuRA deliverable D3.2 in the context of specific use scenarios of video, time critical, and social applications. In this respect, four major topics of WP3 of 5GAuRA – namely edge-cloud enhanced RAN architecture, machine learning assisted Random Access Channel (RACH) approach, Multi-access Edge Computing (MEC) content caching, and active queue management – are put forward. Specifically, this document provides a detailed discussion on the service level agreement between tenant and service provider in the context of network slicing in Fifth Generation (5G) communication networks. Network slicing is considered a key enabler of the 5G communication system. Legacy telecommunication networks have been providing various services to all kinds of customers through a single network infrastructure. In contrast, by deploying network slicing, operators are now able to partition one network into individual slices, each with its own configuration and Quality of Service (QoS) requirements. There are many applications across industry that open new business opportunities with new business models. Every application instance requires an independent slice with its own network functions and features, whereby every single slice needs an individual Service Level Agreement (SLA). In D3.3, we propose a comprehensive end-to-end structure of the SLA between the tenant and the service provider of a sliced 5G network, which balances the interests of both sides. The proposed SLA defines the reliability, availability, and performance of delivered telecommunication services in order to ensure that the right information is delivered to the right destination at the right time, safely and securely. We also discuss the metrics of a slice-based network SLA, such as throughput, penalty, cost, revenue, profit, and QoS related metrics, which are, in the view of 5GAuRA, critical features of the agreement.
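
    To make the slice-based SLA metrics concrete, the toy accounting below assumes a simple linear penalty for throughput shortfall (the deliverable's actual SLA structure and penalty model may differ):

        # Assumed per-slice SLA accounting: profit = revenue - cost - penalty for shortfall.
        from dataclasses import dataclass

        @dataclass
        class SliceSLA:
            name: str
            guaranteed_mbps: float   # throughput guaranteed in the SLA
            revenue: float           # what the tenant pays per period
            cost: float              # provider's cost of running the slice per period
            penalty_per_mbps: float  # penalty per Mbps of shortfall

            def profit(self, measured_mbps: float) -> float:
                shortfall = max(0.0, self.guaranteed_mbps - measured_mbps)
                return self.revenue - self.cost - self.penalty_per_mbps * shortfall

        video = SliceSLA("video", guaranteed_mbps=50.0, revenue=100.0, cost=40.0, penalty_per_mbps=2.0)
        print(video.profit(measured_mbps=55.0))  # SLA met: 60.0
        print(video.profit(measured_mbps=42.0))  # 8 Mbps shortfall: 44.0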

    LTE network slicing and resource trading schemes for machine-to-machine communications

    Get PDF
    The Internet of Things (IoT) is envisioned as the future of human-free communications. IoT relies on Machine-to-Machine (M2M) communications rather than conventional Human-to-Human (H2H) communications. It is expected that billions of Machine Type Communication Devices (MTCDs) will be connected to the Internet in the near future. Consequently, mobile data traffic is poised to increase dramatically. Long Term Evolution (LTE) and its subsequent technology LTE-Advanced (LTE-A) are the candidate carriers of M2M communications for IoT purposes. Despite the significant increase of traffic due to IoT, Mobile Network Operator (MNO) revenues are not increasing at the same pace. Hence, many MNOs have resorted to sharing their radio resources and parts of their infrastructures, in what is known as Network Virtualization (NV). In this thesis, we focus on slicing, in which an operator known as a Mobile Virtual Network Operator (MVNO) does not own a spectrum license or mobile infrastructure and relies on a larger MNO to serve its users. The large licensed MNO divides its spectrum pool into slices, and each MVNO reserves one or more slices. There are two forms of slice scheduling: resource-based, in which the slices are assigned a portion of the radio resources, and data rate-based, in which the slices are assigned a certain bandwidth. In the first part of this thesis we present different approaches for adapting resource-based and data rate-based NV to Machine Type Communication (MTC), in such a way that resources are allocated to each slice depending on the delay budget of the MTCDs deployed in the slice and their payloads. The adapted NV schemes are then simulated and compared to the Static Reservation (SR) of radio resources; they all show improved performance over SR in terms of missed deadlines. In the second part of the thesis, we introduce a novel resource trading scheme that allows sharing operators to trade their radio resources based on the time-varying needs of their clients. A Genetic Algorithm (GA) is used to optimize the resource trading among the virtual operators. The proposed trading scheme is simulated and compared to the adapted schemes from the first part of the thesis, and is shown to achieve significantly better performance.
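
    A minimal sketch of GA-based resource trading is shown below, with an assumed fitness function that penalises unmet per-slice demand as a proxy for missed deadlines (the thesis's actual chromosome encoding and fitness are not reproduced here):

        # Assumed demand figures; a small GA trades a shared PRB pool among slices.
        import random

        N_SLICES, POOL_PRBS = 3, 100
        demand = [55, 30, 40]                  # PRBs each slice needs this period (assumed)

        def fitness(share):
            # Penalise unmet demand (proxy for missed deadlines); higher is better.
            alloc = [s * POOL_PRBS for s in share]
            return -sum(max(0.0, d - a) for d, a in zip(demand, alloc))

        def normalise(ch):
            total = sum(ch)
            return [c / total for c in ch]

        def evolve(pop_size=30, gens=100, mut=0.1):
            pop = [normalise([random.random() for _ in range(N_SLICES)]) for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                survivors = pop[: pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    child = [(x + y) / 2 for x, y in zip(a, b)]                    # crossover
                    child = [max(1e-6, c + random.gauss(0, mut)) for c in child]   # mutation
                    children.append(normalise(child))
                pop = survivors + children
            return max(pop, key=fitness)

        best = evolve()
        print("PRB shares:", [round(s, 2) for s in best], "unmet PRBs:", -fitness(best))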

    Probabilistic Rateless Multiple Access for Machine-to-Machine Communication

    Get PDF
    Future machine-to-machine (M2M) communications need to support a massive number of devices communicating with each other with little or no human intervention. Random access techniques were originally proposed to enable M2M multiple access, but they suffer from severe congestion and access delay in an M2M system with a large number of devices. In this paper, we propose a novel multiple access scheme for M2M communications based on the capacity-approaching analog fountain code to efficiently minimize the access delay and satisfy the delay requirement of each device. This is achieved by allowing M2M devices to transmit at the same time on the same channel in an optimal probabilistic manner based on their individual delay requirements. Simulation results show that the proposed scheme achieves near-optimal rate performance and at the same time guarantees the delay requirements of the devices. We further propose a simple random access strategy and characterize the required overhead. Simulation results show that the proposed approach significantly outperforms the existing random access schemes currently used in the Long Term Evolution-Advanced (LTE-A) standard in terms of access delay. Comment: Accepted for publication in IEEE Transactions on Wireless Communications.
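
    The toy model below illustrates only the delay-aware probabilistic access idea on a slotted collision channel, not the analog fountain code scheme itself; access probabilities are weighted by the inverse of each device's delay budget and normalised so that roughly one device contends per slot:

        # Assumed slotted collision channel and inverse-budget weighting.
        import random

        random.seed(1)
        budgets = [random.randint(2, 20) for _ in range(50)]   # delay budgets in slots (assumed)

        served, slot = set(), 0
        while len(served) < len(budgets) and slot < 2000:
            slot += 1
            remaining = [i for i in range(len(budgets)) if i not in served]
            weights = [1.0 / budgets[i] for i in remaining]
            total = sum(weights)
            # Tighter delay budget -> larger access probability; probabilities sum to 1,
            # so on average one of the remaining devices transmits per slot.
            contenders = [i for i, w in zip(remaining, weights) if random.random() < w / total]
            if len(contenders) == 1:                           # success only without collision
                served.add(contenders[0])

        print(f"served {len(served)}/{len(budgets)} devices in {slot} slots")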

    Joint delay and energy aware dragonfly optimization-based uplink resource allocation scheme for LTE-A networks in a cross-layer environment

    Get PDF
    The exponential growth in data traffic from smart devices has led to a need for highly capable wireless networks with faster data transmission rates and improved spectral efficiency. Allocating resources efficiently in a 5G communication system with a huge number of machine type communication (MTC) devices is essential to ensure optimal performance and meet the diverse requirements of different applications. The LTE-A network offers high-speed mobile data services and caters to MTC devices, which have relatively low data service requirements compared to human-to-human (H2H) communications. LTE-A networks require advanced scheduling schemes to manage the limited spectrum and ensure efficient transmissions, which necessitates effective resource allocation schemes to minimize interference between cells in future networks. To address this issue, a joint delay and energy aware Levy flight Brownian movement-based dragonfly optimization (DELFBDO) uplink resource allocation scheme for LTE-A networks is proposed in this work to optimize energy efficiency, maximize throughput and reduce latency. The DELFBDO algorithm efficiently organizes packets in both the time and frequency domains for H2H and MTC devices, resulting in improved quality of service while minimizing energy consumption. The simulation results demonstrate that the proposed method increases energy efficiency by producing the appropriate channel and power assignment for UEs and MTC devices.
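
    A heavily simplified sketch of a dragonfly-style optimiser with Levy flight steps is given below; the objective is a toy stand-in for a joint delay-and-energy cost, and the update rules are reduced to cohesion and attraction to the best solution, so this is not the DELFBDO algorithm itself:

        # Assumed toy objective and reduced swarm update with Levy flight exploration.
        import math
        import random

        DIM, N, ITERS = 4, 20, 200

        def cost(x):
            # Toy stand-in for a weighted delay-plus-energy objective (minimise).
            return sum((xi - 0.3 * i) ** 2 for i, xi in enumerate(x))

        def levy_step(beta=1.5):
            # Mantegna's algorithm for a Levy-distributed step length.
            sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                     (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u, v = random.gauss(0, sigma), random.gauss(0, 1)
            return 0.01 * u / abs(v) ** (1 / beta)

        pos = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(N)]
        step = [[0.0] * DIM for _ in range(N)]
        best = list(min(pos, key=cost))

        for t in range(ITERS):
            w = 0.9 - 0.5 * t / ITERS                          # decreasing inertia
            centre = [sum(p[d] for p in pos) / N for d in range(DIM)]
            for i in range(N):
                for d in range(DIM):
                    cohesion = centre[d] - pos[i][d]           # move toward the swarm centre
                    food = best[d] - pos[i][d]                 # move toward the best solution
                    step[i][d] = w * step[i][d] + 0.1 * cohesion + 0.7 * food
                    pos[i][d] += step[i][d] + levy_step()      # Levy flight adds exploration
            cand = min(pos, key=cost)
            if cost(cand) < cost(best):
                best = list(cand)

        print("best cost:", round(cost(best), 4), "solution:", [round(x, 3) for x in best])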

    Prioritised Random Access Channel Protocols for Delay Critical M2M Communication over Cellular Networks

    Get PDF
    With the ever-increasing technological evolution, current and future generation communication systems are geared towards accommodating Machine-to-Machine (M2M) communication as a necessary prerequisite for the Internet of Things (IoT). Machine Type Communication (MTC) can sustain many promising applications by connecting a huge number of devices into one network. As current studies indicate, the number of devices is escalating at a high rate. Consequently, the network becomes congested because of its limited capacity when a massive number of devices attempt simultaneous connection through the Random Access Channel (RACH). This results in RACH resource shortage, which can lead to high collision probability and massive access delay. Hence, it is critical to upgrade conventional Random Access (RA) techniques to support a massive number of MTC devices, including Delay-Critical (DC) MTC. This thesis tackles the problem by modelling and optimising the access throughput and access delay performance of massive random access of M2M communications in Long-Term Evolution (LTE) networks, and investigates the performance of different random access schemes in different scenarios. The study begins with the design and inspection of a group-based 2-step Slotted-Aloha RACH (SA-RACH) scheme considering the coexistence of Human-to-Human (H2H) and M2M communication, the latter of which is categorised as Delay-Critical user equipments (DC-UEs) and Non-Delay-Critical user equipments (NDC-UEs). Next, a novel RACH scheme termed the Priority-based Dynamic RACH (PD-RACH) model is proposed, which utilises a coded preamble based collision probability model. Finally, being a key enabler of IoT, machine learning, specifically a Q-learning based approach, has been adopted, and a learning assisted Prioritised RACH scheme has been developed and investigated to prioritise a specific user group. The performance analysis of these novel RACH schemes shows promising results compared to that of conventional RACH.
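
    As a minimal sketch of how Q-learning could be used to prioritise a user group at the RACH (the state, action and reward design here are assumptions, not the thesis's formulation), the following bandit-style learner picks how many contention preambles to reserve for delay-critical devices:

        # Assumed reward: expected preamble successes, with DC successes weighted higher.
        import random

        random.seed(0)
        ACTIONS = [6, 12, 18, 24, 30]      # preambles reserved for the DC group (assumed split options)
        Q = {a: 0.0 for a in ACTIONS}
        alpha, eps, total_preambles = 0.1, 0.2, 54

        def success_prob(n_devices, n_preambles):
            # P(a given device picks a preamble nobody else picked), slotted-ALOHA style.
            return (1 - 1 / n_preambles) ** (n_devices - 1) if n_preambles > 0 else 0.0

        def reward(reserved, n_dc=20, n_ndc=120, w_dc=3.0):
            dc = n_dc * success_prob(n_dc, reserved)
            ndc = n_ndc * success_prob(n_ndc, total_preambles - reserved)
            return w_dc * dc + ndc

        for episode in range(2000):
            a = random.choice(ACTIONS) if random.random() < eps else max(Q, key=Q.get)
            r = reward(a) + random.gauss(0, 1.0)          # noisy observation of successes
            Q[a] += alpha * (r - Q[a])                    # stateless (bandit-style) Q update

        print({a: round(q, 1) for a, q in Q.items()}, "-> chosen split:", max(Q, key=Q.get))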