
    Optimized LTE Data Transmission Procedures for IoT: Device Side Energy Consumption Analysis

    The efficient deployment of the Internet of Things (IoT) over cellular networks, such as Long Term Evolution (LTE) or the next-generation 5G, entails several challenges. For massive IoT, reducing the energy consumption on the device side becomes essential. One of the main characteristics of massive IoT is small data transmissions. To improve their support, the 3GPP has included two novel optimizations in LTE: one based on the Control Plane (CP), and the other on the User Plane (UP). In this paper, we analyze the average energy consumption per data packet with these two optimizations compared to the conventional LTE Service Request procedure. We propose an analytical model, based on a Markov chain, to calculate the energy consumption of each procedure. In the considered scenario, the three procedures yield similar results for large and small Inter-Arrival Times (IATs), while for medium IATs the CP optimization reduces the energy consumption per packet by up to 87% thanks to its connection release optimization.
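    The calculation behind such a comparison can be pictured with a small sketch. The state machine, power levels, timer values and procedure labels below are illustrative assumptions, not the paper's Markov model or its measured parameters; the snippet only shows how an expected energy-per-packet figure is assembled from the time a device spends in each radio state between packets.

```python
# Toy energy-per-packet calculation for three simplified transmission
# procedures (hypothetical "legacy", "up" and "cp" variants). All power
# levels and durations are assumed values for illustration only.

P_TX   = 500.0   # mW, transmitting / setting up the connection
P_CONN = 120.0   # mW, connected but inactive (inactivity-timer tail)
P_IDLE = 5.0     # mW, RRC idle with DRX

def energy_per_packet(iat_s, procedure):
    """Energy (mJ) spent by a device between two consecutive small packets."""
    t_tx = 0.01                                           # s, data transfer
    t_inactivity = 0.0 if procedure == "cp" else 10.0     # s, CP releases at once
    still_connected = iat_s < t_inactivity                # previous connection kept
    t_setup = 0.0 if still_connected else {"legacy": 0.10, "up": 0.05, "cp": 0.03}[procedure]

    e = (t_setup + t_tx) * P_TX                               # signalling + data
    t_left = max(iat_s - t_setup - t_tx, 0.0)
    t_conn = min(t_left, t_inactivity)                        # tail in connected mode
    return e + t_conn * P_CONN + (t_left - t_conn) * P_IDLE   # tail + idle

for iat in (1, 30, 3600):   # small, medium and large IAT in seconds
    row = {p: round(energy_per_packet(iat, p), 1) for p in ("legacy", "up", "cp")}
    print(f"IAT={iat:>4}s  energy per packet (mJ): {row}")
```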

    A Tractable Model of the LTE Access Reservation Procedure for Machine-Type Communications

    A canonical scenario in Machine-Type Communications (MTC) features a large number of devices, each with sporadic traffic. Hence, the number of devices served in a single LTE cell is not determined by the available aggregate rate, but rather by the limitations of the LTE access reservation protocol. Specifically, the limited number of contention preambles and the limited number of uplink grants per random access response are crucial to consider when dimensioning LTE networks for MTC. We propose a low-complexity model of LTE's access reservation protocol that encompasses these two limitations and allows us to evaluate the outage probability at click-speed. The model is based chiefly on closed-form expressions, except for the part capturing the feedback impact of retransmissions, which is determined by solving a fixed-point equation. Our model overcomes the incompleteness of existing models that focus solely on preamble collisions. A comparison with a simulated LTE access reservation procedure that follows the 3GPP specifications confirms that our model accurately estimates the system outage event and the number of supported MTC devices.
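    The two limitations the model captures can be illustrated with a single-shot sketch. This is a simplified toy, not the paper's model: it ignores retransmission feedback, considers one random-access opportunity, and the numbers of devices, preambles and RAR uplink grants are illustrative values.

```python
# Toy estimate of one device's single-attempt success probability when limited
# by (a) preamble collisions and (b) the number of uplink grants per random
# access response (RAR). Parameter values are illustrative assumptions.
import random

def single_attempt_success(n_devices, n_preambles, n_grants, trials=20000):
    wins = 0
    for _ in range(trials):
        picks = [random.randrange(n_preambles) for _ in range(n_devices)]
        if picks.count(picks[0]) > 1:
            continue                               # tagged device's preamble collided
        singletons = sum(1 for p in set(picks) if picks.count(p) == 1)
        # the eNB grants at most n_grants of the detected (singleton) preambles;
        # assume it chooses uniformly among them
        if singletons <= n_grants or random.random() < n_grants / singletons:
            wins += 1
    return wins / trials

n, m, g = 30, 54, 10
print("collision-free probability (closed form):", (1 - 1 / m) ** (n - 1))
print("success incl. grant limit (Monte Carlo) :", single_attempt_success(n, m, g))
```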

    Achieving Ultra-Low Latency in 5G Millimeter Wave Cellular Networks

    The IMT-2020 requirements of a 20 Gbps peak data rate and 1 millisecond latency present significant engineering challenges for the design of 5G cellular systems. Use of the millimeter wave (mmWave) bands above 10 GHz, where vast quantities of spectrum are available, is a promising 5G candidate that may be able to rise to the occasion. However, while the mmWave bands can support massive peak data rates, delivering these data rates to end-to-end services while maintaining reliability and ultra-low latency will require rethinking all layers of the protocol stack. This paper surveys some of the challenges and possible solutions for delivering end-to-end, reliable, ultra-low-latency services in mmWave cellular systems in terms of the Medium Access Control (MAC) layer, congestion control and core network architecture.

    Design and analysis of LTE and wi-fi schemes for communications of massive machine devices

    Existing communication technologies are designed with specific use cases in mind; however, extending these use cases usually throws up interesting challenges. For example, extending the use of existing cellular networks to emerging applications such as Internet of Things (IoT) devices raises the challenge of handling a massive number of devices. In this thesis, we are motivated to investigate the existing schemes used in LTE and Wi-Fi for supporting massive machine devices and to improve on observed performance gaps by designing new schemes that outperform them. The thesis investigates the existing random access protocol in LTE and proposes three schemes to combat the massive device access challenge. The first is a root index reuse and allocation scheme, which uses link budget calculations to extract a safe distance for preamble reuse under variable cell sizes and also proposes an index allocation algorithm. The second is a dynamic subframe optimisation scheme that tackles the challenge from an optimisation perspective. The third is the use of small cells for random access. Simulation and numerical analysis show performance improvements over existing schemes in terms of throughput, access delay and probability of collision; in some cases, over 20% improvement was observed. The proposed schemes provide quicker and better guaranteed opportunities for machine devices to communicate. Also, in Wi-Fi networks, adapting the transmission rate to dynamic channel conditions is a major challenge. Two algorithms are proposed to combat this: the first makes use of contextual information to determine the network state and respond appropriately, whilst the second samples candidate transmission modes and uses the effective throughput to make a decision. The proposed algorithms were compared to several existing rate adaptation algorithms by simulation and under various system and channel configurations. They show significant performance improvements in terms of throughput, thus confirming their suitability for dynamic channel conditions.
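    The second Wi-Fi idea, picking the mode with the best effective throughput, can be sketched generically. The rate set, window size and ageing rule below are assumptions for illustration, not taken from the thesis: frame outcomes reported by the MAC feed per-rate loss estimates, and the sender selects the rate maximising PHY rate times (1 - PER).

```python
# Generic effective-throughput rate selection sketch (illustrative only).
CANDIDATE_RATES_MBPS = [6, 12, 24, 36, 48, 54]   # assumed 802.11a/g-style rate set

class EffectiveThroughputSelector:
    def __init__(self, rates=CANDIDATE_RATES_MBPS, window=50):
        self.rates = rates
        self.window = window
        self.stats = {r: [0, 0] for r in rates}   # per-rate [successes, attempts]

    def report(self, rate, success):
        """Record the outcome of one frame sent (or probed) at `rate`."""
        s = self.stats[rate]
        s[0] += int(success)
        s[1] += 1
        if s[1] > self.window:                    # crude ageing of old samples
            s[0], s[1] = s[0] // 2, s[1] // 2

    def best_rate(self):
        """Rate maximising estimated effective throughput = rate * (1 - PER)."""
        def effective(r):
            succ, att = self.stats[r]
            return r * succ / att if att else 0.0  # unprobed rates score zero
        return max(self.rates, key=effective)

# Usage: the caller occasionally probes neighbouring rates so they gather samples.
sel = EffectiveThroughputSelector()
sel.report(24, True); sel.report(24, True); sel.report(36, False)
print(sel.best_rate())                             # -> 24 with these toy samples
```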

    Software Defined Radio for NB-IoT

    The next generation of mobile radio systems is expected to provide wireless connectivity for a wide range of new applications and services involving not only people but also machines and objects. Within a few years, billions of low-cost, low-complexity devices and sensors will be connected to the Internet, forming a converged ecosystem called the Internet of Things (IoT). As a result, in 2016 the 3GPP standardized NB-IoT, a new narrowband radio technology developed for the IoT market. Massive connectivity, reduced UE complexity, coverage extension and deployment flexibility are the targets of this new radio interface, which also ensures harmonious coexistence with current GSM, GPRS and LTE systems. In parallel, the rise of open-source software combined with Software Defined Radio (SDR) solutions has completely changed radio systems engineering in recent years. This thesis focuses on developing the NB-IoT protocol stack on EURECOM's open-source software platform OpenAirInterface (OAI). The first part of this work aims to implement NB-IoT's Radio Resource Control functionalities in OAI. After an introduction to the platform architecture, a new RRC layer code structure and related interfaces are defined, along with a new approach for Signalling Radio Bearer management. A detailed analysis of System Information scheduling is conducted and a subframe-based transmission scheme is then proposed. The last part of this thesis addresses the implementation of a multi-vendor platform interface based on the Small Cell Forum's Functional Application Platform Interface (FAPI) standard. A configurable and dynamically loadable Interface Module (IF-Module) is designed between OAI's MAC and PHY layers. Primitives and related code structures are presented, as well as the corresponding data and configuration procedures. Finally, the convergence of the NB-IoT and FAPI requirements leads to a redesign of PHY layer mechanisms, for which a downlink transmission scheme is proposed.
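    The flavour of subframe-based System Information scheduling can be conveyed with a small sketch. The parameters (periodicity, window length, repetition subframes) and the names below are hypothetical and do not reflect the OAI code structure or NB-IoT's actual SI configuration; the snippet only shows how a scheduler decides, tick by tick, whether an SI block occupies a given frame and subframe.

```python
# Toy subframe-based SI scheduler with made-up parameters (illustration only).
from dataclasses import dataclass

@dataclass
class SiConfig:
    name: str
    period_frames: int      # how often the SI window recurs (in radio frames)
    window_frames: int      # length of the SI window (in radio frames)
    subframes: tuple        # subframes of each window frame used for repetitions

def si_due(cfg, frame, subframe):
    """True if this SI block occupies the given frame/subframe."""
    pos_in_period = frame % cfg.period_frames
    return pos_in_period < cfg.window_frames and subframe in cfg.subframes

# Illustrative configuration: one block every 64 frames, repeated in
# subframe 4 of the first 16 frames of its window.
sib_x = SiConfig("SIB-x", period_frames=64, window_frames=16, subframes=(4,))

schedule = [(f, sf) for f in range(128) for sf in range(10) if si_due(sib_x, f, sf)]
print(len(schedule), schedule[:4])   # 32 occasions, starting at (0, 4), (1, 4), ...
```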

    Probabilistic Rateless Multiple Access for Machine-to-Machine Communication

    Future machine-to-machine (M2M) communications need to support a massive number of devices communicating with each other with little or no human intervention. Random access techniques were originally proposed to enable M2M multiple access, but they suffer from severe congestion and access delay in an M2M system with a large number of devices. In this paper, we propose a novel multiple access scheme for M2M communications based on the capacity-approaching analog fountain code, which efficiently minimizes the access delay and satisfies the delay requirement of each device. This is achieved by allowing M2M devices to transmit at the same time on the same channel in an optimal probabilistic manner based on their individual delay requirements. Simulation results show that the proposed scheme achieves near-optimal rate performance while guaranteeing the delay requirements of the devices. We further propose a simple random access strategy and characterize the required overhead. Simulation results show that the proposed approach significantly outperforms the existing random access schemes currently used in the Long Term Evolution Advanced (LTE-A) standard in terms of access delay.
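    The general idea of tuning per-device transmission probabilities to delay requirements can be shown with a toy calculation. This is not the paper's analog fountain code scheme: it assumes an abstract model in which each attempt is decoded with a fixed probability q, so the access delay is geometric, and it picks the smallest transmit probability keeping the delay-violation probability below a target eps; q, eps and the delay budgets are assumed values.

```python
# Toy mapping from a delay budget to a transmit probability (illustration only).
# In this abstract model P(delay > D) = (1 - p*q)**D, so we solve for the
# smallest p with (1 - p*q)**D <= eps.

q = 0.6       # assumed per-attempt decoding probability at the receiver
eps = 0.01    # assumed target probability of missing the delay budget

def tx_probability(delay_budget_slots):
    """Smallest transmit probability meeting the delay budget in this toy model."""
    p = (1.0 - eps ** (1.0 / delay_budget_slots)) / q
    return min(1.0, p)

for d in (5, 20, 100):    # tight, moderate and loose delay budgets (in slots)
    print(f"delay budget {d:>3} slots -> transmit with p = {tx_probability(d):.3f}")
```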

    Success Probability of Multiple-Preamble Based Single-Attempt Random Access to Mobile Networks

    In this letter, we analyse the trade-off between collision probability and code ambiguity when devices transmit a sequence of preambles as a codeword, instead of a single preamble, to reduce the collision probability during random access to a mobile network. We point out that the network may not have sufficient resources to allocate an uplink grant to every possible codeword, and that if it does, this results in low utilisation of the allocated uplink resources. We derive the optimal preamble set size that maximises the probability of success in a single attempt, for a given number of devices and amount of uplink resources.
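    The collision versus code-ambiguity trade-off can be seen even in a crude Monte Carlo toy, which is not the letter's analytical model: codewords are chosen uniformly, the network is assumed to grant uniformly among all codewords consistent with the per-slot observations (including phantom ones), and the device count, codeword length and grant budget are illustrative. Sweeping the preamble set size exposes, with these toy numbers, an interior maximum of the single-attempt success probability.

```python
# Toy Monte Carlo: devices send a codeword of L preambles over L consecutive
# slots, drawn from a set of M preambles. The network sees only which preambles
# were active per slot, so every combination of active preambles is a candidate
# ("phantom") codeword competing for the G uplink grants. Illustrative values.
import random

def success_prob(n_devices, m, codeword_len, n_grants, trials=4000):
    wins = 0
    for _ in range(trials):
        words = [tuple(random.randrange(m) for _ in range(codeword_len))
                 for _ in range(n_devices)]
        if words.count(words[0]) > 1:
            continue                                  # codeword collision
        candidates = 1
        for t in range(codeword_len):
            candidates *= len({w[t] for w in words})  # active preambles in slot t
        if candidates <= n_grants or random.random() < n_grants / candidates:
            wins += 1
    return wins / trials

for m in (2, 4, 8, 16, 32):
    p = success_prob(n_devices=20, m=m, codeword_len=2, n_grants=15)
    print(f"preamble set size M={m:>2}: single-attempt success ~ {p:.2f}")
```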