
    Semi-persistent RRC protocol for machine-type communication devices in LTE networks

    In this paper, we investigate the design of a radio resource control (RRC) protocol in the framework of Long Term Evolution (LTE) of the 3rd Generation Partnership Project, with regard to the provision of low-cost/complexity and low-energy-consumption machine-type communication (MTC), an enabling technology for the emerging paradigm of the Internet of Things. Given the envisaged battery-operated, long-life operation of MTC devices without human intervention, energy efficiency becomes extremely important. This paper reviews state-of-the-art approaches to low-energy operation of MTC devices and proposes a novel RRC protocol design, namely semi-persistent RRC state transition (SPRST), in which RRC state transitions are no longer triggered by incoming traffic but depend on pre-determined parameters derived from the traffic pattern stored in the network memory. The proposed protocol can easily co-exist with the legacy RRC protocol in LTE. The design criterion of SPRST is derived and the signalling procedure is investigated accordingly. Simulation results show that SPRST significantly reduces both the energy consumption and the signalling overhead while guaranteeing the quality-of-service requirements.
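The core idea of a semi-persistent state transition can be illustrated with a minimal sketch: the device toggles between RRC states on a pre-computed timer schedule rather than on traffic arrival. All class and parameter names below are illustrative, not taken from the paper.

```python
# Illustrative sketch: RRC state transitions driven by a pre-computed
# schedule (derived from the traffic pattern) instead of incoming traffic.

class SemiPersistentRrc:
    """Toggle between CONNECTED and IDLE on timers derived from the
    device's historical traffic pattern (the 'network memory')."""

    def __init__(self, active_time, idle_time):
        self.active_time = active_time  # seconds to stay CONNECTED
        self.idle_time = idle_time      # seconds to stay IDLE
        self.state = "IDLE"
        self.next_switch = idle_time

    def tick(self, now):
        # State changes depend only on the schedule, not on traffic arrival.
        if now >= self.next_switch:
            if self.state == "IDLE":
                self.state = "CONNECTED"
                self.next_switch = now + self.active_time
            else:
                self.state = "IDLE"
                self.next_switch = now + self.idle_time
        return self.state

# A device that wakes for 2 s of every 10 s, sampled every 2 s:
rrc = SemiPersistentRrc(active_time=2.0, idle_time=8.0)
states = [rrc.tick(t) for t in range(0, 20, 2)]
```

Because the schedule is known to both the device and the network, no per-transition signalling is needed, which is where the overhead saving comes from.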

    Predictive and core-network efficient RRC signalling for active state handover in RANs with control/data separation

    Frequent handovers (HOs) in dense small-cell deployment scenarios could lead to a dramatic increase in signalling overhead. This suggests a paradigm shift towards a signalling-conscious cellular architecture with intelligent mobility management. In this direction, a futuristic radio access network with a logical separation between control and data planes has been proposed in the research community. It aims to overcome limitations of the conventional architecture by providing high-data-rate services under the umbrella of a coverage layer in a dual-connection mode. This approach enables signalling-efficient HO procedures, since the control plane remains unchanged while users move within the footprint of the same umbrella. Considering this configuration, we propose a core-network-efficient radio resource control (RRC) signalling scheme for active-state HO and develop an analytical framework to evaluate its signalling load as a function of network density, user mobility and session characteristics. In addition, we propose an intelligent HO prediction scheme with advance resource preparation in order to minimise the HO signalling latency. Numerical and simulation results show promising gains in terms of reduced HO latency and signalling load compared with conventional approaches.

    End-to-End Simulation of 5G mmWave Networks

    Due to its potential for multi-gigabit and low-latency wireless links, millimeter wave (mmWave) technology is expected to play a central role in 5th generation cellular systems. While there has been considerable progress in understanding the mmWave physical layer, innovations will be required at all layers of the protocol stack, in both the access and the core network. Discrete-event network simulation is essential for end-to-end, cross-layer research and development. This paper provides a tutorial on a recently developed full-stack mmWave module integrated into the widely used open-source ns-3 simulator. The module includes a number of detailed statistical channel models as well as the ability to incorporate real measurements or ray-tracing data. The Physical (PHY) and Medium Access Control (MAC) layers are modular and highly customizable, making it easy to integrate algorithms or compare Orthogonal Frequency Division Multiplexing (OFDM) numerologies, for example. The module is interfaced with the core network of the ns-3 Long Term Evolution (LTE) module for full-stack simulations of end-to-end connectivity, and advanced architectural features, such as dual connectivity, are also available. To facilitate the understanding of the module, and to verify its correct functioning, we provide several examples that show the performance of the custom mmWave stack as well as custom congestion control algorithms designed specifically for efficient utilization of the mmWave channel.
    Comment: 25 pages, 16 figures, submitted to IEEE Communications Surveys and Tutorials (revised Jan. 2018)

    Memory-full context-aware predictive mobility management in dual connectivity 5G networks

    Network densification with small-cell deployment is considered one of the dominant themes in the fifth generation (5G) cellular system. Despite the capacity gains, such deployment scenarios raise several challenges from a mobility-management perspective. The small cell size, which implies a short cell residence time, will increase the handover (HO) rate dramatically. Consequently, HO latency will become a critical consideration in the 5G era, which calls for an intelligent, fast and lightweight HO procedure with minimal signalling overhead. In this direction, we propose a memory-full context-aware HO scheme with mobility prediction to achieve these objectives. We consider a dual-connectivity radio access network architecture with logical separation between control and data planes, because it offers relaxed constraints for implementing predictive approaches. The proposed scheme predicts future HO events, along with the expected HO time, by combining radio-frequency performance with physical proximity and with user context in terms of speed, direction and HO history. To minimise processing and storage requirements while improving prediction performance, a user-specific prediction-triggering threshold is proposed. The prediction outcome is used to perform advance HO signalling while suspending the periodic transmission of measurement reports. Analytical and simulation results show that the proposed scheme provides promising gains over the conventional approach.
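The gating idea behind a user-specific triggering threshold can be sketched as follows: combine the measurement inputs into a single prediction score and run the (expensive) predictor only when a per-user threshold is crossed. The scoring function, weights and thresholds below are hypothetical placeholders, not the paper's model.

```python
# Illustrative sketch (not the paper's exact model): fuse RF performance,
# proximity and user context into a HO-prediction score, and trigger the
# predictor only when a user-specific threshold is exceeded.

def ho_prediction_score(rsrp_dbm, distance_m, speed_mps, ho_history_rate,
                        cell_radius_m=100.0):
    """Return a score in [0, 1]; higher means a HO is more likely soon.
    Weights and normalisation constants are arbitrary for illustration."""
    rf = min(max((-80.0 - rsrp_dbm) / 40.0, 0.0), 1.0)  # weaker signal -> higher
    proximity = min(distance_m / cell_radius_m, 1.0)    # nearer cell edge -> higher
    mobility = min(speed_mps / 30.0, 1.0)               # faster user -> higher
    return 0.4 * rf + 0.3 * proximity + 0.2 * mobility + 0.1 * ho_history_rate

def should_trigger_prediction(score, user_threshold):
    # A per-user threshold keeps processing/storage low for static users.
    return score >= user_threshold

# A fast cell-edge user vs. a slow cell-centre user:
fast_user = ho_prediction_score(-105, 90, 25, 0.8)
slow_user = ho_prediction_score(-70, 10, 1, 0.1)
```

Only the fast cell-edge user crosses a threshold of, say, 0.6, so prediction (and advance HO signalling) is prepared for that user alone.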

    Will SDN be part of 5G?

    For many, this is no longer a valid question, and the case is considered settled: SDN/NFV (Software Defined Networking / Network Function Virtualization) is seen as providing the inevitable innovation enablers that will solve many outstanding management issues in 5G. However, given the monumental task of softwarizing the radio access network (RAN) while 5G is just around the corner, and with some companies having already unveiled their 5G equipment, there is a realistic concern that we may only see point solutions involving SDN technology rather than a fully SDN-enabled RAN. This survey paper identifies the important obstacles in the way and reviews the state of the art of the relevant solutions. It differs from previous surveys on SDN-based RAN in that it focuses on the salient problems and discusses solutions proposed both within and outside the SDN literature. Our main focus is on the fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, the latency of general-purpose processors (GPPs), and the additional security vulnerabilities that softwarization brings to the RAN. We also provide a summary of the architectural developments in the SDN-based RAN landscape, since not all work can be covered under the focused issues. This paper provides a comprehensive survey of the state of the art of SDN-based RAN and clearly points out the gaps in the technology.
    Comment: 33 pages, 10 figures

    A resource management scheme for multi-user GFDM with adaptive modulation in frequency selective fading channels

    This final project focuses on designing and evaluating a resource-management scheme for a multi-user generalized frequency division multiplexing (GFDM) system over a frequency-selective fading channel with adaptive modulation. GFDM with adaptive subcarrier, sub-symbol and power allocation is considered. Assuming that the transmitter has perfect knowledge of the instantaneous channel gains for all users, I propose a multi-user GFDM subcarrier, sub-symbol and power allocation algorithm that minimizes the total transmit power. This work analyses the performance of using a specific set of parameters for aligning GFDM with the Long Term Evolution (LTE) grid. The results show that the performance of the proposed algorithm using GFDM is close to that of OFDM and outperforms multi-user GFDM systems with static frequency division multiple access (FDMA) techniques that employ fixed subcarrier allocation schemes. A further advantage of GFDM over OFDM is that the latency of the system can be reduced by a factor of 15 if independent demodulation is considered.
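The power-minimising allocation idea can be illustrated with a toy sketch: give each subcarrier to the user with the best channel gain on it, since that minimises the power needed to hit a target SNR. This is a deliberately simplified stand-in for the thesis's algorithm, it ignores per-user rate constraints, sub-symbol allocation and fairness; all function names are hypothetical.

```python
# Toy sketch of the margin-adaptive intuition behind the proposed allocator
# (illustrative only; real schemes also enforce per-user rate constraints).

def allocate_subcarriers(gains):
    """gains[u][k] = channel gain of user u on subcarrier k.
    Returns {subcarrier: user}, picking the best gain per subcarrier."""
    n_users, n_sc = len(gains), len(gains[0])
    alloc = {}
    for k in range(n_sc):
        alloc[k] = max(range(n_users), key=lambda u: gains[u][k])
    return alloc

def transmit_power(gain, target_snr=10.0, noise=1.0):
    # Power needed to reach the target SNR on one subcarrier:
    # SNR = P * gain / noise  =>  P = SNR * noise / gain.
    return target_snr * noise / gain

gains = [[0.9, 0.2, 0.5],   # user 0
         [0.3, 0.8, 0.4]]   # user 1
alloc = allocate_subcarriers(gains)
```

A stronger gain directly translates into lower power for the same rate, which is why channel-aware allocation beats fixed FDMA subcarrier assignment.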

    Optimized LTE Data Transmission Procedures for IoT: Device Side Energy Consumption Analysis

    The efficient deployment of the Internet of Things (IoT) over cellular networks, such as Long Term Evolution (LTE) or the next-generation 5G, entails several challenges. For massive IoT, reducing the energy consumption on the device side becomes essential. One of the main characteristics of massive IoT is small data transmissions. To improve support for them, the 3GPP has included two novel optimizations in LTE: one based on the Control Plane (CP), and the other on the User Plane (UP). In this paper, we analyze the average energy consumption per data packet using these two optimizations, compared to the conventional LTE Service Request procedure. We propose an analytical model, based on a Markov chain, to calculate the energy consumption of each procedure. In the considered scenario, the results of the three procedures are similar for large and small inter-arrival times (IATs), while for medium IATs the CP optimization reduces the energy consumption per packet by up to 87% thanks to its connection-release optimization.
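A back-of-the-envelope version of such an energy model shows why medium IATs are the interesting regime: per packet, the device transmits, then stays connected until an inactivity timer fires, then idles until the next packet. The state names, power figures and timer value below are illustrative placeholders, not the paper's parameters or its full Markov-chain model.

```python
# Simplified sketch of device energy per packet (illustrative numbers).

POWER_MW = {"TX": 500.0, "CONNECTED": 100.0, "IDLE": 5.0}

def energy_per_packet(iat_s, tx_time_s=0.05, inactivity_timer_s=10.0):
    """One packet per inter-arrival time (IAT): transmit, stay CONNECTED
    until the inactivity timer expires, then IDLE until the next packet.
    Returns energy in millijoules per packet."""
    connected_s = min(max(iat_s - tx_time_s, 0.0), inactivity_timer_s)
    idle_s = max(iat_s - tx_time_s - connected_s, 0.0)
    return (POWER_MW["TX"] * tx_time_s
            + POWER_MW["CONNECTED"] * connected_s
            + POWER_MW["IDLE"] * idle_s)

medium_iat = energy_per_packet(5.0)   # device never leaves CONNECTED
large_iat = energy_per_packet(60.0)   # timer fires, device mostly IDLE
```

With a medium IAT the whole inter-arrival gap is spent in the power-hungry CONNECTED state, which is exactly where an early connection release (as in the CP-based procedure) can cut the per-packet energy.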

    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, a survey of the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self-organizing cellular networks is performed. For future networks to overcome current limitations and address the issues of today's cellular systems, it is clear that more intelligence needs to be deployed so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each surveyed paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work also outlines future research directions and the new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular-networks domain, fully enabling the concept of SON in the near future.