23 research outputs found

    Radio Resource Management Scheme in NB-IoT Systems

    Narrowband Internet of Things (NB-IoT) is a prominent technology that fits the requirements of future IoT networks. However, given the limited spectrum (i.e., 180 kHz) available to NB-IoT systems, a key issue is how to use these resources efficiently to support massive numbers of IoT devices. Furthermore, to reduce computational complexity and to provide coverage extension, NB-IoT introduces the concepts of time offset and repetition. Considering these new features, existing resource management schemes are no longer applicable. Moreover, the frequency band allocated to NB-IoT within the LTE band, or as a standalone deployment, might not be synchronized across all cells, resulting in inter-cell interference (ICI) from neighboring cells' LTE users or, in the synchronous case, NB-IoT users. In this paper, a theoretical framework for the upper bound on the achievable data rate is first formulated in the presence of the control channel and the repetition factor. The conducted analysis shows that the maximum achievable data rates are 89.2 kbps and 92 kbps for downlink and uplink, respectively. Second, we propose an interference-aware resource allocation for NB-IoT by formulating a rate maximization problem that accounts for the overhead of control channels, the time offset, and the repetition factor. Given the complexity of finding the globally optimal solution of the formulated problem, a sub-optimal iterative algorithm based on cooperative approaches is proposed. The proposed algorithm is then evaluated to investigate the impact of the repetition factor, time offset, and ICI on the NB-IoT data rate and energy consumption. Furthermore, a detailed comparison between the non-cooperative, cooperative, and optimal (i.e., no repetition) schemes is also presented. Simulation results show that the cooperative scheme provides up to 8% rate improvement and 17% energy reduction compared with the non-cooperative scheme.
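    To make the role of the repetition factor and control-channel overhead concrete, here is a minimal back-of-the-envelope sketch (not the paper's analytical framework); the transport block size, slot duration, and overhead fraction in the example are illustrative assumptions.

```python
# Hypothetical sketch: effective NB-IoT data rate under control-channel
# overhead and a repetition factor R. Parameter values are illustrative
# assumptions, not the paper's exact link-budget model.

def effective_rate(tbs_bits: int, slot_ms: float, n_slots: int,
                   repetition: int, control_overhead: float) -> float:
    """Return the effective rate in kbps for one transport block.

    tbs_bits         -- transport block size in bits
    slot_ms          -- duration of one scheduling unit in ms
    n_slots          -- scheduling units needed for one transmission
    repetition       -- repetition factor R (the block is sent R times)
    control_overhead -- fraction of airtime consumed by control channels
    """
    airtime_ms = slot_ms * n_slots * repetition
    usable = 1.0 - control_overhead
    return tbs_bits * usable / airtime_ms  # bits/ms == kbps


# Example: a 680-bit block over 8 ms, no repetition, 10% control overhead.
print(f"{effective_rate(680, 1.0, 8, 1, 0.10):.1f} kbps")
# Doubling the repetition factor halves the effective rate:
print(f"{effective_rate(680, 1.0, 8, 2, 0.10):.1f} kbps")
```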

    Cooperative cognitive network slicing virtualization for smart IoT applications

    This paper proposes a cooperative cognitive network slicing virtualization solution for smart Internet of Things (IoT) applications. To this end, we deploy virtualized small base stations (vSBSs) on SDR devices that offer a network-slicing virtualization option. The proposed virtualized solution relies on the Fed4Fire wireless experimental platform. In particular, we assume that multiple IoT devices can access different vSBSs, which coordinate their resources in a cooperative manner using machine learning (ML). To this end, proactive resource management is deployed in the unlicensed band, where a cooperative solution is facilitated using the licensed band. Cooperative network slicing is managed and orchestrated using the small-cell virtualization offered by Fed4Fire. Experimental trials are carried out for a certain number of users, and the results highlight the benefit of employing cooperative cognitive network slicing in future virtualized wireless networks.

    Radio Resource Management in NB-IoT Systems: Empowered by Interference Prediction and Flexible Duplexing

    NB-IoT is a promising cellular technology for enabling low-cost, low-power, long-range connectivity to IoT devices. With a bandwidth requirement of 180 kHz, it provides the flexibility to deploy within the existing LTE band. However, this raises serious concerns about the performance of the technology due to severe interference from multi-tier 5G HetNets. Furthermore, as NB-IoT is based on HD-FDD, the symmetric allocation of spectrum between the downlink and uplink results in underutilization of resources, particularly in the case of asymmetric traffic distribution. Therefore, an innovative RRM strategy needs to be devised to improve spectrum efficiency and device connectivity. This article presents the detailed design challenges that need to be addressed for the RRM of NB-IoT and proposes a novel framework for devising an efficient resource allocation scheme by exploiting cooperative interference prediction and flexible duplexing techniques.
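    As a rough illustration of the flexible-duplexing idea (not the article's actual scheme), the sketch below splits a half-duplex carrier's scheduling time between downlink and uplink in proportion to queued traffic; the queue sizes and the minimum per-direction share are assumptions.

```python
# Illustrative sketch only: with half-duplex FDD, a device cannot transmit
# and receive at once, so the scheduler can shift the share of time granted
# to downlink vs. uplink to match traffic asymmetry.

def split_subframes(n_subframes: int, dl_queue_bits: int, ul_queue_bits: int,
                    min_share: float = 0.1) -> tuple[int, int]:
    """Split n_subframes between DL and UL in proportion to queued traffic,
    keeping a minimum share for each direction (values are assumptions)."""
    total = dl_queue_bits + ul_queue_bits
    dl_frac = 0.5 if total == 0 else dl_queue_bits / total
    dl_frac = min(max(dl_frac, min_share), 1.0 - min_share)
    dl = round(n_subframes * dl_frac)
    return dl, n_subframes - dl


# Heavily uplink-biased traffic (typical for NB-IoT sensor reporting):
print(split_subframes(10, dl_queue_bits=2_000, ul_queue_bits=18_000))  # (1, 9)
```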

    Interference-Aware Radio Resource Allocation for 5G Ultra-Reliable Low-Latency Communication

    Ultra-reliable low-latency communication (URLLC) is one of the main challenges faced by future 5G networks in enabling mission-critical IoT use-case scenarios. High reliability can be achieved by relaxing the achievable-rate requirement, which, however, results in reduced spectral efficiency. Retransmission has been introduced in 5G and beyond to achieve reliability with improved spectral efficiency at the cost of increased packet latency. Keeping in mind the trade-off between reliability and latency, in this paper we propose an interference-aware radio resource (IARR) allocation for uplink transmission by formulating a sum-rate maximization problem. The aim of the proposed algorithm is to improve link quality to achieve high reliability for future 5G networks, resulting in fewer retransmissions and lower packet latency. To reduce the computational complexity of solving the maximization problem to global optimality, we propose a progressive interference-aware heuristic solution. The proposed solution is then investigated to evaluate the impact of retransmission and inter-cell interference on the average information rate and latency of the considered multi-cell cellular network. The performance of the IARR algorithm is compared with conventional round-robin scheduling (RRS). Significant improvement in link reliability, along with a reduction in latency, is observed with the IARR algorithm. The results illustrate that the IARR algorithm improves the average rate by 7% and latency by 10% compared to RRS.
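    For intuition only, the following sketch shows a simple greedy, interference-aware assignment of users to resource blocks based on estimated SINR, evaluated with the Shannon sum-rate; it is not the paper's IARR algorithm, and the SINR values and 180 kHz bandwidth are illustrative assumptions.

```python
# Minimal greedy sketch of interference-aware uplink allocation
# (illustrative only; the paper's IARR algorithm and its system model differ).
# Each user takes the free resource block on which its estimated SINR,
# and hence its Shannon rate, is highest.

import math

def greedy_assign(sinr: list[list[float]]) -> dict[int, int]:
    """sinr[u][b]: estimated linear SINR of user u on resource block b."""
    assignment: dict[int, int] = {}
    used_blocks: set[int] = set()
    for u, row in enumerate(sinr):
        candidates = [(b, s) for b, s in enumerate(row) if b not in used_blocks]
        if not candidates:
            break  # more users than blocks in this scheduling round
        best_b, _ = max(candidates, key=lambda x: x[1])
        assignment[u] = best_b
        used_blocks.add(best_b)
    return assignment

def sum_rate(sinr, assignment, bandwidth_hz=180e3):
    return sum(bandwidth_hz * math.log2(1 + sinr[u][b]) for u, b in assignment.items())

sinr = [[8.0, 2.0, 1.0], [3.0, 9.0, 2.5], [1.5, 2.0, 6.0]]
a = greedy_assign(sinr)
print(a, f"{sum_rate(sinr, a) / 1e3:.0f} kbps")
```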

    Deep Reinforcement Learning for Real-Time Optimization in NB-IoT Networks

    NarrowBand Internet of Things (NB-IoT) is an emerging cellular-based technology that offers a range of flexible configurations for massive IoT radio access from groups of devices with heterogeneous requirements. A configuration specifies the amount of radio resource allocated to each group of devices for random access and for data transmission. Assuming no knowledge of the traffic statistics, an important challenge is how to determine, in an online fashion, the configuration that maximizes the long-term average number of served IoT devices at each Transmission Time Interval (TTI). Given the complexity of searching for the optimal configuration, we first develop real-time configuration selection based on tabular Q-learning (tabular-Q), Linear Approximation based Q-learning (LA-Q), and Deep Neural Network based Q-learning (DQN) in the single-parameter single-group scenario. Our results show that the proposed reinforcement learning based approaches considerably outperform the conventional heuristic approaches based on load estimation (LE-URC) in terms of the number of served IoT devices. This result also indicates that LA-Q and DQN can be good alternatives to tabular-Q, achieving almost the same performance with much less training time. We further advance LA-Q and DQN via Actions Aggregation (AA-LA-Q and AA-DQN) and via Cooperative Multi-Agent learning (CMA-DQN) for the multi-parameter multi-group scenario, thereby solving the problem that Q-learning agents do not converge in high-dimensional configurations. In this scenario, the superiority of the proposed Q-learning approaches over the conventional LE-URC approach grows significantly with the configuration dimensionality, and the CMA-DQN approach outperforms the other approaches in both throughput and training efficiency.
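    A minimal tabular Q-learning sketch of the configuration-selection loop is given below; the state and action definitions and the hyperparameters are simplifying assumptions rather than the paper's exact formulation, with the reward standing in for the number of served IoT devices per TTI.

```python
# Minimal tabular Q-learning sketch for picking one configuration per TTI
# (state/action/reward definitions here are simplifying assumptions).

import random
from collections import defaultdict

ACTIONS = [0, 1, 2, 3]          # indices of candidate NB-IoT configurations
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(lambda: [0.0] * len(ACTIONS))

def choose(state):
    if random.random() < EPSILON:                    # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[state][a])   # exploit

def update(state, action, reward, next_state):
    """One Q-learning step; reward would be the number of served devices."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
```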

    A simplified optimization for resource management in cognitive radio network-based internet-of-things over 5G networks

    With the increasing evolution of applications and services in the Internet of Things (IoT), there is growing concern about offering superior quality of service to its ever-increasing user base. This demand can be fulfilled by harnessing the potential of cognitive radio networks (CRNs), through which better accessibility of services and resources can be achieved. However, the existing literature shows that there are still open issues in this regard; hence, the proposed system offers a solution to address this problem. This paper presents a model capable of optimizing resources when a CRN is integrated into the IoT over fifth-generation (5G) networks. The implementation uses analytical modeling to frame the process of topology construction for the IoT and optimizes resources by introducing a simplified data transmission mechanism in the IoT environment. The study outcome shows that the proposed system achieves better performance with respect to throughput and response time in comparison to existing schemes.

    Twin Delayed DDPG based Dynamic Power Allocation for Mobility in IoRT

    The Internet of Robotic Things (IoRT) is a modern, fast-evolving technology employed in numerous socio-economic domains, in which user equipment (UE) is connected for communication and data transfer. To ensure quality of service (QoS) in IoRT applications, radio resources, for example, transmit power allocation (PA), interference management, and throughput maximization, should be efficiently employed and allocated among UEs. Traditionally, resource allocation has been formulated as optimization problems, which are then solved using mathematical programming techniques. However, those optimization problems are generally nonconvex and NP-hard. In this paper, we address one of the most crucial challenges in radio resource management, the transmit power allocation (PA) of an antenna, where the interfering multiple access channel (IMAC) is considered. In addition, UEs have a natural movement behavior that directly impacts the channel condition between the remote radio head (RRH) and the UE. We therefore consider two well-known UE mobility models: i) random walk and ii) modified Gauss-Markov (GM). As a result, the simulation environment is more realistic and complex. A data-driven, model-free, continuous-action deep reinforcement learning algorithm called twin delayed deep deterministic policy gradient (TD3) is proposed, which combines policy gradient, actor-critic, and double deep Q-learning (DDQL). It optimizes the PA for i) stationary UEs, ii) UEs moving according to the random walk model, and iii) UEs moving according to the modified GM model. Simulation results show that the proposed TD3 method outperforms model-based techniques such as weighted MMSE (WMMSE) and fractional programming (FP) as well as model-free algorithms, for example, deep Q network (DQN) and DDPG, in terms of average sum-rate performance.
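    The sketch below shows the core TD3 ingredient the abstract refers to, the clipped double-Q target with target-policy smoothing, written with PyTorch; the network objects and hyperparameter values are placeholders, not the paper's implementation.

```python
# Sketch of the TD3 critic target: clipped double-Q with target-policy
# smoothing. actor_target, q1_target, q2_target are user-supplied torch
# modules; hyperparameter defaults are common choices, assumed here.

import torch

def td3_target(reward, next_state, done, actor_target, q1_target, q2_target,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, max_action=1.0):
    """Compute the bootstrapped target y = r + gamma * min(Q1', Q2')(s', a~)."""
    with torch.no_grad():
        mu = actor_target(next_state)
        # Target-policy smoothing: add clipped noise to the target action.
        noise = (torch.randn_like(mu) * noise_std).clamp(-noise_clip, noise_clip)
        next_action = (mu + noise).clamp(-max_action, max_action)
        # Clipped double-Q: take the element-wise minimum of the two critics.
        q_next = torch.min(q1_target(next_state, next_action),
                           q2_target(next_state, next_action))
        return reward + gamma * (1.0 - done) * q_next
```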

    On the Latency-Energy Performance of NB-IoT Systems in Providing Wide-Area IoT Connectivity


    Looking at NB-IoT over LEO Satellite Systems: Design and Evaluation of a Service-Oriented Solution

    The adoption of the NB-IoT technology in satellite communications aims to extend Internet of Things services beyond the boundaries imposed by current terrestrial infrastructures. Apart from link-level studies in the scientific literature and preliminary 3GPP technical reports, the overall debate is still open. To provide a further step in this direction, the work presented herein pursues a novel service-oriented methodology to design an effective solution, carefully tailored to application requirements and technological constraints. To this end, it conducts link-level and system-level investigations to tune the physical transmissions, satellite constellation, and protocol architecture, while ensuring the expected system behavior. To offer a real smart agriculture service operating in Europe, the resulting solution exploits 24 Low Earth Orbit satellites, grouped into 8 different orbits, moving at an altitude of 500 km. The configured protocol stack supports the transmission of the tens of bytes generated at the application layer, while also counteracting the issues introduced by the satellite link. Since each satellite carries the whole protocol stack on board, terminals can transmit data without the need for the feeder link. This ensures communication latencies ranging from 16 to 75 minutes, depending on the number of served terminals and the physical transmission settings. Moreover, the use of the Early Data Transmission scheme reduces communication latencies by up to 40%. These results pave the way towards the deployment of an effective proof-of-concept, drastically reducing the time-to-market imposed by the current state of the art.
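    A back-of-the-envelope sketch (not taken from the paper) of why latencies of tens of minutes arise: Kepler's third law gives the orbital period at 500 km, and with the stated 24 satellites across 8 orbits, i.e., 3 per orbit, a nominal in-plane spacing between consecutive passes follows directly; waiting for the next pass then dominates the end-to-end delay.

```python
# Back-of-the-envelope sketch (not from the paper): orbital period of a
# 500 km LEO satellite via Kepler's third law, and the nominal in-plane
# spacing for 3 satellites per orbit (24 satellites / 8 orbits, as stated).

import math

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_371e3           # m, mean Earth radius

def orbital_period_s(altitude_m: float) -> float:
    a = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH)

period = orbital_period_s(500e3)
print(f"orbital period ~{period / 60:.1f} min")      # ~94.5 min
print(f"in-plane gap  ~{period / 60 / 3:.1f} min")   # ~31.5 min between satellites
```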

    NB-IoT via LEO satellites: An efficient resource allocation strategy for uplink data transmission

    In this paper, we focus on the use of Low Earth Orbit (LEO) satellites to provide Narrowband Internet of Things (NB-IoT) connectivity to on-ground user equipment (UEs). Conventional resource allocation algorithms for NB-IoT systems are designed for terrestrial infrastructures, where devices are under the coverage of a specific base station and the whole system varies very slowly in time. The existing methods in the literature cannot be applied to LEO satellite-based NB-IoT systems for several reasons. First, with the movement of the LEO satellite, the channel parameters for each user change quickly over time; delaying the scheduling of a certain user would result in a resource allocation based on outdated parameters. Second, the differential Doppler shift, a typical impairment in communications over LEO, directly depends on the relative distance among users; scheduling users separated by more than a certain distance in the same radio frame would violate the differential Doppler limit supported by the NB-IoT standard. Third, the propagation delay over a LEO satellite channel is around 4-16 times higher than in a terrestrial system, imposing the need to minimize message exchange between the users and the base station. In this work, we propose a novel uplink resource allocation strategy that jointly incorporates these new design considerations together with the distinct channel conditions, satellite coverage times, and data demands of the various users on Earth. The proposed methodology can act as a framework for future work in the field.
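    To illustrate why the differential Doppler constraint couples scheduling to user separation, here is a hedged geometric sketch (not the paper's derivation); the carrier frequency and orbital speed are assumed values, and a representative LEO altitude of 500 km is used.

```python
# Hedged geometric sketch: differential Doppler shift seen by two ground
# users at different along-track offsets from the sub-satellite point.
# Constants are illustrative, not taken from the paper.

import math

C = 3.0e8            # m/s, speed of light
F_C = 2.0e9          # Hz, assumed carrier frequency (illustrative S-band value)
ALT = 500e3          # m, representative LEO altitude
V_SAT = 7_600.0      # m/s, approximate orbital speed at 500 km

def doppler_hz(along_track_offset_m: float) -> float:
    """Doppler shift for a user offset along the satellite ground track."""
    slant = math.hypot(along_track_offset_m, ALT)
    return F_C * V_SAT * along_track_offset_m / (slant * C)

# Two users 40 km apart along track, roughly 200 km ahead of the satellite:
d1, d2 = doppler_hz(180e3), doppler_hz(220e3)
print(f"differential Doppler ~{abs(d2 - d1):.0f} Hz")
```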