110 research outputs found

    Semi-persistent RRC protocol for machine-type communication devices in LTE networks

    Get PDF
    In this paper, we investigate the design of a radio resource control (RRC) protocol in the framework of Long-Term Evolution (LTE) of the 3rd Generation Partnership Project, regarding the provision of low-cost/complexity and low-energy-consumption machine-type communication (MTC), an enabling technology for the emerging paradigm of the Internet of Things. Given the nature of MTC devices and their envisaged battery-operated, long-life operation without human intervention, energy efficiency becomes extremely important. This paper reviews state-of-the-art approaches to low-energy operation of MTC devices and proposes a novel RRC protocol design, namely semi-persistent RRC state transition (SPRST), in which the RRC state transition is no longer triggered by incoming traffic but depends on pre-determined parameters based on the traffic pattern obtained by exploiting the network memory. The proposed RRC protocol can easily co-exist with the legacy RRC protocol in LTE. The design criterion of SPRST is derived and the corresponding signalling procedure is investigated. Simulation results show that SPRST significantly reduces both the energy consumption and the signalling overhead while guaranteeing the quality-of-service requirements.
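
    As a rough illustration of the semi-persistent idea, the following sketch (hypothetical names and parameters, not the paper's derived design criterion) makes the RRC state a pure function of time, pre-computed from observed traffic statistics rather than triggered by packet arrivals:

        # Hypothetical SPRST-like schedule: the RRC state depends only on a
        # pre-computed cycle, never on individual packet arrivals.
        def derive_cycle(interarrivals: list[float], t_on: float = 1.0):
            """Stay connected for a fixed window per cycle and sleep for roughly
            the mean observed gap between transmissions (the 'network memory')."""
            mean_gap = sum(interarrivals) / len(interarrivals)
            return t_on, max(0.0, mean_gap - t_on)

        def rrc_state(t: float, t_on: float, t_off: float) -> str:
            cycle = t_on + t_off
            return "RRC_CONNECTED" if (t % cycle) < t_on else "RRC_IDLE"

        # Example: an MTC device that reports roughly every 60 seconds.
        t_on, t_off = derive_cycle([60.2, 59.8, 60.1, 59.9])
        print(rrc_state(0.5, t_on, t_off), rrc_state(30.0, t_on, t_off))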

    Discontinuous Reception for Multiple-Beam Communication

    Get PDF
    This is the final version, available from IEEE via the DOI in this record. Discontinuous reception (DRX) techniques have successfully been proposed for energy savings in 4G radio access systems, which are deployed on legacy 2 GHz spectrum bands with omni-directional signal propagation. In upcoming 5G systems, higher-frequency spectrum bands will also be utilized. Unfortunately, higher-frequency bands suffer more significant path loss, thus requiring directional beamforming to concentrate the radiated signal in a certain direction. We therefore propose a DRX scheme for multiple-beam (DRXB) communication scenarios. The proposed DRXB scheme is designed to avoid unnecessary energy- and time-consuming beam-training procedures, which enables longer sleep periods and shorter wake-up latency. We provide an analytical model to investigate the receiver-side energy efficiency and transmission latency of the proposed scheme. Through simulations, our approach is shown to have clear performance improvements over the conventional DRX scheme, where beam training is conducted in each DRX cycle. Funding: Swedish Research Council; National Natural Science Foundation of China; European Union Horizon 2020.
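
    For intuition on why per-cycle beam training hurts, a toy duty-cycle model (illustrative parameter names and values; the paper's analytical model is more detailed) compares average receiver power and worst-case wake-up latency with and without a beam-training overhead in each DRX cycle:

        # Toy DRX energy/latency estimate; not the paper's analytical model.
        def drx_cycle_stats(t_on: float, t_cycle: float, p_on: float,
                            p_sleep: float, t_beam_training: float = 0.0):
            """Average power and worst-case wake-up latency over one DRX cycle.
            t_beam_training is the extra awake time if beams are re-trained
            every cycle (conventional DRX); DRXB aims to skip it on most cycles."""
            active = t_on + t_beam_training
            avg_power = (active * p_on + (t_cycle - active) * p_sleep) / t_cycle
            worst_latency = t_cycle - active  # arrival just after the on-duration
            return avg_power, worst_latency

        # 4 ms on-duration in a 160 ms cycle, 10 ms beam training per cycle.
        print(drx_cycle_stats(0.004, 0.160, p_on=1.0, p_sleep=0.01,
                              t_beam_training=0.010))
        print(drx_cycle_stats(0.004, 0.160, p_on=1.0, p_sleep=0.01))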

    Narrowband IoT: from the end device to the cloud. An experimental end-to-end study

    Get PDF
    This thesis presents a novel study and experimentation of a cloud IoT application communicating over an Italian NB-IoT network. To date, no studies have been presented on the interactions between the NB-IoT network and the cloud. This thesis not only fills this gap but also shows the use of Cognitive Services to interact, through the human voice, with the IoT application. Compared with other types of mobile networks, NB-IoT proves the best choice for this application.

    Towards efficient support for massive Internet of Things over cellular networks

    Get PDF
    The usage of Internet of Things (IoT) devices over cellular networks has seen tremendous growth in recent years, and that growth is only expected to increase in the near future. While existing 4G and 5G cellular networks offer several desirable features for this type of application, their design has historically focused on accommodating traditional mobile devices (e.g. smartphones). As IoT devices have very different characteristics and use cases, they create a range of problems for current networks, which often struggle to accommodate them at scale. Although newer cellular network technologies, such as Narrowband-IoT (NB-IoT), were designed around IoT characteristics, they were extensively based on 4G and 5G networks to preserve interoperability and decrease deployment cost. As such, several inefficiencies of 4G/5G were carried over to the newer technologies. This thesis focuses on identifying the core issues that hinder the large-scale deployment of IoT over cellular networks, and proposes novel protocols to largely alleviate them. We find that the most significant challenges arise in three distinct areas: connection establishment, network resource utilisation and device energy efficiency. Specifically, we make the following contributions. First, we focus on the connection establishment process and argue that the current procedures, when used by IoT devices, result in increased numbers of collisions, network outages and a signalling overhead that is disproportionate to the size of the data transmitted and the connection duration of IoT devices. We therefore propose two mechanisms to alleviate these inefficiencies. Our first mechanism, named ASPIS, targets the number of collisions and the signalling overhead simultaneously, and provides enhancements to increase the number of successful IoT connections without disrupting existing background traffic. Our second mechanism focuses specifically on collisions during connection establishment, and uses a novel reinforcement-learning approach to decrease their number and allow a larger number of IoT devices to access the network with fewer attempts. Second, we propose a new multicasting mechanism that reduces network resource utilisation in NB-IoT networks by delivering common content (e.g. firmware updates) to multiple similar devices simultaneously. Notably, our mechanism is not only more efficient during multicast data transmission, but also frees up resources that would otherwise be perpetually reserved for multicast signalling under the existing scheme. Finally, we focus on energy efficiency and propose novel protocols designed for the unique usage characteristics of NB-IoT devices, in order to reduce device power consumption. Towards this end, we perform a detailed energy consumption analysis, which we use as a basis to develop an energy consumption model for realistic energy consumption assessment. We then take the insights from our analysis, propose optimisations that significantly reduce the energy consumption of IoT devices, and assess their performance.
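
    As intuition for the connection-establishment collisions targeted above, a standard back-of-the-envelope model (not one of the thesis's mechanisms; 54 is the usual number of contention-based LTE preambles, assumed here) shows how random-access success degrades as more devices contend simultaneously:

        # Probability that a tagged device avoids a preamble collision when n
        # devices each pick one of m preambles uniformly at random.
        def p_no_collision(n: int, m: int = 54) -> float:
            """Success requires that none of the other n-1 devices picked the
            same preamble: (1 - 1/m) ** (n - 1)."""
            return (1.0 - 1.0 / m) ** (n - 1)

        for n in (10, 100, 500):
            print(n, round(p_no_collision(n), 3))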

    Contribution of Quality of Experience to the optimisation of multimedia services: application to video streaming and VoIP

    Get PDF
    The emergence and fast growth of multimedia services have created new challenges for network service providers, who must guarantee the best user Quality of Experience (QoE) in diverse networks with distinctive access technologies. Usually, various methods and techniques are used to predict the user satisfaction level by studying the combined impact of numerous factors. In this thesis, we focus on network control that integrates both qualitative aspects (the user's perceived satisfaction) and quantitative aspects (measured network parameters), with the objective of developing mechanisms able both to adapt to the variability of the collected measurements and to improve the perceived quality. To this end, we consider two important multimedia services for evaluating user perception: video streaming and VoIP. The study investigates user QoE along three directions: (1) methodologies for subjective QoE assessment of video services, (2) regulating user QoE using a rate-adaptive video algorithm, and (3) QoE-based power-efficient resource allocation methods for VoIP over Long Term Evolution-Advanced (LTE-A). First, we describe two subjective methods to collect datasets for assessing user QoE. The subjectively collected datasets are used to investigate the influence of different parameters (e.g. QoS, video type, user profile) on user satisfaction with video services. Next, we propose a client-based HTTP rate-adaptive video streaming algorithm over TCP to regulate the user's QoE, and compare it with the state of the art. The proposed method considers three Quality of Service (QoS) parameters that govern user perception: Bandwidth, Buffer, and dropped Frame rate (BBF). The BBF method dynamically selects the suitable video quality according to network conditions and the user's device properties. Lastly, we propose a QoE-driven downlink scheduling method, the QoE Power Efficient Method (QEPEM), for LTE-A. It efficiently allocates radio resources and optimises User Equipment (UE) power usage by exploiting the Discontinuous Reception (DRX) mechanism in LTE-A.
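
    A minimal sketch of a BBF-style selection rule (thresholds, names and the 0.8 safety factor are illustrative assumptions, not the thesis's exact algorithm): pick the highest bitrate the measured bandwidth sustains, then back off when the playout buffer runs low or the device drops frames:

        # Hedged sketch of a bandwidth/buffer/dropped-frame (BBF-style) selector.
        def select_quality(bitrates: list[int], bandwidth_bps: float,
                           buffer_s: float, dropped_frame_ratio: float,
                           min_buffer_s: float = 5.0) -> int:
            """Return the bitrate (bps) to request for the next segment."""
            sustainable = [b for b in bitrates if b <= 0.8 * bandwidth_bps]
            level = max(sustainable) if sustainable else min(bitrates)
            # Back off one level on low buffer or visible frame drops.
            if buffer_s < min_buffer_s or dropped_frame_ratio > 0.05:
                lower = [b for b in bitrates if b < level]
                level = max(lower) if lower else min(bitrates)
            return level

        print(select_quality([500_000, 1_000_000, 2_500_000, 5_000_000],
                             bandwidth_bps=3_000_000, buffer_s=3.0,
                             dropped_frame_ratio=0.01))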

    Monitoring and testing in LTE networks: from experimental analysis to operational optimisation

    Get PDF
    The advent of LTE and LTE-Advanced, and their integration with the existing cellular technologies GSM and UMTS, has forced mobile radio network operators to perform meticulous tests and adopt the right know-how to detect potential issues before the activation of new services. In this new network scenario, traffic characterisation and monitoring, as well as the configuration and on-air reliability of network equipment, are of paramount relevance in order to prevent possible pitfalls during the deployment of new services and to ensure the best possible user experience. Based on this observation, this research project offers a comprehensive study that goes from experimental analysis to operational optimisation. The starting point of our work has been monitoring the traffic of an already deployed eNodeB with three cells, operating in the 1800 MHz band. Through subsequent measurement campaigns, it was possible to follow the evolution of the 4G network from the beginning of its deployment in 2012 until its full maturity in 2015. The data collected during the first year showed poor use of the LTE network, mainly due to the limited penetration of 4G smartphones. In 2015, by contrast, we observed a clear and decisive increase in the number of terminals using LTE, with aggregate statistics (e.g. market share of smartphone operating systems, or the percentage of video traffic) that reflect national and international trends. This important outcome testifies to the maturity of LTE technology, and allows us to consider our monitored eNodeB a valuable vantage point for traffic analysis.
    Hand in hand with the evolution of the infrastructure, mobile phones have also evolved remarkably over the past two decades, from simple devices offering only voice services to smartphones offering novel services such as mobile Internet, geolocation and maps, multimedia services, and many more. Monitoring real traffic has allowed us to study user behaviour and identify the most-used services. To this aim, various software libraries for traffic analysis have been developed. In particular, we developed a C/C++ library that analyses Control Plane and User Plane traffic and provides coarse- and fine-grained statistics at flow level. Another framework/tool, developed from scratch in C++14, is dedicated to traffic classification. Among the plethora of existing traffic classification tools, we provide our own solution. The project, available on github, is named MOSEC, an acronym for MOdular SErvice Classifier. Its modularity comes from the possibility of implementing multiple plug-ins, each of which processes a packet according to its own logic and may or may not return a packet/flow classification. A final decision strategy then classifies the various flows based on the classifications of each plug-in. Unlike previous approaches, keeping multiple classifiers together mitigates the deficiencies of each individual classifier (e.g. Deep Packet Inspection (DPI) does not work when packets are encrypted, and DNS queries do not have to be sent if name resolution is cached in device memory) and exploits their full capabilities where feasible. We validated the classification accuracy of MOSEC using a labelled trace synthetically created by colleagues from UPC BarcelonaTech. The results show excellent TCP-HTTP/HTTPS traffic classification capabilities, on average higher than those of other classification tools (nDPI, PACE, Layer-7), with some shortcomings in the classification of UDP traffic. The characteristics of User Plane traffic have a direct impact on the energy consumed by handset devices, and an indirect impact on the Control Plane traffic that is generated. Therefore, knowledge of the statistical properties of the various flows allows us to address a cross-layer optimisation problem: reducing the power consumption of terminals by varying control-plane parameters configurable on the eNodeB. It is well known that battery life is one of the major limitations of modern smartphones. In particular, the rise of new services and applications capable of working in the background, without direct user interaction, has introduced new issues related to battery lifetime and to the signalling traffic necessary to acquire/release radio resources. Based on these observations, we conducted a thorough study of the Discontinuous Reception (DRX) mechanism, exploited by LTE to save smartphone energy when no packet is sent or received. The DRX configuration set and the RRC Inactivity Timer greatly affect the energy consumed by devices. Depending on whether radio resources are allocated or not, the user equipment (UE) is in the RRC Connected or RRC Idle state, respectively. To evaluate the energy consumption of smartphones, an algorithm simulates the transitions between all the possible states in which a UE can be and maps a power value to each of these states.
    The transition from one state to another is governed by different timeouts that are reset every time a packet is sent or received. Using the real traffic traces, we associate a state machine with each UE to assess its energy consumption on the basis of the packets sent and received. We repeated these simulations using different values of the Inactivity Timer that appear more suitable than the one currently configured on the monitored eNodeB, looking for a good trade-off between energy savings and increased signalling traffic. The results highlighted that the Inactivity Timer originally set on the eNodeB was too high and caused excessive energy consumption on the terminals. Reducing its value to 10 seconds achieves energy savings of up to 50% (depending on the underlying traffic profile) without considerably increasing the control traffic. The results of the study mentioned above, however, consider neither the stress to which the eNodeB is subjected by the resulting rise in signalling traffic, nor the increase in collision probability during the RACH procedure, which is needed to re-establish the radio bearer (or RRC connection) between the terminal and the eNodeB. Evaluating the performance of hardware and software systems for the fourth-generation mobile network, as well as identifying any possible weakness in the architecture, is a complex job. A possible case study is precisely to assess the robustness of the base station when it receives many RRC connection requests as an effect of a decrease in the Inactivity Timer. In this regard, within the Testing LAB of Telecom Italia, we used IxLoad, a product developed by Ixia, as a load generator to test the robustness of an eNodeB. The tests consisted in producing different loads of RRC requests on the radio interface, similar to those that would be produced by decreasing the Inactivity Timer to certain values. The statistical properties of the signalling traffic were derived from the analysis of real traffic traces. The main outcomes show that, even in the face of a high load of RRC requests, only a small fraction of the procedures fail (less than 1% in the most unfavourable case). Therefore, even lowering the Inactivity Timer to values below 10 seconds is not an issue for the base station. Finally, it remains to be evaluated how such a surge of RRC requests impacts user performance. If a user under coverage in RRC Idle is paged for an incoming packet, or needs to send an uplink packet, a state transition from RRC Idle to RRC Connected is needed. At this point, the UE initiates the random access procedure by sending a random access channel (RACH) preamble. When two or more users attempt to access the RACH simultaneously using the same preamble, the eNodeB may not be able to decode the preamble. If the two signals interfere constructively, both users receive the same resources for transmitting the RRC Request message; at this point the eNodeB can detect the collision and will not send any acknowledgment, forcing both users to restart the procedure from the beginning. We have proposed an analytical model to calculate the collision probability as a function of the number of users and the offered traffic load, when the interarrival times between successive requests are modelled as hyper-exponential.
    In addition, we investigated the performance of Machine-to-Machine (M2M) and Human-to-Human (H2H) communications, evaluating, as the number of preambles varies, the collision probability on the RACH, the probability of correct transmission considering both the backoff time and the maximum number of allowed retransmissions, and the average time required to establish a radio bearer with the access network. Taken as a whole, the results made it possible to express guidelines for properly partitioning the preambles between M2M and H2H communications.
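
    The inactivity-timer trade-off above can be reproduced in miniature by replaying packet timestamps through a two-state RRC model (power values, names and the toy trace are assumptions, not the thesis's measured figures): a shorter timer cuts the connected-state tail energy at the cost of more RRC connection set-ups:

        # Replay a packet-timestamp trace through a two-state RRC energy model.
        def energy_and_setups(pkt_times: list[float], t_inactivity: float,
                              p_connected: float = 1.0, p_idle: float = 0.02):
            """Return (energy, rrc_setups) for one UE trace: the UE enters
            CONNECTED on each packet and falls back to IDLE t_inactivity
            seconds after the last packet."""
            energy, setups, last = 0.0, 0, None
            for t in sorted(pkt_times):
                if last is None or t - last > t_inactivity:
                    setups += 1  # new RRC connection: signalling cost
                    if last is not None:
                        energy += t_inactivity * p_connected      # connected tail
                        energy += (t - last - t_inactivity) * p_idle
                else:
                    energy += (t - last) * p_connected
                last = t
            energy += t_inactivity * p_connected  # tail after the final packet
            return energy, setups

        trace = [0, 1, 2, 30, 31, 120]  # toy packet timestamps (seconds)
        for timer in (61.0, 10.0):
            print(timer, energy_and_setups(trace, timer))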

    Standardization of Extended Reality (XR) over 5G and 5G-Advanced 3GPP New Radio

    Full text link
    Extended Reality (XR) is one of the major innovations to be introduced in 5G/5G-Advanced communication systems. A combination of augmented reality, virtual reality, and mixed reality, supplemented by cloud gaming, revisits the way humans interact with computers, networks, and each other. However, efficient support of XR services imposes new challenges on existing and future wireless networks. This article presents a tutorial on integrating support for XR into 3GPP New Radio (NR), summarizing a range of activities handled within various 3GPP Service and Systems Aspects (SA) and Radio Access Network (RAN) groups. The article also delivers a case study evaluating the performance of different XR services in state-of-the-art NR Release 17. The paper concludes with a vision of further enhancements to better support XR in future NR releases and outlines open problems in this area.
    Comment: 7 pages, 4 figures, 2 tables. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

    On the Latency-Energy Performance of NB-IoT Systems in Providing Wide-Area IoT Connectivity

    Get PDF