
    Reliable and Low-Latency Fronthaul for Tactile Internet Applications

    With the emergence of Cloud-RAN as one of the dominant architectural solutions for next-generation mobile networks, the reliability and latency on the fronthaul (FH) segment become critical performance metrics for applications such as the Tactile Internet. Ensuring FH performance is further complicated by the switch from point-to-point dedicated FH links to packet-based multi-hop FH networks. This change is largely justified by the fact that packet-based fronthauling allows the deployment of FH networks on the existing Ethernet infrastructure. This paper proposes to improve the reliability and latency of packet-based fronthauling by means of multi-path diversity and erasure coding of the MAC frames transported by the FH network. Under a probabilistic model that assumes a single service, the average latency required to obtain reliable FH transport and the reliability-latency trade-off are first investigated. The analytical results are then validated and complemented by a numerical study that accounts for the coexistence of enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low-Latency Communication (URLLC) services in 5G networks by comparing orthogonal and non-orthogonal sharing of FH resources. Comment: 11 pages, 13 figures, 3 bio photos
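
    The reliability-latency trade-off from (n, k) erasure coding over parallel fronthaul paths can be illustrated with a small simulation. The sketch below is a minimal toy model, not the paper's analysis: the number of paths, the per-path delay distribution, the erasure probability and the deadline are all illustrative assumptions.

```python
# A minimal sketch (not the paper's model) of (n, k) erasure coding of a MAC
# frame over parallel fronthaul paths.  A frame is recoverable once any k of
# the n coded fragments arrive, so its latency is the k-th fastest arrival.
import random
import statistics

def frame_latency(n_paths=4, k=2, loss_prob=0.05,
                  base_delay_us=50.0, mean_jitter_us=20.0):
    """Latency to recover one frame, or None if fewer than k fragments survive."""
    arrivals = []
    for _ in range(n_paths):
        if random.random() >= loss_prob:              # fragment not erased on this path
            arrivals.append(base_delay_us + random.expovariate(1.0 / mean_jitter_us))
    if len(arrivals) < k:                             # frame unrecoverable
        return None
    return sorted(arrivals)[k - 1]                    # k-th order statistic

def evaluate(deadline_us=100.0, trials=100_000, **kw):
    lat = [frame_latency(**kw) for _ in range(trials)]
    reliability = sum(1 for x in lat if x is not None and x <= deadline_us) / trials
    avg_latency = statistics.mean(x for x in lat if x is not None)
    return reliability, avg_latency

if __name__ == "__main__":
    for k in (1, 2, 3):   # lower k means more redundancy, hence lower latency
        r, l = evaluate(n_paths=4, k=k)
        print(f"k={k}: reliability within deadline = {r:.4f}, avg latency = {l:.1f} us")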

    Matching Subcarrier Resource Allocation and Offloading Decision

    A heterogeneous cellular network can be defined as a network composed of cells of different sizes (macrocell (MeNB), small cell (SeNB), femtocell). Such heterogeneity is the backbone of 5G networks, where new applications on mobile devices demand extensive computing power and impose ultra-low latency constraints. A heterogeneous network provides multiple paths through which users' data can flow, depending on the users' available resources, remaining energy, etc. We study a heterogeneous network model that contains MeNBs, SeNBs and femtocells, and propose a matching subcarrier resource allocation and offloading decision (MSRAOD) algorithm based on resource allocation optimization. The main optimization goal is to minimize the total energy consumption of mobile users' devices while meeting the latency requirements of the applications. Our results show that the proposed algorithm reduces the average energy consumption of mobile users in the heterogeneous network.
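
    The offloading decision can be sketched as follows. This is a minimal illustration, not the MSRAOD algorithm itself: for each task, the device compares local execution against offloading through each candidate cell and picks the cheapest option, in device energy, that still meets the latency budget. All rates, powers and task sizes below are made-up assumptions.

```python
# A minimal sketch, assuming illustrative per-cell rates and compute capacities;
# it is not the paper's MSRAOD algorithm.
from dataclasses import dataclass

@dataclass
class Cell:
    name: str
    uplink_rate_mbps: float    # rate on the subcarriers assigned to this user
    server_gops: float         # edge-server compute capacity

@dataclass
class Task:
    data_mbits: float          # input data to upload
    cycles_gop: float          # compute demand
    deadline_ms: float
    local_gops: float = 2.0    # device compute capacity
    local_power_w: float = 1.5 # device power while computing
    tx_power_w: float = 0.8    # device power while transmitting

def offload_decision(task: Task, cells: list[Cell]):
    # Baseline: run the task locally on the device.
    local_time_ms = task.cycles_gop / task.local_gops * 1e3
    best = ("local", task.local_power_w * local_time_ms / 1e3, local_time_ms)

    for c in cells:
        tx_ms = task.data_mbits / c.uplink_rate_mbps * 1e3     # upload time
        exec_ms = task.cycles_gop / c.server_gops * 1e3        # remote execution time
        total_ms = tx_ms + exec_ms
        energy_j = task.tx_power_w * tx_ms / 1e3               # device only pays for TX
        if total_ms <= task.deadline_ms and energy_j < best[1]:
            best = (c.name, energy_j, total_ms)
    return best   # (choice, device energy in J, completion time in ms)

if __name__ == "__main__":
    cells = [Cell("MeNB", 20.0, 50.0), Cell("SeNB", 80.0, 30.0), Cell("femto", 150.0, 10.0)]
    task = Task(data_mbits=4.0, cycles_gop=1.0, deadline_ms=150.0)
    print(offload_decision(task, cells))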

    Business Case and Technology Analysis for 5G Low Latency Applications

    A large number of new consumer and industrial applications are likely to change the classic operator's business models and provide a wide range of new markets to enter. This article analyses the most relevant 5G use cases that require ultra-low latency, from both technical and business perspectives. Low latency services pose challenging requirements on the network, and to fulfill them operators need to invest in costly changes to their networks. In this sense, it is not clear whether such investments are going to be amortized by these new business models. In light of this, specific applications and requirements are described and the potential market benefits for operators are analysed. Conclusions show that operators have clear opportunities to add value and position themselves strongly with the increasing number of services to be provided by 5G. Comment: 18 pages, 5 figures

    Distribution of Low Latency Machine Learning Algorithm

    Mobile networks are evolving towards centralization and cloudification while bringing computing power to the edge, opening their scope to a new range of applications. Ultra-low latency is one of the requirements of such applications in the next generation of mobile networks (5G), where deep learning is expected to play a big role. Hence, to enable the usage of deep learning solutions on the edge cloud, ultra-low latency inference must be investigated. The study presented here relies on an in-house framework (CRUN) that enables the distribution of acceleration in a data center environment. The objective of this thesis is to identify the best solution for the inference of a machine learning algorithm for an anomaly detection application using neural networks in the edge cloud context. To evaluate the results obtained with CRUN, a comparison was also carried out: five inference solutions were compared using CPU, GPU and FPGA. The results show superior latency for all CRUN experiments, which comprise three cases: the first uses the RTL anomaly detection neural network as a baseline solution, the second uses the same baseline code but unrolls the biggest layer to reduce latency, and the third distributes the neural network across two FPGAs. The requirements for this solution were a latency between 20 μs and 40 μs per inference and at least 20,000 inferences per second. These goals were met by all CRUN experiments, which provided 30 μs latency on average, while the second best solution provided 272 μs.
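
    The stated targets (20 μs to 40 μs per inference, at least 20,000 inferences per second) can be checked with a simple latency benchmark; a 30 μs average latency already implies roughly 1e6 / 30 ≈ 33,000 single-stream inferences per second. The sketch below uses a toy NumPy multilayer perceptron in place of the CRUN/FPGA pipeline described above; layer sizes and the number of trials are illustrative assumptions.

```python
# A minimal sketch: measure per-inference latency of a toy anomaly-detection-style
# network and compare the results against the thesis targets.  This stands in for,
# and does not reproduce, the CRUN/FPGA setup.
import time
import numpy as np

rng = np.random.default_rng(0)
# Tiny network: 64 inputs -> 32 hidden units -> 1 anomaly score (assumed sizes).
W1, b1 = rng.standard_normal((64, 32)).astype(np.float32), np.zeros(32, np.float32)
W2, b2 = rng.standard_normal((32, 1)).astype(np.float32), np.zeros(1, np.float32)

def infer(x: np.ndarray) -> np.ndarray:
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2                 # anomaly score

def benchmark(trials: int = 10_000):
    x = rng.standard_normal((1, 64)).astype(np.float32)
    latencies_us = []
    for _ in range(trials):
        t0 = time.perf_counter()
        infer(x)
        latencies_us.append((time.perf_counter() - t0) * 1e6)
    avg = float(np.mean(latencies_us))
    p99 = float(np.percentile(latencies_us, 99))
    throughput = 1e6 / avg             # single-stream inferences per second
    print(f"avg latency {avg:.1f} us, p99 {p99:.1f} us, ~{throughput:.0f} inf/s")
    print("meets 20-40 us target:", 20.0 <= avg <= 40.0,
          "| meets 20k inf/s target:", throughput >= 20_000)

if __name__ == "__main__":
    benchmark()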