
    Complexity-Aware Scheduling for an LDPC Encoded C-RAN Uplink

    Full text link
    Centralized Radio Access Network (C-RAN) is a new paradigm for wireless networks that centralizes signal processing in a computing cloud, allowing commodity computational resources to be pooled. While C-RAN improves utilization and efficiency, the computational load occasionally exceeds the available resources, creating a computational outage. This paper provides a mathematical characterization of the computational outage probability for low-density parity-check (LDPC) codes, a common class of error-correcting codes. For tractability, a binary erasure channel is assumed. Using the concept of density evolution, the computational demand is determined for a given ensemble of codes as a function of the erasure probability. The analysis reveals a trade-off: aggressively signaling at a high rate stresses the computing pool, while conservatively backing off the rate can avoid computational outages. Motivated by this trade-off, an effective computationally aware scheduling algorithm is developed that balances demands for high throughput and low outage rates. Comment: Conference on Information Sciences and Systems (CISS) 2017, to appear.
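
    To make the density-evolution argument concrete, the sketch below runs the standard BEC recursion x_{t+1} = eps * lambda(1 - rho(1 - x_t)) and uses the number of iterations needed to reach a target erasure probability as a proxy for decoding complexity. The (3,6)-regular ensemble and the numeric targets are illustrative assumptions, not parameters taken from the paper.

        # Minimal density-evolution sketch for an LDPC ensemble over the BEC.
        # The (3,6)-regular ensemble below is an illustrative choice, not the
        # ensemble analyzed in the paper.
        def density_evolution_iters(eps, lam=lambda x: x**2, rho=lambda x: x**5,
                                    target=1e-6, max_iters=10_000):
            """Iterations until the edge-message erasure probability drops
            below `target`, or None if decoding stalls. For a (3,6)-regular
            code, lambda(x) = x**2 and rho(x) = x**5."""
            x = eps
            for t in range(1, max_iters + 1):
                x_next = eps * lam(1.0 - rho(1.0 - x))
                if x_next < target:
                    return t
                if abs(x_next - x) < 1e-15:  # fixed point above target: failure
                    return None
                x = x_next
            return None

        # The iteration count, a proxy for computational demand, blows up as
        # eps approaches the ensemble's decoding threshold (about 0.429):
        for eps in (0.30, 0.40, 0.42, 0.428):
            print(eps, density_evolution_iters(eps))

    The iteration count, and hence the demand on the computing pool, grows sharply near the threshold; this is the mechanism behind the rate back-off trade-off the paper exploits.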

    Complexity aware C-RAN scheduling for LDPC codes over BEC

    Get PDF
    Effective transmission of data over a noisy wireless channel is a vital part of today's high-speed, technology-driven society. In a cellular wireless network, information is sent from mobile users to base stations. The information being transmitted is protected by error-control codes. In a conventional architecture, the signal processing, including error-control decoding, is performed locally at each base station. Recently, a new architecture has emerged, called Centralized Radio Access Network (C-RAN), which involves the centralized processing of the signals in a computing cloud. Using a computing cloud allows computational resources to be pooled, which improves utilization and efficiency. When the computational resources are finite and the computational load varies over time, there is a chance that the load exceeds the available resources. This situation creates a so-called computational outage, which has characteristics similar to outages caused by channel fading or interference. In this report, the computational complexity is quantified for a common class of error-correcting codes known as low-density parity-check (LDPC) codes. To make the analysis tractable, a binary erasure channel is assumed. The concept of density evolution is used to obtain the complexity as a function of the code design parameters and the signal-to-interference-plus-noise ratio (SINR) of the channel. The analysis shows that there is a trade-off: aggressively signaling at a high data rate causes high computational demands, while conservatively backing off on the rate can dramatically reduce the computational demand. Motivated by this trade-off, a scheduling algorithm is developed that balances the demands for high throughput and low computational outage rates.
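
    A minimal sketch of such a complexity-aware scheduler (a greedy illustration, not the report's actual algorithm): each user is assigned the most aggressive rate whose predicted decoding cost, for instance the density-evolution iteration count from the sketch above, still fits the remaining compute budget of the pool.

        def schedule(users, rates, cost, budget):
            """Greedy complexity-aware scheduling sketch.
            users  : per-user channel erasure probabilities
            rates  : candidate code rates, most aggressive first
            cost   : cost(rate, eps) -> predicted decoder iterations, or
                     None if decoding is predicted to fail at that rate
            budget : total iteration budget of the compute pool per frame"""
            assignments, used = [], 0
            for eps in users:
                pick = None
                for r in rates:               # back off the rate until the
                    c = cost(r, eps)          # predicted cost fits the budget
                    if c is not None and used + c <= budget:
                        pick, used = r, used + c
                        break
                assignments.append(pick)      # None: user skipped this frame
            return assignments, used

    Shrinking the budget forces rate back-off or skipped users (lower throughput); enlarging it admits aggressive rates at the risk of computational outage when predictions are off, which is exactly the trade-off quantified in the report.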

    Random Linear Network Coding for 5G Mobile Video Delivery

    Get PDF
    An exponential increase in mobile video delivery will continue with the demand for higher-resolution, multi-view, and large-scale multicast video services. The new fifth-generation (5G) 3GPP New Radio (NR) standard will bring a number of new opportunities for optimizing video delivery across both 5G core and radio access networks. One of the promising approaches for video quality adaptation, throughput enhancement, and erasure protection is the use of packet-level random linear network coding (RLNC). In this review paper, we discuss the integration of RLNC into the 5G NR standard, building upon the ideas and opportunities identified in 4G LTE. We explicitly identify and discuss in detail novel 5G NR features that provide support for RLNC-based video delivery in 5G, thus pointing to promising avenues for future research. Comment: Invited paper for Special Issue "Network and Rateless Coding for Video Streaming" - MDPI Information.
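
    The core packet-level RLNC operation is easy to sketch. Below is a toy encoder/decoder over GF(2), where a linear combination reduces to XOR; practical deployments usually work over GF(2^8), and all names and packet sizes here are illustrative rather than drawn from the paper.

        import random

        def xor_into(dst, src):
            """In-place XOR of equal-length sequences: addition over GF(2)."""
            for i in range(len(dst)):
                dst[i] ^= src[i]

        def rlnc_encode(packets):
            """Yield (coefficients, coded_payload) pairs indefinitely; each
            payload is a random GF(2) combination (XOR) of the source packets."""
            k = len(packets)
            while True:
                coeffs = [random.getrandbits(1) for _ in range(k)]
                if not any(coeffs):
                    continue  # the all-zero combination carries no information
                coded = bytearray(len(packets[0]))
                for c, p in zip(coeffs, packets):
                    if c:
                        xor_into(coded, p)
                yield coeffs, bytes(coded)

        def rlnc_decode(k, stream):
            """Consume coded packets until k innovative (linearly independent)
            ones arrive; Gaussian elimination recovers the source packets."""
            basis = {}  # pivot column -> (coefficients, payload), kept reduced
            while len(basis) < k:
                coeffs, payload = next(stream)
                coeffs, payload = list(coeffs), bytearray(payload)
                for col, (bc, bp) in basis.items():  # reduce against the basis
                    if coeffs[col]:
                        xor_into(coeffs, bc)
                        xor_into(payload, bp)
                pivot = next((j for j in range(k) if coeffs[j]), None)
                if pivot is None:
                    continue  # non-innovative packet: discard
                for bc, bp in basis.values():  # keep the basis fully reduced
                    if bc[pivot]:
                        xor_into(bc, coeffs)
                        xor_into(bp, payload)
                basis[pivot] = (coeffs, payload)
            return [bytes(basis[i][1]) for i in range(k)]

        src = [bytes([i] * 8) for i in range(4)]  # four toy source packets
        assert rlnc_decode(4, rlnc_encode(src)) == src

    Any k linearly independent coded packets reconstruct the k source packets, which is what gives RLNC the erasure-protection and rateless quality-adaptation properties discussed in the paper.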

    Design, implementation and experimental evaluation of a network-slicing aware mobile protocol stack

    Get PDF
    With the arrival of new-generation mobile networks, we currently observe a paradigm shift, where monolithic network functions running on dedicated hardware are now implemented as software pieces that can be virtualized on general-purpose hardware platforms. This paradigm shift stands on the softwarization of network functions and the adoption of virtualization techniques. Network Function Virtualization (NFV) comprises the softwarization of network elements and the virtualization of these components. It brings multiple advantages: (i) flexibility, allowing easy management of the virtual network functions (VNFs) (deploy, start, stop, or update); (ii) efficiency, as resources can be adequately consumed thanks to the increased flexibility of the network infrastructure; and (iii) reduced costs, due to the ability to share hardware resources. Multiple challenges must be addressed to effectively leverage all these benefits. Network Function Virtualization envisioned the concept of the virtual network, resulting in a key enabler of 5G network flexibility: Network Slicing. This new paradigm represents a new way to operate mobile networks, where the underlying infrastructure is "sliced" into logically separated networks that can be customized to the specific needs of the tenant. This approach also enables the ability to instantiate VNFs at different locations of the infrastructure, choosing their optimal placement based on parameters such as the requirements of the service traversing the slice or the available resources. This decision process is called orchestration and involves all the VNFs within the same network slice. The orchestrator is the entity in charge of managing network slices. Hands-on experiments on network slicing are essential to understand its benefits and limits, and to validate design and deployment choices. While some network slicing prototypes have been built for Radio Access Networks (RANs), leveraging the wide availability of radio hardware and open-source software, there is currently no open-source suite for end-to-end network slicing available to the research community. Similarly, orchestration mechanisms must be evaluated as well to properly validate theoretical solutions addressing diverse aspects such as resource assignment or service composition. This thesis contributes to the study of the evolution of mobile networks regarding their softwarization and cloudification. We identify software patterns for network function virtualization, including the definition of a novel mobile architecture that decomposes the virtualization architecture by splitting functionality into atomic functions. We then design, implement, and evaluate an open-source network slicing implementation. Our results show per-slice customization without paying a price in terms of performance, while also providing a slicing implementation to the research community. Moreover, we propose a framework to flexibly re-orchestrate a virtualized network, allowing on-the-fly re-orchestration without disrupting ongoing services. This framework can greatly improve performance under changing conditions. We evaluate the resulting performance in a realistic network slicing setup, showing the feasibility and advantages of flexible re-orchestration.
    Lastly, and following the required re-design of network functions envisioned during the study of the evolution of mobile networks, we present a novel pipeline architecture specifically engineered for 4G/5G physical layers virtualized over clouds. The proposed design follows two objectives: resiliency upon unpredictable computing, and parallelization to increase efficiency in multi-core clouds. To this end, we employ techniques such as tight deadline control, jitter-absorbing buffers, predictive Hybrid Automatic Repeat Request (HARQ), and congestion control. Our experimental results show that our cloud-native approach attains over 95% of the theoretical spectrum efficiency in hostile environments where state-of-the-art architectures collapse. This work has been supported by IMDEA Networks Institute.
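
    To give a flavor of the deadline-control idea, here is a minimal, hypothetical sketch (names and numbers are ours, not the thesis design): the transport blocks of a slot are processed in parallel on worker threads, and whatever misses the slot deadline is dropped and deferred to HARQ retransmission instead of stalling the pipeline.

        import random
        import time
        from concurrent.futures import ThreadPoolExecutor, TimeoutError

        SLOT_DEADLINE_S = 0.001  # a 1 ms slot; illustrative numerology

        def process_tb(tb):
            # Stand-in for channel coding work; the runtime jitter mimics the
            # unpredictable computing of a shared cloud platform.
            time.sleep(random.uniform(0.0002, 0.0015))
            return tb

        def run_slot(pool, tbs):
            """Process one slot's transport blocks against a hard deadline;
            blocks that overrun are deferred to HARQ retransmission."""
            start = time.monotonic()
            futures = {pool.submit(process_tb, tb): tb for tb in tbs}
            done, deferred = [], []
            for fut, tb in futures.items():
                remaining = SLOT_DEADLINE_S - (time.monotonic() - start)
                try:
                    done.append(fut.result(timeout=max(remaining, 0.0)))
                except TimeoutError:
                    fut.cancel()  # only stops blocks that have not started yet
                    deferred.append(tb)
            return done, deferred

        with ThreadPoolExecutor(max_workers=4) as pool:
            ok, harq = run_slot(pool, list(range(8)))
            print(f"{len(ok)} blocks met the deadline, {len(harq)} go to HARQ")

    The design choice this illustrates: bounding latency per slot and letting HARQ absorb occasional misses keeps the pipeline resilient when the underlying compute is unpredictable.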

    Characterizing the Computational and Memory Requirements of Virtual RANs

    Get PDF
    The virtualization of radio access networks (RANs) is emerging as a key component of future wireless systems, as it brings agility to the RAN architecture and offers degrees of design freedom. In this paper, we investigate and characterize the computational and memory requirements of virtual RANs. To this end, we build a virtual RAN test-bed leveraging the srsRAN open-source mobile communication platform and general-purpose processor-based servers. Through extensive experiments, we profile the consumption of computing and memory resources, and we assess the system performance. Further, we build regression models to predict the system behavior as the number of connected users increases, under diverse radio transmission settings. In so doing, we develop a methodology and prediction models that can help design and optimize virtual RANs.
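
    As a flavor of the modeling step, the sketch below fits a linear model of CPU utilization versus the number of connected UEs and extrapolates it for capacity planning. The data points are invented for illustration; the paper's models are fitted to measurements from the srsRAN test-bed.

        import numpy as np

        # Hypothetical profiling data: CPU utilization (%) of a vRAN server
        # versus connected UEs under one fixed radio configuration.
        ues = np.array([1, 2, 4, 8, 12, 16])
        cpu = np.array([11.0, 14.5, 21.8, 37.0, 52.4, 68.1])

        slope, intercept = np.polyfit(ues, cpu, deg=1)  # cpu ~ slope*ues + b
        predict = np.poly1d([slope, intercept])

        print(f"cpu ~= {slope:.2f} * UEs + {intercept:.2f}")
        print(f"predicted CPU at 24 UEs: {predict(24):.1f}%")
        # Capacity planning: how many UEs fit under a 90% CPU budget?
        print(f"~{(90 - intercept) / slope:.0f} UEs before the 90% budget")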

    D11.2 Consolidated results on the performance limits of wireless communications

    Get PDF
    Deliverable D11.2 of the European project NEWCOM#. The report presents the intermediate results of the NEWCOM# Joint Research Activities (JRAs) on the performance limits of wireless communications and highlights the fundamental issues that have been investigated by WP1.1. It illustrates the JRAs already identified during the first year of the project, which are currently ongoing. For each activity there is a description, an illustration of its adherence and relevance to the identified fundamental open issues, a short presentation of the preliminary results, and a roadmap for the joint research work in the next year. Appendices for each JRA give technical details on its scientific activity.

    Massive MIMO for Internet of Things (IoT) Connectivity

    Full text link
    Massive MIMO is considered to be one of the key technologies in the emerging 5G systems, but it is also a concept applicable to other wireless systems. Exploiting the large number of degrees of freedom (DoFs) of massive MIMO is essential for achieving high spectral efficiency, high data rates, and extreme spatial multiplexing of densely distributed users. On the one hand, the benefits of applying massive MIMO for broadband communication are well known, and there has been a large body of research on designing communication schemes to support high rates. On the other hand, using massive MIMO for the Internet of Things (IoT) is still a developing topic, as IoT connectivity has requirements and constraints that are significantly different from broadband connections. In this paper we investigate the applicability of massive MIMO to IoT connectivity. Specifically, we treat the two generic types of IoT connections envisioned in 5G: massive machine-type communication (mMTC) and ultra-reliable low-latency communication (URLLC). The paper fills this important gap by identifying the opportunities and challenges in exploiting massive MIMO for IoT connectivity. We provide insights into the trade-offs that emerge when massive MIMO is applied to mMTC or URLLC and present a number of suitable communication schemes. The discussion then turns to the questions of network slicing of the wireless resources and the use of massive MIMO to simultaneously support IoT connections with very heterogeneous requirements. The main conclusion is that massive MIMO can bring benefits to scenarios with IoT connectivity, but it requires tight integration of the physical-layer techniques with the protocol design. Comment: Submitted for publication.
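
    A quick numerical illustration (ours, not the paper's) of the DoF argument: with an M-antenna array and i.i.d. Rayleigh channels, per-user channel gains harden around their mean and different users' channels become nearly orthogonal ("favorable propagation"), which is what enables the extreme spatial multiplexing mentioned above.

        import numpy as np

        rng = np.random.default_rng(0)
        for M in (8, 64, 256):
            # i.i.d. Rayleigh channels of two users to an M-antenna array
            h1 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
            h2 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
            gain = np.vdot(h1, h1).real / M   # hardens toward 1 as M grows
            xcorr = abs(np.vdot(h1, h2)) / M  # decays like 1/sqrt(M)
            print(f"M={M:4d}  normalized gain={gain:.3f}  cross-corr={xcorr:.3f}")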
