
    Datacenter Design for Future Cloud Radio Access Network

    Cloud radio access network (C-RAN), an emerging cloud service that combines the traditional radio access network (RAN) with cloud computing technology, has been proposed as a solution to the growing energy consumption and cost of the traditional RAN. By aggregating baseband units (BBUs) in a centralized cloud datacenter, C-RAN reduces energy and cost and improves wireless throughput and quality of service. However, how to design a datacenter for C-RAN has not yet been studied. In this dissertation, I investigate how a datacenter for C-RAN BBUs should be built on commodity servers.

    I first design WiBench, an open-source benchmark suite containing the key signal processing kernels of many mainstream wireless protocols, and study its characteristics. The characterization study shows that these kernels offer abundant data-level parallelism (DLP) and thread-level parallelism (TLP). Based on this result, I develop high-performance software implementations of C-RAN BBU kernels in C++ and CUDA for both CPUs and GPUs. In addition, I generalize the GPU parallelization techniques of the Turbo decoder to the trellis algorithms, an important family of algorithms widely used in data compression and channel coding. I then evaluate the performance of commodity CPU and GPU servers: a datacenter built from GPU servers can meet the LTE standard throughput with 4× to 16× fewer machines than one built from CPU servers, and a further analysis shows that the GPU servers save on average 13× the energy and 6× the cost. I therefore propose that the C-RAN datacenter be built with GPUs as the server platform.

    Next, I study resource management techniques that handle the temporal and spatial traffic imbalance in a C-RAN datacenter. I propose a "hill-climbing" power management scheme that combines powering off GPUs with DVFS to match the temporal C-RAN traffic pattern; under a practical traffic model, this technique saves 40% of the BBU energy in a GPU-based C-RAN datacenter. For the spatial traffic imbalance, I propose three workload distribution techniques to improve load balance and throughput; of the three, pipelining packets yields the largest throughput improvement, 10% for balanced and 16% for unbalanced loads.

    PhD. Computer Science and Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/120825/1/qizheng_1.pd
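    The "hill-climbing" power manager lends itself to a short sketch. The Python below is a minimal, self-contained illustration of the idea: greedily power off one GPU or step the DVFS frequency down one notch, whichever saves more power, for as long as the remaining configuration still sustains the current traffic demand. The dissertation's implementations are in C++/CUDA; the DVFS states, the linear throughput/power models, and every name in this sketch are illustrative assumptions, not taken from the thesis.

    ```python
    # Hill-climbing power management sketch for a GPU-based BBU pool.
    # ASSUMPTIONS: toy throughput/power models and made-up DVFS states.

    FREQS_MHZ = [544, 705, 810, 875]   # hypothetical GPU DVFS states
    MAX_GPUS = 16

    def throughput_mbps(gpus, freq_mhz):
        """Toy capacity model: BBU throughput scales with gpus * frequency."""
        return gpus * freq_mhz * 0.5

    def power_w(gpus, freq_mhz):
        """Toy power model: per-GPU static power plus a frequency-dependent part."""
        return gpus * (50.0 + 0.2 * freq_mhz)

    def hill_climb(demand_mbps, gpus=MAX_GPUS, f=len(FREQS_MHZ) - 1):
        """Step downhill in power until no neighbour still meets the demand."""
        while True:
            neighbours = []
            if gpus > 1:
                neighbours.append((gpus - 1, f))   # power one GPU off
            if f > 0:
                neighbours.append((gpus, f - 1))   # lower DVFS one step
            feasible = [(g, i) for g, i in neighbours
                        if throughput_mbps(g, FREQS_MHZ[i]) >= demand_mbps]
            if not feasible:
                return gpus, FREQS_MHZ[f]          # local optimum reached
            gpus, f = min(feasible,
                          key=lambda c: power_w(c[0], FREQS_MHZ[c[1]]))

    # Example: low night-time traffic lets the manager shed most of the cluster.
    print(hill_climb(demand_mbps=1500.0))
    ```

    In a real deployment the throughput and power functions would come from profiling, and the loop would re-run whenever the traffic forecast changes.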

    Artificial intelligence enhances the performance of chaotic baseband wireless communication

    Funding Information: This work was supported in part by the Shaanxi Provincial Special Support Program for Science and Technology Innovation Leader. Dr. Bai was supported in part by a China Postdoctoral Science Foundation funded project (2020M673349) and by the Open Research Fund of the Shaanxi Key Laboratory of Complex System Control and Intelligent Information Processing (2020CP02).

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these network-generated data and to make decisions pertaining to the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management.

    The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity that optical networks have faced in the last few years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation.

    In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
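    As a concrete, hedged illustration of the kind of network-data analysis the paper surveys, the Python sketch below trains a random-forest classifier to flag degraded lightpaths from two signal-quality indicators (OSNR and pre-FEC BER). The synthetic data, the labeling rule, and the thresholds are assumptions made for the sketch; they do not come from the survey.

    ```python
    # ML-based fault detection on (synthetic) signal-quality monitoring data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    osnr_db = rng.normal(18, 3, n)            # optical signal-to-noise ratio
    ber = 10 ** rng.normal(-4, 1, n)          # pre-FEC bit error rate
    X = np.column_stack([osnr_db, np.log10(ber)])
    # Toy ground truth: a lightpath is "degraded" if OSNR is low or BER is high.
    y = (osnr_db < 15) | (ber > 1e-3)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"fault-detection accuracy: {clf.score(X_te, y_te):.3f}")
    ```

    A real system would replace the synthetic features with monitoring data from the network management plane and would likely need to cope with far noisier labels.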

    Reduction of Nonlinear Intersubcarrier Intermixing in Coherent Optical OFDM by a Fast Newton-Based Support Vector Machine Nonlinear Equalizer

    A fast Newton-based support vector machine (N-SVM) nonlinear equalizer (NLE) is experimentally demonstrated, for the first time, in 40 Gb/s 16-quadrature-amplitude-modulated coherent optical orthogonal frequency division multiplexing over 2000 km of transmission. It is shown that the N-SVM-NLE extends the optimum launched optical power by 2 dB compared to the benchmark Volterra-based NLE. The performance improvement stems from the N-SVM's ability to tackle both deterministic fiber-induced nonlinear effects and the interaction between nonlinearities and stochastic noise (e.g., polarization-mode dispersion). The N-SVM is also more tolerant to intersubcarrier nonlinear crosstalk than the Volterra-based NLE, especially when applied across all subcarriers simultaneously. In contrast to the conventional SVM, the proposed algorithm has reduced classifier complexity, offering lower computational load and execution time: for a low C-parameter of 4 (a penalty parameter related to complexity), the N-SVM needs an execution time of 1.6 s to effectively mitigate nonlinearities, and its computational load is ∼6 times lower than that of the conventional SVM.
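    To make the equalizer-as-classifier idea concrete, the sketch below uses scikit-learn's standard SVC (not the paper's fast Newton-based solver) to classify received 16-QAM samples whose constellation clusters are bent by a toy power-dependent phase rotation. The distortion model, the noise level, and the reuse of the paper's penalty value C = 4 are assumptions of the sketch.

    ```python
    # SVM as a nonlinear equalizer: classify distorted 16-QAM symbols.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    levels = np.array([-3, -1, 1, 3])
    constellation = np.array([complex(i, q) for i in levels for q in levels])

    labels = rng.integers(0, 16, 4000)
    tx = constellation[labels]
    # Toy fiber nonlinearity: power-dependent phase rotation plus Gaussian noise.
    rx = tx * np.exp(1j * 0.05 * np.abs(tx) ** 2)
    rx += rng.normal(0, 0.3, 4000) + 1j * rng.normal(0, 0.3, 4000)
    X = np.column_stack([rx.real, rx.imag])

    svm = SVC(C=4, kernel="rbf")   # C: the penalty parameter, as in the paper
    svm.fit(X[:3000], labels[:3000])
    err = np.mean(svm.predict(X[3000:]) != labels[3000:])
    print(f"symbol error rate after SVM equalization: {err:.3f}")
    ```

    The RBF kernel lets the decision boundaries follow the rotated clusters, which is the property that makes SVM classifiers attractive against deterministic fiber nonlinearities.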