
    Datacenter Design for Future Cloud Radio Access Network.

    Cloud radio access network (C-RAN), an emerging cloud service that combines the traditional radio access network (RAN) with cloud computing technology, has been proposed as a solution to handle the growing energy consumption and cost of the traditional RAN. By aggregating baseband units (BBUs) in a centralized cloud datacenter, C-RAN reduces energy and cost and improves wireless throughput and quality of service. However, how to design a datacenter for C-RAN has not yet been studied. In this dissertation, I investigate how a datacenter for C-RAN BBUs should be built on commodity servers. I first design WiBench, an open-source benchmark suite containing the key signal processing kernels of many mainstream wireless protocols, and study its characteristics. The characterization study shows that there is abundant data-level parallelism (DLP) and thread-level parallelism (TLP). Based on this result, I then develop high-performance software implementations of C-RAN BBU kernels in C++ and CUDA for both CPUs and GPUs. In addition, I generalize the GPU parallelization techniques of the Turbo decoder to trellis algorithms, an important family of algorithms widely used in data compression and channel coding. I then evaluate the performance of commodity CPU servers and GPU servers. The study shows that a datacenter with GPU servers can meet the LTE standard throughput with 4× to 16× fewer machines than with CPU servers. A further energy and cost analysis shows that GPU servers save on average 13× more energy and 6× more cost. Thus, I propose that the C-RAN datacenter be built using GPUs as the server platform. Next, I study resource management techniques to handle the temporal and spatial traffic imbalance in a C-RAN datacenter. I propose a "hill-climbing" power management scheme that combines powering off GPUs and DVFS to match the temporal C-RAN traffic pattern. Under a practical traffic model, this technique saves 40% of the BBU energy in a GPU-based C-RAN datacenter. For spatial traffic imbalance, I propose three workload distribution techniques to improve load balance and throughput. Among the three techniques, pipelining packets yields the largest throughput improvement, at 10% and 16% for balanced and unbalanced loads, respectively.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/120825/1/qizheng_1.pd
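    As a rough illustration of the hill-climbing idea above, the sketch below steps a hypothetical ladder of GPU power states (number of powered GPUs × DVFS frequency) up or down once per epoch so that capacity just covers the offered load. The state ladder, throughput model, and headroom factor are assumptions for the example, not the dissertation's actual policy.

```python
# Illustrative sketch only: a simplified "hill-climbing" power manager that
# mixes GPU power-gating with DVFS steps. The ladder, load model, and
# deadline check are assumptions, not the dissertation's algorithm.

from dataclasses import dataclass

@dataclass(frozen=True)
class PowerState:
    active_gpus: int   # GPUs left powered on
    freq_mhz: int      # DVFS frequency of the active GPUs

# Ordered from lowest to highest capacity (hypothetical ladder).
LADDER = [PowerState(g, f) for g in (1, 2, 4, 8) for f in (600, 900, 1200)]

def capacity(state: PowerState) -> float:
    """Hypothetical throughput model: subframes/s the state can process."""
    return state.active_gpus * state.freq_mhz * 0.01

def hill_climb(current: int, offered_load: float, headroom: float = 1.1) -> int:
    """Move one step up or down the ladder so capacity just covers the load."""
    if capacity(LADDER[current]) < offered_load * headroom and current + 1 < len(LADDER):
        return current + 1      # deadlines at risk: step up
    if current > 0 and capacity(LADDER[current - 1]) >= offered_load * headroom:
        return current - 1      # the next state down still meets deadlines: step down
    return current

# Example: track a day-night traffic pattern, one decision per epoch.
state = len(LADDER) - 1
for load in [90, 70, 40, 15, 10, 30, 80]:   # offered load in subframes/s (made up)
    state = hill_climb(state, load)
    s = LADDER[state]
    print(f"load={load:>3}  ->  {s.active_gpus} GPU(s) @ {s.freq_mhz} MHz")
```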

    Implementation of Communication Receivers as Multi-Processor Software

    Over the years, mobile communication systems have evolved from the Advanced Mobile Phone System (AMPS) to the 3G Universal Mobile Telecommunications System (UMTS) and now to 4G Long Term Evolution (LTE)-Advanced. Mobile terminals also offer far more features than before, such as Wireless Local Area Network (WLAN), Global Positioning System (GPS), and high-speed multimedia applications. As mobile terminals evolve toward multistandard systems, the traditional approach of designing radio platforms has been replaced by more flexible and cost-effective solutions. The challenge this multistandard approach imposes on the implementation of mobile terminals is to integrate several radio technologies into a single device. Sharing components and processing resources between different radio technologies is the key to implementing multistandard terminals. Software implementation of the components is preferred because of the shorter lead time of software development and the lower cost of carrying out the necessary redesigns. To take up this challenge, designers proposed Software Defined Radio (SDR), which allows multiple protocols to run on a System-on-Chip (SoC). SDR implementations can follow either the Multi-Processor System-on-Chip (MPSoC) or the Coarse-Grain Reconfigurable Array (CGRA) paradigm. For this thesis work, a homogeneous MPSoC platform is used to accelerate the baseband signal processing algorithms of the WCDMA and OFDM IEEE 802.11a WLAN standards. The performance comparison between single-core and multi-core platforms is based on the number of clock cycles consumed. The idea is to exploit the inherent parallelism offered by the homogeneous MPSoC platform and improve the execution times of computationally intensive algorithms such as the correlation operation and the Fast Fourier Transform (FFT). The baseband signal processing components have been implemented in software and executed on an MPSoC platform to evaluate their performance. The multiprocessor platform is used in an asymmetric manner in which each processing node has its own copy of the application software and uses a shared memory space for multiprocessor communication. Each processing node fetches and executes instructions from its own local instruction memory and is therefore independent of the others. Data Level Parallelism (DLP) is exploited in the software implementation of the algorithms by performing identical operations simultaneously on different processors.
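    A minimal sketch of the data-level parallelism idea described above: the same FFT kernel is applied to different blocks of samples on separate worker processes, standing in for the independent processing nodes of the MPSoC. The block sizes and worker count are arbitrary assumptions.

```python
# Illustrative sketch only: identical FFT kernels run on different data blocks
# on separate workers, mimicking DLP across independent processing nodes.

import numpy as np
from multiprocessing import Pool

def fft_block(block: np.ndarray) -> np.ndarray:
    """Identical kernel applied to each data block (64-point FFT here)."""
    return np.fft.fft(block)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Received baseband samples split into equal-sized blocks, one per worker.
    samples = rng.standard_normal(4 * 64) + 1j * rng.standard_normal(4 * 64)
    blocks = np.split(samples, 4)

    with Pool(processes=4) as pool:      # one worker per "processing node"
        spectra = pool.map(fft_block, blocks)

    # Sequential reference to check the parallel result.
    reference = [np.fft.fft(b) for b in blocks]
    print(all(np.allclose(p, r) for p, r in zip(spectra, reference)))
```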

    Performance evaluation of OFDM based wireless communication systems using Graphics Processing Unit (GPU) based high performance computing

    Wireless communication is one of the fastest developing technologies of the current decade. Achieving high data rates under constrained conditions demands sophisticated signal processing algorithms, which in turn demand substantial computation. Modern wireless communication techniques using OFDM require considerable computational resources for implementation. An OFDM system with 2048 subcarriers typically requires a 2048-point IFFT for transmission and a 2048-point FFT for reception. When signal processing techniques such as PAPR reduction, pre-equalization, equalization, and pilot carrier insertion are implemented, the complexity increases considerably. This complexity demands high performance computing systems for efficient implementation, and the primary aim of this project was to take up this investigation. Rapid growth in computing and communications technology has led to the proliferation of powerful parallel and distributed computing paradigms, driving innovation in high performance computing and communications (HPCC). In this project, the performance of advanced wireless communication algorithms on Graphics Processing Unit (GPU) based high performance computing hardware has been evaluated. Computationally expensive multi-carrier wireless communication systems, along with the associated signal processing techniques, have been implemented on a GPU with the aim of reducing computation time. This project proposes the use of the GPU architecture for efficient implementation of the Long Term Evolution (LTE) physical layer, a Multiple Input Multiple Output (MIMO) OFDM system, and the Partial Transmit Sequence (PTS) technique for Peak-to-Average Power Ratio (PAPR) reduction in OFDM systems. The implementation of this new method is expected to provide promising ways to implement complex wireless communication systems using GPU-based computing hardware.
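    The core transform pair mentioned above can be sketched directly: a 2048-point IFFT builds the OFDM time-domain symbol at the transmitter and a 2048-point FFT recovers the subcarriers at the receiver. The cyclic prefix length and QPSK mapping are assumptions for the example; the project's actual GPU kernels are not reproduced here.

```python
# Illustrative sketch only: the 2048-point IFFT/FFT pair of an OFDM link in
# NumPy. Cyclic-prefix length and QPSK mapping are assumed values.

import numpy as np

N_FFT = 2048      # subcarriers (as in the abstract)
CP_LEN = 160      # cyclic prefix length (assumed)

rng = np.random.default_rng(1)

# QPSK symbols, one per subcarrier.
bits = rng.integers(0, 2, size=(N_FFT, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: 2048-point IFFT, then prepend the cyclic prefix.
time_symbol = np.fft.ifft(qpsk, n=N_FFT)
tx = np.concatenate([time_symbol[-CP_LEN:], time_symbol])

# Ideal channel; receiver strips the prefix and applies a 2048-point FFT.
rx = tx
received = np.fft.fft(rx[CP_LEN:], n=N_FFT)

print(np.allclose(received, qpsk))   # True: subcarriers recovered exactly
```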

    Spectrum control and iterative coding for high capacity multiband OFDM

    The emergence of Multiband Orthogonal Frequency Division Modulation (MB-OFDM) as an ultra-wideband (UWB) technology injected new optimism into the market through realistic commercial implementation, while keeping the promise of high data rates intact. However, it has also brought with it a host of issues, some of which are addressed in this thesis. The thesis primarily focuses on the two issues of spectrum control and user capacity for the system currently proposed by the Multiband OFDM Alliance (MBOA). By showing that line spectra are still an issue for the new modulation scheme (MB-OFDM), it proposes a mechanism of scrambling the data with an increased-length linear feedback shift register (compared to the current proposal), a new set of seeds, and random phase reversion for the removal of line spectra. Following this, the thesis considers a technique for increasing the user capacity of the current MB-OFDM system to meet the needs of future wireless systems, through an adaptive multiuser synchronous coded transmission scheme. This involves real-time iterative generation of user codes over time and frequency, leading to increased capacity. Assuming complete channel state information (CSI) at the receiver, an iterative MMSE algorithm is used in which each user's signature is replaced with its normalized MMSE filter function, allowing the overall Total Squared Correlation (TSC) of the system to decrease until the algorithm converges to a fixed set of signature vectors. This allows the system to be overloaded and the users' codes to be quasi-orthogonal. Simulation results show that for a code of length nine (spread over three frequency bands and three time slots), ten users can be accommodated for a given QoS, and with the addition of a single frequency sub-band, which allows the code length to increase from nine to twelve (four frequency sub-bands and three time slots), fourteen users can be accommodated with nearly the same QoS. This communication is overseen by a central controller with the necessary functionalities to facilitate the process; the thesis essentially considers the uplink from transmitting devices to this central controller. Furthermore, analysis of this coded transmission in the presence of interference is carried out to demonstrate the robustness of the scheme through its adaptation, by incorporating knowledge of existing narrowband (NB) interference when computing the codes. This allows a sub-band to operate while coexisting with NB interference without substantial degradation, given reasonable interference energy (SIR = -10 dB and -5 dB considered). Finally, the thesis looks at design implementation and convergence issues related to code vector generation, whereby the Lanczos algorithm is considered for simpler design and faster convergence. The algorithm can either simplify the design implementation by providing a simplified solution to the Wiener-Hopf equation (without requiring the inverse of the correlation matrix) over a Krylov subspace, or expedite convergence by updating the signature sequence with the eigenvector corresponding to the least eigenvalue of the signature correlation matrix through a reduced-rank eigen-subspace search.
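    A small sketch of the iterative MMSE signature replacement described above, under the stated assumption of complete CSI: each user's signature is replaced by its normalized MMSE filter, which drives the Total Squared Correlation down until the signatures converge. The dimensions, noise variance, and sweep count are assumptions, not the thesis's exact configuration.

```python
# Illustrative sketch only: iterative MMSE signature replacement that lowers
# the Total Squared Correlation (TSC) of an overloaded code set.

import numpy as np

def total_squared_correlation(S: np.ndarray) -> float:
    """TSC = sum over user pairs of the squared signature cross-correlations."""
    G = S.T @ S
    return float(np.sum(np.abs(G) ** 2))

def mmse_iteration(S: np.ndarray, noise_var: float, sweeps: int) -> np.ndarray:
    """S holds one unit-norm signature per column (length L, K users, K > L)."""
    L, K = S.shape
    for _ in range(sweeps):
        for k in range(K):
            R = S @ S.T + noise_var * np.eye(L)   # received-signal covariance
            c = np.linalg.solve(R, S[:, k])       # MMSE filter for user k
            S[:, k] = c / np.linalg.norm(c)       # replace signature, renormalize
    return S

rng = np.random.default_rng(2)
L, K = 9, 10                      # code length nine, ten users (overloaded case)
S = rng.standard_normal((L, K))
S /= np.linalg.norm(S, axis=0)

print("TSC before:", round(total_squared_correlation(S), 3))
S = mmse_iteration(S, noise_var=0.1, sweeps=50)
print("TSC after: ", round(total_squared_correlation(S), 3))
```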

    Receiver algorithms that enable multi-mode baseband terminals


    Physical layer authenticated image encryption for IoT network based on biometric chaotic signature for MPFrFT OFDM system

    In this paper, a new physical layer authenticated encryption (PLAE) scheme based on multi-parameter fractional Fourier transform orthogonal frequency division multiplexing (MP-FrFT-OFDM) is proposed for secure image transmission over IoT networks. In addition, a new robust multi-cascaded chaotic modular fractional sine map (MCC-MF sine map) is designed and analyzed. A new dynamic chaotic biometric signature (DCBS) generator, which combines the biometric signature with the random chaotic sequence output of the proposed MCC-MF sine map, is also designed. The final output of the proposed DCBS generator is used as a dynamic secret key for the MP-FrFT-OFDM system, in which the encryption process is applied in the frequency domain. The proposed DCBS secret key generator provides a very large key space and achieves both confidentiality and authentication. Statistical analysis, differential analysis, and a key sensitivity test are performed to estimate the security strength of the proposed DCBS-MP-FrFT-OFDM cryptosystem over the IoT network. The experimental results show that the proposed cryptosystem is robust against common signal processing attacks and provides a high security level for image encryption applications.
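    A hedged sketch of the style of chaotic keystream generation described above. The paper's MCC-MF sine map and DCBS construction are its own; the generic modular sine map, the hash-based seeding from a biometric sample, and the byte extraction below are stand-ins for illustration only.

```python
# Illustrative sketch only: a generic modular sine-map keystream seeded from a
# biometric sample. Not the paper's MCC-MF sine map or DCBS construction.

import hashlib
import numpy as np

def sine_map_keystream(biometric: bytes, n_bytes: int, r: float = 3.99) -> bytes:
    """Seed a sine map from a hash of the biometric data and emit key bytes."""
    # Derive the initial state from the biometric signature (stand-in for DCBS).
    digest = hashlib.sha256(biometric).digest()
    x = int.from_bytes(digest[:8], "big") / 2**64     # x0 in [0, 1)

    out = bytearray()
    for _ in range(n_bytes):
        x = (r * np.sin(np.pi * x)) % 1.0             # chaotic sine-map iteration
        out.append(int(x * 256) & 0xFF)               # crude byte extraction
    return bytes(out)

# Example: derive a 16-byte key stream from a dummy "biometric" sample and
# XOR-mask a block of image bytes (the frequency-domain masking used by
# MP-FrFT-OFDM is omitted for brevity).
key = sine_map_keystream(b"fingerprint-template-bytes", 16)
plain = bytes(range(16))
cipher = bytes(p ^ k for p, k in zip(plain, key))
print(cipher.hex())
```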

    Wavelet Theory

    The wavelet is a powerful mathematical tool that plays an important role in science and technology. This book looks at some of the most creative and popular applications of wavelets, including biomedical signal processing, image processing, communication signal processing, the Internet of Things (IoT), acoustical signal processing, financial market data analysis, energy and power management, and COVID-19 pandemic measurements and calculations. The editor's personal interests are the application of the wavelet transform to identify time-domain changes in signals and their corresponding frequency components, and to improve power amplifier behavior.
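    As a small illustration of the editor's stated interest, the sketch below uses one level of a Haar wavelet transform to localize a time-domain change (a synthetic transient) through its detail coefficients; the signal and change point are made up for the example.

```python
# Illustrative sketch only: one level of a Haar DWT used to localize a
# time-domain transient via the largest detail coefficient.

import numpy as np

def haar_dwt(x: np.ndarray):
    """One level of the Haar DWT: approximation and detail coefficients."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    return approx, detail

# A flat signal with a synthetic transient at sample 300.
signal = np.zeros(512)
signal[300] = 1.0

_, detail = haar_dwt(signal)
change = 2 * int(np.argmax(np.abs(detail)))   # map coefficient index back to samples
print("change detected near sample", change)  # ~300
```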