16 research outputs found

    Review of Recent Trends

    Get PDF
    This work was partially supported by the European Regional Development Fund (FEDER), through the Regional Operational Programme of Centre (CENTRO 2020) of the Portugal 2020 framework, through projects SOCA (CENTRO-01-0145-FEDER-000010) and ORCIP (CENTRO-01-0145-FEDER-022141). Fernando P. Guiomar acknowledges a fellowship from "la Caixa" Foundation (ID100010434), code LCF/BQ/PR20/11770015. Houda Harkat acknowledges the financial support of the Programmatic Financing of the CTS R&D Unit (UIDP/00066/2020). MIMO-OFDM is a key technology and a strong candidate for 5G telecommunication systems. The literature lacks a comprehensive survey that gathers all the points that need to be investigated for such systems. This in-depth review paper inspects and interprets the state of the art and addresses several research axes related to MIMO-OFDM systems. Two topics receive special attention: MIMO waveforms and MIMO-OFDM channel estimation. Existing MIMO hardware and software innovations, along with MIMO-OFDM equalization techniques, are discussed concisely. In the literature, only a few authors have discussed the channel estimation and modeling problems for a variety of MIMO systems, and, to the best of our knowledge, no review paper has yet specifically discussed recent work on channel estimation and the equalization process for MIMO-OFDM systems. Hence, the current work focuses on analyzing the algorithms recently used in the field, which could serve as a rich reference for researchers. Moreover, some research perspectives are identified.
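
    To ground the channel-estimation discussion, the following minimal sketch shows pilot-based least-squares (LS) estimation for one OFDM symbol, one of the classic baselines such surveys cover. The 64-subcarrier layout, pilot spacing, and noiseless channel are illustrative assumptions, not details from the paper.

        import numpy as np

        def ls_channel_estimate(rx_pilots, tx_pilots, n_subcarriers, pilot_idx):
            """Least-squares (LS) channel estimate at pilot subcarriers,
            linearly interpolated to all subcarriers of one OFDM symbol."""
            h_pilot = rx_pilots / tx_pilots              # LS estimate: H = Y / X
            # Interpolate real and imaginary parts separately across subcarriers.
            k = np.arange(n_subcarriers)
            h_re = np.interp(k, pilot_idx, h_pilot.real)
            h_im = np.interp(k, pilot_idx, h_pilot.imag)
            return h_re + 1j * h_im

        # Usage: 64-subcarrier symbol with pilots on every 8th subcarrier.
        rng = np.random.default_rng(0)
        pilot_idx = np.arange(0, 64, 8)
        tx_pilots = np.ones(len(pilot_idx), dtype=complex)   # known pilot values
        h_true = rng.normal(size=64) + 1j * rng.normal(size=64)
        rx_pilots = h_true[pilot_idx] * tx_pilots            # noiseless for brevity
        h_hat = ls_channel_estimate(rx_pilots, tx_pilots, 64, pilot_idx)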

    Datacenter Design for Future Cloud Radio Access Network.

    Full text link
    Cloud radio access network (C-RAN), an emerging cloud service that combines the traditional radio access network (RAN) with cloud computing technology, has been proposed as a solution to handle the growing energy consumption and cost of the traditional RAN. By aggregating baseband units (BBUs) in a centralized cloud datacenter, C-RAN reduces energy and cost and improves wireless throughput and quality of service. However, designing a datacenter for C-RAN has not yet been studied. In this dissertation, I investigate how a datacenter for C-RAN BBUs should be built on commodity servers. I first design WiBench, an open-source benchmark suite containing the key signal processing kernels of many mainstream wireless protocols, and study its characteristics. The characterization study shows that there is abundant data-level parallelism (DLP) and thread-level parallelism (TLP). Based on this result, I then develop high-performance software implementations of C-RAN BBU kernels in C++ and CUDA for both CPUs and GPUs. In addition, I generalize the GPU parallelization techniques of the Turbo decoder to the trellis algorithms, an important family of algorithms widely used in data compression and channel coding. I then evaluate the performance of commodity CPU servers and GPU servers. The study shows that a datacenter with GPU servers can meet the LTE standard throughput with 4× to 16× fewer machines than with CPU servers. A further energy and cost analysis shows that GPU servers save on average 13× more energy and 6× more cost. Thus, I propose that the C-RAN datacenter be built using GPUs as a server platform. Next, I study resource management techniques to handle the temporal and spatial traffic imbalance in a C-RAN datacenter. I propose a "hill-climbing" power management scheme that combines powering off GPUs and DVFS to match the temporal C-RAN traffic pattern. Under a practical traffic model, this technique saves 40% of the BBU energy in a GPU-based C-RAN datacenter. For spatial traffic imbalance, I propose three workload distribution techniques to improve load balance and throughput. Among the three, pipelining packets yields the largest throughput improvement, at 10% and 16% for balanced and unbalanced loads, respectively. PhD. Computer Science and Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120825/1/qizheng_1.pd
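
    As a rough illustration of the "hill-climbing" power management described above (combining GPU power-off and DVFS to track traffic), here is a hedged Python sketch. The capacity model, frequency levels, and step rules are invented placeholders, not the dissertation's actual policy.

        def hill_climb_step(active_gpus, freq_level, load, capacity_per_gpu, freq_levels):
            """One control step: shed capacity while the load is still met,
            add capacity back when it is not."""
            def capacity(gpus, f):
                return gpus * capacity_per_gpu * freq_levels[f]
            # Descend the "hill": prefer powering off a GPU, then lowering DVFS.
            if active_gpus > 1 and capacity(active_gpus - 1, freq_level) >= load:
                return active_gpus - 1, freq_level
            if freq_level > 0 and capacity(active_gpus, freq_level - 1) >= load:
                return active_gpus, freq_level - 1
            # Climb back up if the current setting no longer meets the load.
            if capacity(active_gpus, freq_level) < load:
                if freq_level < len(freq_levels) - 1:
                    return active_gpus, freq_level + 1
                return active_gpus + 1, freq_level
            return active_gpus, freq_level

        # Usage: 4 GPUs at the top DVFS level facing a light traffic load.
        print(hill_climb_step(4, 2, load=300.0, capacity_per_gpu=100.0,
                              freq_levels=[0.6, 0.8, 1.0]))   # -> (3, 2)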

    Real-Time Localization Using Software Defined Radio

    Get PDF
    Service providers make use of cost-effective wireless solutions to identify, localize, and possibly track users via their carried mobile devices (MDs) in order to support added services such as geo-advertisement, security, and management. Indoor and outdoor hotspot areas play a significant role for such services; however, GPS does not work in many of these areas. To solve this problem, service providers leverage available indoor radio technologies, such as WiFi, GSM, and LTE, to identify and localize users. We focus our research on passive services provided by third parties, which are responsible for (i) data acquisition and (ii) processing, and on network-based services, where (i) and (ii) are done inside the serving network. For a better understanding of the parameters that affect indoor localization, we investigate several factors that affect indoor signal propagation for both Bluetooth and WiFi technologies. For GSM-based passive services, we first developed a data acquisition module: a GSM receiver that can overhear GSM uplink messages transmitted by MDs while remaining invisible. A set of optimizations was made to the receiver components to support wideband capturing of the GSM spectrum while operating in real time. Processing the wide GSM spectrum is made possible by a proposed distributed processing approach over an IP network. Then, to overcome the lack of information about the tracked devices' radio settings, we developed two novel localization algorithms that rely on proximity-based solutions to estimate devices' locations in real environments. Given the challenging effects of the indoor environment on radio signals, such as NLOS reception and multipath propagation, we developed an original algorithm to detect and remove contaminated radio signals before they are fed to the localization algorithm. To improve localization, we extended our work with a hybrid approach that uses both WiFi and GSM interfaces to localize users. For network-based services, we used a software implementation of an LTE base station to develop our algorithms, which characterize the indoor environment before applying the localization algorithm. Experiments were conducted without any special hardware, any prior knowledge of the indoor layout, or any offline calibration of the system.
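
    The proximity-based flavor of localization mentioned above can be illustrated with a generic RSSI weighted-centroid estimator plus a crude outlier filter standing in for contaminated-signal removal. The anchor positions, RSSI values, and the 10 dB threshold are invented for illustration and are not the thesis's algorithms.

        import numpy as np

        def weighted_centroid(anchors, rssi_dbm, outlier_db=10.0):
            """Estimate a device position as the RSSI-weighted centroid of anchor
            positions, after discarding readings far below the median (a crude
            stand-in for NLOS/contaminated-signal removal)."""
            anchors = np.asarray(anchors, float)
            rssi = np.asarray(rssi_dbm, float)
            keep = rssi >= np.median(rssi) - outlier_db   # drop deep-fade outliers
            w = 10 ** (rssi[keep] / 10.0)                 # dBm -> linear power weights
            return (w[:, None] * anchors[keep]).sum(axis=0) / w.sum()

        # Usage: three WiFi anchors at known positions (meters).
        pos = weighted_centroid([(0, 0), (10, 0), (5, 8)], [-40, -55, -70])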

    Heterogeneous integration of optical wireless communications within next generation networks

    Full text link
    Unprecedented traffic growth is expected in future wireless networks, and new technologies will be needed to satisfy demand. Optical wireless (OW) communication offers vast unused spectrum and high area spectral efficiency. In this work, optical cells are envisioned as supplementary access points within heterogeneous RF/OW networks. These networks opportunistically offload traffic to optical cells while using the RF cell for highly mobile devices and devices that lack a reliable OW connection. Visible light communication (VLC) is considered as a potential OW technology due to the increasing adoption of solid-state lighting for indoor illumination. Results of this work take a full-system view of RF/OW HetNets, with three primary areas of analysis. First, the need for network densification beyond current RF small cell implementations is evaluated. A media-independent model is developed, and results are presented that motivate the adoption of hyper-dense small cells as complementary components within multi-tier networks. Next, the relationships between RF and OW constraints and link characterization parameters are evaluated in order to define methods for fair comparison when user-centric channel selection criteria are used. RF and OW noise and interference characterization techniques are compared, and common OW characterization models are shown to exhibit errors in excess of 100x when dominant interferers are present. Finally, dynamic characteristics of hyper-dense OW networks are investigated in order to optimize traffic distribution from a network-centric perspective. A Kalman filter model is presented to predict device motion for improved channel selection, and a novel OW range expansion technique is presented that dynamically alters the coverage regions of OW cells by 50%. In addition to analytical results, the dissertation describes two tools created for the evaluation of RF/OW HetNets. A communication and lighting simulation toolkit has been developed for modeling and evaluating environments with VLC-enabled luminaires. The toolkit enhances an iterative site-based impulse response simulator to use GPU acceleration and achieves a 10x speedup over the previous model. A software-defined testbed for OW has also been proposed and applied. The testbed implements a VLC link and a heterogeneous RF/VLC connection that serves as a proof of concept for the RF/OW HetNet approach.
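
    For the motion-prediction component, a minimal sketch of a constant-velocity Kalman filter step of the kind the abstract describes is shown below, assuming a 2D position measurement; the time step and noise covariances (dt, q, r) are placeholder values.

        import numpy as np

        def cv_kalman_step(x, P, z, dt=0.1, q=0.5, r=1.0):
            """One predict+update step of a constant-velocity Kalman filter on a
            2D position measurement z; state x = [px, py, vx, vy]."""
            F = np.eye(4); F[0, 2] = F[1, 3] = dt             # motion model
            H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0     # observe position only
            Q = q * np.eye(4); R = r * np.eye(2)
            x = F @ x                                         # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                               # update
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.asarray(z) - H @ x)
            P = (np.eye(4) - K @ H) @ P
            return x, P

        # Usage: start at rest, fold in one position fix.
        x, P = np.zeros(4), np.eye(4)
        x, P = cv_kalman_step(x, P, z=[1.0, 0.5])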

    Near Deterministic Signal Processing Using GPU, DPDK, and MKL

    Get PDF
    RÉSUMÉ (translated from French): In software-defined radio, digital signal processing requires real-time processing of data and signals. Moreover, in the development of wireless communication systems based on the Long Term Evolution (LTE) standard, real-time operation and low computation latency are essential for a good user experience. Since computation latency is critical in LTE processing, we explore whether graphics processing units (GPUs) can be used to accelerate LTE processing. To this end, we explore NVIDIA's GPU technology using the Compute Unified Device Architecture (CUDA) programming model to reduce the computation time associated with LTE processing. We briefly present the CUDA architecture and GPU parallel processing under Matlab, then compare computation times between Matlab and CUDA. We conclude that CUDA and Matlab accelerate the computation of functions based on parallel processing algorithms operating on uniform data types, but that this acceleration varies strongly with the implemented algorithm. Intel has proposed its Data Plane Development Kit (DPDK) to facilitate the development of high-performance software for telecommunication processing. In this project, we explore its use, together with operating system isolation, to reduce the variability of the computation times of LTE processes. More precisely, we use DPDK with the Math Kernel Library (MKL) to compute the Fast Fourier Transform (FFT) associated with LTE processing and measure its computation time. We evaluate four cases: 1) FFT code on the slave core without CPU isolation, 2) FFT code on the slave core with CPU isolation, 3) FFT code using MKL without DPDK, and 4) baseline FFT code. We combine DPDK and MKL for cases 1 and 2 and evaluate which case is the most deterministic and reduces LTE processing latency the most. We show that the mean computation time of the baseline FFT is about 100 times larger, while its standard deviation is about 20 times higher. We find that MKL offers excellent performance, but since it is not scalable by itself in a cloud environment, combining it with DPDK is a very promising alternative. DPDK improves performance and memory management and makes MKL scalable.----------ABSTRACT In software defined radio, digital signal processing requires strict real-time processing of data and signals. Specifically, in the development of the Long Term Evolution (LTE) standard, real-time operation and low latency of computation processes are essential to obtain a good user experience. As low-latency computation is critical in real-time processing of LTE, we explore the possibility of using Graphics Processing Units (GPUs) to accelerate its functions. As the first contribution of this thesis, we adopt NVIDIA GPU technology using the Compute Unified Device Architecture (CUDA) programming model in order to reduce the computation times of LTE. Furthermore, we investigate the efficiency of using MATLAB for parallel computing on GPUs. This allows us to evaluate the MATLAB and CUDA programming paradigms and provide a comprehensive comparison between them for parallel computing of LTE processes on GPUs.
We conclude that CUDA and Matlab accelerate the processing of structured basic algorithms, but that the acceleration is variable and depends on which algorithm is involved. Intel has proposed its Data Plane Development Kit (DPDK) as a tool to develop high-performance software for processing telecommunication data. As the second contribution of this thesis, we explore the possibility of using DPDK and operating system isolation to reduce the variability of the computation times of LTE processes. Specifically, we use DPDK along with the Math Kernel Library (MKL) provided by Intel to calculate Fast Fourier Transforms (FFTs) associated with LTE processes and measure their computation times. We study the computation times in different scenarios where the FFT calculation is done with and without the isolation of processing units, along with the use of DPDK. Our experimental analysis shows that when DPDK and MKL are used simultaneously and the processing units are isolated, the resulting processing times of the FFT calculation are reduced and have a near-deterministic characteristic. Explicitly, using DPDK and MKL along with the isolation of processing units reduces the mean and standard deviation of FFT processing times by 100 times and 20 times, respectively. Moreover, we conclude that although MKL reduces the computation time of FFTs, it does not offer a scalable solution on its own, but combining it with DPDK is a promising avenue.
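
    A hedged sketch of the measurement methodology follows: it reports the mean and standard deviation of per-call FFT latency over repeated trials, the two statistics the thesis uses to judge how deterministic each configuration is. NumPy's FFT stands in for MKL here, and no DPDK or core isolation is applied.

        import time
        import numpy as np

        def fft_latency_stats(n=2048, trials=1000):
            """Measure per-call FFT latency over many trials and return the
            mean and standard deviation of the observed times (seconds)."""
            x = (np.random.randn(n) + 1j * np.random.randn(n)).astype(np.complex64)
            samples = []
            for _ in range(trials):
                t0 = time.perf_counter()
                np.fft.fft(x)
                samples.append(time.perf_counter() - t0)
            return np.mean(samples), np.std(samples)

        # Usage: compare these statistics across isolated/non-isolated runs.
        mean_s, std_s = fft_latency_stats()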

    TRANSMISSION PERFORMANCE OPTIMIZATION IN FIBER-WIRELESS ACCESS NETWORKS USING MACHINE LEARNING TECHNIQUES

    Get PDF
    The objective of this dissertation is to enhance transmission performance in fiber-wireless access networks by mitigating the vital system limitations of both analog radio over fiber (A-RoF) and digital radio over fiber (D-RoF), with machine learning techniques systematically implemented. The first thrust is improving the spectral efficiency of the optical transmission in D-RoF to support the delivery of the massive number of bits from digitized radio signals. Advanced digital modulation schemes like PAM8, discrete multi-tone (DMT), and probabilistic shaping are investigated and implemented, although they may introduce severe nonlinear impairments on the low-cost optical intensity-modulation direct-detection (IMDD) based D-RoF link with its limited dynamic range. An efficient deep neural network (DNN) equalizer/decoder to mitigate the nonlinear degradation is therefore designed and experimentally verified. In addition, we design a neural network based digital predistortion (DPD) scheme to mitigate the nonlinear impairments of the whole link, which can be integrated into a transmitter with more processing resources and power than a receiver has in an access network. Another thrust is to proactively mitigate the complex interference in radio access networks (RANs). The composition of signals from different licensed systems and unlicensed transmitters creates an unprecedentedly complex interference environment that cannot be handled by conventional pre-defined network planning. In response to these challenges, a proactive interference avoidance scheme using reinforcement learning is proposed and experimentally verified on a mmWave-over-fiber platform. Besides external sources, interference may arise internally from a local transmitter as self-interference (SI) that occupies the same time and frequency block as the signal of interest (SOI). Different from the conventional subtraction-based SI cancellation scheme, we design an efficient dual-input DNN (DI-DNN) based canceller which simultaneously cancels the SI and recovers the SOI. Ph.D.
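
    The dual-input idea can be sketched as a toy model: a small network that takes both the received window and a reference of the known self-interference and regresses the signal of interest directly, rather than subtracting an SI estimate. The layer sizes, window length, and real-valued I/Q handling are assumptions, not the paper's DI-DNN.

        import torch
        import torch.nn as nn

        class DualInputCanceller(nn.Module):
            """Toy dual-input DNN: concatenates a window of received samples
            with a reference window of the known self-interference and
            regresses one recovered I/Q sample of the signal of interest."""
            def __init__(self, win=16, hidden=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(2 * win, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 2),            # I/Q of one recovered sample
                )
            def forward(self, rx_win, si_ref_win):
                return self.net(torch.cat([rx_win, si_ref_win], dim=-1))

        # Usage: a batch of 8 windows, 16 real-valued samples each.
        model = DualInputCanceller()
        y = model(torch.randn(8, 16), torch.randn(8, 16))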

    Software Defined Radio Solutions for Wireless Communications Systems

    Get PDF
    Wireless technologies have been advancing rapidly, especially in recent years. The design, implementation, and manufacturing of devices supporting the continuously evolving technologies require great effort. Thus, building platforms compatible with different generations of standards and technologies has gained a lot of interest. As a result, software defined radios (SDRs) are investigated to offer more flexibility and scalability, and to reduce design effort, compared to conventional fixed-function hardware-based solutions. This thesis mainly addresses the challenges related to SDR-based implementation of today's wireless devices. One of the main targets of most wireless standards has been to improve the achievable data rates, which imposes strict requirements on the processing platforms. Realizing real-time processing of high-throughput signal processing algorithms on SDR-based platforms, while maintaining energy consumption close to that of conventional approaches, is a challenging topic that is addressed in this thesis. Firstly, this thesis concentrates on the challenges of a real-time software-based implementation of the very high throughput (VHT) Institute of Electrical and Electronics Engineers (IEEE) 802.11ac amendment from the wireless local area network (WLAN) family, where an SDR-based solution is introduced for the frequency-domain baseband processing of a multiple-input multiple-output (MIMO) transmitter and receiver. The feasibility of the implementation is evaluated with respect to the number of clock cycles and the consumed power. Furthermore, a digital front-end (DFE) concept is developed for the IEEE 802.11ac receiver, where the 80 MHz waveform is divided into two 40 MHz signals. This is carried out through time-domain digital filtering and decimation, which is challenging due to the latency and cyclic prefix (CP) budget of the receiver. Different multi-rate channelization architectures are developed, and the software implementation is presented and evaluated in terms of execution time, number of clock cycles, power, and energy consumption on different multi-core platforms. Secondly, this thesis addresses selected advanced techniques developed to realize in-band full-duplex (IBFD) systems, which aim at improving spectral efficiency in today's congested radio spectrum. IBFD refers to concurrent transmission and reception on the same frequency band, where the main challenge to combat is the strong self-interference (SI). In this thesis, an SDR-based solution is introduced which is capable of real-time mitigation of the SI signal. The implementation results show the possibility of achieving sufficient real-time SI suppression under time-varying environments using low-power, mobile-scale multi-core processing platforms. To investigate the challenges associated with SDR implementations for mobile-scale devices with limited processing and power resources, processing platforms suitable for hand-held devices are selected in this thesis work. On the baseband processing side, a very long instruction word (VLIW) processor, optimized for wireless communication applications, is utilized.
Furthermore, in the solutions presented for the DFE processing and the digital SI canceller, commercial off-the-shelf (COTS) multi-core central processing units (CPUs) and graphics processing units (GPUs) are used with the aim of investigating the performance enhancement achieved by utilizing parallel processing. Overall, this thesis provides solutions to the challenges of low-power, real-time software-based implementation of computationally intensive signal processing algorithms for current and future communications systems.
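
    The 80 MHz-to-2x40 MHz channelization step can be illustrated with a textbook frequency-shift, low-pass filter, and decimate-by-2 chain; the filter length and design below are illustrative, whereas the thesis evaluates several multi-rate architectures under latency and CP constraints.

        import numpy as np
        from scipy import signal

        def split_80_to_2x40(x, fs=80e6):
            """Channelize one 80 MHz-wide block into two 40 MHz halves by
            shifting each half-band to baseband, low-pass filtering, and
            decimating by 2."""
            n = np.arange(len(x))
            lp = signal.firwin(63, 0.5)          # cutoff at fs/4 (Nyquist-normalized)
            halves = []
            for sign in (+1, -1):                # upper, then lower half-band
                shifted = x * np.exp(-1j * sign * 2 * np.pi * (fs / 4) * n / fs)
                halves.append(signal.lfilter(lp, 1.0, shifted)[::2])
            return halves                        # two streams at 40 MHz each

        upper, lower = split_80_to_2x40(np.random.randn(4096) + 0j)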

    Multi-core architectures with coarse-grained dynamically reconfigurable processors for broadband wireless access technologies

    Get PDF
    Broadband Wireless Access technologies have significant market potential, especially the WiMAX protocol, which can deliver data rates of tens of Mbps. Strong demand for high-performance WiMAX solutions is forcing designers to seek help from multi-core processors that offer competitive advantages in terms of all performance metrics, such as speed, power, and area. By providing a degree of flexibility similar to that of a DSP and performance and power consumption advantages approaching those of an ASIC, coarse-grained dynamically reconfigurable processors are proving to be strong candidates for the processing cores used in future high-performance multi-core processor systems. This thesis investigates multi-core architectures with a newly emerging dynamically reconfigurable processor – RICA – targeting WiMAX physical layer applications. A novel master-slave multi-core architecture is proposed, using RICA processing cores. A SystemC-based simulator, called MRPSIM, is devised to model this multi-core architecture. This simulator provides fast simulation speed and timing accuracy, offers flexible architectural options to configure the multi-core architecture, and enables the analysis and investigation of multi-core architectures. Meanwhile, a profiling-driven mapping methodology is developed to partition the WiMAX application into multiple tasks and to schedule and map these tasks onto the multi-core architecture, aiming to reduce the overall system execution time. Both the MRPSIM simulator and the mapping methodology are seamlessly integrated with the existing RICA tool flow. Based on the proposed master-slave multi-core architecture, a series of diverse homogeneous and heterogeneous multi-core solutions are designed for different fixed WiMAX physical layer profiles. Implemented in ANSI C and executed on the MRPSIM simulator, these multi-core solutions contain different numbers of cores, combine various memory architectures and task partitioning schemes, and deliver high throughputs at relatively low area cost. Meanwhile, a design space exploration methodology is developed to search the design space of multi-core systems for suitable solutions under given system constraints. Finally, laying a foundation for future multithreading exploration on the proposed multi-core architecture, this thesis investigates the porting of a real-time operating system – Micro C/OS-II – to a single RICA processor. A multitasking version of WiMAX is implemented on a single RICA processor with operating system support.
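
    The profiling-driven partition-and-map step can be illustrated with the classic longest-processing-time list-scheduling heuristic over profiled task times; the task names and timings below are invented, and this is not MRPSIM's actual mapping algorithm.

        import heapq

        def greedy_map(task_times, n_cores):
            """Greedy mapping: assign each task (longest first, as measured by
            profiling) to the currently least-loaded slave core -- the classic
            LPT list-scheduling heuristic."""
            cores = [(0.0, c) for c in range(n_cores)]   # (load, core id)
            heapq.heapify(cores)
            mapping = {}
            for task, t in sorted(task_times.items(), key=lambda kv: -kv[1]):
                load, c = heapq.heappop(cores)
                mapping[task] = c
                heapq.heappush(cores, (load + t, c))
            return mapping

        # Usage: profiled times (microseconds) for hypothetical WiMAX PHY stages.
        print(greedy_map({"fft": 40, "demap": 15, "fec": 55, "deint": 10}, 2))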

    Spectrum Optimisation in Wireless Communication Systems: Technology Evaluation, System Design and Practical Implementation

    Get PDF
    Two key technology enablers for next-generation networks are examined in this thesis, namely Cognitive Radio (CR) and Spectrally Efficient Frequency Division Multiplexing (SEFDM). The first part proposes the use of traffic prediction in CR systems to improve the Quality of Service (QoS) for CR users. A framework is presented which allows CR users to capture a frequency slot in an idle licensed channel occupied by primary users. This is achieved by using CR to sense and select target spectrum bands, combined with traffic prediction to determine the optimum channel-sensing order. The latter part of this thesis considers the design, practical implementation, and performance evaluation of SEFDM. The key challenge that arises in SEFDM is the self-created interference, which complicates the design of receiver architectures. Previous work has focused on the development of sophisticated detection algorithms; however, these suffer from impractical computational complexity. Consequently, the aim of this work is two-fold: first, to reduce the complexity of existing algorithms to make them better suited for application in the real world; second, to develop hardware prototypes to assess the feasibility of employing SEFDM in practical systems. The impact of oversampling and fixed-point effects on the performance of SEFDM is initially determined, followed by the design and implementation of linear detection techniques using Field Programmable Gate Arrays (FPGAs). The performance of these FPGA-based linear receivers is evaluated in terms of throughput, resource utilisation, and Bit Error Rate (BER). Finally, variants of the Sphere Decoding (SD) algorithm are investigated to ameliorate the error performance of SEFDM systems with a targeted reduction in complexity. The Fixed SD (FSD) algorithm is implemented on a Digital Signal Processor (DSP) to measure its computational complexity. Modified sorting and decomposition strategies are then applied to this FSD algorithm, offering trade-offs between execution speed and BER.
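
    To make the SEFDM self-interference and linear detection concrete, the sketch below builds the non-orthogonal modulation matrix for a compression factor alpha and applies zero-forcing (pseudo-inverse) detection, the simplest of the linear receivers mentioned above; the block size and alpha are illustrative choices.

        import numpy as np

        def sefdm_matrix(n, alpha=0.8):
            """SEFDM modulation matrix: an IDFT whose subcarrier spacing is
            compressed by alpha < 1, which makes the carriers non-orthogonal
            and creates the self-interference discussed above."""
            k = np.arange(n)
            return np.exp(2j * np.pi * alpha * np.outer(k, k) / n) / np.sqrt(n)

        # Zero-forcing detection of one noiseless SEFDM block.
        n, alpha = 16, 0.8
        F = sefdm_matrix(n, alpha)
        s = np.random.choice([-1, 1], n) + 1j * np.random.choice([-1, 1], n)  # QPSK
        r = F @ s                                  # transmitted block (no noise)
        zf = np.linalg.pinv(F) @ r                 # linear ZF equalization
        s_hat = np.sign(zf.real) + 1j * np.sign(zf.imag)
        assert np.allclose(s, s_hat)               # exact recovery without noise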

    Synchronization algorithms and architectures for wireless OFDM systems

    Get PDF
    Orthogonal frequency division multiplexing (OFDM) is a multicarrier modulation technique that has become a viable method for wireless communication systems due to its high spectral efficiency, immunity to multipath distortion, and flexibility to integrate with other techniques. However, the high peak-to-average power ratio and sensitivity to synchronization errors are the major drawbacks of OFDM systems. The algorithms and architectures for symbol timing and frequency synchronization are addressed in this thesis because of their critical requirements in the development and implementation of wireless OFDM systems. For frequency synchronization, two efficient carrier frequency offset (CFO) estimation methods based on the power and phase difference measurements between the subcarriers in consecutive OFDM symbols are presented, and the power difference measurement technique is mapped onto a reconfigurable hardware architecture. The performance of the considered CFO estimators is investigated in the presence of timing uncertainty. The power difference measurement approach is further investigated for timing synchronization in OFDM systems with constant modulus constellations. A new symbol timing estimator is proposed that measures the power difference either between adjacent subcarriers or between the same subcarrier in consecutive OFDM symbols. The proposed timing metric has been realized in feedforward and feedback configurations, and different implementation strategies have been considered to enhance performance and reduce complexity. Recently, multiple-input multiple-output (MIMO) wireless communication systems have received considerable attention; therefore, the proposed algorithms have also been extended for timing recovery and frequency synchronization in MIMO-OFDM systems. Unlike other techniques, the proposed timing and frequency synchronization architectures are totally blind in the sense that they do not require any information about the transmitted data, the channel state, or the signal-to-noise ratio (SNR). The proposed frequency synchronization architecture has low complexity because it can be implemented efficiently using the three-point parameter estimation approach. The simulation results confirm that the proposed algorithms provide accurate estimates of the synchronization parameters using a short observation window. In addition, the proposed synchronization techniques demonstrate robust performance over frequency-selective fading channels and significantly outperform other well-established methods, which will in turn benefit overall OFDM system performance. Furthermore, an architectural exploration for mapping the proposed frequency synchronization algorithm, in particular the CFO estimation based on power difference measurements, onto a reconfigurable computing architecture has been investigated. The proposed reconfigurable parallel and multiplexed-stream architectures with different implementation alternatives have been simulated, verified, and compared for field programmable gate array (FPGA) implementation using Xilinx's DSP design flow. EThOS - Electronic Theses Online Service. Ministry of Higher Education and Scientific Research (MOHSR) of Iraq. United Kingdom.
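
    As a reference point for the CFO estimation discussed above, the following is a textbook phase-difference estimator that measures the common rotation of subcarriers between consecutive OFDM symbols; it is related in spirit to, but not identical to, the thesis's power/phase-difference metrics, and the FFT and CP sizes are illustrative.

        import numpy as np

        def cfo_phase_diff(sym1, sym2, n_fft=64, n_cp=16):
            """Estimate residual CFO (in subcarrier spacings) from the common
            phase rotation between the same subcarriers of two consecutive
            OFDM symbols; rotation per symbol is 2*pi*eps*(N+Ncp)/N."""
            rot = np.sum(sym2 * np.conj(sym1))   # average per-subcarrier rotation
            return np.angle(rot) * n_fft / (2 * np.pi * (n_fft + n_cp))

        # Usage: flat channel, constant-modulus data, CFO of 0.05 spacings.
        eps = 0.05
        X = np.exp(1j * np.random.uniform(0, 2 * np.pi, 64))
        Y1 = X
        Y2 = X * np.exp(2j * np.pi * eps * (64 + 16) / 64)
        print(cfo_phase_diff(Y1, Y2))            # ~0.05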