
    Estimation and detection techniques for doubly-selective channels in wireless communications

    A fundamental problem in communications is the estimation of the channel. The signal transmitted through a communications channel undergoes distortions, so that it is often received in an unrecognizable form at the receiver. The receiver must expend significant signal processing effort to decode the transmitted signal from this received signal. This signal processing requires knowledge of how the channel distorts the transmitted signal, i.e. channel knowledge. To maintain a reliable link, the channel must be estimated and tracked by the receiver. Channel estimation at the receiver often proceeds by transmission of a signal called the 'pilot', which is known a priori to the receiver. The receiver forms its estimate of the channel from the received signal and the pilot, i.e. from how this known signal is distorted by the channel. The design of the pilot is a function of the modulation, the type of training and the channel. [Continues.]
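    As a concrete illustration of the pilot idea described above, here is a minimal sketch of least-squares channel estimation from a known pilot, assuming a frequency-flat single-antenna channel with additive noise; the doubly-selective channels treated in the thesis require far richer models, and all values below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known pilot sequence (QPSK symbols) agreed on by transmitter and receiver.
N = 64
pilot = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

# Unknown channel: a single complex gain (frequency-flat assumption) plus noise.
h_true = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
received = h_true * pilot + noise

# Least-squares estimate: project the received samples onto the known pilot.
h_hat = np.vdot(pilot, received) / np.vdot(pilot, pilot)

print("true channel     :", h_true)
print("estimated channel:", h_hat)
```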

    Single-Frequency Network Terrestrial Broadcasting with 5GNR Numerology

    The abstract is in the attachment.

    Timing and Carrier Synchronization in Wireless Communication Systems: A Survey and Classification of Research in the Last 5 Years

    Timing and carrier synchronization is a fundamental requirement for any wireless communication system to work properly. Timing synchronization is the process by which a receiver node determines the correct instants of time at which to sample the incoming signal. Carrier synchronization is the process by which a receiver aligns the frequency and phase of its local carrier oscillator with those of the received signal. In this paper, we survey the literature over the last five years (2010–2014) and present a comprehensive literature review and classification of the recent research progress in achieving timing and carrier synchronization in single-input single-output (SISO), multiple-input multiple-output (MIMO), cooperative relaying, and multiuser/multicell interference networks. Considering both single-carrier and multi-carrier communication systems, we survey and categorize the timing and carrier synchronization techniques proposed for the different communication systems, focusing on the system model assumptions for synchronization, the synchronization challenges, and the state-of-the-art synchronization solutions and their limitations. Finally, we envision some future research directions.
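    As one concrete example of carrier synchronization (illustrative only, not drawn from any particular surveyed work), the sketch below estimates a carrier frequency offset from a preamble built of two identical halves using the classic autocorrelation argument; the sample rate, preamble length and offset are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 1e6           # sample rate in Hz (assumed)
L = 128            # half-preamble length in samples (assumed)
cfo_true = 3.2e3   # carrier frequency offset in Hz (assumed)

# Preamble with two identical halves, as used by autocorrelation-based estimators.
half = np.exp(1j * 2 * np.pi * rng.random(L))
tx = np.concatenate([half, half])

# Channel effect: the CFO appears as a progressive phase rotation, plus noise.
n = np.arange(2 * L)
rx = tx * np.exp(1j * 2 * np.pi * cfo_true * n / fs)
rx += 0.05 * (rng.standard_normal(2 * L) + 1j * rng.standard_normal(2 * L))

# The second half equals the first half rotated by 2*pi*cfo*L/fs, so the angle
# of the correlation between the two halves reveals the frequency offset.
corr = np.vdot(rx[:L], rx[L:])
cfo_hat = np.angle(corr) * fs / (2 * np.pi * L)

print("true CFO (Hz):     ", cfo_true)
print("estimated CFO (Hz):", cfo_hat)
```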

    Channelization, Link Adaptation and Multi-antenna Techniques for OFDM(A) Based Wireless Systems


    Evaluation of Beamforming Algorithms for Massive MIMO

    A massive MIMO relay system is an extension of Multiple-Input Multiple-Output (MIMO) that enables multiple users and antennas to communicate with each other for data sharing. A relay system with multiple antennas has an advantage over a simple MIMO system, as it interconnects the base station and the users for the sharing of information, and both the BS and the users are independent of many antennas. High-data-rate applications such as machine-to-machine communication and wireless sensor networks suffer from transmit power loss, limited channel capacity and mismanagement of data. The demand for massive MIMO relay systems is opening the door for latency-critical wireless network applications in which transmit power must be saved and accurate information transmitted over the wireless network. Because of the loss in transmit power and the mismanagement of information over wireless networks, it is difficult to obtain good performance. Different approaches have been made to optimize the overall transmit power of communication systems; one of these approaches is explained in this thesis. The focus of the thesis is the use of the beamforming algorithms Maximum Ratio Combining (MRC) and Zero-Forcing (ZF) to maximize the overall capacity of the MIMO system. These algorithms were evaluated in different scenarios to assess their performance and behavior under different network conditions, and various use cases were used to analyze them. The performance of both algorithms was observed in scenarios such as varying the numbers of transmit and receive antennas and the modulation scheme. The Singular Value Decomposition (SVD) method was used on the main MIMO channel to optimize the channel capacity; SVD divides the MIMO channel into different subchannels and optimizes the capacity of each individual subchannel. In summary, the results showed that in the CP-OFDM environment, MRC and ZF gave better BER performance than a single-antenna system as the number of RX antennas increased. On the other hand, efficiency with higher-order modulation schemes was not good, while performance with lower-order modulation schemes was satisfactory.
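    As a rough sketch of the two combiners evaluated in the thesis, the code below forms MRC and ZF receive combiners for a random uplink channel with M base-station antennas and K single-antenna users, and also lists the SVD subchannel gains mentioned above; the dimensions and the i.i.d. Rayleigh channel are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(2)

M, K = 64, 4   # base-station antennas, single-antenna users (assumed sizes)

# i.i.d. Rayleigh uplink channel: column k is user k's channel vector.
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# Transmitted user symbols (QPSK) and received signal with noise.
s = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)
y = H @ s + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# Maximum Ratio Combining: match each user's channel, ignoring interference.
s_mrc = H.conj().T @ y / np.linalg.norm(H, axis=0) ** 2

# Zero-Forcing: the pseudo-inverse of H cancels inter-user interference.
s_zf = np.linalg.pinv(H) @ y

# SVD splits the MIMO channel into parallel subchannels (as used in the thesis).
subchannel_gains = np.linalg.svd(H, compute_uv=False)

print("transmitted:", np.round(s, 2))
print("MRC output :", np.round(s_mrc, 2))
print("ZF output  :", np.round(s_zf, 2))
print("SVD subchannel gains:", np.round(subchannel_gains, 2))
```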

    Modeling and Digital Mitigation of Transmitter Imperfections in Radio Communication Systems

    To satisfy the continuously growing demands for higher data rates, modern radio communication systems employ larger bandwidths and more complex waveforms. Furthermore, radio devices are expected to support a rich mixture of standards such as cellular networks, wireless local-area networks, wireless personal-area networks, positioning and navigation systems, etc. In general, a "smart" device should be flexible enough to support all these requirements while being portable, cheap, and energy efficient. These seemingly conflicting expectations impose stringent radio frequency (RF) design challenges which, in turn, call for a proper understanding of the impairments as well as cost-effective solutions to address them. The direct-conversion transceiver architecture is an appealing analog front-end for flexible and multi-standard radio systems. However, it is sensitive to various circuit impairments, and modern communication systems based on multi-carrier waveforms such as Orthogonal Frequency Division Multiplexing (OFDM) and Orthogonal Frequency Division Multiple Access (OFDMA) are particularly vulnerable to RF front-end non-idealities.

    This thesis addresses the modeling and digital mitigation of selected transmitter (TX) RF impairments in radio communication devices. The contributions can be divided into two areas. First, new modeling and digital mitigation techniques are proposed for two essential front-end impairments in direct-conversion-architecture-based OFDM and OFDMA systems, namely in-phase and quadrature (I/Q) imbalance and carrier frequency offset (CFO). Both joint and de-coupled estimation and compensation schemes for frequency-selective TX I/Q imbalance and channel distortions are proposed for OFDM systems, to be adopted on the receiver side. Then, in the context of uplink OFDMA and Single-Carrier FDMA (SC-FDMA), which are the air-interface technologies of the 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) and LTE-Advanced systems, joint estimation and equalization techniques for RF impairments and channel distortions are proposed. Here, the challenging multi-user uplink scenario with unequal received power levels is investigated, where I/Q imbalance causes inter-user interference. A joint mirror-subcarrier-processing-based minimum mean-square error (MMSE) equalizer with an arbitrary number of receiver antennas is formulated to effectively handle mirror sub-band users of different power levels. Furthermore, the joint channel and impairment filter responses are efficiently approximated with polynomial-based basis-function models, and the parameters of the basis functions are estimated with reference signals conforming to the LTE uplink sub-frame structure. The resulting receiver concept adopting the proposed techniques enables improved link performance without modifying the design of RF transceivers.

    Second, digital baseband mitigation solutions are developed for the TX leakage-signal-induced self-interference in frequency division duplex (FDD) transceivers. In FDD transceivers, a duplexer is used to connect the TX and receiver (RX) chains to a common antenna while also providing the receiver chain with isolation against the powerful transmit signal. In general, the continuous miniaturization of hardware and the adoption of larger bandwidths through carrier-aggregation-type noncontiguous allocations complicate achieving sufficient TX-RX isolation. Here, two different effects of the transmitter leakage signal are investigated. The first is TX out-of-band (OOB) emissions and TX spurious emissions at the own receiver band, due to transmitter nonlinearity; the second is the nonlinearity of the down-converter in the RX, which generates second-order intermodulation distortion (IMD2) due to the TX in-band leakage signal. This work shows that the transmitter leakage-signal-induced interference depends on an equivalent leakage channel that models the TX path non-idealities, the duplexer filter responses, and the RX path non-idealities. The work proposes algorithms that operate in the digital baseband of the transceiver to estimate the TX-RX non-idealities and the duplexer filter responses, and to subsequently regenerate and cancel the self-interference, thereby potentially relaxing the TX-RX isolation requirements as well as increasing transceiver flexibility.

    Overall, this thesis provides useful signal models for understanding the implications of different RF non-idealities and proposes compensation solutions to cope with certain RF impairments. This is complemented with extensive computer simulations and practical RF measurements to validate their application in real-world radio transceivers.
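    To make the I/Q imbalance discussion concrete, the following is a minimal sketch of one common frequency-independent baseband model, in which gain and phase mismatch map the ideal signal x to g1*x + g2*conj(x) and the image rejection ratio (IRR) follows from |g1|^2/|g2|^2; the mismatch values are arbitrary assumptions and the snippet does not reproduce the thesis' frequency-selective estimators.

```python
import numpy as np

# Assumed transmitter mismatch between the I and Q branches.
gain_mismatch = 1.05             # amplitude imbalance (ideal: 1.0)
phase_mismatch = np.deg2rad(3)   # phase imbalance in radians (ideal: 0)

# One common frequency-independent I/Q imbalance parameterization:
#   y = g1 * x + g2 * conj(x), where conj(x) is the mirror-frequency image.
g1 = (1 + gain_mismatch * np.exp(1j * phase_mismatch)) / 2
g2 = (1 - gain_mismatch * np.exp(-1j * phase_mismatch)) / 2

x = np.exp(1j * 2 * np.pi * 0.1 * np.arange(1024))  # ideal complex tone
y = g1 * x + g2 * np.conj(x)                        # impaired transmit signal

# Image rejection ratio: desired-to-image power ratio in dB.
irr_db = 10 * np.log10(np.abs(g1) ** 2 / np.abs(g2) ** 2)
print("IRR (dB):", round(irr_db, 1))
```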

    Performance enhancement for LTE and beyond systems

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy.

    Wireless communication systems have undergone fast development in recent years. Based on GSM/EDGE and UMTS/HSPA, the 3rd Generation Partnership Project (3GPP) specified the Long Term Evolution (LTE) standard to cope with rapidly increasing demands, including capacity, coverage, and data rate. To achieve this goal, several key techniques have been adopted by LTE, such as Multiple-Input Multiple-Output (MIMO), Orthogonal Frequency-Division Multiplexing (OFDM), and the heterogeneous network (HetNet). However, these techniques have some inherent drawbacks. The direct-conversion architecture is adopted to provide a simple, low-cost transmitter solution, but the problem of I/Q imbalance arises due to the imperfection of circuit components; the orthogonality of OFDM is vulnerable to carrier frequency offset (CFO) and sampling frequency offset (SFO); and the doubly-selective channel can severely deteriorate receiver performance. In addition, the deployment of HetNet, which permits the co-existence of macro and pico cells, incurs inter-cell interference for cell-edge users. The impact of these factors results in significant degradation of system performance. This dissertation aims to investigate key techniques which can be used to mitigate the above problems.

    First, I/Q imbalance in the wideband transmitter is studied and a self-IQ-demodulation-based compensation scheme for frequency-dependent (FD) I/Q imbalance is proposed. This combats the FD I/Q imbalance by using the internal diode of the transmitter and a specially designed test signal, without any external calibration instruments or an internal low-IF feedback path. Instrument test results show that the proposed scheme can enhance signal quality by 10 dB in terms of image rejection ratio (IRR).

    In addition to I/Q imbalance, the system suffers from CFO, SFO and the frequency-time selective channel. To mitigate this, a hybrid optimum OFDM receiver with a decision feedback equalizer (DFE) is proposed to cope with the CFO, SFO and doubly-selective channel. The algorithm first estimates the CFO and channel frequency response (CFR) in a coarse estimation stage, with the help of hybrid classical timing and frequency synchronization algorithms. Afterwards, a pilot-aided polynomial-interpolation channel estimation, combined with a low-complexity DFE scheme based on the minimum mean squared error (MMSE) criterion, is developed to alleviate the impact of the residual SFO, CFO, and Doppler effect. A subspace-based signal-to-noise ratio (SNR) estimation algorithm is proposed to estimate the SNR in the doubly-selective channel; this provides prior knowledge for the MMSE-DFE and for adaptive modulation and coding (AMC). Simulation results show that the proposed estimation algorithm significantly improves system performance. In order to speed up the algorithm verification process, an FPGA-based co-simulation is developed.

    Inter-cell interference caused by the co-existence of macro and pico cells has a big impact on system performance. Although the almost blank subframe (ABS) has been proposed to mitigate this problem, the residual control signal in the ABS still inevitably causes interference. Hence, a cell-specific reference signal (CRS) interference cancellation algorithm, utilizing the information in the ABS, is proposed. First, the timing and carrier frequency offset of the interference signal are compensated by utilizing the cross-correlation properties of the synchronization signal. Afterwards, the reference signal is generated locally and the channel response is estimated by making use of channel statistics. Then, the interference signal is reconstructed based on the previous estimates of the channel, timing and carrier frequency offset, and the interference is mitigated by subtracting this estimate of the interference signal and by LLR puncturing. According to simulation results for different channel scenarios, the block error rate (BLER) performance is notably improved by this algorithm.

    The proposed techniques provide low-cost, low-complexity solutions for LTE and beyond systems. The simulations and measurements show that good overall system performance can be achieved.
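    The reconstruct-and-subtract step described above can be sketched generically as follows, assuming a flat-fading interference channel and a simple least-squares estimate over locally known reference symbols; this is only an illustration of the cancellation principle and does not reproduce the CRS-specific processing, channel-statistics-based estimation or LLR puncturing of the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

N = 256
# Desired-cell signal, locally regenerated interfering reference symbols, channels.
desired = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
ref = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
h_d = 0.8 * np.exp(1j * 0.3)   # desired-link channel (assumed flat)
h_i = 0.6 * np.exp(-1j * 1.1)  # interfering-link channel (assumed flat)

noise = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
rx = h_d * desired + h_i * ref + noise

# Estimate the interferer's channel from the locally generated reference symbols
# (least squares), then reconstruct the interference and subtract it.
h_i_hat = np.vdot(ref, rx) / np.vdot(ref, ref)
rx_clean = rx - h_i_hat * ref

interf_before = np.mean(np.abs(h_i * ref) ** 2)
interf_after = np.mean(np.abs((h_i - h_i_hat) * ref) ** 2)
print("interference suppressed by (dB):",
      round(10 * np.log10(interf_before / interf_after), 1))
```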