
    Superimposed training for single carrier transmission in future mobile communications

    The number of wireless devices and the volume of wireless traffic have been increasing exponentially for the last ten years. This exponential growth is forecast to continue without saturation until 2020 and probably beyond. So far, network vendors and operators have tackled the problem by introducing new evolutions of cellular macro networks, each of which has increased the physical layer spectral efficiency. Unfortunately, the spectral efficiency of the physical layer is approaching the Shannon-Hartley limit and no longer leaves much room for improvement. However, when the overhead due to synchronization and channel estimation reference symbols is counted against physical layer spectral efficiency, we believe there is still room for improvement. In this thesis, we study the potential of superimposed training methods, especially data-dependent superimposed training, to boost the spectral efficiency of wideband single carrier communications even further. The main idea is that with superimposed training we can transmit more data symbols in the same time duration than with traditional time domain multiplexed training. In theory, more data symbols mean more data bits, which indicates higher throughput for the end user. In practice, nothing is free. With superimposed training we encounter self-interference between the training signal and the data signal. Therefore, we have to look for iterative receiver structures to separate the two, or to estimate both the desired data signal and the interfering component. In this thesis, we initiate studies to find out whether the superimposed training scheme can truly improve existing systems. We show that in certain scenarios we can achieve higher spectral efficiency, which maps directly to higher user throughput, at the cost of a higher signal processing burden in the receiver. In addition, we provide analytical tools for estimating the symbol or bit error ratio in the receiver for a given parametrization. The discussion leads us to the conclusion that several open topics remain for further study in optimizing the overhead of reference symbols in wireless communications. Superimposed training with data-dependent components may prove to provide extra throughput gain. Furthermore, the superimposed component may be used for, e.g., improved synchronization, low bit-rate signaling, or continuous tracking of neighbor cells. We believe that current systems could be improved by using superimposed training together with time domain multiplexed training.
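    To make the idea concrete, the following minimal sketch (not the thesis' exact algorithm; the block length, the power split rho, and the flat Rayleigh channel are our assumptions) superimposes a known training sequence on QPSK data, estimates the channel by correlating against the training, cancels the training-induced self-interference, and refines the estimate once with the detected data:

```python
# A minimal sketch of superimposed training with one turbo-like iteration.
# Block length, power split rho and the single-tap channel are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 2048        # block length (assumed)
rho = 0.9       # fraction of transmit power given to data (assumed)

# QPSK data plus a known unit-modulus training sequence in the SAME symbols
data = (rng.integers(0, 2, N) * 2 - 1 + 1j * (rng.integers(0, 2, N) * 2 - 1)) / np.sqrt(2)
train = np.exp(2j * np.pi * rng.random(N))           # known at the receiver
tx = np.sqrt(rho) * data + np.sqrt(1 - rho) * train

h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)  # flat Rayleigh channel
noise_std = np.sqrt(10 ** (-15 / 10) / 2)            # 15 dB SNR (assumed)
y = h * tx + noise_std * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Pass 1: correlate with the training; the zero-mean data self-interference
# averages out over the block.
h_hat = (np.conj(train) @ y) / (np.sqrt(1 - rho) * N)

# Cancel the training, detect the data, then refine the channel estimate
# with the detected symbols (one iteration of the iterative receiver idea).
y_d = y - h_hat * np.sqrt(1 - rho) * train
eq = y_d / (h_hat * np.sqrt(rho))
d_hat = (np.sign(eq.real) + 1j * np.sign(eq.imag)) / np.sqrt(2)
x_hat = np.sqrt(rho) * d_hat + np.sqrt(1 - rho) * train
h_hat = (np.conj(x_hat) @ y) / np.sum(np.abs(x_hat) ** 2)

print("symbol errors:", int(np.sum(d_hat != data)),
      "| channel estimation error:", abs(h - h_hat))
```

    The power split rho is the central design knob: power spent on the superimposed training improves the channel estimate but is stolen from the data, which is the "nothing is free" trade-off the thesis quantifies.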

    Performance enhancement for LTE and beyond systems

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Wireless communication systems have undergone fast development in recent years. Based on GSM/EDGE and UMTS/HSPA, the 3rd Generation Partnership Project (3GPP) specified the Long Term Evolution (LTE) standard to cope with rapidly increasing demands on capacity, coverage, and data rate. To achieve this goal, several key techniques have been adopted by LTE, such as Multiple-Input and Multiple-Output (MIMO), Orthogonal Frequency-Division Multiplexing (OFDM), and the heterogeneous network (HetNet). However, these techniques have some inherent drawbacks. A direct conversion architecture is adopted to provide a simple, low cost transmitter solution, but I/Q imbalance arises from the imperfection of circuit components; the orthogonality of OFDM is vulnerable to carrier frequency offset (CFO) and sampling frequency offset (SFO); and the doubly selective channel can severely deteriorate receiver performance. In addition, the deployment of HetNets, which permit the co-existence of macro and pico cells, incurs inter-cell interference for cell edge users. Together, these factors significantly degrade system performance. This dissertation investigates key techniques that mitigate the above problems. First, I/Q imbalance in the wideband transmitter is studied and a self-IQ-demodulation based compensation scheme for frequency-dependent (FD) I/Q imbalance is proposed. It combats the FD I/Q imbalance by using the internal diode of the transmitter and a specially designed test signal, without any external calibration instruments or an internal low-IF feedback path. Instrument tests show that the proposed scheme improves signal quality by 10 dB in terms of image rejection ratio (IRR). In addition to I/Q imbalance, the system suffers from CFO, SFO and the frequency-time selective channel. To mitigate these, a hybrid optimum OFDM receiver with a decision feedback equalizer (DFE) is proposed to cope with the CFO, SFO and doubly selective channel. The algorithm first estimates the CFO and channel frequency response (CFR) in a coarse estimation stage, with the help of hybrid classical timing and frequency synchronization algorithms. Afterwards, a pilot-aided polynomial interpolation channel estimate, combined with a low complexity DFE scheme based on the minimum mean squared error (MMSE) criterion, is developed to alleviate the impact of the residual SFO, CFO, and Doppler effect. A subspace-based signal-to-noise ratio (SNR) estimation algorithm is proposed to estimate the SNR in the doubly selective channel; this provides prior knowledge for the MMSE-DFE and for automatic modulation and coding (AMC). Simulation results show that the proposed estimation algorithm significantly improves system performance. To speed up the algorithm verification process, an FPGA based co-simulation is developed. Inter-cell interference caused by the co-existence of macro and pico cells has a large impact on system performance. Although the almost blank subframe (ABS) has been proposed to mitigate this problem, the residual control signals in the ABS still inevitably cause interference. Hence, a cell-specific reference signal (CRS) interference cancellation algorithm, utilizing the information in the ABS, is proposed.
    First, the timing and carrier frequency offset of the interference signal are compensated by utilizing the cross-correlation properties of the synchronization signal. Afterwards, the reference signal is generated locally and the channel response is estimated by making use of channel statistics. The interference signal is then reconstructed from the estimates of the channel, timing and carrier frequency offset, and is mitigated by subtracting this estimate and applying LLR puncturing. According to simulation results for different channel scenarios, the block error rate (BLER) performance is notably improved by this algorithm. The proposed techniques provide low cost, low complexity solutions for LTE and beyond systems. The simulations and measurements show that good overall system performance can be achieved.
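    As a rough illustration of the reconstruct-and-subtract step (timing and frequency offsets are assumed already compensated; the sizes, the pilot spacing, and the moving-average smoother standing in for the statistics-based channel estimate are all our assumptions, not the thesis' exact design):

```python
# Hedged sketch: cancel a known interfering reference signal by estimating
# its channel on the pilot resource elements and subtracting the rebuild.
import numpy as np

rng = np.random.default_rng(1)
n_sc = 600                                  # subcarriers (illustrative)
pilot_pos = np.arange(0, n_sc, 6)           # CRS-like comb, every 6th subcarrier

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(qpsk, n_sc)                  # serving-cell data (always on)
p = np.zeros(n_sc, complex)
p[pilot_pos] = rng.choice([1.0, -1.0], pilot_pos.size)  # interferer CRS, generated locally

h_s = np.ones(n_sc, complex)                # serving channel (assumed flat, known)
h_i = 0.8 * np.exp(2j * np.pi * 0.0005 * np.arange(n_sc))  # slowly varying interferer channel
noise = 0.05 * (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc))
y = h_s * x + h_i * p + noise

# LS estimate at the CRS positions; the serving data acts as contamination,
# so smooth across pilots (the thesis exploits channel statistics; a moving
# average is a simple stand-in for that step).
ls = y[pilot_pos] / p[pilot_pos]
k = 8
smooth = np.convolve(ls, np.ones(k) / k, mode="same")
h_i_hat = np.interp(np.arange(n_sc), pilot_pos, smooth)

y_clean = y - h_i_hat * p                   # reconstruct and subtract the CRS
before = np.mean(np.abs(h_i * p)[pilot_pos] ** 2)
after = np.mean(np.abs((h_i - h_i_hat) * p)[pilot_pos] ** 2)
print(f"CRS interference power before/after cancellation: {before:.3f} / {after:.3f}")
```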

    Iterative Detection for Overloaded Multiuser MIMO OFDM Systems

    Inspired by multiuser detection (MUD) and the ‘Turbo principle’, this thesis deals with iterative interference cancellation (IIC) in overloaded multiuser multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. Linear detection schemes, such as zero forcing (ZF) and minimum mean square error (MMSE), cannot be used for an overloaded system because of the rank deficiency of the channel matrix, while the optimal approach, maximum likelihood (ML) detection, has high computational complexity. In this thesis, an IIC multiuser detection scheme with a matched filter and convolutional codes is considered; the main aim of this combination is a low complexity receiver. Parallel interference cancellation (PIC) is employed to improve multiuser receiver performance in overloaded systems. A log-likelihood ratio (LLR) converter is proposed to further improve the reliability of the soft values derived from the output of the matched filter. Simulation results show that the bit error rate (BER) performance of this method is close to the optimal approach for a two-user system. However, for systems with four or more users, the BER performance exhibits an error floor. For this case, a channel selection scheme is proposed that distinguishes good channels from bad ones using mutual information based on the extrinsic information transfer (EXIT) chart; the mutual information can be read from a look-up table, which greatly reduces the complexity. For those ‘bad’ channels identified by the channel selection, we introduce two adaptive transmission methods: one uses a lower code rate, and the other uses multiple transmissions. The use of an IIC receiver with interleave-division multiple access (IDMA) to further improve the BER performance without any channel selection is also investigated, and it is shown that this approach can remove the error floor. Finally, the influence of channel estimation accuracy on the IIC is investigated; pilot-based Wiener filter channel estimation is used to test and verify how much the IIC is influenced by the channel accuracy.
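    A toy sketch of the PIC idea for an overloaded system follows (BPSK, a single flat channel use, the tanh soft mapping, and the iteration count are illustrative assumptions; the thesis couples PIC with matched filtering, convolutional decoding, and the proposed LLR converter):

```python
# Parallel interference cancellation for an overloaded system:
# K = 4 users observed through M = 2 receive antennas, so linear ZF/MMSE
# would face a rank-deficient channel matrix.
import numpy as np

rng = np.random.default_rng(2)
K, M, n_iter = 4, 2, 8
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
b = rng.choice([-1.0, 1.0], K)                       # one BPSK symbol per user
y = H @ b + 0.1 * (rng.normal(size=M) + 1j * rng.normal(size=M))

b_hat = np.zeros(K)                                  # soft symbol estimates
for _ in range(n_iter):
    b_new = np.empty(K)
    for k in range(K):
        # Subtract the current estimates of ALL other users in parallel,
        # then matched-filter the residual with user k's signature.
        residual = y - H @ b_hat + H[:, k] * b_hat[k]
        z = (np.conj(H[:, k]) @ residual).real / np.linalg.norm(H[:, k]) ** 2
        b_new[k] = np.tanh(2.0 * z)                  # soft decision (assumed mapping)
    b_hat = b_new

print("tx bits :", b)
print("detected:", np.sign(b_hat))
```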

    Robust wireless sensor network for smart grid communication: modeling and performance evaluation

    Our planet is gradually heading towards an energy famine due to its growing population and industrialization. Increasing electricity consumption and prices, diminishing fossil fuels that harm the environment through their greenhouse gas emissions, and inefficient usage of existing energy supplies have caused serious network congestion problems in many countries in recent years. In addition to this overstressed situation, the electric power system today faces many challenges, such as high maintenance cost, aging equipment, lack of effective fault diagnostics, and power supply reliability, which further increase the possibility of system breakdown. Furthermore, integrating new renewable energy sources with the existing power plants, to provide an alternative way of producing electricity, has transformed the grid into a very large and complex system and raised new issues. To address these challenges, a new concept of next generation electric power system, called the "smart grid", has emerged, in which Information and Communication Technologies (ICTs) play the key role. For a reliable smart grid, monitoring and control of power system parameters in the transmission and distribution segments are crucial. This necessitates the deployment of a robust communication network within the power grid. Traditionally, power grid communications have been realized through wired technologies, including power line communication (PLC); however, their installation cost can be high, especially for remote control and monitoring applications. More recently, much research interest has been drawn to wireless communications for smart grid applications. In this regard, one of the most promising methods of smart grid monitoring explored in the literature is based on the wireless sensor network (WSN). Indeed, the collaborative nature of a WSN brings significant advantages over traditional wireless networks, including low cost, wider coverage, self-organization, and rapid deployment. Unfortunately, the harsh and hostile electric power system environment poses great challenges to the reliability of sensor node communications because of strong RF interference and a type of noise called impulsive noise. Building on the fundamentals of WSN-based smart grid communications and the possible impacts of impulsive noise on the reliability of sensor node communications, this dissertation aims to fill gaps in the existing research. Specifically, the contributions of this dissertation are threefold: (i) investigation and performance analysis of impulsive noise mitigation techniques for point-to-point single-carrier communication systems impaired by bursty impulsive noise; (ii) design and performance analysis of a collaborative WSN for smart grid communication that considers the RF noise model in the design process, with particular attention to how the time-correlation among noise samples can be taken into account; (iii) optimal minimum mean square error (MMSE) estimation of a physical phenomenon such as temperature, current, or voltage, typically modeled as a Gaussian source, in the presence of impulsive noise. In the first part, we compare and analyze the widely used non-linear methods, such as clipping, blanking, and combined clipping-blanking, for mitigating the noxious effects of bursty impulsive noise in point-to-point communication systems with low-density parity-check (LDPC) coded single-carrier transmission.
    While the performance of these mitigation techniques has been widely investigated for multi-carrier communication systems using orthogonal frequency division multiplexing (OFDM) transmission under memoryless impulsive noise, we note that OFDM is outperformed by its single-carrier counterpart when the impulses are very strong and/or occur frequently, which is likely in contemporary communication systems including smart grid communications. Likewise, the assumption of a memoryless noise model is not valid for many communication scenarios. Moreover, we propose log-likelihood ratio (LLR)-based impulsive noise mitigation for the considered scenario. We show that the memory property of the noise can be exploited in the LLR calculation through maximum a posteriori (MAP) detection. In this context, the provided simulation results highlight the superiority of the LLR-based mitigation scheme over the simple clipping/blanking schemes. The second contribution has two aspects: (i) we analyze the performance of a single-relay decode-and-forward (DF) cooperative relaying scheme over channels impaired by bursty impulsive noise. For this channel, the bit error rate (BER) of direct transmission and of a DF relaying scheme using M-PSK modulation in the presence of Rayleigh fading with a MAP receiver is derived; (ii) as a continuation of the single-relay collaborative WSN scheme, we propose a novel relay selection protocol for a multi-relay DF collaborative WSN that takes the bursty impulsive noise into account. The proposed protocol chooses the N’th best relay considering both the channel gains and the states of the impulsive noise on the source-relay and relay-destination links. To analyze the performance of the proposed protocol, we first derive closed-form expressions for the probability density function (PDF) of the received SNR. These PDFs are then used to derive closed-form expressions for the BER and the outage probability. Finally, we also derive asymptotic BER and outage expressions to quantify the diversity benefits. The obtained results show that the proposed receivers based on the MAP detection criterion are the most suitable for bursty impulsive noise environments, as they are designed according to the statistical behavior of the noise. Unlike the aforementioned contributions, which addressed the reliable detection of finite alphabets in the presence of bursty impulsive noise, the third part investigates optimal MMSE estimation of a scalar Gaussian source impaired by impulsive noise. In Chapter 5, MMSE optimal Bayesian estimation of a scalar Gaussian source in the presence of bursty impulsive noise is considered; in Chapter 6, we investigate the distributed estimation of a scalar Gaussian source in WSNs in the presence of Middleton class-A noise. From the obtained results we conclude that the proposed optimal MMSE estimator outperforms the linear MMSE estimator developed for the Gaussian channel.
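    The nonlinearities compared in the first part, and the Gaussian-mixture LLR that the MAP approach generalizes to noise with memory, can be sketched as follows (the thresholds, the noise parameters, and the i.i.d. impulse occurrences are our assumptions; the thesis models bursts, i.e., time-correlated impulse states):

```python
# Classic pre-detection nonlinearities for impulsive noise, plus a BPSK LLR
# under a two-state Gaussian-mixture noise model (the memoryless core of
# the LLR/MAP approach; the thesis extends it to noise with memory).
import numpy as np

def clip(y, T):
    """Limit the magnitude of y to T, keeping its sign/phase."""
    mag = np.abs(y)
    return np.where(mag > T, T * y / mag, y)

def blank(y, T):
    """Zero out samples whose magnitude exceeds T (blanked samples
    effectively become erasures)."""
    return np.where(np.abs(y) > T, 0.0, y)

def llr_mixture(y, s2_good, s2_bad, p_bad):
    """BPSK LLR when noise ~ (1-p_bad)*N(0, s2_good) + p_bad*N(0, s2_bad)."""
    def lik(r):
        return ((1 - p_bad) * np.exp(-r**2 / (2 * s2_good)) / np.sqrt(s2_good)
                + p_bad * np.exp(-r**2 / (2 * s2_bad)) / np.sqrt(s2_bad))
    return np.log(lik(y - 1.0)) - np.log(lik(y + 1.0))

rng = np.random.default_rng(3)
n = 12
bits = rng.choice([-1.0, 1.0], n)
bad = rng.random(n) < 0.1                             # impulse occurrences (i.i.d. here)
y = bits + rng.normal(size=n) * np.where(bad, 3.0, 0.1)

print("tx bits :", bits)
print("blanked :", np.sign(blank(y, 2.0)))            # 0 marks an erased sample
print("LLR sign:", np.sign(llr_mixture(y, 0.1**2, 3.0**2, 0.1)))
```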

    AN EFFICIENT INTERFERENCE AVOIDANCE SCHEME FOR DEVICE-TO-DEVICE ENABLED FIFTH GENERATION NARROWBAND INTERNET OF THINGS NETWORKS

    Narrowband Internet of Things (NB-IoT) is a low-power wide-area (LPWA) technology built on long-term evolution (LTE) functionalities and standardized by the 3rd-Generation Partnership Project (3GPP). Owing to its support for massive machine-type communication (mMTC) and for diverse IoT use cases with rigorous requirements in terms of connectivity, energy efficiency, reachability, reliability, and latency, NB-IoT has attracted the research community. However, as the capacity needs of the various IoT use cases expand, the many functionalities of the LTE evolved packet core (EPC) may become overburdened and suboptimal. Several research efforts are currently in progress to address these challenges. Accordingly, an overview of these efforts is given, with a specific focus on the optimized architecture of the LTE EPC functionalities, the 5G architectural design for NB-IoT integration, the enabling technologies necessary for 5G NB-IoT, 5G new radio (NR) coexistence with NB-IoT, and feasible architectural deployment schemes of NB-IoT with cellular networks. This thesis also presents cloud-assisted relaying with backscatter communication as part of a detailed study of the technical performance attributes and channel communication characteristics of the NB-IoT physical (PHY) and medium access control (MAC) layers, with a focus on 5G, and explores the numerous drawbacks that come with simulating these systems. The enabling market for NB-IoT, the benefits for several use cases, and the potential critical challenges associated with their deployment are all highlighted. Fortunately, the cyclic prefix orthogonal frequency division multiplexing (CP-OFDM) based waveform adopted by 3GPP NR for enhanced mobile broadband (eMBB) services does not prohibit the use of other waveforms in other services, such as the NB-IoT service for mMTC. The coexistence of 5G NR and NB-IoT must therefore be kept orthogonal (or quasi-orthogonal) to minimize the mutual interference that limits the degrees of freedom in the overall waveform design. Consequently, 5G coexistence with NB-IoT introduces a new interference challenge, distinct from that of the legacy network, even though the NR's coexistence with NB-IoT is believed to improve network capacity, expand the coverage of the user data rate, and enable more robust communication through frequency reuse. Interference may make channel estimation difficult for NB-IoT devices, limiting user performance and spectral efficiency. Various existing interference mitigation solutions either add to the network's overhead, computational complexity and delay, or are hampered by low data rate and coverage; such algorithms are unsuitable for an NB-IoT network owing to its low-complexity requirements. A device-to-device (D2D) communication based interference-control technique therefore becomes an effective strategy for addressing this problem. This thesis uses D2D communication to reduce the network bottleneck in dense 5G NB-IoT networks prone to interference. For D2D-enabled 5G NB-IoT systems, the thesis presents an interference-avoidance resource allocation scheme that considers the less favourable cell-edge NUEs. To reduce the algorithm's computational complexity and the interference power, the scheme divides the optimization problem into three sub-problems. First, in an orthogonal deployment technique using channel state information (CSI), the channel gain factor is leveraged by selecting a probable reuse channel with higher QoS control.
    Second, a bisection search is used to find the power control that maximizes the network sum rate, and third, the Hungarian algorithm is used to build a maximum bipartite matching strategy that chooses the optimal pairing pattern between the sets of NUEs and D2D pairs (a sketch of these two sub-problems follows below). According to the numerical results, the proposed approach improves the D2D sum rate and the overall network SINR of the 5G NB-IoT system. The maximum power constraint of the D2D pair, the D2D pair's location, the pico base station (PBS) cell radius, the number of potential reuse channels, and the cluster distance all impact the D2D pair's performance. The simulation results show SINR performance 28.35%, 31.33%, and 39% higher than the ARSAD, DCORA, and RRA algorithms when the number of NUEs is twice the number of D2D pairs, and 2.52%, 14.80%, and 39.89% higher than ARSAD, RRA, and DCORA when the numbers of NUEs and D2D pairs are equal. Likewise, a D2D sum rate increase of 9.23%, 11.26%, and 13.92% over ARSAD, DCORA, and RRA is achieved when the number of NUEs is twice the number of D2D pairs, and of 1.18%, 4.64% and 15.93% over ARSAD, RRA and DCORA, respectively, with an equal number of NUEs and D2D pairs. These results demonstrate the efficacy of the proposed scheme. The thesis also addresses the case where the cell-edge NUE's QoS is critical, with challenges such as long-distance transmission, delays, low bandwidth utilization, and high system overhead affecting 5G NB-IoT network performance; in this case, most cell-edge NUEs boost their transmit power to maximize network throughput. Integrating a cooperative D2D relaying technique into 5G NB-IoT heterogeneous network (HetNet) uplink spectrum sharing increases the system's spectral efficiency but also the interference power, further degrading the network. Using a max-max SINR (Max-SINR) approach, this thesis proposes an interference-aware D2D relaying strategy for improving the QoS of a cell-edge NUE so as to achieve optimum system performance. The Lagrangian-dual technique is used to optimize the transmit power of the cell-edge NUE to the relay under an average interference power constraint, while the relay uses a fixed transmit power towards the NB-IoT base station (NBS). To choose an optimal D2D relay node, the channel-to-interference-plus-noise ratio (CINR) of all available D2D relays is used to maximize the minimum cell-edge NUE's data rate while ensuring that the cellular NUEs' QoS requirements are satisfied. Best harmonic mean, best-worst, half-duplex relay selection, and a D2D communication scheme were among the other relaying selection strategies studied. The simulation results reveal that the Max-SINR selection scheme outperforms all the other selection schemes, except the D2D communication scheme, thanks to the high channel gain between the two communicating devices. The proposed algorithm achieves 21.27% SINR performance, nearly identical to the half-duplex scheme, but outperforms the best-worst and harmonic selection techniques by 81.27% and 40.29%, respectively. As the number of D2D relays increases, the capacity increases by 14.10% and 47.19% over the harmonic and half-duplex techniques, respectively. Finally, the thesis presents future research directions on interference control, together with open research questions on PHY and MAC properties and the SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis presented in Chapter 2, to encourage further study of 5G NB-IoT.
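    The following sketch illustrates the second and third sub-problems (the rate-gain matrix is a random placeholder and the interference cap in the bisection is illustrative; in the thesis these values come from the CSI-based channel pre-selection and the sum-rate objective):

```python
# Hedged sketch: bisection power control plus Hungarian matching between
# NUEs and D2D pairs, using scipy's linear_sum_assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_power_bisect(g_int, i_max, p_max, tol=1e-9):
    """Largest D2D transmit power whose interference g_int * p stays below
    i_max; the interference is monotone in p, so bisection applies."""
    lo, hi = 0.0, p_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g_int * mid <= i_max:
            lo = mid          # feasible: push the power up
        else:
            hi = mid          # infeasible: back off
    return lo

p = max_power_bisect(g_int=0.3, i_max=0.1, p_max=1.0)
print(f"admissible D2D power: {p:.4f}")   # ~ min(i_max / g_int, p_max)

# Hungarian matching: negate the gains since linear_sum_assignment
# minimizes total cost, while we want to maximize the total rate gain.
rng = np.random.default_rng(4)
n_nue, n_d2d = 6, 4
rate_gain = rng.uniform(0.1, 2.0, size=(n_nue, n_d2d))  # bits/s/Hz, placeholder
rows, cols = linear_sum_assignment(-rate_gain)
for nue, d2d in zip(rows, cols):
    print(f"NUE {nue} shares its channel with D2D pair {d2d} "
          f"(gain {rate_gain[nue, d2d]:.2f})")
print("total matched gain:", round(rate_gain[rows, cols].sum(), 2))
```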

    one6G white paper, 6G technology overview: Second Edition, November 2022

    6G is expected to address the demands for mobile networking services in 2030 and beyond. These are characterized by a variety of diverse, often conflicting requirements, from technical ones such as extremely high data rates, an unprecedented scale of communicating devices, high coverage, low communication latency, and flexibility of extension, to non-technical ones such as enabling the sustainable growth of society as a whole, e.g., through the energy efficiency of deployed networks. On the one hand, 6G is expected to fulfil all these individual requirements, thus extending the limits set by the previous generations of mobile networks (e.g., ten times lower latency, or a hundred times higher data rates, than 5G). On the other hand, 6G should also enable use cases characterized by combinations of these requirements never seen before (e.g., both extremely high data rates and extremely low communication latency). In this white paper, we give an overview of the key enabling technologies that constitute the pillars for the evolution towards 6G. They include: terahertz frequencies (Section 1), 6G radio access (Section 2), next generation MIMO (Section 3), integrated sensing and communication (Section 4), distributed and federated artificial intelligence (Section 5), intelligent user plane (Section 6), and flexible programmable infrastructures (Section 7). For each enabling technology, we first give the background on how and why the technology is relevant to 6G, backed up by a number of relevant use cases. After that, we describe the technology in detail, outline the key problems and difficulties, and give a comprehensive overview of the state of the art in that technology. 6G is, however, not limited to these seven technologies; they merely represent our current understanding of the technological environment in which 6G is being born. Future versions of this white paper may include other relevant technologies, as well as discuss how these technologies can be glued together into a coherent system.