11 research outputs found

    Unitary Precoding and Basis Dependency of MMSE Performance for Gaussian Erasure Channels

    We consider the transmission of a Gaussian vector source over a multi-dimensional Gaussian channel where a random or a fixed subset of the channel outputs are erased. Within a setup where the only encoding operation allowed is a linear unitary transformation of the source, we investigate the MMSE performance, both on average and in terms of guarantees that hold with high probability as a function of the system parameters. Under the performance criterion of average MMSE, we establish necessary conditions that the optimal unitary encoders must satisfy and present explicit solutions for a class of settings. For random sampling of signals that have a low number of degrees of freedom, we present MMSE bounds that hold with high probability. Our results illustrate how the spread of the eigenvalue distribution and the unitary transformation contribute to these performance guarantees. The performance of the discrete Fourier transform (DFT) is also investigated. As a benchmark, we study the equidistant sampling of circularly wide-sense stationary (c.w.s.s.) signals and present an explicit error expression that quantifies the effects of the sampling rate and of the eigenvalue distribution of the covariance matrix of the signal. These findings may be useful in understanding the geometric dependence of signal uncertainty in a stochastic process. In particular, unlike information-theoretic measures such as entropy, we highlight the basis dependence of uncertainty in a signal from another perspective. The restriction to unitary encodings exposes the most and the least favorable signal bases for estimation. Comment: Accepted for publication in IEEE Transactions on Information Theory.
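    The central quantity above is the MMSE of estimating a Gaussian vector from a surviving subset of unitarily encoded, noisy channel outputs. The following sketch evaluates it numerically for a toy covariance, the DFT as the encoder, and a random erasure pattern; the dimensions, eigenvalue profile, and noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mmse_after_erasure(K_x, U, kept, noise_var):
    """MMSE of estimating x ~ N(0, K_x) from y = (U x)[kept] + noise."""
    A = U[kept, :]                                   # rows of the encoder that survive erasure
    C_y = A @ K_x @ A.conj().T + noise_var * np.eye(len(kept))
    C_xy = K_x @ A.conj().T
    return np.real(np.trace(K_x - C_xy @ np.linalg.solve(C_y, C_xy.conj().T)))

n = 8
rng = np.random.default_rng(0)
K_x = np.diag(np.linspace(0.1, 2.0, n))              # toy eigenvalue spread
U = np.fft.fft(np.eye(n)) / np.sqrt(n)               # DFT as the unitary encoder
kept = rng.choice(n, size=5, replace=False)          # random erasure: 5 of 8 outputs survive
print(mmse_after_erasure(K_x, U, kept, noise_var=0.01))
```

    Averaging this quantity over many random erasure patterns, or over different unitary encoders, reproduces numerically the kind of comparison the paper carries out analytically.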

    Lattice Erasure Codes of Low Rank with Noise Margins

    We consider the following generalization of an (n, k) MDS code for application to an erasure channel with additive noise. Like an MDS code, our code is required to be decodable from any k received symbols in the absence of noise. In addition, we require that the noise margin for every allowable erasure pattern be as large as possible and that the code satisfy a power constraint. In this paper we derive performance bounds and present a few designs of low-rank lattice codes for an additive noise channel with erasures.
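    As a rough illustration of the design objective (not one of the paper's constructions), the sketch below scores a candidate n-by-k real generator matrix by its worst conditioning over all erasure patterns that leave exactly k symbols, after normalizing to a unit power constraint; for a lattice code, the smallest singular value of a surviving submatrix is a crude lower-bound proxy for its noise margin.

```python
import numpy as np
from itertools import combinations

def worst_case_margin(G):
    """Smallest minimum singular value over all k-row submatrices of the n x k generator G,
    i.e. over all erasure patterns that leave exactly k received symbols."""
    n, k = G.shape
    G = G / np.linalg.norm(G)                        # toy power normalization
    return min(np.linalg.svd(G[list(rows), :], compute_uv=False).min()
               for rows in combinations(range(n), k))

# Compare two hypothetical (4, 2) designs under the same power constraint.
G_a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.1], [0.1, 1.0]])
G_b = np.array([[1.0, 1.0], [1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
print(worst_case_margin(G_a), worst_case_margin(G_b))
```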

    Noncooperative and Cooperative Transmission Schemes with Precoding and Beamforming

    Next-generation mobile networks are expected to provide multimedia applications with a high quality of service. At the same time, interference among multiple base stations (BSs) that co-exist in the same location limits the capacity of wireless networks. In conventional wireless networks the base stations do not cooperate with each other: each BS transmits individually to its respective mobile stations (MSs) and treats the transmissions from other BSs as interference. An alternative to this structure is network cooperation, in which BSs cooperate to transmit simultaneously to their respective MSs in the same frequency band and time slot, which can significantly increase the capacity of the network. This thesis presents novel research results on a noncooperative transmission scheme and cooperative transmission schemes for multi-user multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM). We first consider the performance limit of a noncooperative transmission scheme. Here, we propose a method to reduce the interference and increase the throughput of orthogonal frequency division multiplexing (OFDM) systems in co-working wireless local area networks (WLANs) by using joint adaptive multiple antennas (AMA) and adaptive modulation (AM) with acknowledgement (ACK) Eigen-steering. The AMA and AM computations are performed at the receiver. The AMA is used to suppress interference and to maximize the signal-to-interference-plus-noise ratio (SINR). The AM scheme allocates OFDM sub-carriers, power, and modulation modes subject to constraints on power, discrete modulation, and the bit error rate (BER). The transmit weights, the power allocation, and the sub-carrier allocation are obtained at the transmitter using ACK Eigen-steering. The derivations of AMA, AM, and ACK Eigen-steering are presented, and the performance of joint AMA and AM for various AMA configurations is evaluated through simulations of BER and spectral efficiency (SE) against SIR.
    To improve the performance of the system further, we propose a practical cooperative transmission scheme to mitigate the interference in co-working WLANs. Here, we consider network coordination among BSs. We employ Tomlinson-Harashima precoding (THP), joint transmit-receive beamforming based on SINR maximization, and an adaptive precoding order to eliminate co-working interference and achieve BER fairness among different users. We also consider the design of the system when partial channel state information (CSI), where each user only knows its own CSI, or full CSI, where each user knows the CSI of all users, is available at the receiver. We prove analytically and by simulation that the performance of the proposed scheme is not degraded under partial CSI. The simulation results show that the proposed scheme considerably outperforms both the existing noncooperative and cooperative transmission schemes. A method to design a spectrally efficient cooperative downlink transmission scheme employing precoding and beamforming is also proposed. The algorithm eliminates the interference and achieves symbol error rate (SER) fairness among different users. To eliminate the interference, THP cancels part of the interference while the transmit-receive antenna weights cancel the remainder. A novel iterative method is applied to generate the transmit-receive antenna weights. To achieve SER fairness among different users and further improve the performance of MIMO systems, we develop algorithms that provide equal SINR across all users and order the users so that the minimum SINR of each user is maximized. The simulation results show that the proposed scheme considerably outperforms existing cooperative transmission schemes in terms of SER performance and complexity, and approaches interference-free performance under the same configuration.
    The proposed interference cancellation can be improved further, because it does not take receiver noise into account when calculating the transmit-receive antenna weights; moreover, the scheme above is designed specifically for single-stream multi-user transmission. We therefore employ THP together with an iterative method based on the uplink-downlink duality principle to generate the transmit-receive antenna weights. The algorithm provides equal SINR across all users. A simpler method is then proposed that trades a slight performance degradation for lower complexity. The proposed methods are extended to work when the receiver does not have complete CSI. A new method of setting the user precoding order, which has a much lower complexity than a V-BLAST-type ordering scheme but almost the same performance, is also proposed. The simulation results show that the proposed schemes considerably outperform existing cooperative transmission schemes in terms of SER performance and approach interference-free performance.
    All of the cooperative transmission schemes proposed above use THP to cancel part of the interference. In this thesis, we also consider an alternative approach that bypasses THP, so that the task of cancelling the interference from other users lies solely with the transmit-receive antenna weights. We consider multiuser Gaussian broadcast channels with multiple antennas at both the transmitter and the receivers. An iterative multiple beamforming (IMB) algorithm is proposed, which is flexible in the antenna configuration and performs well at low to moderate data rates. Its capacity and bit error rate performance are compared with those achieved by the traditional zero-forcing method.
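    The duality-based beamformer design itself is involved; the toy loop below only illustrates the general alternating pattern of joint transmit-receive weight computation (regularized zero-forcing transmit beamformers against effective channels, then MMSE receive combiners, repeated). It is a sketch under assumed dimensions and noise level, not the thesis's algorithm, and it omits THP and user ordering.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx, n_users, noise_var = 4, 2, 2, 0.1
H = [rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))
     for _ in range(n_users)]                        # per-user MIMO channels (assumed i.i.d.)
u = [np.linalg.qr(rng.standard_normal((n_rx, 1)))[0] for _ in range(n_users)]  # initial combiners

for _ in range(20):
    # Effective single-antenna channels after receive combining, stacked row-wise.
    H_eff = np.vstack([u[k].conj().T @ H[k] for k in range(n_users)])
    # Transmit beamformers: regularized zero-forcing on the effective channels.
    W = H_eff.conj().T @ np.linalg.inv(H_eff @ H_eff.conj().T + noise_var * np.eye(n_users))
    W = W / np.linalg.norm(W)                        # total transmit power constraint
    # Receive combiners: MMSE given the current transmit beamformers.
    for k in range(n_users):
        cov = H[k] @ W @ W.conj().T @ H[k].conj().T + noise_var * np.eye(n_rx)
        u[k] = np.linalg.solve(cov, H[k] @ W[:, [k]])
        u[k] = u[k] / np.linalg.norm(u[k])

for k in range(n_users):                             # per-user SINR after the iterations
    g = u[k].conj().T @ H[k] @ W
    signal = abs(g[0, k]) ** 2
    interference = sum(abs(g[0, j]) ** 2 for j in range(n_users) if j != k)
    print(f"user {k}: SINR = {signal / (interference + noise_var):.2f}")
```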

    A Critical Review of Physical Layer Security in Wireless Networking

    Wireless networking has kept evolving, with additional features and increasing capacity. At the same time, the inherent characteristics of wireless networking make it more vulnerable than wired networks. In this thesis we present an extensive and comprehensive review of physical layer security in wireless networking. Different from cryptography, physical layer security, which emerges from the information-theoretic assessment of secrecy, can leverage the properties of the wireless channel for security purposes, either by enabling secret communication without the need for keys or by facilitating the key agreement process. We therefore categorize the existing literature into two main branches, namely keyless security and key-based security. We trace the evolution of this area from the early theoretical works on the wiretap channel to its generalizations to more complicated scenarios, including multiple-user, multiple-access, and multiple-antenna systems, and introduce not only theoretical results but also practical implementations. We critically and systematically examine the existing knowledge by analyzing the fundamental mechanics of each approach. We are thus able to highlight the advantages and limitations of the proposed techniques, as well as their interrelations, and bring insights into future developments of this area.
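    For orientation on the keyless branch, the classical benchmark is the secrecy capacity of the Gaussian wiretap channel, the gap between the legitimate and eavesdropper channel capacities (a standard result stated here for context, not a claim taken from the review):

```latex
C_s = \left[\tfrac{1}{2}\log_2\!\left(1+\mathrm{SNR}_{B}\right)
      - \tfrac{1}{2}\log_2\!\left(1+\mathrm{SNR}_{E}\right)\right]^{+}
```

    where SNR_B and SNR_E are the signal-to-noise ratios at the legitimate receiver and the eavesdropper, and [x]^+ = max(x, 0). Keyless secret communication is possible only when the legitimate channel is stronger than the eavesdropper's, which is the channel advantage that the reviewed keyless techniques aim to create or exploit.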

    MIMO Systems

    In recent years it has become clear that MIMO communication systems are indispensable to the accelerated evolution of high-data-rate applications, owing to their potential to dramatically increase spectral efficiency while simultaneously sending individual information to the corresponding users in wireless systems. This book intends to provide highlights of the current research topics in the field of MIMO systems and to offer a snapshot of the recent advances and major issues faced today by researchers in MIMO-related areas. The book is written by specialists working in universities and research centers all over the world and covers the fundamental principles and main advanced topics of high-data-rate wireless communication systems over MIMO channels. Moreover, the book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Constrained Linear and Non-Linear Adaptive Equalization Techniques for MIMO-CDMA Systems

    Researchers have shown that by combining multiple-input multiple-output (MIMO) techniques with CDMA, higher gains in capacity, reliability, and data transmission speed can be attained. However, a major drawback of MIMO-CDMA systems is multiple access interference (MAI), which can reduce the capacity and increase the bit error rate (BER), so the statistical analysis of MAI becomes a very important factor in the performance analysis of these systems. In this thesis, a detailed analysis of MAI is performed for binary phase-shift keying (BPSK) signals with random signature sequences in a Rayleigh fading environment, and closed-form expressions for the probability density function of the MAI and of the MAI plus noise are derived. Further, the probability of error is derived for the maximum-likelihood receiver. These derivations are verified through simulations and are found to reinforce the theoretical results. Since the performance of MIMO suffers significantly from MAI and inter-symbol interference (ISI), equalization is needed to mitigate these effects. It is well known from the theory of constrained optimization that the learning speed of any adaptive filtering algorithm can be increased by adding a constraint to it, as in the case of the normalized least mean squares (NLMS) algorithm. Thus, in this work, both linear and non-linear decision feedback equalizers (DFEs) for MIMO systems are designed with an LMS-based constrained stochastic gradient algorithm. More specifically, an LMS algorithm has been developed that is equipped with knowledge of the number of users, the spreading sequence (SS) length, the additive noise variance, and the MAI-plus-noise statistics (the new constraint); it is named the MIMO-CDMA MAI-with-noise-constrained LMS (MNCLMS) algorithm. Convergence and tracking analyses of the proposed algorithm are carried out in the scenario of interference- and noise-limited systems, and simulation results are presented to compare the performance of the MIMO-CDMA MNCLMS algorithm with other adaptive algorithms.
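    For reference, the sketch below implements the textbook NLMS recursion that the abstract cites as the canonical example of constrained adaptation, run as a single-channel training-mode equalizer over an assumed short ISI channel; it is a baseline for orientation, not the proposed MIMO-CDMA MNCLMS algorithm.

```python
import numpy as np

def nlms_equalizer(x, d, n_taps=8, mu=0.5, eps=1e-6):
    """Normalized LMS adaptive filter: adapt w so that w . [x[n], ..., x[n-n_taps+1]] tracks d[n]."""
    w = np.zeros(n_taps)
    y = np.zeros(len(d))
    for n in range(n_taps - 1, len(d)):
        u = x[n - n_taps + 1:n + 1][::-1]            # most recent samples first
        y[n] = w @ u
        e = d[n] - y[n]
        w += mu * e * u / (eps + u @ u)              # step size normalized by input energy
    return w, y

rng = np.random.default_rng(2)
symbols = rng.choice([-1.0, 1.0], size=2000)         # BPSK training sequence
channel = np.array([1.0, 0.5, 0.2])                  # assumed ISI channel, for illustration only
received = np.convolve(symbols, channel)[:len(symbols)] + 0.05 * rng.standard_normal(len(symbols))
w, y = nlms_equalizer(received, symbols)
print("training BER after convergence:", np.mean(np.sign(y[200:]) != symbols[200:]))
```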

    Signal representation and recovery under measurement constraints

    Ankara: The Department of Electrical and Electronics Engineering and the Graduate School of Engineering and Science of Bilkent University, 2012. Thesis (Ph.D.) -- Bilkent University, 2012. Includes bibliographical references.
    We are concerned with a family of signal representation and recovery problems under various measurement restrictions. We focus on finding performance bounds for these problems, where the aim is to reconstruct a signal from its direct or indirect measurements. One of our main goals is to understand the effect of different forms of finiteness in the sampling process, such as a finite number of samples or finite amplitude accuracy, on the recovery performance. In the first part of the thesis, we use a measurement device model in which each device has a cost that depends on the amplitude accuracy of the device: the cost of a measurement device is primarily determined by the number of amplitude levels that the device can reliably distinguish; devices with higher numbers of distinguishable levels have higher costs. We also assume that there is a limited cost budget, so that it is not possible to make a high amplitude resolution measurement at every point. We investigate the optimal allocation of the cost budget to the measurement devices so as to minimize the estimation error. In contrast to common practice, which often treats sampling and quantization separately, we explicitly focus on the interplay between limited spatial resolution and limited amplitude accuracy. We show that in certain cases, sampling at rates different from the Nyquist rate is more efficient, and we find the optimal sampling rates and the resulting optimal error-cost trade-off curves.
    In the second part of the thesis, we formulate a set of measurement problems with the aim of reaching a better understanding of the relationship between the geometry of statistical dependence in the measurement space and the total uncertainty of the signal. These problems are investigated in a mean-square error setting under the assumption of Gaussian signals. An important aspect of our formulation is our focus on the linear unitary transformation that relates the canonical signal domain and the measurement domain. We consider measurement set-ups in which a random or a fixed subset of the signal components in the measurement space are erased. We investigate the error performance, both on average and in terms of guarantees that hold with high probability, as a function of the system parameters. Our investigation also reveals a possible relationship between the concept of coherence of random fields as defined in optics and the concept of coherence of bases as defined in compressive sensing, through the fractional Fourier transform. We also consider an extension of our discussion to stationary Gaussian sources. We find explicit expressions for the mean-square error for equidistant sampling, and comment on the decay of error introduced by using finite-length representations instead of infinite-length representations.
    Özçelikkale Hünerli, Ayça. Ph.D.
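    The error-cost trade-off in the first part can be pictured with a toy numerical sweep: split a total amplitude-accuracy budget of B bits over M samples and balance an undersampling term against a quantization term. Both the distortion model and the numbers below are assumptions made for illustration, not the thesis's expressions; the point is only that the optimal number of samples (and hence the sampling rate) shifts with the budget.

```python
import numpy as np

def total_error(M, B, signal_var=1.0, decay=16.0):
    """Toy distortion model: an undersampling term that decays with the number of samples M
    (a signal with a rapidly decaying spectrum) plus a quantization term that decays with
    the bits per sample b = B / M drawn from the total cost budget B."""
    undersampling = signal_var * np.exp(-M / decay)
    quantization = signal_var * 2.0 ** (-2.0 * B / M)
    return undersampling + quantization

for B in (16, 64, 256):                              # total cost budgets (bits)
    M_best = min(range(1, 257), key=lambda M: total_error(M, B))
    print(f"budget {B:3d} bits -> optimal samples {M_best:3d}, "
          f"{B / M_best:4.1f} bits/sample, error {total_error(M_best, B):.4f}")
```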

    Baseband Processing for 5G and Beyond: Algorithms, VLSI Architectures, and Co-design

    In recent years the number of connected devices and the demand for high data rates have increased significantly. This enormous growth is made even more pronounced by the introduction of the Internet of Things (IoT), in which many devices are interconnected to exchange data for applications such as smart homes and smart cities. Moreover, new applications such as eHealth, autonomous vehicles, and connected ambulances set new demands on the reliability, latency, and data rate of wireless communication systems, pushing technology development forward. Massive multiple-input multiple-output (MIMO), a technology employed in the 5G standard, offers the means to fulfill these requirements. In massive MIMO systems, the base station (BS) is equipped with a very large number of antennas and serves several user equipments (UEs) simultaneously in the same time and frequency resource. The high spatial multiplexing in massive MIMO systems improves the data rate, the energy and spectral efficiencies, and the link reliability of wireless communication systems. The link reliability can be further improved by employing channel coding. Spatially coupled serially concatenated codes (SC-SCCs) are promising channel coding schemes that can meet the high-reliability demands of wireless communication systems beyond 5G (B5G). Given their close-to-capacity error correction performance and the potential to implement a high-throughput decoder, this class of codes is a good candidate for wireless systems B5G.
    Achieving these advantages requires sophisticated algorithms, which impose challenges on the baseband signal processing. In massive MIMO systems, the processing is much more computationally intensive and the memory required to store channel data is significantly larger than in conventional MIMO systems, owing to the large size of the channel state information (CSI) matrix. In addition to the high computational complexity, meeting latency requirements is also crucial. Similarly, the decoding-performance gain of SC-SCCs comes at the expense of increased implementation complexity. Moreover, selecting the proper design parameters, decoding algorithm, and architecture is challenging, since spatial coupling provides new degrees of freedom in code design and the design space therefore becomes huge. The focus of this thesis is to perform co-optimization at different design levels to address the aforementioned challenges and requirements. To this end, we employ system-level characteristics to develop efficient algorithms and architectures for the following functional blocks of digital baseband processing.
    First, we present a fast Fourier transform (FFT), an inverse FFT (IFFT), and a corresponding reordering scheme, which can significantly reduce the latency of orthogonal frequency-division multiplexing (OFDM) demodulation and modulation as well as the size of the reordering memory. The corresponding VLSI architectures, along with application-specific integrated circuit (ASIC) implementation results in a 28 nm CMOS technology, are introduced. For a 2048-point FFT/IFFT, the proposed design leads to a 42% reduction in the latency and in the size of the reordering memory. Second, we propose a low-complexity massive MIMO detection scheme. The key idea is to exploit channel sparsity to reduce the size of the CSI matrix and then perform linear detection, followed by non-linear post-processing in the angular domain, using the compressed CSI matrix. The VLSI architecture for a massive MIMO system with 128 BS antennas and 16 UEs, along with synthesis results in a 28 nm technology, is presented. The proposed scheme reduces the complexity and the required memory by 35%-73% compared to traditional detectors while achieving better detection performance. Finally, we perform a comprehensive design space exploration for the SC-SCCs to investigate the effect of different design parameters on decoding performance, latency, complexity, and hardware cost. We then develop different decoding algorithms for the SC-SCCs and discuss the associated decoding performance and complexity. Several high-level VLSI architectures, along with the corresponding synthesis results in a 12 nm process, are also presented, and various design trade-offs are provided for these decoding schemes.
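    The detection idea in the second contribution can be sketched as follows: transform the CSI into the angular domain, discard near-empty angular bins, and run linear MMSE detection on the compressed matrix. The channel model, sparsity threshold, and dimensions below are assumptions for illustration, and the non-linear post-processing stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bs, n_ue, noise_var = 128, 16, 0.01

# Toy sparse-in-angle channel: each UE reaches the BS array via a few dominant directions.
F = np.fft.fft(np.eye(n_bs)) / np.sqrt(n_bs)         # DFT over BS antennas (angular basis)
H_ang = np.zeros((n_bs, n_ue), dtype=complex)
for k in range(n_ue):
    paths = rng.choice(n_bs, size=4, replace=False)
    H_ang[paths, k] = rng.standard_normal(4) + 1j * rng.standard_normal(4)
H = F.conj().T @ H_ang                               # antenna-domain CSI matrix

# Angular-domain compression: keep only beams carrying significant energy.
G = F @ H
keep = np.linalg.norm(G, axis=1) > 0.1 * np.linalg.norm(G, axis=1).max()
G_c = G[keep, :]
print(f"kept {keep.sum()} of {n_bs} angular bins")

# Linear (MMSE) detection using only the compressed CSI.
x = (rng.choice([-1.0, 1.0], n_ue) + 1j * rng.choice([-1.0, 1.0], n_ue)) / np.sqrt(2)  # QPSK
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n_bs) + 1j * rng.standard_normal(n_bs))
y_c = (F @ (H @ x + noise))[keep]                    # received vector, also projected and pruned
x_hat = np.linalg.solve(G_c.conj().T @ G_c + noise_var * np.eye(n_ue), G_c.conj().T @ y_c)
errors = np.sum((np.sign(x_hat.real) != np.sign(x.real)) | (np.sign(x_hat.imag) != np.sign(x.imag)))
print("symbol errors:", errors)
```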