
    Lattice Gaussian Sampling by Markov Chain Monte Carlo: Bounded Distance Decoding and Trapdoor Sampling

    Sampling from the lattice Gaussian distribution plays an important role in various research fields. In this paper, the Markov chain Monte Carlo (MCMC)-based sampling technique is advanced on several fronts. Firstly, the spectral gap for the independent Metropolis-Hastings-Klein (MHK) algorithm is derived; the analysis is then extended to Peikert's algorithm and rejection sampling, and we show that independent MHK exhibits faster convergence. Then, the performance of bounded distance decoding using MCMC is analyzed, revealing a flexible trade-off between the decoding radius and complexity. MCMC is further applied to trapdoor sampling, again offering a trade-off between security and complexity. Finally, the independent multiple-try Metropolis-Klein (MTMK) algorithm is proposed to enhance the convergence rate. The proposed algorithms allow parallel implementation, which is beneficial for practical applications. Comment: submitted to Transactions on Information Theory.
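
    The framework above can be illustrated with a generic independent Metropolis-Hastings loop of the kind the MHK algorithm instantiates. The following Python snippet is only a hedged sketch: the callables pi (the unnormalized lattice Gaussian target), propose (a fixed proposal such as Klein's algorithm), and q (the density of that proposal) are assumed to be supplied by the caller and are not taken from the paper.

```python
import numpy as np

def independent_mh(pi, propose, q, x0, n_steps):
    """Independent Metropolis-Hastings skeleton (illustrative sketch).

    pi      : unnormalized target density, e.g. rho_sigma(x - c) on the lattice
    propose : draws a candidate from a fixed proposal (e.g. Klein's algorithm)
    q       : evaluates the proposal density at a point
    """
    x, samples = x0, []
    for _ in range(n_steps):
        y = propose()
        # Independent MH acceptance ratio: min(1, pi(y) q(x) / (pi(x) q(y))).
        alpha = min(1.0, (pi(y) * q(x)) / (pi(x) * q(y)))
        if np.random.rand() < alpha:
            x = y
        samples.append(x)
    return samples
```

    Because every candidate is drawn independently of the current state, proposals can be generated in parallel, which is the property the paper exploits for practical implementation.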

    Approximate inference in massive MIMO scenarios with moment matching techniques

    This Thesis explores low-complexity probabilistic inference algorithms for high-dimensional Multiple-Input Multiple-Output (MIMO) systems with high-order M-Quadrature Amplitude Modulation (QAM) constellations. Several modern communications systems are using more and more antennas to maximize spectral efficiency, in a new phenomenon called Massive MIMO. However, as the number of antennas and/or the order of the constellation grows, several technical issues have to be tackled; one of them is that the symbol detection complexity grows exponentially with the system dimension. The design of low-complexity massive MIMO receivers is therefore an important research line, because symbol detection can no longer rely on conventional approaches such as Maximum a Posteriori (MAP) detection, whose computational complexity is exponential. This Thesis proposes two main results. The first is a low-complexity hard-decision MIMO detector based on the Expectation Propagation (EP) algorithm, which iteratively approximates the posterior distribution of the transmitted symbols at polynomial cost. The receiver, named the Expectation Propagation Detector (EPD), evolves from the Minimum Mean Square Error (MMSE) solution and keeps the MMSE complexity per iteration, which is dominated by a matrix inversion. Its hard-decision Symbol Error Rate (SER) performance is shown to remarkably improve on state-of-the-art solutions of similar complexity. The second is a soft-inference algorithm, better suited to modern communication systems that use channel coding techniques such as Low-Density Parity-Check (LDPC) codes. Modern channel decoders need Log-Likelihood Ratio (LLR) information for each coded bit as input, which requires a soft bit-inference procedure. In low-dimensional scenarios this can be done by marginalizing the symbol posterior distribution, but it is not feasible in high dimensions. While EPD can provide this probabilistic information, its estimates are shown to be generally poor in the low Signal-to-Noise Ratio (SNR) regime. To overcome this drawback, a new algorithm is proposed based on the Expectation Consistency (EC) framework, which generalizes several algorithms such as Belief Propagation (BP) and EP itself. The proposed algorithm, called the Expectation Consistency Detector (ECD), casts the inference problem as the optimization of a non-convex function. This approach allows stationary points to be found and exposes the trade-off between accuracy and convergence, leading to robust update rules. At the same complexity as EPD, the new proposal achieves performance closer to channel capacity at moderate SNR. This result reveals that the accuracy of probabilistic detection has a significant impact on the achievable rate of the overall system. Finally, a modified ECD algorithm is presented with a turbo receiver structure, in which the output of the decoder is fed back to the ECD, achieving performance gains at all simulated block lengths. The document is structured as follows. Chapter I introduces the MIMO scenario, discusses its advantages and challenges, sets forth the two main scenarios of this Thesis, and presents the motivation behind this work and its contributions. 
Chapters II and III present the state of the art and our proposal for hard detection, while Chapters IV and V do the same for soft-inference detection. Conclusions and future research lines are given in Chapter VI.
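
    As a rough illustration of how EPD works, the Python sketch below approximates the discrete symbol prior with one Gaussian factor per transmit antenna and refines these factors by moment matching; its first iteration coincides with the MMSE solution, and each iteration is dominated by a single matrix inversion, as described above. This is a minimal, assumption-laden sketch (function name, damping value, and iteration count are illustrative choices), not the thesis implementation.

```python
import numpy as np

def ep_mimo_detector(y, H, noise_var, constellation, n_iter=10, damping=0.9):
    """EP-style MIMO symbol detector (simplified, illustrative sketch).

    Each discrete prior p(x_i) is approximated by a Gaussian factor with
    precision lam[i] and precision-times-mean gamma[i], refined by moment
    matching against the true discrete prior.
    """
    n_tx = H.shape[1]
    Es = np.mean(np.abs(constellation) ** 2)

    lam = np.full(n_tx, 1.0 / Es)               # factor precisions
    gamma = np.zeros(n_tx, dtype=complex)       # factor precision * mean

    HtH = H.conj().T @ H / noise_var
    Hty = H.conj().T @ y / noise_var

    for _ in range(n_iter):
        # Gaussian approximate posterior given the current factors
        # (the first iteration coincides with the MMSE solution).
        Sigma = np.linalg.inv(HtH + np.diag(lam))
        mu = Sigma @ (Hty + gamma)
        var = np.real(np.diag(Sigma))

        for i in range(n_tx):
            # Cavity distribution: remove the i-th Gaussian factor.
            cav_prec = 1.0 / var[i] - lam[i]
            if cav_prec <= 0:                   # skip ill-conditioned updates
                continue
            cav_mean = (mu[i] / var[i] - gamma[i]) / cav_prec

            # Tilted moments: cavity Gaussian times the discrete prior.
            logw = -cav_prec * np.abs(constellation - cav_mean) ** 2
            w = np.exp(logw - logw.max())
            w /= w.sum()
            t_mean = np.sum(w * constellation)
            t_var = max(np.sum(w * np.abs(constellation - t_mean) ** 2), 1e-8)

            # Moment matching -> new factor parameters, with damping.
            new_lam = 1.0 / t_var - cav_prec
            new_gamma = t_mean / t_var - cav_prec * cav_mean
            if new_lam > 0:
                lam[i] = damping * new_lam + (1 - damping) * lam[i]
                gamma[i] = damping * new_gamma + (1 - damping) * gamma[i]

    # Hard decision: nearest constellation point to the posterior mean.
    idx = np.argmin(np.abs(mu[:, None] - constellation[None, :]), axis=1)
    return constellation[idx]
```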

    Expectation Propagation Detection for High-Order High-Dimensional MIMO Systems

    Modern communications systems use multiple-input multiple-output (MIMO) and high-order QAM constellations to maximize spectral efficiency. However, as the number of antennas and the order of the constellation grow, the design of efficient and low-complexity MIMO receivers poses significant technical challenges. For example, symbol detection can no longer rely on maximum likelihood detection or sphere-decoding methods, as their complexity increases exponentially with the number of transmitters/receivers. In this paper, we propose a low-complexity, high-accuracy MIMO symbol detector based on the Expectation Propagation (EP) algorithm, which iteratively approximates the posterior distribution of the transmitted symbols in polynomial time. We also show that our EP MIMO detector outperforms classic and state-of-the-art solutions, reducing the symbol error rate at reduced computational complexity. This work has been partly funded by the Spanish Ministry of Science and Innovation with the projects GRE3NSYST (TEC2011-29006-C03-03) and ALCIT (TEC2012-38800-C03-01) and by the program CONSOLIDER-INGENIO 2010 under the project COMONSENS (CSD 2008-00010).
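
    A hypothetical usage example of the ep_mimo_detector sketch shown after the previous abstract, on a randomly generated 32x32 channel with unit-energy 16-QAM symbols (the dimensions, seed, and noise level are illustrative assumptions, not the paper's simulation setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx = n_rx = 32

# Unit-energy 16-QAM constellation.
pam = np.array([-3, -1, 1, 3])
constellation = (pam[:, None] + 1j * pam[None, :]).ravel()
constellation = constellation / np.sqrt(np.mean(np.abs(constellation) ** 2))

# Random Rayleigh channel, transmitted symbols, and complex Gaussian noise.
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2 * n_tx)
x = rng.choice(constellation, size=n_tx)
noise_var = 0.01
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
y = H @ x + noise

x_hat = ep_mimo_detector(y, H, noise_var, constellation)
print("symbol errors:", np.count_nonzero(x_hat != x))
```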

    Low-Complexity Near-Optimal Detection Algorithms for MIMO Systems

    As the number of subscribers in wireless networks and their data-rate demands increase exponentially, multiple-input multiple-output (MIMO) systems have been scaled up in 5G, where tens to hundreds of antennas are deployed at base stations (BSs). However, as MIMO systems scale up, designing detectors with low computational complexity and near-optimal error performance becomes challenging. In this dissertation, we study the problem of efficient detector design for MIMO systems. In Chapter 2, we propose efficient detection algorithms for small and moderate MIMO systems by using lattice reduction and subspace (or conditional) detection techniques. The proposed algorithms exhibit full receive diversity and approach the bit error rate (BER) of the optimal maximum likelihood (ML) solution. For quasi-static channels, the complexity of the proposed schemes is cubic in the system dimension and only linear in the size of the QAM modulation used. However, the computational complexity of lattice reduction algorithms imposes a large burden on the proposed detectors for large MIMO systems or fast-fading channels. In Chapter 3, we propose detectors for large MIMO systems based on the combination of minimum mean square error decision feedback equalization (MMSE-DFE) and subspace detection tailored to an appropriate channel ordering. Although the achieved diversity order of the proposed detectors does not necessarily equal the full receive diversity for some MIMO systems, the coding gain allows close-to-ML error performance at practical values of signal-to-noise ratio (SNR) at the cost of a small computational complexity increase over classical MMSE-DFE detection. The receive diversity deficiency is addressed by proposing another algorithm in which a partial lattice reduction (PLR) technique is deployed to improve the diversity order. Massive multiuser MIMO (MU-MIMO) is another technology in which the BS is equipped with hundreds of antennas and serves tens of single-antenna user terminals (UTs). For the uplink of massive MIMO systems, linear detectors, such as zero-forcing (ZF) and minimum mean square error (MMSE), approach the error performance of sophisticated nonlinear detectors. However, the exact ZF and MMSE solutions involve matrix-matrix multiplication and matrix inversion operations, which are expensive for massive MIMO systems. In Chapter 4, we propose efficient truncated polynomial expansion (TPE)-based detectors that achieve the error performance of the exact solutions with a computational complexity proportional to the system dimensions. Millimeter wave (mmWave) massive MIMO is another key technology for 5G cellular networks. By using hybrid beamforming techniques, in which a small number of radio frequency (RF) chains is deployed at the BSs and the UTs, the fully digital precoder (combiner) is approximated as a product of analog and digital precoders (combiners). In Chapter 5, we consider a signal detection scheme using the equivalent channel consisting of the precoder, the mmWave channel, and the combiner. The structure available in the equivalent channel enables us to achieve the BER of the optimal ML solution with a significant reduction in computational complexity.
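
    The expansion-based detectors mentioned for Chapter 4 can be illustrated by approximating the MMSE filter with a short matrix polynomial. The sketch below uses a plain diagonally preconditioned Neumann series rather than the optimized TPE coefficients of the dissertation, so it should be read only as a hedged illustration of the general idea.

```python
import numpy as np

def tpe_mmse_detect(y, H, noise_var, order=3):
    """Approximate the MMSE estimate without an explicit matrix inverse.

    The exact MMSE filter solves (H^H H + noise_var I) x = H^H y.  Here the
    inverse is replaced by a truncated, diagonally preconditioned Neumann
    series; the dissertation's TPE detectors optimize the polynomial
    coefficients instead, which this illustrative sketch does not attempt.
    """
    b = H.conj().T @ y
    # Diagonal of A = H^H H + noise_var I: squared column norms plus noise_var.
    d_inv = 1.0 / (np.sum(np.abs(H) ** 2, axis=0) + noise_var)

    def apply_A(v):
        # A v computed as H^H (H v) + noise_var v, avoiding the Gram matrix.
        return H.conj().T @ (H @ v) + noise_var * v

    x = d_inv * b                 # k = 0 term of the series
    term = x.copy()
    for _ in range(order):
        # term_{k+1} = (I - D^{-1} A) term_k, accumulated into the estimate.
        term = term - d_inv * apply_A(term)
        x = x + term
    return x
```

    Each additional term costs two matrix-vector products with H, so the work grows only linearly with the numbers of antennas and users, in line with the complexity claim above.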

    Lattice sampling algorithms for communications

    In this thesis, we investigate the problem of decoding for wireless communications from the perspective of lattice sampling. In particular, computationally efficient lattice sampling algorithms are exploited to enhance system performance, exploiting the trade-off between performance and complexity through the sample size. Based on this idea, several novel lattice sampling algorithms are presented in this thesis. First, in order to address the inherent issues of random sampling, a derandomized sampling algorithm is proposed. Specifically, by setting a probability threshold to sample candidates, the whole sampling procedure becomes deterministic, leading to considerable performance improvement and complexity reduction over randomized sampling. Through analysis and optimization, the correct decoding radius is given under the optimized parameter setting. Moreover, the upper bound on the sample size that corresponds to near-maximum likelihood (ML) performance is also derived. The proposed derandomized sampling algorithm is then introduced into the soft-output decoding of MIMO bit-interleaved coded modulation (BICM) systems to further improve the decoding performance, where we show that it is able to achieve near-maximum a posteriori (MAP) performance. We then extend the well-known Markov chain Monte Carlo (MCMC) methods to sampling from the lattice Gaussian distribution, which has emerged as a common theme in lattice coding and decoding, cryptography, and mathematics. We first show that Gibbs sampling is capable of performing lattice Gaussian sampling. Then, a more efficient algorithm, referred to as Gibbs-Klein sampling, is proposed, which samples multiple variables block by block using Klein's algorithm. After that, to improve the convergence rate, we introduce conventional Metropolis-Hastings (MH) sampling to lattice Gaussian distributions, and three MH-based sampling algorithms are proposed. The first, named the MH multivariate sampling algorithm, is demonstrated to converge faster than Gibbs-Klein sampling. Next, the symmetric distribution generated by Klein's algorithm is taken as the proposal distribution, which offers an efficient way to perform Metropolis sampling over high-dimensional models. Finally, the independent Metropolis-Hastings-Klein (MHK) algorithm is proposed, whose Markov chain is proved to converge to the stationary distribution exponentially fast. Furthermore, its convergence rate can be explicitly calculated in terms of the theta series, making it possible to predict the exact mixing time of the underlying Markov chain.
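
    Klein's algorithm, used above both inside Gibbs-Klein sampling and as a proposal distribution, can be sketched as randomized nearest-plane rounding over a QR-decomposed basis. The snippet below is a simplified illustration under stated assumptions (a real-valued, full-rank basis, a truncated one-dimensional discrete Gaussian, and the exp(-|x - c|^2 / (2 sigma^2)) convention); it is not the exact formulation used in the thesis.

```python
import numpy as np

def klein_sample(B, sigma, t, tail=6):
    """Klein-style randomized rounding toward a target t (illustrative sketch).

    B     : real full-rank basis with lattice basis vectors as columns
    sigma : standard deviation of the discrete Gaussian
    tail  : half-width of the integer window for the 1-D discrete Gaussian
    """
    Q, R = np.linalg.qr(B)
    t_prime = Q.T @ t
    n = B.shape[1]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Back-substitution gives the center of the i-th conditional.
        c = (t_prime[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
        s = sigma / abs(R[i, i])
        # Sample from a truncated 1-D discrete Gaussian centered at c.
        support = np.arange(np.floor(c) - tail, np.ceil(c) + tail + 1)
        logw = -(support - c) ** 2 / (2 * s ** 2)
        w = np.exp(logw - logw.max())
        x[i] = np.random.choice(support, p=w / w.sum())
    return B @ x   # a lattice point drawn approximately from the target
```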

    Sliced lattice Gaussian sampling: convergence improvement and decoding optimization

    Sampling from the lattice Gaussian distribution has emerged as a key problem in coding and decoding, while Markov chain Monte Carlo (MCMC) methods from statistics offer an effective way to solve it. In this paper, the sliced lattice Gaussian sampling algorithm is proposed to further improve the convergence performance of the Markov chain targeting lattice Gaussian sampling. We demonstrate that the Markov chain arising from it is uniformly ergodic, namely, it converges exponentially fast to the stationary distribution. Meanwhile, the convergence rate of the underlying Markov chain is also investigated, and we show that the proposed sliced sampling algorithm achieves better convergence performance than the independent Metropolis-Hastings-Klein (IMHK) sampling algorithm. On the other hand, the decoding performance based on the proposed sampling algorithm is analyzed, and the optimization with respect to the standard deviation σ > 0 of the target lattice Gaussian distribution is given. After that, a judicious mechanism for choosing σ, based on distance judgement and dynamic updating, is proposed for better decoding performance. Finally, simulation results based on multiple-input multiple-output (MIMO) detection are presented to confirm the performance gain from the convergence enhancement and the parameter optimization.
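
    The decoding side discussed above can be pictured generically: draw candidate symbol vectors from whatever lattice Gaussian sampler is available and keep the one closest to the received vector, so that the sample size directly controls the performance-complexity trade-off. The sketch below assumes a caller-supplied draw_candidate routine (for example an IMHK or sliced sampler) and is not the specific algorithm proposed in the paper.

```python
import numpy as np

def sampling_decoder(y, H, draw_candidate, n_samples=100):
    """Keep the best of n_samples candidate symbol vectors (illustrative).

    draw_candidate is assumed to return one candidate vector per call, drawn
    e.g. from a lattice Gaussian sampler centred near the received point.
    """
    best, best_metric = None, np.inf
    for _ in range(n_samples):
        x = draw_candidate()
        metric = np.linalg.norm(y - H @ x) ** 2   # Euclidean decoding metric
        if metric < best_metric:
            best, best_metric = x, metric
    return best
```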

    Wireless receiver designs: from information theory to VLSI implementation

    Receiver design, especially equalizer design, in communications is a major concern in both academia and industry. It is a problem with both theoretical challenges and severe implementation hurdles. While much research has focused on reducing the complexity of optimal or near-optimal schemes, it is still common practice in industry to use simple techniques (such as linear equalization) that are generally significantly inferior. Although digital signal processing (DSP) technologies have been applied to wireless communications to enhance throughput, users' demands for more data and higher rates have revealed new challenges. For example, to combat fading channels, in addition to transmitter designs that enable diversity, the receiver must also be able to collect the prepared diversity. Most wireless transmissions can be modeled as linear block transmission systems. Under a linear block transmission model, maximum likelihood equalizers (MLEs) or near-ML decoders have been adopted at the receiver to collect diversity, an important performance metric, but these decoders exhibit high complexity. To reduce the decoding complexity, low-complexity equalizers, such as linear equalizers (LEs) and decision feedback equalizers (DFEs), are often adopted. These methods, however, may not utilize the diversity enabled by the transmitter and as a result have degraded performance compared to MLEs. In this dissertation, we present efficient receiver designs that achieve low bit-error-rate (BER), high mutual information, and low decoding complexity. Our approach is to first investigate the error performance and mutual information of existing low-complexity equalizers to reveal the fundamental condition for achieving full diversity with LEs. We show that the fundamental condition for LEs to collect the same (outage) diversity as the MLE is that the channels be constrained within a certain distance from orthogonality. The orthogonality deficiency (od) is adopted to quantify the distance of channels from orthogonality, while other existing metrics are also introduced and compared. To meet the fundamental condition and achieve full diversity, a hybrid equalizer framework is proposed. The performance-complexity trade-off of hybrid equalizers is quantified by deriving the distribution of od. Another approach is to apply lattice reduction (LR) techniques to improve the "quality" of channel matrices. We present two LR methods widely adopted in wireless communications, the Lenstra-Lenstra-Lovász (LLL) algorithm [51] and Seysen's algorithm (SA), with detailed descriptions and pseudocode. The properties of the output matrices of the LLL algorithm and SA are also quantified, and other LR algorithms are briefly introduced. After introducing LR algorithms, we show how to adopt them in the wireless decoding process by presenting LR-aided hard-output detectors and LR-aided soft-output detectors for coded systems, respectively. We also analyze the performance of the proposed efficient receivers from the perspectives of diversity, mutual information, and complexity. We prove that LR techniques help to restore the diversity of low-complexity equalizers without significantly increasing the complexity. When it comes to practical systems and simulation tools, e.g., MATLAB, only a finite number of bits is used to represent numbers. Therefore, we revisit the diversity analysis for finite-bit-represented systems. We illustrate that the diversity of the MLE for systems with finite-bit representation is determined by the number of non-vanishing eigenvalues. It is also shown that although LR-aided detectors theoretically collect the same diversity as the MLE in the real/complex field, they may show different diversity orders when finite-bit representation is used. Finally, a VLSI implementation of the complex LLL algorithm is provided to verify the practicality of our proposed designs.
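
    For reference, the orthogonality deficiency used above to measure a channel's distance from orthogonality is commonly defined as od(H) = 1 - det(H^H H) / prod_i ||h_i||^2, where h_i are the columns of H; the short sketch below computes this quantity under the assumption that the thesis uses this standard definition.

```python
import numpy as np

def orthogonality_deficiency(H):
    """od(H) = 1 - det(H^H H) / prod_i ||h_i||^2 over the columns h_i of H.

    od is 0 when the columns are orthogonal and approaches 1 as they become
    nearly linearly dependent.
    """
    gram = H.conj().T @ H
    col_norms_sq = np.sum(np.abs(H) ** 2, axis=0)
    return 1.0 - np.real(np.linalg.det(gram)) / np.prod(col_norms_sq)
```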