
    Near-optimal stochastic MIMO signal detection with a mixture of t-distribution prior

    Multiple-input multiple-output (MIMO) systems will play a crucial role in future wireless communication, but improving their signal detection performance to increase transmission efficiency remains a challenge. To address this issue, we propose extending the discrete signal detection problem in MIMO systems to a continuous one and applying the Hamiltonian Monte Carlo method, an efficient Markov chain Monte Carlo algorithm. In our previous studies, we used a mixture of normal distributions for the prior distribution. In this study, we propose using a mixture of t-distributions, which further improves detection performance. Theoretical analysis and computer simulations show that the proposed method can achieve near-optimal signal detection with polynomial computational complexity. This high-performance and practical MIMO signal detection could contribute to the development of the 6th-generation mobile network. Comment: to be published in the 2023 IEEE Global Communications Conference (GLOBECOM).
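
    A minimal sketch of the relaxation-plus-HMC idea is given below for a real-valued BPSK model; it is an illustration under our own assumptions (mixture degrees of freedom ν = 3, leapfrog step size and trajectory length, hard decision from the final sample), not the authors' implementation, which targets complex QAM systems and uses richer posterior statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy real-valued model y = H x + n with x_i in {-1, +1} (BPSK assumption).
N, sigma_n = 8, 0.3
H = rng.normal(size=(N, N)) / np.sqrt(N)
x_true = rng.choice([-1.0, 1.0], size=N)
y = H @ x_true + sigma_n * rng.normal(size=N)

S, nu = np.array([-1.0, 1.0]), 3.0   # constellation; t degrees of freedom (assumed)

def log_prior_and_grad(x):
    # Continuous prior: equal-weight mixture of t-distributions centred on S.
    u = x[:, None] - S[None, :]
    logf = -(nu + 1) / 2 * np.log1p(u**2 / nu)     # unnormalised log t pdf
    f = np.exp(logf)
    w = f / f.sum(axis=1, keepdims=True)           # mixture responsibilities
    dlogf = -(nu + 1) * u / (nu + u**2)            # d/du log t pdf
    return np.log(f.sum(axis=1)).sum(), (w * dlogf).sum(axis=1)

def U_and_grad(x):
    # Negative log posterior (up to a constant) and its gradient.
    r = y - H @ x
    lp, gp = log_prior_and_grad(x)
    return r @ r / (2 * sigma_n**2) - lp, -H.T @ r / sigma_n**2 - gp

def hmc_step(x, eps=0.05, L=20):
    p = rng.normal(size=x.size)                    # resample momentum
    xn, pn = x.copy(), p.copy()
    U0, g = U_and_grad(xn)
    for _ in range(L):                             # leapfrog integration
        pn -= eps / 2 * g
        xn += eps * pn
        _, g = U_and_grad(xn)
        pn -= eps / 2 * g
    U1, _ = U_and_grad(xn)
    # Metropolis correction on the total (potential + kinetic) energy.
    return xn if rng.uniform() < np.exp(U0 + p @ p / 2 - U1 - pn @ pn / 2) else x

x = np.zeros(N)
for _ in range(200):
    x = hmc_step(x)
print("detected correctly:", np.array_equal(np.sign(x), x_true))
```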

    Doctor of Philosophy

    The continuous growth of wireless communication use has largely exhausted the limited spectrum available. Methods to improve spectral efficiency are in high demand and will continue to be for the foreseeable future. Several technologies have the potential to make large improvements to spectral efficiency and the total capacity of networks, including massive multiple-input multiple-output (MIMO), cognitive radio, and spatial-multiplexing MIMO. Of these, spatial-multiplexing MIMO has the largest near-term potential, as it has already been adopted in the WiFi, WiMAX, and LTE standards. Although transmitting independent MIMO streams is cheap and easy, with a mere linear increase in cost with streams, receiving MIMO is difficult since the optimal methods have exponentially increasing cost and power consumption. Suboptimal MIMO detectors such as K-Best have a drastically reduced complexity compared to optimal methods but still have an undesirable exponentially increasing cost with data rate. The Markov Chain Monte Carlo (MCMC) detector has been proposed as a near-optimal method with polynomial cost, but it has a history of unusual performance issues which have hindered its adoption. In this dissertation, we introduce a revised derivation of the bitwise MCMC MIMO detector. The new approach resolves the previously reported high-SNR stalling problem of MCMC without the need for hybridization with another detector method or adding heuristic temperature scaling terms. Another common problem with MCMC algorithms is an unknown convergence time, which makes predictable fixed-length implementations problematic. When an insufficient number of iterations is used on a slowly converging example, the output LLRs can be unstable and overconfident; therefore, we develop a method to identify rare, slowly converging runs and mitigate their degrading effects on the soft-output information. This improves forward-error-correcting code performance and removes a symptomatic error floor in bit-error rates. Next, pseudo-convergence is identified with a novel way to visualize the internal behavior of the Gibbs sampler, and an effective and efficient pseudo-convergence detection and escape strategy is suggested. The resulting excited MCMC (X-MCMC) detector is shown to have near maximum-a-posteriori (MAP) performance even with challenging, realistic, highly correlated channels at the maximum MIMO size and modulation rate supported by the 802.11ac WiFi specification, 8x8 256-QAM. The X-MCMC detector is further demonstrated on an 8-antenna MIMO testbed with the 802.11ac WiFi protocol, confirming its high performance. Finally, a VLSI implementation of the X-MCMC detector is presented which retains the near-optimal performance of the floating-point algorithm while having one of the lowest complexities found in the near-optimal MIMO detector literature.
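
    For context, the plain bitwise Gibbs detector that this line of work revises can be sketched in a few lines. The toy below is a BPSK illustration under our own assumptions, not the X-MCMC algorithm itself: each coordinate is resampled from its exact conditional given the others, and the best hypothesis visited is kept. At high SNR these conditionals become nearly deterministic, which is the stalling behavior the revised derivation and the later excitation mechanism address.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma_n = 8, 0.3
H = rng.normal(size=(N, N)) / np.sqrt(N)
x_true = rng.choice([-1.0, 1.0], size=N)
y = H @ x_true + sigma_n * rng.normal(size=N)

def gibbs_detect(y, H, sigma_n, n_sweeps=50):
    N = H.shape[1]
    x = rng.choice([-1.0, 1.0], size=N)            # random initial hypothesis
    best, best_cost = x.copy(), np.inf
    for _ in range(n_sweeps):
        for i in range(N):
            cost = {}
            for s in (-1.0, 1.0):                  # evaluate both bit values
                x[i] = s
                r = y - H @ x
                cost[s] = r @ r / (2 * sigma_n**2)
            # Exact conditional probability of x_i = +1 given the other bits.
            p_plus = 1.0 / (1.0 + np.exp(cost[1.0] - cost[-1.0]))
            x[i] = 1.0 if rng.uniform() < p_plus else -1.0
        c = np.sum((y - H @ x)**2)
        if c < best_cost:                          # keep the best visit
            best, best_cost = x.copy(), c
    return best

print("match:", np.array_equal(gibbs_detect(y, H, sigma_n), x_true))
```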

    Lattice sampling algorithms for communications

    In this thesis, we investigate the problem of decoding for wireless communications from the perspective of lattice sampling. In particular, computationally efficient lattice sampling algorithms are exploited to enhance system performance, exploiting the tradeoff between performance and complexity through the sample size. Based on this idea, several novel lattice sampling algorithms are presented in this thesis. First of all, in order to address the inherent issues in random sampling, a derandomized sampling algorithm is proposed. Specifically, by setting a probability threshold to sample candidates, the whole sampling procedure becomes deterministic, leading to considerable performance improvement and complexity reduction over randomized sampling. Through analysis and optimization, the correct decoding radius is given with the optimized parameter setting. Moreover, the upper bound on the sample size, which corresponds to near-maximum likelihood (ML) performance, is also derived. After that, the proposed derandomized sampling algorithm is introduced into the soft-output decoding of MIMO bit-interleaved coded modulation (BICM) systems to further improve decoding performance, and we show that it is able to achieve near-maximum a posteriori (MAP) performance in soft-output decoding. We then extend the well-known Markov Chain Monte Carlo methods to sampling from the lattice Gaussian distribution, which has emerged as a common theme in lattice coding and decoding, cryptography, and mathematics. We first show that statistical Gibbs sampling is capable of performing lattice Gaussian sampling. Then, a more efficient algorithm referred to as Gibbs-Klein sampling is proposed, which samples multiple variables block by block using Klein's algorithm. After that, to improve the convergence rate, we introduce the conventional statistical Metropolis-Hastings (MH) sampling into lattice Gaussian distributions, and three MH-based sampling algorithms are proposed. The first one, named the MH multivariate sampling algorithm, is demonstrated to have a faster convergence rate than Gibbs-Klein sampling. Next, the symmetrical distribution generated by Klein's algorithm is taken as the proposal distribution, which offers an efficient way to perform Metropolis sampling over high-dimensional models. Finally, the independent Metropolis-Hastings-Klein (MHK) algorithm is proposed, where the Markov chain arising from it is proved to converge to the stationary distribution exponentially fast. Furthermore, its convergence rate can be explicitly calculated in terms of the theta series, making it possible to predict the exact mixing time of the underlying Markov chain.
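
    Klein's algorithm, the building block used by the Gibbs-Klein and MHK samplers above, admits a compact implementation: after a QR decomposition of the lattice basis, integer coordinates are drawn back-to-front from one-dimensional discrete Gaussians. The sketch below is a minimal version under an assumed ±6-standard-deviation truncation of the 1D discrete Gaussian.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_1d_discrete_gaussian(center, s):
    # Enumerate integers within 6 standard deviations (truncation assumption).
    lo, hi = int(np.floor(center - 6 * s)), int(np.ceil(center + 6 * s))
    k = np.arange(lo, hi + 1)
    w = np.exp(-(k - center)**2 / (2 * s**2))
    return rng.choice(k, p=w / w.sum())

def klein_sample(B, c, sigma):
    """One approximate draw from the lattice Gaussian D_{L(B),sigma,c}."""
    n = B.shape[1]
    Q, R = np.linalg.qr(B)
    cp = Q.T @ c
    z = np.zeros(n, dtype=int)
    for i in range(n - 1, -1, -1):                 # back-substitution order
        center = (cp[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i]
        z[i] = sample_1d_discrete_gaussian(center, sigma / abs(R[i, i]))
    return B @ z

B = rng.normal(size=(4, 4))                        # random lattice basis
print(klein_sample(B, np.zeros(4), sigma=2.0))
```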

    Sliced lattice Gaussian sampling: convergence improvement and decoding optimization

    Sampling from the lattice Gaussian distribution has emerged as a key problem in coding and decoding, while Markov chain Monte Carlo (MCMC) methods from statistics offer an effective way to solve it. In this paper, the sliced lattice Gaussian sampling algorithm is proposed to further improve the convergence performance of the Markov chain targeting lattice Gaussian sampling. We demonstrate that the Markov chain arising from it is uniformly ergodic, namely, it converges exponentially fast to the stationary distribution. Meanwhile, the convergence rate of the underlying Markov chain is also investigated, and we show that the proposed sliced sampling algorithm achieves better convergence performance than the independent Metropolis-Hastings-Klein (IMHK) sampling algorithm. On the other hand, the decoding performance based on the proposed sampling algorithm is analyzed, where the optimization with respect to the standard deviation σ > 0 of the target lattice Gaussian distribution is given. After that, a judicious mechanism based on distance judgement and dynamic updating for choosing σ is proposed for better decoding performance. Finally, simulation results based on multiple-input multiple-output (MIMO) detection are presented to confirm the performance gain from the convergence enhancement and the parameter optimization.
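
    The uniform ergodicity claim can be made concrete with the standard total-variation statement for independent Metropolis-Hastings chains (cf. Mengersen and Tweedie); loosely, a better convergence rate corresponds to a larger δ, and the paper's exact constants are expressed through the theta series. A hedged restatement:

```latex
% If the independence proposal q dominates the target pi, i.e.
%   q(x) \ge \delta \, \pi(x)  for all x and some \delta \in (0,1],
% then the chain is uniformly ergodic and mixes geometrically:
\| P^t(x, \cdot) - \pi \|_{TV} \le (1 - \delta)^t .
```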

    Analysis of Wireless Networks With Massive Connectivity

    Recent years have witnessed unprecedented growth in wireless networks in terms of both data traffic and number of connected devices. How to support this fast-increasing demand for high data traffic and connectivity is a key consideration in the design of future wireless communication systems. With this motivation, in this thesis, we focus on the analysis of wireless networks with massive connectivity. In the first part of the thesis, we seek to improve the energy efficiency (EE) of single-cell massive multiple-input multiple-output (MIMO) networks with joint antenna selection and user scheduling. We propose a two-step iterative procedure to maximize the EE. In each iteration, bisection search and random selection are used first to determine a subset of antennas given the previously selected users, and then the EE-optimal subset of users is identified for the selected antennas via the cross-entropy algorithm. Subsequently, we focus on joint uplink and downlink EE maximization under a limitation on the number of available radio frequency (RF) chains. Using Jensen's inequality and the power consumption model, the original problem is converted into a combinatorial optimization problem. Utilizing the learning-based stochastic gradient descent framework and the rare event simulation method, we propose an efficient learning-based stochastic gradient descent algorithm to solve the corresponding combinatorial optimization problem. In the second part of the thesis, we focus on joint activity detection and channel estimation in cell-free massive MIMO systems with massive connectivity. First, we conduct an asymptotic analysis of single measurement vector (SMV) based minimum mean square error (MMSE) estimation in cell-free massive MIMO systems with massive connectivity. We establish a decoupling principle of SMV-based MMSE estimation for sparse signal vectors with independent and non-identically distributed (i.n.i.d.) non-zero components. Subsequently, using the decoupling principle, a likelihood ratio test, and the optimal fusion rule, we obtain detection rules for user activity based on the received pilot signals at only one access point (AP), and also based on the cooperation of the received pilot signals from the entire set of APs for centralized and distributed detection. Moreover, we study the achievable uplink rates with a zero-forcing (ZF) detector at the central processing unit (CPU) of the cell-free massive MIMO system. In the third part, we focus on the performance analysis of intelligent reflecting surface (IRS) assisted wireless networks. Initially, we investigate MMSE channel estimation for IRS-assisted wireless communication systems. Then, we study the sparse activity detection problem in IRS-assisted wireless networks. Specifically, employing the generalized approximate message passing (GAMP) algorithm, we obtain the MMSE estimates of the equivalent effective channel coefficients from the base station (BS) to all users, and transform the received pilot signals into additive-Gaussian-noise-corrupted versions of the equivalent effective channel coefficients. A likelihood ratio test is used to acquire decisions on the activity of each user based on the Gaussian-noise-corrupted equivalent effective channel coefficients, and the optimal fusion rule is used to obtain the final decisions on the activity of all users based on the per-user decisions and their corresponding reliabilities. Finally, we conduct an asymptotic analysis of maximizing the weighted sum rate by joint beamforming and power allocation under transmit power and quality-of-service (QoS) constraints in IRS-assisted wireless networks.
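
    The per-AP detection and fusion step can be illustrated with a scalar toy model: under the decoupling view, each AP effectively observes a Gaussian-noise-corrupted estimate of a user's effective channel, so activity detection reduces to a Gaussian-versus-Gaussian likelihood ratio test, and centralized fusion sums the per-AP LLRs against the prior threshold. The sketch below is a simplified real-valued, i.i.d.-channel illustration with assumed parameters, not the thesis's full SMV-MMSE machinery.

```python
import numpy as np

rng = np.random.default_rng(3)
M, beta, sig2, p_act = 16, 1.0, 0.1, 0.1   # APs, channel power, noise, prior (assumed)

a_true = rng.uniform() < p_act             # the user's true activity
# Per-AP scalar observations of the effective channel (real-valued sketch).
h_hat = (np.sqrt(beta) * rng.normal(size=M) if a_true else 0.0) \
        + np.sqrt(sig2) * rng.normal(size=M)

# Per-AP LLR of N(0, beta + sig2) (active) against N(0, sig2) (inactive).
llr = 0.5 * (np.log(sig2 / (beta + sig2))
             + h_hat**2 * (1 / sig2 - 1 / (beta + sig2)))

# Centralized fusion: sum independent per-AP LLRs, compare with prior odds.
decision = llr.sum() > np.log((1 - p_act) / p_act)
print("true activity:", bool(a_true), "| detected:", bool(decision))
```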

    Approximate inference in massive MIMO scenarios with moment matching techniques

    This Thesis explores low-complexity probabilistic inference algorithms for high-dimensional Multiple-Input Multiple-Output (MIMO) systems with high-order M-Quadrature Amplitude Modulation (QAM) constellations. Several modern communications systems are using more and more antennas to maximize spectral efficiency, a new phenomenon called Massive MIMO. However, as the number of antennas and/or the order of the constellation grows, several technical issues have to be tackled; one of them is that the symbol detection complexity grows exponentially with the system dimension. The design of low-complexity massive MIMO receivers is an important research line in MIMO because symbol detection can no longer rely on conventional approaches such as Maximum a Posteriori (MAP) detection, due to its exponential computational complexity. This Thesis proposes two main results. On one hand, a hard-decision low-complexity MIMO detector based on the Expectation Propagation (EP) algorithm, which iteratively approximates the posterior distribution of the transmitted symbols at polynomial cost. The receiver, named Expectation Propagation Detector (EPD), evolves from the Minimum Mean Square Error (MMSE) solution and keeps the MMSE complexity per iteration, dominated by a matrix inversion. Its hard-decision Symbol Error Rate (SER) performance is shown to remarkably improve on state-of-the-art solutions of similar complexity. On the other hand, a soft-inference algorithm, more suitable for modern communication systems with channel coding techniques such as Low-Density Parity-Check (LDPC) codes, is also presented. Modern channel decoding techniques need as input a Log-Likelihood Ratio (LLR) for each coded bit. To obtain that information, a soft bit inference procedure must first be performed. In low-dimensional scenarios, this can be done by marginalization over the symbol posterior distribution; however, this is not feasible in high dimensions. While EPD could provide this probabilistic information, its probabilistic estimates are in general poor in the low Signal-to-Noise Ratio (SNR) regime. To solve this inconvenience, a new algorithm is proposed, based on the Expectation Consistency (EC) framework, which generalizes several algorithms such as Belief Propagation (BP) and EP itself. The proposed algorithm, called Expectation Consistency Detector (ECD), maps the inference problem to an optimization over a non-convex function. This new approach allows one to find stationary points and tradeoffs between accuracy and convergence, which leads to robust update rules. At the same complexity cost as EPD, the new proposal achieves performance closer to channel capacity at moderate SNR. The result reveals that probabilistic detection accuracy has a relevant impact on the achievable rate of the overall system. Finally, a modified ECD algorithm is presented with a Turbo receiver structure, where the output of the decoder is fed back to ECD, achieving performance gains at all simulated block lengths. The document is structured as follows. In Chapter I an introduction to the MIMO scenario is presented, the advantages and challenges are exposed, and the two main scenarios of this Thesis are set forth, together with the motivation behind this work and its contributions. In Chapters II and III the state of the art and our proposal are presented for hard detection, whereas Chapters IV and V do the same for soft inference detection. Conclusions and future lines can be found in Chapter VI.
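    The EPD recursion described above (MMSE-style Gaussian approximation refined by per-symbol moment matching against the discrete prior) can be sketched compactly. The following is a minimal real-valued 4-PAM illustration using standard EP update rules and a simple positivity safeguard on the site precisions; the actual EPD operates on complex QAM models and uses damped updates.

```python
import numpy as np

rng = np.random.default_rng(4)
N, sigma_n = 8, 0.3
S = np.array([-3.0, -1.0, 1.0, 3.0])          # 4-PAM alphabet (real-valued sketch)
H = rng.normal(size=(N, N)) / np.sqrt(N)
x_true = rng.choice(S, size=N)
y = H @ x_true + sigma_n * rng.normal(size=N)

# Gaussian site approximations to each discrete symbol prior.
lam = np.full(N, 1.0 / np.mean(S**2))          # site precisions (init at 1/Es)
gam = np.zeros(N)                              # site precision-times-mean terms

for _ in range(10):                            # EP iterations
    Sigma = np.linalg.inv(H.T @ H / sigma_n**2 + np.diag(lam))
    mu = Sigma @ (H.T @ y / sigma_n**2 + gam)  # current Gaussian posterior
    for i in range(N):
        v, m = Sigma[i, i], mu[i]
        prec_cav = 1.0 / v - lam[i]            # leave-one-out (cavity) precision
        if prec_cav <= 0:
            continue                           # skip ill-conditioned updates
        v_cav = 1.0 / prec_cav
        m_cav = v_cav * (m / v - gam[i])
        # Moment matching of the cavity times the discrete uniform prior on S.
        w = np.exp(-(S - m_cav)**2 / (2 * v_cav))
        w /= w.sum()
        m_new = w @ S
        v_new = w @ (S - m_new)**2 + 1e-12
        lam_new = 1.0 / v_new - 1.0 / v_cav
        if lam_new > 0:                        # positivity safeguard (EPD-style)
            lam[i] = lam_new
            gam[i] = m_new / v_new - m_cav / v_cav

x_hat = S[np.argmin(np.abs(mu[:, None] - S[None, :]), axis=1)]
print("symbol errors:", int(np.sum(x_hat != x_true)))
```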

    Advances in approximate Bayesian computation and trans-dimensional sampling methodology

    Bayesian statistical models continue to grow in complexity, driven in part by a few key factors: the massive computational resources now available to statisticians; the substantial gains made in sampling methodology and algorithms such as Markov chain Monte Carlo (MCMC), trans-dimensional MCMC (TDMCMC), sequential Monte Carlo (SMC), adaptive algorithms, stochastic approximation methods, and approximate Bayesian computation (ABC); and the development of more realistic models for real-world phenomena, as demonstrated in this thesis for financial models and telecommunications engineering. Sophisticated statistical models are increasingly proposed for practical solutions to real-world problems in order to better capture salient features of increasingly complex data. With sophistication comes a parallel requirement for more advanced and automated statistical computational methodologies. The key focus of this thesis revolves around innovation related to the following three significant Bayesian research questions. 1. How can one develop practically useful Bayesian models and corresponding computationally efficient sampling methodology when the likelihood model is intractable? 2. How can one develop methodology to automate Markov chain Monte Carlo sampling approaches to efficiently explore the support of a posterior distribution defined across multiple Bayesian statistical models? 3. How can these sophisticated Bayesian modelling frameworks and sampling methodologies be utilized to solve practically relevant and important problems in the research fields of financial risk modeling and telecommunications engineering? This thesis is split into three bodies of work, represented in three parts. Each part contains journal papers with novel statistical model and sampling methodological development. The coherent link between the parts is that the novel sampling methodologies developed in Part I are utilized in Part II and Part III. The papers contained in each part make progress toward addressing the core research questions posed. Part I of this thesis presents generally applicable key statistical sampling methodologies that are utilized and extended in the subsequent two parts. In particular, it presents novel developments in statistical methodology pertaining to likelihood-free (ABC) and TDMCMC methodology. The TDMCMC methodology focuses on several aspects of automation in the between-model proposal construction, including approximation of the optimal between-model proposal kernel via a conditional path sampling density estimator. This methodology is then explored for several novel Bayesian model selection applications, including cointegrated vector autoregression (CVAR) models and mixture models with an unknown number of mixture components. The second area relates to development of ABC methodology, with particular focus on SMC Samplers methodology in an ABC context via Partial Rejection Control (PRC). In addition to novel algorithmic development, key theoretical properties are also studied for the classes of algorithms developed, and the methodology is developed for a highly challenging and practically significant application relating to multivariate Bayesian α-stable models. Part II then focuses on novel statistical model development in the areas of financial risk and non-life insurance claims reserving. In each of the papers in this part the focus is on two aspects: foremost, the development of novel statistical models to improve the modeling of risk and insurance; and then the associated problem of how to fit and sample from such statistical models efficiently. In particular, novel statistical models are developed for Operational Risk (OpRisk) under a Loss Distributional Approach (LDA) and for claims reserving in actuarial non-life insurance modelling. In each case the models developed include an additional level of complexity which adds flexibility in order to better capture salient features observed in real data. The consequence of the additional complexity is that standard fitting and sampling methodologies are generally not applicable; as a result, one is required to develop and apply the methodology from Part I. Part III focuses on novel statistical model development in the area of statistical signal processing for wireless communications engineering. Statistical models are developed or extended for two general classes of wireless communications problem: the first relates to detection of transmitted symbols and joint channel estimation in Multiple Input Multiple Output (MIMO) systems coupled with Orthogonal Frequency Division Multiplexing (OFDM); the second relates to cooperative wireless communications relay systems in which the key focus is on detection of transmitted symbols. Both areas require the advanced sampling methodology developed in Part I to find solutions to these real-world engineering problems.
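
    As background for the ABC methodology in Part I, the elementary ABC rejection scheme that SMC-with-PRC refines can be stated in a few lines: draw parameters from the prior, push them through the simulator, and keep draws whose summary statistic lands within ε of the observed one. The generator, summary statistic, and tolerance below are illustrative stand-ins (a heavy-tailed Student-t generator rather than a true α-stable simulator).

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(theta, n=200):
    # Stand-in heavy-tailed simulator (a true alpha-stable sampler is richer):
    # only forward simulation is needed, never a likelihood evaluation.
    return theta * rng.standard_t(df=2, size=n)

theta_true = 1.5
y_obs = simulate(theta_true)
s_obs = np.median(np.abs(y_obs))               # robust scale summary statistic

accepted = []
for _ in range(20000):
    theta = rng.uniform(0.1, 3.0)              # draw from a flat prior (assumed)
    s_sim = np.median(np.abs(simulate(theta)))
    if abs(s_sim - s_obs) < 0.1:               # keep if within tolerance eps
        accepted.append(theta)

print(f"ABC posterior mean ~ {np.mean(accepted):.2f} (true {theta_true})")
```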

    Enabling Technologies for Ultra-Reliable and Low Latency Communications: From PHY and MAC Layer Perspectives

    Future 5th-generation networks are expected to enable three key services: enhanced mobile broadband, massive machine-type communications, and ultra-reliable and low latency communications (URLLC). As per the 3rd Generation Partnership Project (3GPP) URLLC requirements, the reliability of one transmission of a 32-byte packet is expected to be at least 99.999% and the latency at most 1 ms. This unprecedented level of reliability and latency will enable various new applications, such as smart grids, industrial automation, and intelligent transport systems. In this survey, we present potential future URLLC applications and summarize the corresponding reliability and latency requirements. We provide a comprehensive discussion of physical (PHY) and medium access control (MAC) layer techniques that enable URLLC, addressing both licensed and unlicensed bands, and evaluate the relevant PHY and MAC techniques for their ability to improve reliability and reduce latency. We identify enabling long-term evolution (LTE) to coexist in the unlicensed spectrum as a further potential enabler of URLLC in the unlicensed band, and provide numerical evaluations. Lastly, we discuss potential future research directions and challenges in achieving the URLLC requirements.
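
    The 32-byte/99.999% target can be tied to PHY design through the finite-blocklength normal approximation of Polyanskiy, Poor, and Verdú, R ≈ C − √(V/n)·Q⁻¹(ε). The sketch below evaluates it for a complex AWGN channel under an assumed budget of 256 channel uses; it is an illustrative calculation, not a result from the survey.

```python
import numpy as np
from statistics import NormalDist

def achievable_rate(snr_db, n, eps):
    """Normal-approximation rate (bits/channel use), complex AWGN channel."""
    snr = 10 ** (snr_db / 10)
    C = np.log2(1 + snr)
    V = (1 - 1 / (1 + snr) ** 2) * np.log2(np.e) ** 2   # channel dispersion
    return C - np.sqrt(V / n) * NormalDist().inv_cdf(1 - eps)

# A 32-byte packet in an assumed n = 256 channel uses at 99.999% reliability.
n, k, eps = 256, 32 * 8, 1e-5
for snr_db in (5, 10, 15):
    print(f"SNR {snr_db:2d} dB: achievable R = {achievable_rate(snr_db, n, eps):.2f}"
          f" b/cu vs required {k / n:.2f} b/cu")
```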

    AI meets CRNs: a prospective review on the application of deep architectures in spectrum management

    The conundrum of low spectrum utilization and high spectrum demand created a bottleneck toward fulfilling the requirements of next-generation networks. Cognitive radio (CR) technology was advocated as a de facto technology to alleviate the scarcity and under-utilization of spectrum resources by exploiting temporarily vacant spectrum holes in the licensed spectrum bands. As a result, CR technology became the first step towards the intelligentization of mobile and wireless networks, and in order to strengthen its intelligent operation, the cognitive engine needs to be enhanced through the exploitation of artificial intelligence (AI) strategies. Since comprehensive literature reviews covering the integration and application of deep architectures in cognitive radio networks (CRNs) are still lacking, this article aims at filling the gap by presenting a detailed review that addresses the integration of deep architectures into the intricacies of spectrum management. This is a prospective review whose primary objective is to provide an in-depth exploration of the recent trends in AI strategies employed in mobile and wireless communication networks. The existing reviews in this area have not considered the relevance of incorporating the mathematical fundamentals of each AI strategy and how to tailor them to specific mobile and wireless networking problems. Therefore, this review addresses that problem by detailing how deep architectures can be integrated into spectrum management problems. Beyond reviewing different ways in which deep architectures can be integrated into spectrum management, model selection strategies and how different deep architectures can be tailored to the CR space to achieve better performance in complex environments are reported in the context of future research directions.