16 research outputs found

    Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning

    Full text link
    The paper introduces the application of information geometry to describe the ground states of Ising models by utilizing parity-check matrices of cyclic and quasi-cyclic codes on toric and spherical topologies. The approach establishes a connection between machine learning and error-correcting coding. The proposed approach has implications for the development of new embedding methods based on trapping sets. Statistical physics and number geometry are applied to optimize error-correcting codes, leading to these embedding and sparse factorization methods. The paper establishes a direct connection between DNN architecture and error-correcting coding by demonstrating how state-of-the-art architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range arena can be equivalent to block and convolutional LDPC codes (Cage-graph, Repeat Accumulate). QC codes correspond to certain types of chemical elements, with the carbon element being represented by the mixed automorphism Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix are elaborated upon in detail. The Quantum Approximate Optimization Algorithm (QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous to the back-propagation loss function landscape in training DNNs. This similarity creates a comparable problem with trapping set (TS) pseudo-codewords, resembling the belief propagation method. Additionally, the layer depth in QAOA correlates with the number of belief propagation decoding iterations in the Wiberg decoding tree. Overall, this work has the potential to advance multiple fields, from Information Theory, DNN architecture design (sparse and structured prior graph topology), and efficient hardware design for Quantum and Classical DPU/TPU (graph, quantize and shift register architectures) to Materials Science and beyond. Comment: 71 pages, 42 Figures, 1 Table, 1 Appendix. arXiv admin note: text overlap with arXiv:2109.08184 by other authors
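
    As a rough, hedged illustration of the code-to-Ising correspondence the abstract builds on (not the paper's actual construction), the sketch below maps a toy parity-check matrix to a multi-spin Ising Hamiltonian and verifies by enumeration that its ground states are exactly the codewords; the (7,4) Hamming matrix is an assumed stand-in for the cyclic and quasi-cyclic codes discussed in the paper.

```python
# Minimal sketch (not the paper's construction): map a small parity-check
# matrix to an Ising-like multi-spin Hamiltonian whose ground states are
# exactly the codewords, illustrating the code <-> Ising MRF correspondence.
import itertools
import numpy as np

# Toy parity-check matrix of a (7,4) Hamming code (assumed example).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def ising_energy(spins, H):
    """Hamiltonian -sum_c prod_{i in c} s_i: one multi-spin coupling per parity check."""
    return -sum(np.prod(spins[row == 1]) for row in H)

ground_states, e_min = [], None
for bits in itertools.product([0, 1], repeat=H.shape[1]):
    x = np.array(bits)
    s = 1 - 2 * x                      # bit 0 -> spin +1, bit 1 -> spin -1
    e = ising_energy(s, H)
    if e_min is None or e < e_min:
        ground_states, e_min = [x], e
    elif e == e_min:
        ground_states.append(x)

# Every ground state should satisfy H x = 0 (mod 2), i.e. be a codeword.
codewords = [x for x in ground_states if not np.any((H @ x) % 2)]
print(f"{len(ground_states)} ground states, {len(codewords)} of them are codewords")
```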

    Pilot sequence based IQ imbalance estimation and compensation

    Get PDF
    Abstract. As modern radio access technologies strive to achieve progressively higher data rates and to become increasingly more reliable, minimizing the effects of hardware imperfections becomes a priority. One of those imperfections is in-phase quadrature imbalance (IQI), caused by amplitude and phase response differences between the I and Q branches of the IQ demodulation process. IQI has been shown to deteriorate bit error rates and possibly compromise positioning performance, amongst other effects. Minimizing IQI by tightening hardware manufacturing constraints is not always a commercially viable approach; baseband processing for IQI compensation therefore provides an alternative. The thesis begins by presenting a study of IQI modeling for direct conversion receivers: we derive a model for general imbalances and show that it reproduces the two most common models in the literature. We proceed by exploring some of the existing IQI compensation techniques and discussing their underlying assumptions, advantages, and possible relevant issues. A novel pilot-sequence-based approach to IQI estimation and compensation is introduced in this thesis. The idea is to minimize the squared Frobenius norm of the error between candidate covariance matrices, which are functions of the candidate IQI parameters, and the sample covariance matrices obtained from measurements. This new method is first presented in a positioning context with flat fading channels, where IQI compensation is used to improve the mean square error of the positioning estimates. The technique is then adapted to orthogonal frequency division multiplexing (OFDM) systems, including a version that exploits the 5G New Radio reference signals to estimate the IQI coefficients. We further generalize the new approach to solve joint transmitter and receiver IQI estimation and discuss the implementation details and suggested optimization techniques. The introduced methods are evaluated numerically in their corresponding chapters under a set of different conditions, such as varying signal-to-noise ratio (SNR), pilot sequence length, channel model, and number of subcarriers. Finally, the proposed compensation approach is compared to other well-established methods by evaluating the bit error rate curves of 5G transmissions. We consistently show that the proposed method is capable of outperforming these other methods if the SNR and pilot sequence length are sufficiently high. In the positioning simulations, the proposed IQI compensation method improved the root mean squared error (RMSE) of the position estimates by approximately 25 cm. In the OFDM scenario, with high SNR and a long pilot sequence, the new method produced estimates with mean squared error (MSE) about a million times smaller than those from a blind estimator. In bit error rate (BER) simulations, the new method was the only compensation technique capable of producing BER curves similar to the curves without IQI in all of the studied scenarios.
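
    The covariance-fitting idea at the heart of the proposed estimator can be sketched in a heavily simplified, scalar flat-fading form; the IQI model below (parameters g for gain and phi for phase mismatch, coefficients K1, K2) is one common convention from the literature and is assumed here rather than taken from the thesis.

```python
# Hedged sketch of the covariance-fitting idea described in the abstract: fit
# candidate IQI parameters (g, phi) by minimizing the squared Frobenius norm
# between model covariances and sample covariances. The receiver IQI model
# r = K1*z + K2*conj(z), K1 = (1 + g*exp(-1j*phi))/2, K2 = (1 - g*exp(1j*phi))/2,
# is a common convention and an assumption here, not the thesis's exact model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
g_true, phi_true, sigma2, N = 1.05, np.deg2rad(3.0), 1.0, 10_000

K1 = lambda g, phi: (1 + g * np.exp(-1j * phi)) / 2
K2 = lambda g, phi: (1 - g * np.exp(1j * phi)) / 2

# Simulate a circular (proper) pilot-like signal passing through the IQI model.
z = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = K1(g_true, phi_true) * z + K2(g_true, phi_true) * np.conj(z)

# Sample covariance and pseudo-covariance of the received signal.
c_hat = np.mean(r * np.conj(r))
p_hat = np.mean(r * r)

def cost(theta):
    g, phi = theta
    k1, k2 = K1(g, phi), K2(g, phi)
    c_model = (abs(k1) ** 2 + abs(k2) ** 2) * sigma2   # model covariance
    p_model = 2 * k1 * k2 * sigma2                     # model pseudo-covariance
    return abs(c_hat - c_model) ** 2 + abs(p_hat - p_model) ** 2

est = minimize(cost, x0=[1.0, 0.0], method="Nelder-Mead")
g_est, phi_est = est.x
print(f"g: {g_est:.4f} (true {g_true}),  phi: {np.rad2deg(phi_est):.3f} deg (true 3.0)")
```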

    Polarization and Spatial Coupling:Two Techniques to Boost Performance

    Get PDF
    During the last two decades we have witnessed considerable activity in building bridges between the fields of information theory/communications, computer science, and statistical physics. This is due to the realization that many fundamental concepts and notions in these fields are in fact related and that each field can benefit from the insight and techniques developed in the others. For instance, the notion of channel capacity in information theory, threshold phenomena in computer science, and phase transitions in statistical physics are all expressions of the same concept. Therefore, it would be beneficial to develop a common framework that unifies these notions and that could help to leverage knowledge in one field to make progress in the others. A particularly striking example is the celebrated belief propagation algorithm. It was independently invented in each of these fields but for very different purposes. The realization of the commonality has benefited each of the areas. We investigate polarization and spatial coupling: two techniques that were originally invented in the context of channel coding (communications), thus resulting for the first time in efficient capacity-achieving codes for a wide range of channels. As we will discuss, both techniques play a fundamental role also in computer science and statistical physics, and so these two techniques can be seen as further fundamental building blocks that unite all three areas. We demonstrate applications of these techniques, as well as the fundamental phenomena they provide. In more detail, this thesis consists of two parts. In the first part, we consider the technique of polarization and its resultant class of channel codes, called polar codes. Our main focus is the analysis and improvement of the behavior of polarization with respect to the most significant aspects of modern channel-coding theory: scaling laws, universality, and complexity (quantization). For each of these aspects, we derive fundamental laws that govern the behavior of polarization and polar codes. Even though we concentrate on applications in communications, the analysis that we provide is general and can be carried over to applications of polarization in computer science and statistical physics. As we will show, our investigations confirm some of the inherent strengths of polar codes, such as their robustness with respect to quantization. But they also make clear in which aspects further improvement of polar codes is needed. For example, we will explain that the scaling behavior of polar codes is quite slow compared to the optimal one. Hence, further research is required in order to enhance the scaling behavior of polar codes towards optimality. In the second part of this thesis, we investigate spatial coupling. By now, there exists already a considerable literature on spatial coupling in the realm of information theory and communications. We therefore investigate mainly the impact of spatial coupling on the fields of statistical physics and computer science. We consider two well-known models. The first is the Curie-Weiss model, which provides us with the simplest model for understanding the mechanism of spatial coupling from the perspective of statistical physics. Many fundamental features of spatial coupling can be simply explained here. In particular, we will show how the well-known Maxwell construction in statistical physics manifests itself through spatial coupling.
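
    Before turning to the coupled CSP models, the polarization phenomenon discussed above can be illustrated numerically on the binary erasure channel, where one polarization step has a closed-form effect on the erasure probability; this is a standard textbook recursion, not something specific to the analysis in the thesis.

```python
# Minimal illustration of channel polarization on the binary erasure channel:
# one Arikan step turns a pair of BEC(eps) uses into BEC(2*eps - eps**2) and
# BEC(eps**2); recursing n times drives almost all synthetic channels toward
# erasure probability 0 or 1.
import numpy as np

def polarize(eps: float, n: int) -> np.ndarray:
    """Erasure probabilities of the 2**n synthetic channels built from BEC(eps)."""
    z = np.array([eps])
    for _ in range(n):
        z = np.concatenate([2 * z - z ** 2, z ** 2])   # "minus" and "plus" channels
    return z

z = polarize(eps=0.5, n=10)                 # 1024 synthetic channels
frac_good = np.mean(z < 1e-3)               # nearly noiseless channels
frac_bad = np.mean(z > 1 - 1e-3)            # nearly useless channels
print(f"good: {frac_good:.2f}, bad: {frac_bad:.2f}, in-between: {1 - frac_good - frac_bad:.2f}")
# The fraction of good channels approaches the capacity 1 - eps = 0.5 as n grows,
# which is the capacity-achieving property mentioned in the abstract.
```
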
We then focus on a much richer class of graphical models called constraint satisfaction problems (CSPs) (e.g., K-SAT and Q-COL). These models are central to computer science. We follow a general framework: First, we introduce interpolation procedures for proving that the coupled and standard (un-coupled) models are fundamentally related, in that their static properties (such as their SAT/UNSAT threshold) are the same. We then use tools from spin glass theory (the cavity method) to demonstrate the so-called phenomenon of threshold saturation in these coupled models. Finally, we present the algorithmic implications and argue that all these features provide a new avenue for obtaining better, provable, algorithmic lower bounds on static thresholds of the individual standard CSP models. We consider simple decimation algorithms (e.g., the unit clause propagation algorithm) for the coupled CSP models and provide machinery to analyze these algorithms. These analyses enable us to observe that the algorithmic thresholds on the coupled model are significantly improved over the standard model. For some models (e.g., 3-SAT, 3-COL), these coupled algorithmic thresholds surpass the best lower bounds on the SAT/UNSAT threshold in the literature and provide us with a new lower bound. We conclude by pointing out that although we only considered some specific graphical models, our results are of a general nature and hence applicable to a broad set of models. In particular, a main contribution of this thesis is to firmly establish both polarization and spatial coupling in the common toolbox of information theory/communications, statistical physics, and computer science.
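
    For the decimation algorithms mentioned above, a minimal sketch of plain unit-clause propagation on a random, uncoupled 3-SAT instance is given below; the spatially coupled ensembles and the improved algorithmic thresholds analyzed in the thesis are not reproduced by this toy version.

```python
# Hedged sketch: unit-clause propagation (UCP) decimation on a random 3-SAT
# instance, the kind of simple decimation algorithm the abstract refers to.
import random

def random_3sat(n_vars: int, n_clauses: int, seed: int = 0):
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        vars_ = rng.sample(range(1, n_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in vars_))
    return clauses

def ucp_decimate(clauses, n_vars, seed=0):
    """Repeatedly satisfy unit clauses; otherwise fix a random unset variable."""
    rng = random.Random(seed)
    assignment = {}
    while True:
        simplified = []
        for c in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in c):
                continue                                  # clause already satisfied
            rest = [l for l in c if abs(l) not in assignment]
            if not rest:
                return None, assignment                   # contradiction: empty clause
            simplified.append(tuple(rest))
        clauses = simplified
        units = [c[0] for c in clauses if len(c) == 1]
        if units:
            l = units[0]
            assignment[abs(l)] = l > 0                    # forced step (unit propagation)
        else:
            unset = [v for v in range(1, n_vars + 1) if v not in assignment]
            if not unset or not clauses:
                return clauses, assignment                # all remaining clauses satisfied
            assignment[rng.choice(unset)] = rng.random() < 0.5   # free decimation step

# Clause density alpha = 2.5, below the classical unit-clause threshold (about 8/3).
clauses = random_3sat(n_vars=500, n_clauses=1250)
residual, assign = ucp_decimate(clauses, 500)
print("contradiction" if residual is None else f"satisfied, {len(assign)} vars set")
```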

    Modern Random Access for Satellite Communications

    Full text link
    The present PhD dissertation focuses on modern random access (RA) techniques. In the first part, a slot- and frame-asynchronous RA scheme adopting replicas, successive interference cancellation and combining techniques is presented and its performance analysed. A comparison of slot-synchronous and asynchronous RA at the higher layer follows. Next, the optimization procedure for slot-synchronous RA with irregular repetitions is extended to the Rayleigh block fading channel. Finally, random access with multiple receivers is considered. Comment: PhD Thesis, 196 pages
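
    A minimal sketch of the slot-synchronous flavour of such schemes (IRSA-like: replicas plus iterative successive interference cancellation on a frame) is given below; the asynchronous, combining-based scheme of the dissertation is considerably more involved, and the degree distribution used here is only an assumed example.

```python
# Hedged sketch of slotted random access with replicas and successive
# interference cancellation (SIC), on a collision-channel model where a slot
# is decodable only when a single transmission remains in it.
import random

def simulate_frame(n_users: int, n_slots: int, degree_dist, rng) -> int:
    """Return the number of users resolved by iterative SIC on one frame."""
    slots = [set() for _ in range(n_slots)]
    user_slots = []
    for u in range(n_users):
        d = rng.choices(*degree_dist)[0]          # number of replicas for this user
        chosen = rng.sample(range(n_slots), d)    # d distinct slots
        user_slots.append(chosen)
        for s in chosen:
            slots[s].add(u)
    resolved, progress = set(), True
    while progress:
        progress = False
        for s in range(n_slots):
            if len(slots[s]) == 1:                # singleton slot: decode that user
                u = next(iter(slots[s]))
                resolved.add(u)
                for s2 in user_slots[u]:          # cancel all of the user's replicas
                    slots[s2].discard(u)
                progress = True
    return len(resolved)

rng = random.Random(1)
# Assumed example degree distribution: most users send 2 replicas, some 3 or 8.
degree_dist = ([2, 3, 8], [0.5, 0.28, 0.22])
n_slots, load = 200, 0.7                          # load = users per slot
n_users = int(load * n_slots)
ok = simulate_frame(n_users, n_slots, degree_dist, rng)
print(f"resolved {ok}/{n_users} users at load {load}")
```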

    Distributed Processing Methods for Extra Large Scale MIMO

    Get PDF