Analysis and Design of Algorithms for the Improvement of Non-coherent Massive MIMO based on DMPSK for beyond 5G systems
International Mention in the doctoral degree (Mención Internacional en el título de doctor)

Nowadays, it is nearly impossible to think of a service that does not rely on wireless communications.
By the end of 2022, mobile internet accounted for 60% of total global online traffic. Both the number of subscribers and the traffic handled by each subscriber show an increasing trend. Higher data rates, smaller end-to-end (E2E) delays and a greater number of devices are the main drivers for the development of mobile communications. Furthermore, these demands are expected to be fulfilled even in scenarios with stringent conditions, such as wireless channels that vary very rapidly (in time or frequency) or power-constrained scenarios, mainly found when the equipment is battery powered.
Since most wireless communications techniques and standards rely on the wireless channel being characterized or estimated so that it can be pre- or post-compensated at transmission (TX) or reception (RX), a clear problem arises when the channel varies rapidly or the available power is constrained. To estimate the wireless channel and obtain the so-called channel state information (CSI), some of the available resources (in time, frequency or any other dimension) are devoted to known signals, typically called pilots, shared between TX and RX, and thus cannot be used for data transmission. If the channel varies rapidly, it must be estimated many times, which results in a very low data efficiency of the communications link. Likewise, if the power is limited or the wireless link distance is large, the resulting signal-to-interference-plus-noise ratio (SINR) will be low; this parameter directly determines the quality of the channel estimation and the performance of data reception. This problem is aggravated in massive multiple-input multiple-output (massive MIMO), a promising technique for future wireless communications, since it can increase data rates, improve reliability and cope with a larger number of simultaneous devices. In massive MIMO, the base station (BS) is typically equipped with a large number of coordinated antennas. In these scenarios, the channel must be estimated for each antenna (or at least for each user), so the aforementioned channel estimation problem becomes even worse. In this context, algorithms and techniques for massive MIMO without CSI are of interest.
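As a minimal sketch of the pilot-based approach described above (in Python/NumPy, with illustrative pilot counts and SNR values, not figures from the thesis), least-squares estimation of a single flat-fading channel from known pilots shows how the estimation error grows as the SNR drops:

```python
import numpy as np

rng = np.random.default_rng(0)

def ls_channel_estimate(y_pilots, x_pilots):
    """Least-squares estimate of a flat-fading channel h from y = h*x + n."""
    # np.vdot conjugates its first argument: (x^H y) / (x^H x)
    return np.vdot(x_pilots, y_pilots) / np.vdot(x_pilots, x_pilots)

def estimation_error(snr_db, n_pilots=8):
    """Squared error of the LS estimate for one random channel realization."""
    h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    # known QPSK pilots (unit modulus)
    x = np.exp(1j * 2 * np.pi * rng.integers(0, 4, n_pilots) / 4)
    noise_std = 10 ** (-snr_db / 20)
    n = noise_std * (rng.standard_normal(n_pilots)
                     + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2)
    y = h * x + n
    return abs(h - ls_channel_estimate(y, x)) ** 2

# average estimation error at high vs low SNR
err_hi = np.mean([estimation_error(20) for _ in range(2000)])
err_lo = np.mean([estimation_error(0) for _ in range(2000)])
```

For unit-modulus pilots the error variance is roughly the noise variance divided by the number of pilots, so improving the estimate in a low-SINR regime costs resources that could otherwise carry data.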
The main topic of this thesis is non-coherent massive multiple-input multiple-output (NC-mMIMO), which relies on differential M-ary phase shift keying (DMPSK) and the spatial diversity of the antenna arrays to detect the transmitted data without CSI knowledge. On the one hand, hybrid schemes that combine the coherent and non-coherent approaches, getting the best of both worlds, are proposed. These schemes distribute the resources between non-coherent (NC) and coherent data, using the NC data to estimate the channel without pilots and applying the estimated channel to the coherent data. On the other hand, new constellations and user allocation strategies for the multi-user scenario of NC-mMIMO are proposed. The new constellations outperform those in the literature and are obtained using artificial intelligence techniques, more concretely evolutionary computation.

This work has received funding from the European Union Horizon 2020 research and innovation
programme under the Marie Skłodowska-Curie ETN TeamUp5G, grant agreement No.
813391. The PhD student was the Early Stage Researcher (ESR) number 2 of the project.
This work has also received funding from the Spanish National Project IRENE-EARTH
(PID2020-115323RB-C33) (MINECO/AEI/FEDER, UE), which funded the work of some coauthors.

Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Examining committee: Luis Castedo Ribas (President); Matilde Pilar Sánchez Fernández (Secretary); Eva Lagunas Targaron (Member).
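The non-coherent detection principle at the heart of NC-mMIMO can be sketched as follows: consecutive DMPSK symbols see the same (unknown) channel, so the product r_k · conj(r_{k-1}) cancels the channel phase, and summing this product across antennas exploits the array's spatial diversity. The antenna count, SNR and burst length below are illustrative, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4          # DQPSK alphabet size (illustrative)
N_ant = 64     # base-station antennas (illustrative)
symbols = np.exp(2j * np.pi * np.arange(M) / M)

def diff_encode(data):
    """Differential encoding: s_0 = 1, s_k = s_{k-1} * d_k."""
    s = np.empty(len(data) + 1, dtype=complex)
    s[0] = 1.0
    for k, d in enumerate(data):
        s[k + 1] = s[k] * symbols[d]
    return s

data = rng.integers(0, M, 200)
s = diff_encode(data)

# flat channel, constant over the burst, unknown to the receiver
h = (rng.standard_normal((N_ant, 1)) + 1j * rng.standard_normal((N_ant, 1))) / np.sqrt(2)
noise_std = 1.0  # 0 dB per-antenna SNR
n = noise_std * (rng.standard_normal((N_ant, len(s)))
                 + 1j * rng.standard_normal((N_ant, len(s)))) / np.sqrt(2)
r = h * s[None, :] + n

# non-coherent detection: the channel cancels in r_k * conj(r_{k-1});
# summing over antennas provides the combining gain -- no CSI needed
z = np.sum(r[:, 1:] * np.conj(r[:, :-1]), axis=0)
detected = np.round(np.angle(z) * M / (2 * np.pi)).astype(int) % M
ser = np.mean(detected != data)
```

Even at 0 dB per-antenna SNR, the 64-antenna combining makes the symbol error rate essentially zero, which is the spatial-diversity effect the non-coherent schemes rely on.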
MIMO Systems
In recent years, it has become clear that MIMO communication systems are inevitable in the accelerated evolution of high-data-rate applications, due to their potential to dramatically increase spectral efficiency while simultaneously sending individual information to the corresponding users in wireless systems. This book intends to provide highlights of current research topics in the field of MIMO systems and to offer a snapshot of the recent advances and major issues faced today by researchers in MIMO-related areas. The book is written by specialists working in universities and research centers all over the world and covers the fundamental principles and the main advanced topics of high-data-rate wireless communications over MIMO channels. Moreover, the book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.
Exploring Probability Measures with Markov Processes
In many domains where mathematical modelling is applied, a deterministic description of the system at hand is insufficient, and so it is useful to model systems as being in some way stochastic. This is often achieved by modelling the state of the system as being drawn from a probability measure, which is usually given algebraically, i.e. as a formula. While this representation can be useful for deriving certain characteristics of the system, it is by now well appreciated that many questions about stochastic systems are best answered by looking at samples from the associated probability measure. In this thesis, we seek to develop and analyse efficient techniques for generating samples from a given probability measure, with a focus on algorithms which simulate a Markov process with the desired invariant measure.
The first work presented in this thesis considers the use of Piecewise-Deterministic Markov Processes (PDMPs) for generating samples. In contrast to usual approaches, PDMPs i) are defined as continuous-time processes, and ii) are typically non-reversible with respect to their invariant measure. These distinctions pose computational and theoretical challenges for the design, analysis, and implementation of PDMP-based samplers. The key contribution of this work is to develop a transparent characterisation of how one can construct a PDMP (within the class of trajectorially-reversible processes) which admits the desired invariant measure, and to offer actionable recommendations on how these processes should be designed in practice.
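A minimal concrete instance of a PDMP sampler (a standard textbook example, not the thesis's own construction) is the one-dimensional Zig-Zag process targeting a standard Gaussian: the state moves with constant velocity ±1, and the velocity flips at the events of an inhomogeneous Poisson process with rate max(0, v·x), which for this target can be simulated exactly by inversion:

```python
import numpy as np

def zigzag_gaussian(total_time=20000.0, seed=0):
    """1D Zig-Zag process targeting N(0,1).

    With U(x) = x^2/2 (so U'(x) = x), the switching rate is
    lambda(x, v) = max(0, v * U'(x)); along a segment x(s) = x + v*s
    with v^2 = 1 this rate is (v*x + s)^+, whose integrated rate can
    be inverted in closed form to draw the next event time exactly.
    """
    rng = np.random.default_rng(seed)
    x, v, t = 0.0, 1.0, 0.0
    mean_acc, sq_acc = 0.0, 0.0
    while t < total_time:
        a = v * x
        e = rng.exponential()
        if a >= 0:
            tau = -a + np.sqrt(a * a + 2 * e)   # solve a*t + t^2/2 = e
        else:
            tau = -a + np.sqrt(2 * e)           # rate is zero until s = -a
        tau = min(tau, total_time - t)
        # exact time-integrals of x and x^2 along the linear segment
        mean_acc += x * tau + v * tau**2 / 2
        sq_acc += x * x * tau + x * v * tau**2 + tau**3 / 3
        x += v * tau
        t += tau
        v = -v                                  # flip velocity at the event
    return mean_acc / total_time, sq_acc / total_time

mean, second_moment = zigzag_gaussian()
```

Ergodic averages are computed as exact integrals along the piecewise-linear trajectory, illustrating one practical benefit of the continuous-time formulation: no discretisation of the path is required.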
The second work presented in this thesis considers the task of sampling from a probability measure on a discrete space. While work in recent years has made it possible to apply sampling algorithms to probability measures with differentiable densities on continuous spaces in a reasonably generic way, samplers on discrete spaces are still largely derived on a case-by-case basis. The contention of this work is that this is not necessary, and that one can in fact define quite generally applicable algorithms which can sample efficiently from discrete probability measures. The contributions are then to propose a small collection of algorithms for this task, and verify their efficiency empirically. Building on the previous chapter's work, our samplers are again defined in continuous time and non-reversible, each of which offers noticeable benefits in efficiency.
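One example of a generally applicable discrete sampler (a discrete-time, reversible sketch in the spirit of locally-informed proposals, not the continuous-time non-reversible samplers proposed in the thesis) is a locally-balanced Metropolis algorithm on {0,1}^d: propose flipping bit i with probability proportional to sqrt(pi(flip_i(x))/pi(x)), then apply a Metropolis-Hastings correction. The factorized target below is chosen so its marginals are known exactly:

```python
import numpy as np

def locally_informed_sampler(theta, n_steps=50000, seed=0):
    """Locally-balanced Metropolis on {0,1}^d.

    Target: pi(x) proportional to exp(theta . x), whose coordinates are
    independent with P(x_i = 1) = sigmoid(theta_i).
    """
    rng = np.random.default_rng(seed)
    d = len(theta)
    x = np.zeros(d)
    counts = np.zeros(d)
    for _ in range(n_steps):
        # log pi(flip_i(x)) - log pi(x) = theta_i * (1 - 2 x_i)
        log_ratio = theta * (1 - 2 * x)
        w = np.exp(0.5 * log_ratio)          # balancing function g(t) = sqrt(t)
        probs = w / w.sum()
        i = rng.choice(d, p=probs)
        # proposal weights at the flipped state: only coordinate i changes,
        # because this target factorizes over coordinates
        w_new = w.copy()
        w_new[i] = 1.0 / w[i]
        q_fwd = probs[i]
        q_bwd = w_new[i] / w_new.sum()
        accept = min(1.0, np.exp(log_ratio[i]) * q_bwd / q_fwd)
        if rng.random() < accept:
            x[i] = 1 - x[i]
        counts += x
    return counts / n_steps

theta = np.array([-1.0, 0.0, 2.0])
marginals = locally_informed_sampler(theta)
```

The empirical marginals can be checked against the exact values sigmoid(theta), which is what makes this toy target convenient for verifying correctness.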
The third work presented in this thesis concerns a theoretical study of a particular class of Markov chain-based sampling algorithms which make use of parallel computing resources. The Markov chains produced by this algorithm are mathematically equivalent to a standard Metropolis-Hastings chain, but their real-time convergence properties are affected nontrivially by the application of parallelism. The contribution of this work is to analyse the convergence behaviour of these chains, and to use the 'optimal scaling' framework (as developed by Roberts, Rosenthal, and others) to make recommendations concerning the tuning of such algorithms in practice.
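A minimal random-walk Metropolis sampler makes the optimal-scaling setting concrete: with Gaussian proposals of scale 2.38/sqrt(d) on an i.i.d. Gaussian target, the acceptance rate should land near the well-known 0.234 value in high dimension. The target and dimension below are illustrative, not those analysed in the thesis:

```python
import numpy as np

def rwm(log_pi, x0, n_steps, step_scale, seed=0):
    """Random-walk Metropolis with isotropic Gaussian proposals."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    d = x.size
    samples, accepts = [], 0
    lp = log_pi(x)
    for _ in range(n_steps):
        prop = x + step_scale * rng.standard_normal(d)
        lp_prop = log_pi(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
            accepts += 1
        samples.append(x.copy())                  # rejected moves repeat x
    return np.array(samples), accepts / n_steps

# 10-dimensional standard Gaussian target; optimal-scaling heuristic
# suggests step_scale ~ 2.38 / sqrt(d) for i.i.d. targets
d = 10
log_pi = lambda x: -0.5 * np.dot(x, x)
samples, acc_rate = rwm(log_pi, np.zeros(d), n_steps=50000,
                        step_scale=2.38 / np.sqrt(d))
```

The 0.234 figure is an asymptotic (d to infinity) result, so at moderate dimension the observed acceptance rate only approximates it; the point of the tuning recommendation is that it is robust across targets of this product form.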
The introductory chapters provide a general overview of the task of generating samples from a probability measure, with particular focus on methods involving Markov processes. There is also an interlude on the relative benefits of i) continuous-time and ii) non-reversible Markov processes for sampling, which is intended to provide additional context for the reading of the first two works.

PhD studentship paid for by the Cantab Capital Institute for the Mathematics of Information.
Assessing, testing, and challenging the computational power of quantum devices
Randomness is an intrinsic feature of quantum theory. The outcome of any measurement will be random, sampled from a probability distribution that is defined by the measured quantum state. The task of sampling from a prescribed probability distribution therefore seems to be a natural technological application of quantum devices. And indeed, certain random sampling tasks have been proposed to experimentally demonstrate the speedup of quantum over classical computation, so-called “quantum computational supremacy”.
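The sampling task itself is easy to state classically: given the amplitudes of a (small) quantum state, computational-basis measurement outcomes follow the Born rule P(k) = |amplitude_k|^2. A toy classical simulation for a two-qubit Bell state, purely illustrative; the point of quantum computational supremacy is precisely that such simulation becomes intractable for large systems:

```python
import numpy as np

def born_sample(state, n_shots, seed=0):
    """Sample computational-basis outcomes from a pure state vector
    according to the Born rule: P(outcome k) = |amplitude_k|^2."""
    rng = np.random.default_rng(seed)
    probs = np.abs(state) ** 2
    probs = probs / probs.sum()      # guard against rounding error
    return rng.choice(len(state), size=n_shots, p=probs)

# |psi> = (|00> + |11>) / sqrt(2): a Bell state over 2 qubits
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
shots = born_sample(psi, 10000)
```

Only the outcomes 00 (index 0) and 11 (index 3) ever appear, each with frequency close to one half, reflecting the state's perfect correlations.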
In the research presented in this thesis, I investigate the complexity-theoretic and physical foundations of quantum sampling algorithms. Using the theory of computational complexity, I assess the computational power of natural quantum simulators and close loopholes in the complexity-theoretic argument for the classical intractability of quantum samplers (Part I). In particular, I prove anticoncentration for quantum circuit families that give rise to a 2-design and review methods for proving average-case hardness. I present quantum random sampling schemes that are tailored to large-scale quantum simulation hardware but at the same time rise to the highest standard in terms of their complexity-theoretic underpinning. Using methods from property testing and quantum system identification, I shed light on the question of how and under which conditions quantum sampling devices can be tested or verified in regimes that are not simulable on classical computers (Part II). I present a no-go result that prevents efficient verification of quantum random sampling schemes, as well as approaches by which this no-go result can be circumvented. In particular, I develop fully efficient verification protocols in what I call the measurement-device-dependent scenario, in which single-qubit measurements are assumed to function with high accuracy. Finally, I try to understand the physical mechanisms governing the computational boundary between classical and quantum computing devices by challenging their computational power using tools from computational physics and the theory of computational complexity (Part III). I develop efficiently computable measures of the infamous Monte Carlo sign problem and assess those measures in terms of both their practicability as tools for alleviating or easing the sign problem and the computational complexity of this task.
An overarching theme of the thesis is the quantum sign problem, which arises due to destructive interference between paths, an intrinsically quantum effect. The (non-)existence of a sign problem takes on the role of a criterion which delineates the boundary between classical and quantum computing devices. I begin the thesis by identifying the quantum sign problem as a root of the computational intractability of quantum output probabilities. It turns out that the intricate structure of the probability distributions to which the sign problem gives rise prohibits their verification from few samples. In an ironic twist, I show that assessing the intrinsic sign problem of a quantum system is again an intractable problem.