231 research outputs found

    Proceedings of the 2021 Symposium on Information Theory and Signal Processing in the Benelux, May 20-21, TU Eindhoven


    Identification through Finger Bone Structure Biometrics


    Analysis and Design of Algorithms for the Improvement of Non-coherent Massive MIMO based on DMPSK for beyond 5G systems

    International Mention in the doctoral degree.

    Nowadays, it is nearly impossible to think of a service that does not rely on wireless communications. By the end of 2022, mobile internet accounted for 60% of total global online traffic, and both the number of subscribers and the traffic handled by each subscriber keep growing. Higher data rates, smaller end-to-end (E2E) delays and a greater number of devices are current priorities for the development of mobile communications. Furthermore, these demands should also be met in scenarios with stringent conditions, such as wireless channels that vary very fast (in time or frequency) or power-constrained scenarios, mainly found when the equipment is battery powered. Since most wireless communication techniques and standards rely on the wireless channel being characterized or estimated so that it can be pre- or post-compensated in transmission (TX) or reception (RX), a clear problem arises when the channels vary rapidly or the available power is constrained. To estimate the wireless channel and obtain the so-called channel state information (CSI), some of the available resources (in time, frequency or any other dimension) are spent on known signals, typically called pilots, included in the TX and RX, and are therefore unavailable for data transmission. If the channels vary rapidly, they must be estimated many times, which results in a very low data efficiency of the communication link. Also, when the power is limited or the wireless link distance is large, the resulting signal-to-interference-plus-noise ratio (SINR) will be low, a parameter directly related to the quality of the channel estimation and the performance of data reception.

    This problem is aggravated in massive multiple-input multiple-output (massive MIMO), a promising technique for future wireless communications since it can increase data rates, improve reliability and cope with a larger number of simultaneous devices. In massive MIMO, the base station (BS) is typically equipped with a large number of coordinated antennas. In these scenarios, the channels must be estimated for each antenna (or at least for each user), so the aforementioned channel estimation problem becomes worse. In this context, algorithms and techniques for massive MIMO without CSI are of interest.

    The main topic of this thesis is non-coherent massive multiple-input multiple-output (NC-mMIMO), which relies on differential M-ary phase shift keying (DMPSK) and the spatial diversity of the antenna arrays to detect the transmitted data without CSI knowledge; a minimal illustrative sketch of this detection principle is given below. On the one hand, hybrid schemes that combine the coherent and non-coherent approaches to get the best of both worlds are proposed. These schemes are based on distributing the resources between non-coherent (NC) and coherent data, using the NC data to estimate the channel without pilots and then using the estimated channel for the coherent data. On the other hand, new constellations and user allocation strategies for the multi-user NC-mMIMO scenario are proposed.
    The new constellations, obtained using artificial intelligence techniques, specifically evolutionary computation, outperform those in the literature.

    This work has received funding from the European Union Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie ETN TeamUp5G, grant agreement No. 813391. The PhD student was Early Stage Researcher (ESR) number 2 of the project. This work has also received funding from the Spanish National Project IRENE-EARTH (PID2020-115323RB-C33) (MINECO/AEI/FEDER, UE), which funded the work of some coauthors.

    Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Committee: Chair: Luis Castedo Ribas; Secretary: Matilde Pilar Sánchez Fernández; Member: Eva Lagunas Targaron.
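    The following is a minimal, hedged sketch (not code from the thesis) of the basic principle that NC-mMIMO with DMPSK exploits: data are encoded in the phase difference between consecutive symbols, and a receiver with many antennas recovers that difference by correlating consecutive received vectors, so no channel estimate or pilots are required. The constellation size, antenna count, SNR and block-fading assumption below are illustrative choices, not values from the thesis.

```python
import numpy as np

# Hedged sketch of non-coherent DMPSK detection with a large antenna array;
# NOT code from the thesis, and every parameter value is an assumption.

M = 4          # DMPSK constellation size (assumed)
N_ANT = 128    # number of base-station antennas (assumed)
N_SYM = 1000   # number of data symbols (assumed)
SNR_DB = 0     # per-antenna SNR in dB (assumed)

rng = np.random.default_rng(0)

# Differential encoding: s[k+1] = s[k] * exp(j*2*pi*d[k]/M), with s[0] = 1.
data = rng.integers(0, M, N_SYM)
increments = np.exp(1j * 2 * np.pi * data / M)
s = np.concatenate(([1.0 + 0j], np.cumprod(increments)))

# Single-user flat channel, assumed constant over consecutive symbols.
h = (rng.standard_normal(N_ANT) + 1j * rng.standard_normal(N_ANT)) / np.sqrt(2)
noise_std = np.sqrt(10.0 ** (-SNR_DB / 10) / 2)
noise = noise_std * (rng.standard_normal((N_ANT, N_SYM + 1))
                     + 1j * rng.standard_normal((N_ANT, N_SYM + 1)))
y = np.outer(h, s) + noise  # received samples, antennas x time

# Non-coherent detection: the unknown channel phase cancels in the
# antenna-wise correlation of consecutive received vectors, leaving
# approximately ||h||^2 * exp(j*2*pi*d[k]/M) plus noise.
corr = np.sum(np.conj(y[:, :-1]) * y[:, 1:], axis=0)
detected = np.round(np.angle(corr) / (2 * np.pi / M)).astype(int) % M

print("symbol error rate:", np.mean(detected != data))
```

    The averaging over antennas in the correlation step is what makes the scheme benefit from large arrays: noise and cross terms are suppressed relative to the useful term as the number of antennas grows.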

    Finger Vein Verification with a Convolutional Auto-encoder


    MIMO Systems

    In recent years, it has become clear that MIMO communication systems are all but inevitable in the accelerated evolution of high-data-rate applications, due to their potential to dramatically increase spectral efficiency while simultaneously sending individual information to the corresponding users in wireless systems. This book intends to highlight current research topics in the field of MIMO systems and to offer a snapshot of the recent advances and major issues faced today by researchers in MIMO-related areas. The book is written by specialists working in universities and research centers all over the world and covers the fundamental principles and main advanced topics of high-data-rate wireless communication systems over MIMO channels. Moreover, the book provides a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Multiresolution image models and estimation techniques


    Assessing, testing, and challenging the computational power of quantum devices

    Randomness is an intrinsic feature of quantum theory. The outcome of any measurement is random, sampled from a probability distribution defined by the measured quantum state. The task of sampling from a prescribed probability distribution therefore seems to be a natural technological application of quantum devices. Indeed, certain random sampling tasks have been proposed as experimental demonstrations of the speedup of quantum over classical computation, so-called “quantum computational supremacy”. In the research presented in this thesis, I investigate the complexity-theoretic and physical foundations of quantum sampling algorithms.

    Using the theory of computational complexity, I assess the computational power of natural quantum simulators and close loopholes in the complexity-theoretic argument for the classical intractability of quantum samplers (Part I). In particular, I prove anticoncentration for quantum circuit families that give rise to a 2-design and review methods for proving average-case hardness. I present quantum random sampling schemes that are tailored to large-scale quantum simulation hardware but at the same time rise to the highest standard in terms of their complexity-theoretic underpinning.

    Using methods from property testing and quantum system identification, I shed light on the question of how and under which conditions quantum sampling devices can be tested or verified in regimes that are not simulable on classical computers (Part II). I present a no-go result that prevents efficient verification of quantum random sampling schemes, as well as approaches by which this no-go result can be circumvented. In particular, I develop fully efficient verification protocols in what I call the measurement-device-dependent scenario, in which single-qubit measurements are assumed to function with high accuracy.

    Finally, I try to understand the physical mechanisms governing the computational boundary between classical and quantum computing devices by challenging their computational power using tools from computational physics and the theory of computational complexity (Part III). I develop efficiently computable measures of the infamous Monte Carlo sign problem and assess those measures both in terms of their practicability as a tool for alleviating or easing the sign problem and in terms of the computational complexity of that task (a small illustrative sketch of such a measure is given below).

    An overarching theme of the thesis is the quantum sign problem, which arises due to destructive interference between paths, an intrinsically quantum effect. The (non-)existence of a sign problem takes on the role of a criterion that delineates the boundary between classical and quantum computing devices. I begin the thesis by identifying the quantum sign problem as a root of the computational intractability of quantum output probabilities. It turns out that the intricate structure of the probability distributions the sign problem gives rise to prohibits their verification from few samples. In an ironic twist, I show that assessing the intrinsic sign problem of a quantum system is again an intractable problem.
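    As a rough illustration of what an “efficiently computable measure of the sign problem” can look like, here is a hedged sketch that is not the specific measure developed in the thesis: a Hamiltonian is stoquastic in a given basis when all of its off-diagonal entries are real and non-positive, in which case quantum Monte Carlo has no sign problem in that basis, so the total weight of off-diagonal entries violating that condition serves as a simple proxy for non-stoquasticity. The function name and numerical tolerance are illustrative assumptions.

```python
import numpy as np

# Hedged illustration, NOT the measures developed in the thesis: quantify the
# sign problem by the total weight of off-diagonal Hamiltonian entries that
# are positive or complex (a stoquastic Hamiltonian has none of these).

def nonstoquasticity(H: np.ndarray, tol: float = 1e-12) -> float:
    """Total magnitude of off-diagonal entries that are positive or complex.

    Returns 0 exactly when H is stoquastic in the computational basis.
    """
    off = H - np.diag(np.diag(H))
    violates = (off.real > 0) | (np.abs(off.imag) > tol)
    return float(np.sum(np.abs(np.where(violates, off, 0))))

# Example: a transverse-field term on a single qubit.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
print(nonstoquasticity(-X))  # 0.0 -> stoquastic, no sign problem
print(nonstoquasticity(+X))  # 2.0 -> positive off-diagonal entries
```

    Such a proxy is basis dependent, since a change of basis can reduce or increase the positive off-diagonal weight; as the abstract notes, assessing and easing the sign problem in this way is itself a computationally hard task.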