Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping
In this paper, we provide for the first time a systematic comparison of
distribution matching (DM) and sphere shaping (SpSh) algorithms for short
blocklength probabilistic amplitude shaping. For asymptotically large
blocklengths, constant composition distribution matching (CCDM) is known to
generate the target capacity-achieving distribution. As the blocklength
decreases, however, the resulting rate loss diminishes the efficiency of CCDM.
We claim that for such short blocklengths and over the additive white Gaussian
noise (AWGN) channel, the objective of shaping should be reformulated as obtaining
the most energy-efficient signal space for a given rate (rather than matching
distributions). In light of this interpretation, multiset-partition DM (MPDM),
enumerative sphere shaping (ESS) and shell mapping (SM) are reviewed as
energy-efficient shaping techniques. Numerical results show that MPDM and SpSh
have smaller rate losses than CCDM. SpSh--whose sole objective is to maximize
the energy efficiency--is shown to have the minimum rate loss amongst all. We
provide simulation results of the end-to-end decoding performance showing that
up to 1 dB improvement in power efficiency over uniform signaling can be
obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a
discussion on the complexity of these algorithms from the perspective of
latency, storage and computations.
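The rate loss that drives this comparison can be made concrete. A constant-composition code of blocklength n with amplitude counts (c_1, ..., c_m) can address at most n!/(c_1! ... c_m!) sequences, so CCDM carries k = floor(log2 of that count) input bits, and the per-symbol rate loss relative to the entropy H(P) of the target distribution is H(P) - k/n. A minimal sketch, with an arbitrary illustrative composition (not one from the paper):

```python
from math import factorial, floor, log2

def ccdm_rate_loss(counts):
    """Per-symbol rate loss of a constant-composition code.

    counts[i] = occurrences of amplitude i in every output sequence;
    blocklength n = sum(counts).
    """
    n = sum(counts)
    # Number of distinct sequences with exactly this composition.
    num_seqs = factorial(n)
    for c in counts:
        num_seqs //= factorial(c)
    k = floor(log2(num_seqs))          # input bits CCDM can carry
    entropy = -sum((c / n) * log2(c / n) for c in counts)
    return entropy - k / n             # bits per amplitude symbol

# Same target distribution, two blocklengths: the rate loss shrinks
# as the blocklength grows, matching the asymptotic optimality of CCDM.
short = ccdm_rate_loss([2, 1, 1])        # n = 4
long_ = ccdm_rate_loss([200, 100, 100])  # n = 400
print(short, long_)
```

The short-blocklength loss dominates here, which is exactly the regime where the energy-efficiency view of shaping becomes preferable.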
FlexCore: Massively Parallel and Flexible Processing for Large MIMO Access Points
Large MIMO base stations remain among wireless network designers' best tools for increasing wireless throughput while serving many clients, but current system designs sacrifice throughput with simple linear MIMO detection algorithms. Higher-performance detection techniques are known, but remain off the table because these systems parallelize their computation at the level of a whole OFDM subcarrier, which suffices only for the less demanding linear detection approaches they adopt. This paper presents FlexCore, the first computational architecture capable of parallelizing the detection of large numbers of mutually interfering information streams at a granularity below individual OFDM subcarriers, in a nearly embarrassingly parallel manner, while utilizing any number of available processing elements. For 12 clients sending 64-QAM symbols to a 12-antenna base station, our WARP testbed evaluation shows network throughput similar to the state of the art while using an order of magnitude fewer processing elements. For the same scenario, our combined WARP-GPU testbed evaluation demonstrates a 19x computational speedup, with 97% increased energy efficiency, compared with the state of the art. Finally, for the same scenario, an FPGA-based comparison between FlexCore and the state of the art shows that FlexCore can achieve up to 96% better energy efficiency and can offer up to 32x the processing throughput.
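For context, the "simple linear detection" these designs fall back on amounts to a per-subcarrier matrix solve such as MMSE equalization, x = (H^H H + sigma^2 I)^{-1} H^H y, followed by slicing. A minimal numpy sketch, with illustrative dimensions and alphabet (not FlexCore's 12x12, 64-QAM setting):

```python
import numpy as np

def mmse_detect(H, y, noise_var, alphabet):
    """Linear MMSE detection: equalize, then slice to the nearest symbol."""
    nt = H.shape[1]
    # MMSE equalizer: (H^H H + sigma^2 I)^{-1} H^H
    W = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(nt),
                        H.conj().T)
    x_eq = W @ y
    # Hard decision: nearest constellation point per stream.
    return alphabet[np.argmin(np.abs(x_eq[:, None] - alphabet[None, :]),
                              axis=1)]

rng = np.random.default_rng(0)
alphabet = np.array([-3, -1, 1, 3], dtype=complex)  # 4-PAM for brevity
nt, nr = 4, 8
H = (rng.standard_normal((nr, nt))
     + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
x = rng.choice(alphabet, nt)
y = H @ x + 0.01 * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
print(mmse_detect(H, y, 1e-4, alphabet))  # recovers x at this SNR
```

The whole-subcarrier granularity criticized above comes from running one such solve per subcarrier; the higher-performance (e.g. tree-search) detectors FlexCore targets expose parallelism below this level.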
Application of Multi-core and GPU Architectures on Signal Processing: Case Studies
In this article, we report on some of the techniques and developments being carried out within the INCO2 group. The results follow the interdisciplinary approach with which we tackle signal processing applications. The chosen case studies show different stages of development: we present algorithms already completed and in use in practical applications, as well as new ideas that may represent a starting point and are expected to deliver good results in the short and medium term.
Advanced Wireless Digital Baseband Signal Processing Beyond 100 Gbit/s
The continuing trend towards higher data rates in wireless communication systems will, in addition to higher spectral efficiency and the lowest signal processing latencies, lead to throughput requirements for the digital baseband signal processing beyond 100 Gbit/s, which is at least one order of magnitude higher than the tens of Gbit/s targeted in the 5G standardization. At the same time, advances in silicon technology due to shrinking feature sizes and increased performance parameters alone will not provide the necessary gain, especially in energy efficiency for wireless transceivers, which have tightly constrained power and energy budgets. In this paper, we highlight the challenges for wireless digital baseband signal processing beyond 100 Gbit/s and the limitations of today's architectures. Our focus lies on channel decoding and MIMO detection, which are major sources of complexity in digital baseband signal processing. We discuss techniques at the algorithmic and architectural levels which aim to close this gap. For the first time, we show Turbo-Code decoding techniques towards 100 Gbit/s and a complete MIMO receiver beyond 100 Gbit/s in 28 nm technology.
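The gap described above can be sized with a back-of-envelope calculation: assuming a baseband clock of about 1 GHz (an illustrative figure, not one from the paper), sustaining 100 Gbit/s means the decoding/detection pipeline must deliver on the order of 100 information bits every clock cycle, roughly ten times the per-cycle work of a tens-of-Gbit/s 5G target. A rough sketch:

```python
def bits_per_cycle(throughput_gbps, clock_ghz):
    """Decoded bits that must leave the pipeline on each clock cycle."""
    return throughput_gbps / clock_ghz

# Assumed 1 GHz clock: 100 Gbit/s vs. a ~10 Gbit/s 5G-class target.
print(bits_per_cycle(100, 1.0))  # -> 100.0 bits per cycle
print(bits_per_cycle(10, 1.0))   # -> 10.0 bits per cycle
```

Since clock frequency cannot scale by that factor, the difference must come from the algorithm- and architecture-level parallelism the paper discusses.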
Parallel signal detection for generalized spatial modulation MIMO systems
Generalized Spatial Modulation is a recently developed technique designed to enhance the efficiency of transmissions in MIMO systems. However, the procedure for correctly retrieving the sent signal at the receiving end is quite demanding. Specifically, the computation of the maximum likelihood solution is computationally very expensive. In this paper, we propose a parallel method for the computation of the maximum likelihood solution using the parallel computing library OpenMP. The proposed parallel algorithm computes the maximum likelihood solution faster than the sequential version, and substantially reduces the worst-case computing times. This work has been partially supported by the Spanish Ministry of Science, Innovation and Universities and by the European Union through grant RTI2018-098085-BC41 (MCUI/AEI/FEDER), by GVA through PROMETEO/2019/109, and by RED 2018-102668-T.
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. García Mollá, V.M.; Simarro, M.A.; Martínez Zaldívar, F.J.; Boratto, M.; Alonso-Jordá, P.; Gonzalez, A. (2022). Parallel signal detection for generalized spatial modulation MIMO systems. The Journal of Supercomputing. 78(5):7059-7077. https://doi.org/10.1007/s11227-021-04163-y
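The maximum-likelihood detector whose cost motivates this work minimizes ||y - Hx||^2 over every candidate transmit vector, so the work grows with the full candidate set and parallelizes naturally by splitting that set across threads (OpenMP in the paper). A minimal sketch with generic dimensions and a plain candidate loop, not the paper's GSM-specific antenna-activation patterns:

```python
import itertools
import numpy as np

def ml_detect(H, y, alphabet, nt):
    """Exhaustive ML detection: argmin over all |alphabet|^nt candidates.

    Each candidate's metric is independent of the others, so the product
    space can be partitioned across workers (OpenMP threads in the paper).
    """
    best, best_metric = None, np.inf
    for cand in itertools.product(alphabet, repeat=nt):
        x = np.array(cand)
        metric = np.linalg.norm(y - H @ x) ** 2
        if metric < best_metric:
            best, best_metric = x, metric
    return best

rng = np.random.default_rng(1)
alphabet = [-1.0, 1.0]          # BPSK keeps the search space small
nt, nr = 4, 6
H = rng.standard_normal((nr, nt))
x = rng.choice(alphabet, nt)
y = H @ x + 0.05 * rng.standard_normal(nr)
print(ml_detect(H, y, alphabet, nt))  # matches x at this noise level
```

The exponential candidate count (|alphabet|^nt, further multiplied by antenna-activation patterns in GSM) is what makes the sequential search prohibitive and the parallel partitioning worthwhile.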
Optimal Quantum Measurements of Expectation Values of Observables
Experimental characterizations of a quantum system involve the measurement of
expectation values of observables for a preparable state |psi> of the quantum
system. Such expectation values can be measured by repeatedly preparing |psi>
and coupling the system to an apparatus. For this method, the precision of the
measured value scales as 1/sqrt(N) for N repetitions of the experiment. For the
problem of estimating the parameter phi in an evolution exp(-i phi H), it is
possible to achieve precision 1/N (the quantum metrology limit) provided that
sufficient information about H and its spectrum is available. We consider the
more general problem of estimating expectations of operators A with minimal
prior knowledge of A. We give explicit algorithms that approach precision 1/N
given a bound on the eigenvalues of A or on their tail distribution. These
algorithms are particularly useful for simulating quantum systems on quantum
computers because they enable efficient measurement of observables and
correlation functions. Our algorithms are based on a method for efficiently
measuring the complex overlap of |psi> and U|psi>, where U is an implementable
unitary operator. We explicitly consider the issue of confidence levels in
measuring observables and overlaps and show that, as expected, confidence
levels can be improved exponentially with linear overhead. We further show that
the algorithms given here can typically be parallelized with minimal increase
in resource usage.
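The 1/sqrt(N) scaling of the repeated-preparation baseline is simply the standard error of a sample mean: estimating <A> from N independent measurements whose outcomes have variance sigma^2 gives statistical error sigma/sqrt(N). A classical simulation of that baseline (the two-outcome observable and probabilities are arbitrary illustrations; the 1/N metrology-limit algorithms of the paper are not reproduced here):

```python
import numpy as np

def estimate_expectation(p, values, n_shots, rng):
    """Sample-mean estimate of <A> from n_shots repeated measurements."""
    samples = rng.choice(values, size=n_shots, p=p)
    return samples.mean()

rng = np.random.default_rng(42)
values = np.array([-1.0, 1.0])  # eigenvalues of a two-outcome observable
p = np.array([0.25, 0.75])      # Born-rule probabilities for |psi>
exact = float((values * p).sum())   # <A> = 0.5

# Average the absolute error over many trials at each N: the error
# should shrink by about 10x when N grows by 100x.
mean_err = {}
for n in (100, 10_000):
    errs = [abs(estimate_expectation(p, values, n, rng) - exact)
            for _ in range(200)]
    mean_err[n] = float(np.mean(errs))
    print(n, mean_err[n])
```

The algorithms in the paper beat this baseline by replacing independent repetitions with coherent use of the state, approaching error 1/N instead.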