
    Efficient SER estimation for MIMO detectors via importance sampling schemes

    In this paper we propose two importance sampling methods for the efficient symbol error rate (SER) estimation of maximum likelihood (ML) multiple-input multiple-output (MIMO) detectors. Conditioned on a given transmitted symbol, computing the SER requires the evaluation of an integral outside a given polytope in a high-dimensional space, for which a closed-form solution does not exist. Therefore, Monte Carlo (MC) simulation is typically used to estimate the SER, although a naive or raw MC implementation can be very inefficient at high signal-to-noise ratios or for systems with stringent SER requirements. A reduced-variance estimator is provided by the Truncated Hypersphere Importance Sampling (THIS) method, which samples from a proposal density that excludes the largest hypersphere inscribed in the Voronoi region of the transmitted vector. A much more efficient estimator is provided by the existing ALOE (which stands for "At Least One rare Event") method, which samples conditionally on an error taking place. The paper describes these two IS methods in detail, discussing their advantages and limitations and comparing their performance. The work of V. Elvira was partially supported by the Agence Nationale de la Recherche of France under the PISCES project (ANR-17-CE40-0031-01) and the French-American Fulbright Commission. The work of I. Santamaria was partly supported by the Ministerio de Economía y Competitividad (MINECO) of Spain, and AEI/FEDER funds of the E.U., under grant TEC2016-75067-C4-4-R (CARMEN).
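
    The crude Monte Carlo baseline that both IS methods improve upon can be sketched as follows (a toy setup with assumed parameters — a 2x2 Rayleigh channel, a QPSK alphabet, and exhaustive ML search — not the authors' code):

```python
# Naive Monte Carlo SER estimation for an ML MIMO detector (toy sketch;
# the antenna counts, alphabet, and SNR normalization are assumptions).
import itertools
import numpy as np

rng = np.random.default_rng(0)
nt, nr = 2, 2                                   # transmit / receive antennas
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
# All candidate transmit vectors, for exhaustive ML detection.
candidates = np.array(list(itertools.product(qpsk, repeat=nt)))

def ml_detect(y, H):
    """Return the candidate x minimizing ||y - Hx||^2."""
    dists = np.linalg.norm(y[None, :] - candidates @ H.T, axis=1)
    return candidates[np.argmin(dists)]

def naive_mc_ser(snr_db, n_trials=5000):
    """Estimate the vector error rate by counting ML detection errors."""
    sigma = np.sqrt(nt / (2 * 10**(snr_db / 10)))
    errors = 0
    for _ in range(n_trials):
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        x = candidates[rng.integers(len(candidates))]
        noise = sigma * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
        if not np.allclose(ml_detect(H @ x + noise, H), x):
            errors += 1
    return errors / n_trials

ser = naive_mc_ser(10.0)
```

    At high SNR the error count is dominated by rare events, which is exactly where the variance of this estimator explodes and the proposal densities of THIS and ALOE pay off.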

    Efficient recovery algorithm for discrete valued sparse signals using an ADMM approach

    Motivated by applications in wireless communications, in this paper we propose a reconstruction algorithm for sparse signals whose values are taken from a discrete set, using a limited number of noisy observations. Unlike conventional compressed sensing algorithms, the proposed approach incorporates knowledge of the discrete-valued nature of the signal in the detection process. This is accomplished through the alternating direction method of multipliers (ADMM), which is applied as a heuristic to decompose the associated maximum likelihood detection problem in order to find candidate solutions with low computational complexity. Numerical results in different scenarios show that the proposed algorithm achieves very competitive recovery error rates compared with other existing suboptimal approaches.
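
    The idea of injecting alphabet knowledge through the ADMM splitting can be sketched as follows (a minimal heuristic with an assumed formulation — a {-1, 0, +1} alphabet and a plain least-squares data term — not the authors' exact algorithm):

```python
# ADMM heuristic for discrete-valued sparse recovery (illustrative sketch;
# the alphabet, penalty rho, and iteration count are assumptions).
import numpy as np

def admm_discrete(A, b, alphabet=(-1.0, 0.0, 1.0), rho=1.0, iters=100):
    """Approximately solve min ||Ax - b||^2 s.t. x has entries in alphabet."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))   # cached x-update factor
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    alph = np.asarray(alphabet)
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))          # least-squares step
        v = x + u
        # z-update: project onto the discrete set -- this is where the
        # discrete-valued nature of the signal enters the detection.
        z = alph[np.argmin(np.abs(v[:, None] - alph[None, :]), axis=1)]
        u = u + x - z                          # dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20); x_true[[2, 7, 15]] = [1, -1, 1]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = admm_discrete(A, b)
```

    Because the z-update is a nonconvex projection, the scheme is a heuristic rather than an exact ML solver, which matches the "candidate solutions with low computational complexity" framing of the abstract.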

    Modulus Zero-Forcing Detection for MIMO Channels

    We propose a modulus-based zero-forcing (MZF) detection for multi-input multi-output (MIMO) channels. Traditionally, a ZF detector nulls out all interference from other layers when detecting the current layer, which can yield suboptimal detection performance due to the noise-enhancement issue. In many communication systems, finite alphabets such as M-ary quadrature amplitude modulation (QAM) are widely used, which comprise \sqrt{M}-ary pulse amplitude modulation (PAM) symbols for the real and imaginary parts. With finite alphabets, one feasible way to improve ZF detection is to allow controllable interference that can be removed by modulus operations.
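
    The modulus idea can be illustrated with a toy example (an assumed 16-QAM setting, not the paper's detector): interference that is an integer multiple of the modulus folds away under a centered modulo, so the detector need not null it out.

```python
# Centered modulo removing controllable interference for a 4-PAM symbol
# (toy sketch; the modulus convention 2*sqrt(M) is an assumption).
import numpy as np

M = 16                            # 16-QAM -> real/imag parts are 4-PAM
modulus = 2 * int(np.sqrt(M))     # 8

def fold(r):
    """Map r into the interval [-modulus/2, modulus/2)."""
    return (r + modulus / 2) % modulus - modulus / 2

pam = np.array([-3, -1, 1, 3])    # 4-PAM alphabet
s = 3                             # transmitted PAM symbol
interference = 2 * modulus        # controllable: an integer multiple of 8
r = s + interference + 0.1        # received sample with small noise
detected = pam[np.argmin(np.abs(fold(r) - pam))]
```

    Folding leaves the noise untouched while erasing the lattice-aligned interference, which is why allowing such interference can beat strict nulling.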

    Low-Complexity Voronoi Shaping for the Gaussian Channel

    Voronoi constellations (VCs) are finite sets of vectors of a coding lattice enclosed by the translated Voronoi region of a shaping lattice, which is a sublattice of the coding lattice. In conventional VCs, the shaping lattice is a scaled-up version of the coding lattice. In this paper, we design low-complexity VCs with a cubic coding lattice of up to 32 dimensions, in which pseudo-Gray labeling is applied to minimize the bit error rate. The designed VCs have considerable shaping gains of up to 1.03 dB and finer choices of spectral efficiencies in practice compared with conventional VCs. A mutual information estimation method and a log-likelihood approximation method based on importance sampling for very large constellations are proposed and applied to the designed VCs. With error-control coding, the proposed VCs can have higher information rates than the conventional scaled VCs because of their inherently good pseudo-Gray labeling feature, with a lower decoding complexity.
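
    A conventional scaled VC can be sketched in miniature (toy parameters — a 2-dimensional cubic coding lattice Z^n and shaping lattice r*Z^n — far smaller than the 32-dimensional designs above):

```python
# Toy Voronoi constellation: coding lattice Z^n, shaping lattice r*Z^n.
# A centered modulo maps each coset representative into the shaping
# lattice's Voronoi region, bounding the transmit power.
import itertools
import numpy as np

n, r = 2, 4                       # dimension and scaling factor (assumed)

def vc_encode(c):
    """Map coding-lattice point c into the Voronoi region of r*Z^n."""
    c = np.asarray(c, dtype=float)
    return c - r * np.round(c / r)

# One constellation point per coset of r*Z^n in Z^n: r^n points in total.
points = {tuple(vc_encode(c)) for c in itertools.product(range(r), repeat=n)}
```

    The spectral efficiency is log2(r^n)/n bits per dimension here; replacing the scaled shaping lattice with a denser sublattice is what buys the shaping gain reported in the paper.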

    Parallelization of enumeration algorithms for the shortest vector problem on shared- and distributed-memory systems

    Lattice-based cryptography has become a central topic over the last decade, since this type of cryptography is believed to be resistant to attacks carried out with quantum computers. The security of this cryptography is measured by the effectiveness and practicality of the algorithms that solve central lattice problems, such as the shortest vector problem (SVP), and it is therefore important to determine the maximum performance of these algorithms on high-performance computing architectures. To this end, this article presents, for the first time, a detailed study of the performance of the two most promising SVP-solving algorithms, ENUM and an efficient variant of Schnorr-Euchner enumeration, with and without extreme pruning. In particular, parallel versions of these algorithms are proposed, designed for optimal load balancing and, consequently, better performance. An extensive series of tests was conducted, both on shared memory, for the variants without pruning, and on distributed memory, for the variants with pruning. The results show that the shared-memory implementations achieve, in some cases, linear speedups for up to 16 threads. The distributed-memory implementations, in turn, achieve speedups of about 13x for 16 processes, enabling the SVP to be solved for lattices of dimension 80 in under 250 seconds. Fundação para a Ciência e a Tecnologia (FCT)
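
    The problem these algorithms solve can be stated with a brute-force toy (illustrative only; ENUM and Schnorr-Euchner enumeration search the coefficient tree far more cleverly and with pruning):

```python
# Brute-force shortest-vector search over a bounded coefficient box
# (toy sketch; the basis and the bound are assumptions for illustration).
import itertools
import numpy as np

def shortest_vector_bruteforce(B, bound=5):
    """B holds the lattice basis vectors as rows; returns (vector, length)."""
    best, best_norm2 = None, np.inf
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=B.shape[0]):
        if all(c == 0 for c in coeffs):
            continue                      # the zero vector is excluded
        v = np.asarray(coeffs) @ B
        norm2 = float(v @ v)
        if norm2 < best_norm2:
            best, best_norm2 = v, norm2
    return best, np.sqrt(best_norm2)

B = np.array([[3.0, 1.0], [1.0, 2.0]])    # a toy 2-D lattice basis
v, length = shortest_vector_bruteforce(B)
```

    The search space grows exponentially with the lattice dimension, which is why clever enumeration orderings, pruning, and the parallel schemes studied here matter.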

    Parallel improved Schnorr-Euchner enumeration SE++ on shared and distributed memory systems, with and without extreme pruning

    The security of lattice-based cryptography relies on the hardness of problems based on lattices, such as the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). This paper presents two parallel implementations of the SE++, with and without extreme pruning. The SE++ is an enumeration-based CVP-solver, which can be easily adapted to solve the SVP. We improved the SVP version of the SE++ with an optimization that avoids symmetric branches, improving its performance by ≈ 50%, and applied the extreme pruning technique to this improved version. The extreme pruning technique is the fastest known way to compute the SVP with enumeration to date. It solves the SVP for lattices in much higher dimensions in less time than implementations without extreme pruning. Our parallel implementation of the SE++ with extreme pruning targets distributed-memory multi-core CPU systems, while our SE++ without extreme pruning is designed for shared-memory multi-core CPU systems. These implementations address load balancing problems for optimal performance, with a master-slave mechanism in the distributed-memory implementation and specific bounds for task creation in the shared-memory implementation. The parallel implementation of the SE++ without extreme pruning scales linearly for up to 8 threads and almost linearly for 16 threads. In addition, it also achieves super-linear speedups on some instances, as the workload may be shortened, since some threads may find shorter vectors at earlier points in time compared to the sequential implementation. Tests with our improved SE++ implementation showed that it outperforms the state-of-the-art implementation by between 35% and 60%, while maintaining scalability similar to the SE++ implementation.
Our parallel implementation of the SE++ with extreme pruning achieves linear speedups for up to 8 (working) processes and speedups of up to 13x for 16 (working) processes.
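
    The "split the top of the enumeration tree across workers" idea behind such parallelizations can be sketched as follows (a toy thread-pool version of a brute-force search, not the SE++ implementation; the basis and bound are assumptions):

```python
# Parallel shortest-vector search: each worker handles one value of the
# first basis coefficient (toy load-splitting sketch).
from concurrent.futures import ThreadPoolExecutor
import numpy as np

B = np.array([[3.0, 1.0], [1.0, 2.0]])    # toy 2-D lattice basis
BOUND = 5

def best_in_slice(c1):
    """Search all vectors whose first coefficient equals c1."""
    best = None
    for c2 in range(-BOUND, BOUND + 1):
        if c1 == 0 and c2 == 0:
            continue                       # exclude the zero vector
        v = c1 * B[0] + c2 * B[1]
        norm2 = float(v @ v)
        if best is None or norm2 < best[0]:
            best = (norm2, v)
        # A real implementation would also skip the symmetric branch
        # (c1, c2) vs (-c1, -c2), halving the work as described above.
    return best

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(best_in_slice, range(-BOUND, BOUND + 1)))
norm2, v = min(results, key=lambda r: r[0])
```

    Because slices near c1 = 0 are cheaper once pruning is added, static splits like this one load-balance poorly, which is why the paper resorts to a master-slave scheme and task-creation bounds.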

    Multiple Importance Sampling for Symbol Error Rate Estimation of Maximum-Likelihood Detectors in MIMO Channels

    Get PDF
    In this paper we propose a multiple importance sampling (MIS) method for the efficient symbol error rate (SER) estimation of maximum likelihood (ML) multiple-input multiple-output (MIMO) detectors. Given a transmitted symbol from the input lattice, obtaining the SER requires the computation of an integral outside its Voronoi region in a high-dimensional space, for which a closed-form solution does not exist. Hence, the SER must be approximated through crude or naive Monte Carlo (MC) simulations. This practice is widely used in the literature despite its inefficiency, which is particularly severe at high signal-to-noise ratio (SNR) or for systems with stringent SER requirements. It is well known that more sophisticated MC-based techniques such as MIS, when carefully designed, can reduce the variance of the estimators by several orders of magnitude with respect to naive Monte Carlo in rare-event estimation, or equivalently, they need significantly fewer samples to attain a desired performance. The proposed MIS method provides unbiased SER estimates by sampling from a mixture of components that are carefully chosen and parametrized. The number of components, the parameters of the components, and their weights in the mixture are automatically chosen by the proposed method. As a result, the proposed method is flexible, easy to use, theoretically sound, and presents high performance in a variety of scenarios. We show in our simulations that SERs lower than 10^-8 can be accurately estimated with just 10^4 random samples. The work of V. Elvira was supported by the Agence Nationale de la Recherche of France under the PISCES project (ANR-17-CE40-0031-01). The work of I. Santamaria was supported by the Ministerio de Ciencia, Innovación y Universidades and AEI/FEDER funds of the E.U., under grant PID2019-104958RB-C43 (ADELE). A short preliminary version of this paper was presented at the 2019 Asilomar Conference on Signals, Systems, and Computers.
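
    The mixture-sampling mechanism can be illustrated on a one-dimensional rare-event stand-in (a toy Gaussian tail probability with assumed component means and weights, not the paper's automatic parametrization):

```python
# Importance sampling with a two-component Gaussian mixture proposal for
# a rare tail probability P(X > t), X ~ N(0,1) (toy sketch; the shifts
# and weights are assumptions, weighted with the mixture density).
import math
import numpy as np

def phi(z):
    """Standard normal pdf."""
    return np.exp(-z**2 / 2) / math.sqrt(2 * math.pi)

rng = np.random.default_rng(2)
t = 5.0
p_true = 0.5 * math.erfc(t / math.sqrt(2))    # exact tail, ~2.9e-7

means = np.array([t, t + 0.5])                # components near the rare region
weights = np.array([0.5, 0.5])

n = 10_000
comp = rng.choice(len(means), size=n, p=weights)
x = rng.normal(means[comp], 1.0)
# Importance weights: target density over the full mixture density.
mix_pdf = (weights[:, None] * phi(x[None, :] - means[:, None])).sum(axis=0)
w = phi(x) / mix_pdf
p_hat = float(np.mean((x > t) * w))
```

    Naive MC would need on the order of 1/p samples to even see one event at p ~ 10^-7, while the shifted mixture places nearly every sample in the rare region and keeps the estimator unbiased through the weights.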