
    Perturbation Analysis of the QR Factor R in the Context of LLL Lattice Basis Reduction

    ... computable notion of reduction of a basis of a Euclidean lattice that is now commonly referred to as LLL reduction. The precise definition involves the R-factor of the QR factorisation of the basis matrix. A natural means of speeding up the LLL reduction algorithm is to use a (floating-point) approximation to the R-factor. In the present article, we investigate the accuracy of the factor R of the QR factorisation of an LLL-reduced basis. The results we obtain should be very useful for devising LLL-type algorithms relying on floating-point approximations.
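    To make the conditions concrete, here is a minimal sketch (an illustration, not the paper's analysis) that tests the standard LLL conditions of a basis directly on the R-factor computed with NumPy; the example basis B and the parameter delta = 0.99 are arbitrary choices.

        import numpy as np

        def is_lll_reduced(B, delta=0.99):
            """Test the LLL conditions of basis B (columns) via the R-factor of B = QR."""
            _, R = np.linalg.qr(B.astype(float))
            n = R.shape[1]
            for j in range(n):
                for i in range(j):
                    if abs(R[i, j]) > abs(R[i, i]) / 2 + 1e-12:   # size reduction
                        return False
            for i in range(n - 1):
                # Lovasz condition on consecutive columns
                if delta * R[i, i] ** 2 > R[i, i + 1] ** 2 + R[i + 1, i + 1] ** 2 + 1e-12:
                    return False
            return True

        B = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 3]])   # columns are basis vectors
        print(is_lll_reduced(B))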

    On the Proximity Factors of Lattice Reduction-Aided Decoding

    Lattice reduction-aided decoding features reduced decoding complexity and near-optimum performance in multi-input multi-output communications. In this paper, a quantitative analysis of lattice reduction-aided decoding is presented. To this end, the proximity factors are defined to measure the worst-case losses in distance relative to closest-point search (in an infinite lattice). Upper bounds on the proximity factors are derived, which are functions of the dimension $n$ of the lattice alone. The study is then extended to dual-basis reduction. It is found that the bounds for dual-basis reduction may be smaller, and reasonably good bounds are derived in many cases. The constant bounds on proximity factors not only imply the same diversity order in fading channels, but also relate the error probabilities of (infinite) lattice decoding and lattice reduction-aided decoding.
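    As a toy illustration of the quantity the proximity factors bound (not the paper's derivation), the sketch below compares the distance achieved by naive rounding against brute-force closest-point search in a small lattice; the basis, target, and search radius are arbitrary choices.

        import itertools
        import numpy as np

        def rounding_decode(B, y):
            """Zero-forcing decoding: round the coordinates of y in the basis B."""
            return B @ np.round(np.linalg.solve(B, y))

        def closest_point(B, y, radius=3):
            """Brute-force closest-point search over a small box of coefficients."""
            candidates = (B @ np.array(z) for z in
                          itertools.product(range(-radius, radius + 1), repeat=B.shape[1]))
            return min(candidates, key=lambda x: np.linalg.norm(y - x))

        B = np.array([[4.0, 1.0, 0.0], [2.0, 3.0, 1.0], [0.0, 1.0, 5.0]])
        y = np.array([0.8, -1.3, 2.1])
        d_round = np.linalg.norm(y - rounding_decode(B, y))
        d_opt = np.linalg.norm(y - closest_point(B, y))
        print(d_round / d_opt)   # >= 1; proximity factors bound such ratios after reduction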

    Novel Efficient Precoding Techniques for Multiuser MIMO Systems

    In multiuser MIMO (MU-MIMO) systems, precoding is essential to eliminate or minimize the multiuser interference (MUI). However, designing a precoding algorithm that combines good overall performance with low computational complexity is quite challenging, especially as system dimensions increase. In this thesis, we develop novel low-complexity, high-performance precoding algorithms based on both linear and non-linear processing strategies.

    Block diagonalization (BD)-type precoding techniques are well-known linear precoding strategies for MU-MIMO systems. By employing BD-type precoding at the transmit side, the MU-MIMO broadcast channel is decomposed into multiple independent parallel SU-MIMO channels, achieving the maximum diversity order at high data rates. The main computational cost of BD-type precoding comes from two singular value decomposition (SVD) operations, which depend on the number of users and the dimensions of each user's channel matrix. In this thesis, two categories of low-complexity precoding algorithms are proposed to reduce this cost and improve the performance of BD-type precoding. One is based on multiple LQ decompositions and lattice reductions; the other is based on a channel inversion technique, QR decompositions, and lattice reductions to decouple the MU-MIMO channel into equivalent SU-MIMO channels. Both proposed algorithms achieve sum-rate performance comparable to BD-type precoding, substantial bit error rate (BER) gains, and a simplified receiver structure, while requiring much lower complexity.

    Tomlinson-Harashima precoding (THP) is a prominent non-linear processing technique employed at the transmit side, and is the dual of successive interference cancellation (SIC) detection at the receive side. Like SIC detection, the performance of THP strongly depends on the ordering of the precoded symbols; the optimal ordering algorithm, however, is impractical for MU-MIMO systems with multiple receive antennas. We propose a multi-branch THP (MB-THP) scheme and algorithms that employ multiple transmit processing and ordering strategies along with a selection scheme to mitigate interference in MU-MIMO systems. Two types of MB-THP structures are proposed: the first employs a decentralized strategy with diagonal weighted filters at the receivers of the users, while the second uses a diagonal weighted filter at the transmitter. The MB-MMSE-THP algorithms are also derived from an extended system model with the aid of an LQ decomposition, which is much simpler than conventional MMSE-THP algorithms. Simulation results show that the proposed MB-MMSE-THP precoder achieves better BER performance with only a small increase in computational complexity.
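    For context, the following is a minimal sketch of the classical two-SVD BD precoder that the proposed algorithms aim to simplify; the antenna configuration is an arbitrary assumption, and this is not the thesis's LQ- or QR-based variant.

        import numpy as np

        rng = np.random.default_rng(1)
        Nt, users, Nr = 4, 2, 2          # assumed antenna configuration
        H = [rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))
             for _ in range(users)]

        precoders = []
        for k in range(users):
            # First SVD: null space of the other users' stacked channels
            H_bar = np.vstack([H[j] for j in range(users) if j != k])
            _, _, Vh = np.linalg.svd(H_bar)
            V0 = Vh.conj().T[:, np.linalg.matrix_rank(H_bar):]
            # Second SVD: diagonalise the effective single-user channel H_k V0
            _, _, Vh_eff = np.linalg.svd(H[k] @ V0)
            precoders.append(V0 @ Vh_eff.conj().T[:, :Nr])

        # User 1's precoder causes (numerically) zero interference at user 0:
        print(np.linalg.norm(H[0] @ precoders[1]))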

    Integer-ambiguity resolution in astronomy and geodesy

    Recent theoretical developments in astronomical aperture synthesis have revealed the existence of integer-ambiguity problems. These problems, which appear in the self-calibration procedures of radio imaging, have been shown to be similar to the nearest-lattice-point (NLP) problems encountered in high-precision geodetic positioning and in global navigation satellite systems. In this paper, we analyse the theoretical aspects of the matter and propose new methods for solving these NLP problems. The related optimization aspects concern both the preconditioning stage and the discrete-search stage in which the integer ambiguities are finally fixed. Our algorithms, which are described explicitly, can easily be implemented. They lead to substantial gains in the processing time of both stages. Their efficiency was shown via intensive numerical tests.
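    The simplest building block for such NLP solvers is Babai's rounding on the R-factor of a QR factorisation; the sketch below is a generic illustration with an arbitrary basis, not the paper's algorithms, whose preconditioning and discrete-search stages refine exactly this kind of procedure.

        import numpy as np

        def babai_rounding(B, y):
            """Approximate NLP solution by back-substitution with rounding on B = QR."""
            Q, R = np.linalg.qr(B)
            yp = Q.T @ y
            n = B.shape[1]
            z = np.zeros(n)
            for i in range(n - 1, -1, -1):            # fix ambiguities from the last up
                z[i] = np.round((yp[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i])
            return B @ z, z                           # lattice point, integer ambiguities

        B = np.array([[2.0, 1.0], [0.0, 3.0]])        # arbitrary example basis
        x, z = babai_rounding(B, np.array([2.4, 2.1]))
        print(z, x)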

    Design and Implementation of Efficient Algorithms for Wireless MIMO Communication Systems

    Over the last decade, one of the most important technological advances behind the new generation of wireless broadband has been communication via multiple-input multiple-output (MIMO) systems. MIMO technologies have been adopted by many wireless standards such as LTE, WiMAX and WLAN, mainly because of their ability to increase the peak transmission rate, together with the reliability and coverage of current wireless communications, without requiring extra bandwidth or additional transmit power. However, the advantages provided by MIMO systems come at the expense of a substantial increase in the implementation cost of multiple antennas and in receiver complexity, which has a great impact on power consumption. For this reason, the design of low-complexity receivers is an important topic addressed throughout this thesis. First, the use of preprocessing techniques for the MIMO channel matrix is investigated, either to decrease the computational cost of optimal decoders or to improve the performance of suboptimal linear, SIC, or tree-search detectors. A detailed description of two widely used preprocessing techniques is presented: the Lenstra-Lenstra-Lovász (LLL) algorithm for lattice reduction (LR) and the VBLAST ZF-DFE algorithm. Both the complexity and the performance of the two methods are evaluated and compared. In addition, a low-cost implementation of the VBLAST ZF-DFE algorithm is proposed and included in the evaluation. Second, a low-complexity tree-search MIMO detector is developed, termed the variable-breadth K-Best detector (VB K-Best). The main idea of this method is to exploit the impact of the channel matrix condition number on data detection in order to decrease system complexity. Roger Varea, S. (2012). Design and Implementation of Efficient Algorithms for Wireless MIMO Communication Systems [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16562
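    For context, a compact textbook version of the LLL lattice reduction algorithm discussed above is sketched below; the parameter delta = 0.75 and the example basis are arbitrary, and practical MIMO implementations use optimised (often complex-valued, fixed-point) variants.

        import numpy as np

        def lll_reduce(B, delta=0.75):
            """LLL-reduce the columns of B (simple textbook version, float QR throughout)."""
            B = B.astype(float).copy()
            n = B.shape[1]
            R = np.linalg.qr(B)[1]
            k = 1
            while k < n:
                for j in range(k - 1, -1, -1):         # size-reduce column k
                    mu = round(R[j, k] / R[j, j])
                    if mu != 0:
                        B[:, k] -= mu * B[:, j]
                        R = np.linalg.qr(B)[1]
                # Lovasz condition between columns k-1 and k
                if delta * R[k - 1, k - 1] ** 2 <= R[k - 1, k] ** 2 + R[k, k] ** 2:
                    k += 1
                else:
                    B[:, [k - 1, k]] = B[:, [k, k - 1]]
                    R = np.linalg.qr(B)[1]
                    k = max(k - 1, 1)
            return B

        B = np.array([[1, -1, 3], [1, 0, 5], [1, 2, 6]])
        print(lll_reduce(B))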

    Recent progress in linear algebra and lattice basis reduction (invited)

    A general goal for fundamental linear algebra problems is to reduce their complexity estimates to essentially that of multiplying two matrices (plus, possibly, a cost related to the input and output sizes). Typical bottlenecks include designing a recursive approach and controlling the sizes of the intermediately computed data. In this talk we are interested in two special cases of lattice basis reduction. We consider bases given by square matrices over K[x] or Z, with, respectively, the notion of reduced form and LLL reduction. Our purpose is to introduce basic tools for understanding how to generalize the Lehmer and Knuth-Schönhage gcd algorithms to basis reduction. Over K[x], this generalization is a key ingredient of a basis reduction algorithm whose complexity estimate is essentially that of multiplying two polynomial matrices. No analogous relation between integer basis reduction and integer matrix multiplication is known. The topic receives a lot of attention, and recent results suggest that there is room for progress on this question.
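    A toy illustration of the observation behind the Lehmer and Knuth-Schönhage gcd algorithms mentioned above (plain integers, not the matrix setting of the talk): the leading quotients of the Euclidean algorithm depend only on the leading digits of the inputs, which is what allows most of the work to be done at reduced precision.

        def quotients(a, b, steps=5):
            """First few quotients of the Euclidean algorithm on (a, b)."""
            qs = []
            while b and len(qs) < steps:
                qs.append(a // b)
                a, b = b, a % b
            return qs

        a, b = 3141592653589793, 10 ** 15        # a / b approximates pi
        print(quotients(a, b))                   # [3, 7, 15, 1, 292]
        print(quotients(a >> 10, b >> 10))       # same quotients from truncated inputs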

    Fast Practical Lattice Reduction through Iterated Compression

    We introduce a new lattice basis reduction algorithm with approximation guarantees analogous to the LLL algorithm and practical performance that far exceeds the current state of the art. We achieve these results by iteratively applying precision management techniques within a recursive algorithm structure and show the stability of this approach. We analyze the asymptotic behavior of our algorithm, and show that the heuristic running time is $O(n^{\omega}(C+n)^{1+\varepsilon})$ for lattices of dimension $n$, with $\omega \in (2,3]$ bounding the cost of size reduction, matrix multiplication, and QR factorization, and $C$ bounding the log of the condition number of the input basis $B$. This yields a running time of $O(n^{\omega}(p+n)^{1+\varepsilon})$ for precision $p = O(\log \|B\|_{\max})$ in common applications. Our algorithm is fully practical, and we have published our implementation. We experimentally validate our heuristic, give extensive benchmarks against numerous classes of cryptographic lattices, and show that our algorithm significantly outperforms existing implementations.
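    The following is a heavily simplified sketch of the precision-management idea only, not the published algorithm: compute a size-reducing unimodular transform from a low-precision (here, double-precision) copy of the basis, then apply that transform exactly to the integer basis.

        import numpy as np

        def size_reduce_transform(B_float):
            """Integer transform U such that B_float @ U is size-reduced (columns)."""
            R = np.linalg.qr(B_float)[1]
            n = R.shape[1]
            U = np.eye(n, dtype=object)
            for k in range(n):
                for j in range(k - 1, -1, -1):
                    mu = round(R[j, k] / R[j, j])
                    R[:, k] -= mu * R[:, j]           # same column operation on R ...
                    U[:, k] -= mu * U[:, j]           # ... recorded in U
            return U

        B = np.array([[2 ** 40 + 7, 2 ** 41 + 3], [1, 3]], dtype=object)  # exact basis
        U = size_reduce_transform(B.astype(float))    # "compressed" 53-bit working copy
        print(B.dot(U))                               # transform applied to the exact basis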

    Systematics of Aligned Axions

    We describe a novel technique that renders theories of $N$ axions tractable, and more generally can be used to efficiently analyze a large class of periodic potentials of arbitrary dimension. Such potentials are complex energy landscapes with a number of local minima that scales as $\sqrt{N!}$, and so for large $N$ appear to be analytically and numerically intractable. Our method is based on uncovering a set of approximate symmetries that exist in addition to the $N$ periods. These approximate symmetries, which are exponentially close to exact, allow us to locate the minima very efficiently and accurately and to analyze other characteristics of the potential. We apply our framework to evaluate the diameters of flat regions suitable for slow-roll inflation, which unifies, corrects and extends several forms of "axion alignment" previously observed in the literature. We find that in a broad class of random theories, the potential is smooth over diameters enhanced by $N^{3/2}$ compared to the typical scale of the potential. A Mathematica implementation of our framework is available online.
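    As a small illustration of the objects involved (an arbitrary random integer charge matrix with unit scales, and brute-force descent rather than the paper's symmetry-based method), the sketch below builds an N-axion potential and descends to one of its many local minima.

        import numpy as np

        rng = np.random.default_rng(2)
        N = 4                                         # number of axions
        Q = rng.integers(-2, 3, size=(N, N))          # assumed integer charge matrix

        def V(theta):
            """Periodic potential V = sum_i [1 - cos(Q_i . theta)] with unit scales."""
            return np.sum(1.0 - np.cos(Q @ theta))

        def gradV(theta):
            return Q.T @ np.sin(Q @ theta)

        theta = rng.uniform(0.0, 2 * np.pi, N)
        for _ in range(5000):                         # plain gradient descent
            theta -= 0.01 * gradV(theta)
        print(theta, V(theta))                        # a local minimum of the landscape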