
    Efficient Integer Coefficient Search for Compute-and-Forward

    Integer coefficient selection is an important decoding step in the implementation of the compute-and-forward (C-F) relaying scheme. Choosing the optimal integer coefficients in C-F has been shown to be a shortest vector problem (SVP), which is NP-hard in its general form. Exhaustive search over the integer coefficients is computationally feasible only for a small number of users, while approximation algorithms such as the Lenstra-Lenstra-Lovasz (LLL) lattice reduction algorithm only find a vector within an exponential factor of the shortest vector. An optimal deterministic algorithm was proposed for C-F by Sahraei and Gastpar specifically for the real-valued channel case. In this paper, we adapt their idea to the complex-valued channel and propose an efficient search algorithm that finds the optimal integer coefficient vectors over the ring of Gaussian integers and the ring of Eisenstein integers. A second algorithm is then proposed that generalises our search algorithm to the Integer-Forcing MIMO C-F receiver. The performance and efficiency of the proposed algorithms are evaluated through simulations and theoretical analysis. Comment: IEEE Transactions on Wireless Communications, to appear. 12 pages, 8 figures.
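
    Choosing the C-F coefficients amounts to a shortest-vector problem for the quadratic form $a^H (I + P h h^H)^{-1} a$ determined by the channel and the power. The sketch below (plain Python with NumPy, not the authors' efficient algorithm) only makes that formulation concrete by brute-forcing small Gaussian-integer vectors; the channel vector, power value and search radius are illustrative assumptions.

import itertools
import numpy as np

def naive_cf_coefficient_search(h, P, radius=2):
    # Brute-force the nonzero Gaussian-integer vector a minimizing a^H G a,
    # with G = (I + P h h^H)^{-1} = I - P h h^H / (1 + P ||h||^2); the
    # minimiser of this quadratic form maximises the C-F computation rate.
    # Exponential in the number of users -- shown only to illustrate the SVP.
    L = len(h)
    G = np.eye(L) - (P * np.outer(h, h.conj())) / (1 + P * np.linalg.norm(h) ** 2)
    best_a, best_val = None, np.inf
    grid = range(-radius, radius + 1)
    for re in itertools.product(grid, repeat=L):
        for im in itertools.product(grid, repeat=L):
            a = np.array(re) + 1j * np.array(im)
            if not a.any():              # skip the all-zero vector
                continue
            val = np.real(a.conj() @ G @ a)
            if val < best_val:
                best_a, best_val = a, val
    return best_a, best_val

# Illustrative two-user complex channel and transmit power (assumed values).
h = np.array([1.2 - 0.3j, 0.7 + 1.1j])
print(naive_cf_coefficient_search(h, P=10.0))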

    Incremental and Transitive Discrete Rotations

    A discrete rotation algorithm can be viewed as a parametric map $f_\alpha$ from $\mathbb{Z}[i]$ to $\mathbb{Z}[i]$ whose resulting permutation "looks like" the map induced by a Euclidean rotation. For this kind of algorithm, to be incremental means to compute successively all the intermediate rotated copies of an image for angles between 0 and a destination angle. The discretized rotation consists in the composition of a Euclidean rotation with a discretization; the aim of this article is to describe an algorithm which computes a discretized rotation incrementally. The suggested method uses only integer arithmetic and computes neither sines nor cosines. More precisely, its design relies on the analysis of the discretized rotation as a step function: the precise description of the discontinuities turns out to be the key ingredient that makes the resulting procedure optimally fast and exact. A complete description of the incremental rotation process is provided; this result may also be useful in the specification of a consistent set of definitions for discrete geometry.
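
    The object being computed incrementally is the discretized rotation itself: a Euclidean rotation followed by rounding back to $\mathbb{Z}[i]$. A minimal, non-incremental Python sketch of that map is given below for reference; the article's contribution, the integer-only incremental computation via the discontinuities of the step function, is not reproduced.

import math

def discretized_rotation(points, alpha):
    # Discretized rotation on Z[i]: Euclidean rotation by alpha composed with
    # rounding of each coordinate back to the integers.  This naive version
    # uses sin/cos directly, unlike the integer-only incremental algorithm.
    c, s = math.cos(alpha), math.sin(alpha)
    return [(round(c * x - s * y), round(s * x + c * y)) for x, y in points]

# Rotate a small square grid of integer points by 30 degrees.
grid = [(x, y) for x in range(-3, 4) for y in range(-3, 4)]
print(discretized_rotation(grid, math.pi / 6))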

    Fast simulation of Gaussian random fields

    Fast Fourier transforms are used to develop algorithms for the fast generation of correlated Gaussian random fields on d-dimensional rectangular regions. The complexities of the algorithms are derived, and simulation results and an error analysis are presented. Comment: 15 pages, 8 figures. Typos corrected in Algorithm 3, Remark (4); Algorithm 4, Remark (5); and Algorithm 5, Remark (5).
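
    One standard way FFTs enter such generators is through circulant embedding of the covariance matrix. The one-dimensional NumPy sketch below, with an assumed exponential covariance, illustrates that construction only; it is not the paper's specific d-dimensional algorithms or their error analysis.

import numpy as np

def circulant_embedding_1d(cov, n, rng=np.random.default_rng()):
    # Sample a stationary Gaussian process at n equispaced points by embedding
    # the n x n Toeplitz covariance in a circulant matrix and diagonalising it
    # with the FFT.  Assumes the circulant embedding is nonnegative definite.
    row = np.array([cov(k) for k in range(n)] + [cov(k) for k in range(n - 2, 0, -1)])
    lam = np.fft.fft(row).real            # eigenvalues of the circulant
    lam = np.clip(lam, 0.0, None)         # guard against tiny negative round-off
    m = len(row)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    field = np.fft.fft(np.sqrt(lam / m) * z)
    return field.real[:n]                 # field.imag[:n] is an independent copy

# Illustrative exponential covariance with correlation length 10 (assumed).
x = circulant_embedding_1d(lambda k: np.exp(-abs(k) / 10.0), n=512)
print(x[:5])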

    An Open Source C++ Implementation of Multi-Threaded Gaussian Mixture Models, k-Means and Expectation Maximisation

    Modelling of multivariate densities is a core component in many signal processing, pattern recognition and machine learning applications. The modelling is often done via Gaussian mixture models (GMMs), which use computationally expensive and potentially unstable training algorithms. We provide an overview of a fast and robust implementation of GMMs in the C++ language, employing multi-threaded versions of the Expectation Maximisation (EM) and k-means training algorithms. Multi-threading is achieved through reformulation of the EM and k-means algorithms into a MapReduce-like framework. Furthermore, the implementation uses several techniques to improve numerical stability and modelling accuracy. We demonstrate that the multi-threaded implementation achieves a speedup of an order of magnitude on a recent 16-core machine, and that it can achieve higher modelling accuracy than a previously well-established, publicly accessible implementation. The multi-threaded implementation is included as a user-friendly class in recent releases of the open source Armadillo C++ linear algebra library. The library is provided under the permissive Apache 2.0 license, allowing unencumbered use in commercial products.
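
    The MapReduce-like reformulation mentioned above can be illustrated with one k-means iteration: each worker maps a chunk of the data to per-cluster partial sums and counts, and a reduce step combines them before the centroids are updated. The Python/NumPy sketch below mirrors that structure only; the actual implementation is multi-threaded C++ inside Armadillo, and the data, cluster count and initialisation here are assumed for illustration.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

def kmeans_map(chunk, centroids):
    # Map step: assign one chunk of points to its nearest centroids and
    # return per-cluster coordinate sums and counts (sufficient statistics).
    k, d = centroids.shape
    labels = np.argmin(((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
    sums = np.zeros((k, d))
    counts = np.zeros(k)
    for j in range(k):
        mask = labels == j
        sums[j] = chunk[mask].sum(axis=0)
        counts[j] = mask.sum()
    return sums, counts

def kmeans_iteration(chunks, centroids, pool):
    # Reduce step: add the partial statistics from all chunks, then update.
    partials = pool.map(kmeans_map, chunks, [centroids] * len(chunks))
    sums = np.zeros_like(centroids)
    counts = np.zeros(len(centroids))
    for s, c in partials:
        sums += s
        counts += c
    return sums / np.maximum(counts, 1)[:, None]

rng = np.random.default_rng(0)
data = rng.standard_normal((10000, 3))
chunks = np.array_split(data, 4)
centroids = data[:5].copy()                   # 5 clusters, naive initialisation
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(10):
        centroids = kmeans_iteration(chunks, centroids, pool)
print(centroids)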

    Bolt: Accelerated Data Mining with Fast Vector Compression

    Vectors of data are at the heart of machine learning and data mining. Recently, vector quantization methods have shown great promise in reducing both the time and space costs of operating on vectors. We introduce a vector quantization algorithm that can compress vectors over 12x faster than existing techniques while also accelerating approximate vector operations such as distance and dot product computations by up to 10x. Because it can encode over 2 GB of vectors per second, it makes vector quantization cheap enough to employ in many more circumstances. For example, using our technique to compute approximate dot products in a nested loop can multiply matrices faster than a state-of-the-art BLAS implementation, even when our algorithm must first compress the matrices. In addition to showing the above speedups, we demonstrate that our approach can accelerate nearest neighbor search and maximum inner product search by over 100x compared to floating point operations and up to 10x compared to other vector quantization methods. Our approximate Euclidean distance and dot product computations are not only faster than those of related algorithms with slower encodings, but also faster than Hamming distance computations, which have direct hardware support on the tested platforms. We also assess the errors of our algorithm's approximate distances and dot products, and find that it is competitive with existing, slower vector quantization algorithms. Comment: Research track paper at KDD 2017.
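
    The table-lookup idea behind such quantizers can be sketched with ordinary product quantization: split each vector into subvectors, encode each subvector by its nearest codeword, and answer a query by summing precomputed per-subspace distances. The NumPy sketch below shows only that baseline; Bolt's smaller codes, learned quantization and vectorised lookups provide the reported speedups, and the random codebooks here stand in for proper per-subspace k-means.

import numpy as np

def pq_train(X, M=4, K=16, rng=np.random.default_rng(0)):
    # Toy product-quantization codebooks: split vectors into M subspaces and,
    # for brevity, take K random data points per subspace as the codewords.
    subs = np.split(X, M, axis=1)
    return [s[rng.choice(len(s), K, replace=False)] for s in subs]

def pq_encode(X, codebooks):
    # Encode each vector as M small integers (nearest codeword per subspace).
    codes = []
    for s, C in zip(np.split(X, len(codebooks), axis=1), codebooks):
        d = ((s[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        codes.append(d.argmin(axis=1))
    return np.stack(codes, axis=1)            # shape (n, M)

def pq_distances(q, codes, codebooks):
    # Approximate squared Euclidean distances from query q to all encoded
    # vectors, via per-subspace lookup tables indexed by the stored codes.
    luts = []
    for qs, C in zip(np.split(q, len(codebooks)), codebooks):
        luts.append(((C - qs) ** 2).sum(-1))  # K distances for this subspace
    luts = np.stack(luts)                     # shape (M, K)
    return luts[np.arange(codes.shape[1]), codes].sum(axis=1)

X = np.random.default_rng(1).standard_normal((1000, 32))
books = pq_train(X)
codes = pq_encode(X, books)
print(pq_distances(X[0], codes, books)[:5])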

    Bound-intersection detection for multiple-symbol differential unitary space-time modulation

    This paper considers multiple-symbol differential detection (MSD) of differential unitary space-time modulation (DUSTM) over multiple-antenna systems. We derive a novel exact maximum-likelihood (ML) detector, called the bound-intersection detector (BID), using the extended Euclidean algorithm for single-symbol detection of diagonal constellations. While the ML search complexity is exponential in the number of transmit antennas and the data rate, our algorithm, particularly at high signal-to-noise ratio, achieves significant computational savings over the naive ML algorithm and the previous detector based on lattice reduction. We also develop four BID variants for MSD. The first two are ML detectors and use branch-and-bound; the third is suboptimal, first using BID to generate a candidate subset and then exhaustively searching over the reduced space; and the last generalizes decision-feedback differential detection. Simulation results show that the BID and its MSD variants perform nearly as well as ML detection, but with significantly reduced complexity.
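
    The detector itself is not easily condensed, but its arithmetic core, the extended Euclidean algorithm used for single-symbol detection of diagonal constellations, is classical. A short Python version is given below for reference only; the bounding, intersection and branch-and-bound machinery of the BID variants is not reproduced.

def extended_euclid(a, b):
    # Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

print(extended_euclid(240, 46))   # (2, -9, 47): 240*(-9) + 46*47 == 2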

    A local limit theorem with speed of convergence for Euclidean algorithms and diophantine costs

    For large $N$, we consider the ordinary continued fraction of $x=p/q$ with $1\le p\le q\le N$, or, equivalently, Euclid's gcd algorithm for two integers $1\le p\le q\le N$, putting the uniform distribution on the set of $p$ and $q$'s. We study the distribution of the total cost of execution of the algorithm for an additive cost function $c$ on the set $\mathbb{Z}_+^*$ of possible digits, asymptotically as $N\to\infty$. If $c$ is nonlattice and satisfies mild growth conditions, the local limit theorem was proved previously by the second named author. Introducing diophantine conditions on the cost, we are able to control the speed of convergence in the local limit theorem. We use previous estimates of the first author and Vallée, and we adapt to our setting bounds of Dolgopyat and Melbourne on transfer operators. Our diophantine condition is generic (with respect to Lebesgue measure). For smooth enough observables (depending on the diophantine condition) we attain the optimal speed. Comment: Published at http://dx.doi.org/10.1214/07-AIHP140 in the Annales de l'Institut Henri Poincaré - Probabilités et Statistiques (http://www.imstat.org/aihp/) by the Institute of Mathematical Statistics (http://www.imstat.org).
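
    The quantity whose fluctuations the theorem describes is the total additive cost accumulated over the digits (partial quotients) produced by Euclid's algorithm on $(p, q)$. A small Python sketch of that quantity is given below; the particular cost function $c$ used in the example is an illustrative assumption.

import math

def continued_fraction_digits(p, q):
    # Partial quotients of the ordinary continued fraction of p/q (p <= q),
    # read off from the quotients of Euclid's gcd algorithm.
    digits = []
    while p != 0:
        digits.append(q // p)
        q, p = p, q % p
    return digits

def total_cost(p, q, c):
    # Total additive cost C(p/q) = sum_i c(m_i) over the digits m_i; the
    # article studies its distribution for uniformly chosen 1 <= p <= q <= N.
    return sum(c(m) for m in continued_fraction_digits(p, q))

# Illustrative cost c(m) = log m, roughly the bit-size of each digit.
print(continued_fraction_digits(13, 30), total_cost(13, 30, math.log))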