
    Massive Parallel Decoding of Low-Density Parity-Check Codes Using Graphic Cards

    Full text link
    The belief-propagation decoder for LDPC codes is ported to CUDA™ and optimized for a high degree of parallelism. The resulting implementation is compared with a non-parallel version on state-of-the-art PCs.
    Monzó Solves, E. (2010). Massive parallel decoding of low-density parity-check codes using graphic cards. Universitat Politècnica de València. http://hdl.handle.net/10251/1373
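    The parallelism the abstract describes comes from the fact that every check node in the decoder updates independently, one GPU thread per node. A minimal CPU sketch of that message-passing structure, using a simple hard-decision bit-flipping variant on a toy (7,4) Hamming code rather than the thesis's soft-decision CUDA decoder (all names here are illustrative):

```python
import numpy as np

# Parity-check matrix of a toy (7,4) Hamming code.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def bit_flip_decode(y, H, max_iters=10):
    """Hard-decision bit-flipping: every check node is evaluated
    independently, which is the per-node parallelism a GPU exploits."""
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2           # all parity checks at once
        if not syndrome.any():
            return y                   # valid codeword found
        # Flip the bit involved in the most unsatisfied checks.
        votes = H.T @ syndrome
        y[np.argmax(votes)] ^= 1
    return y

codeword = np.zeros(7, dtype=int)      # all-zero codeword
received = codeword.copy()
received[2] ^= 1                       # inject a single bit error
print(bit_flip_decode(received, H))    # [0 0 0 0 0 0 0]
```

    In a real belief-propagation decoder the same structure carries soft log-likelihood messages between check and variable nodes instead of hard bit flips.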

    Generating and Searching Families of FFT Algorithms

    Full text link
    A fundamental question of longstanding theoretical interest is to prove the lowest exact count of real additions and multiplications required to compute a power-of-two discrete Fourier transform (DFT). For 35 years the split-radix algorithm held the record, requiring just 4n log₂ n - 6n + 8 arithmetic operations on real numbers for a size-n DFT, and was widely believed to be the best possible. Recent work by Van Buskirk et al. demonstrated improvements to the split-radix operation count by using multiplier coefficients, or "twiddle factors", that are not n-th roots of unity for a size-n DFT. This paper presents a Boolean-Satisfiability-based proof of the lowest operation count for certain classes of DFT algorithms. First, we present a novel way to choose new yet valid twiddle factors for the nodes in flowgraphs generated by common power-of-two fast Fourier transform (FFT) algorithms. With this new technique, we can generate a large family of FFTs realizable by a fixed flowgraph. This solution space of FFTs is cast as a Boolean Satisfiability problem, and a modern Satisfiability Modulo Theories solver is applied to search for FFTs requiring the fewest arithmetic operations. Surprisingly, we find that there are FFTs requiring fewer operations than the split-radix even when all twiddle factors are n-th roots of unity.
    Comment: Preprint submitted on March 28, 2011, to the Journal on Satisfiability, Boolean Modeling and Computation
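    The split-radix count quoted above and the standard root-of-unity twiddle factors can be checked directly. A small sketch (`split_radix_ops` and `twiddle` are illustrative names, not from the paper):

```python
import cmath
import math

def split_radix_ops(n):
    """Split-radix real-operation count for a power-of-two size-n DFT:
    4*n*log2(n) - 6*n + 8, the long-standing record."""
    return 4 * n * int(math.log2(n)) - 6 * n + 8

def twiddle(n, k):
    """Standard twiddle factor: an n-th root of unity, exp(-2*pi*i*k/n).
    Van Buskirk-style savings come from relaxing exactly this constraint."""
    return cmath.exp(-2j * cmath.pi * k / n)

print(split_radix_ops(64))                    # 1160
assert abs(twiddle(8, 1) ** 8 - 1) < 1e-12    # w_8 is an 8th root of unity
```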

    Hexagonal structure for intelligent vision

    Full text link
    The use of hexagonal grids to represent digital images has been studied for more than 40 years. Increased processing capabilities of graphics devices and recent improvements in CCD technology have made hexagonal sampling attractive for practical applications and have renewed interest in the topic. The hexagonal structure is considered preferable to the rectangular structure due to its higher sampling efficiency, consistent connectivity, and higher angular resolution, and has even proved superior to the square structure in many applications. Since there is no mature hardware for hexagonal-based image capture and display, square-to-hexagonal image conversion must be performed before hexagonal-based image processing. Although hexagonal image representation and storage have not yet been standardized, experiments based on existing hexagonal coordinate systems have continued. In this paper, we first introduce the general reasons hexagonally sampled images are chosen for research. Then, typical hexagonal coordinate and addressing schemes, as well as hexagonal-based image processing and applications, are fully reviewed. © 2005 IEEE
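    The square-to-hexagonal conversion mentioned above amounts, at its simplest, to assigning each Cartesian sample position to its nearest hexagonal lattice point. A minimal sketch using axial coordinates and a pointy-top layout, one of many possible schemes and not any specific addressing scheme surveyed in the paper:

```python
import math

def hex_center(q, r, size=1.0):
    """Cartesian center of the hexagon at axial (q, r), pointy-top layout."""
    x = size * math.sqrt(3) * (q + r / 2)
    y = size * 1.5 * r
    return x, y

def cube_round(q, r):
    """Round fractional axial coords to the nearest hex (via cube coords
    q + r + s = 0, re-deriving the coordinate with the largest error)."""
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return rq, rr

def pixel_to_hex(x, y, size=1.0):
    """Nearest hexagonal lattice point for a square-grid pixel position."""
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 / 3 * y) / size
    return cube_round(q, r)

print(pixel_to_hex(*hex_center(2, -1)))   # (2, -1): round-trips exactly
```

    A full conversion would interpolate pixel intensities around each hexagon center rather than taking the nearest sample.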

    Superposition frames for adaptive time-frequency analysis and fast reconstruction

    Full text link
    In this article we introduce a broad family of adaptive, linear time-frequency representations termed superposition frames, and show that they admit desirable fast overlap-add reconstruction properties akin to standard short-time Fourier techniques. This approach stands in contrast to many adaptive time-frequency representations in the extant literature, which, while more flexible than standard fixed-resolution approaches, typically fail to provide efficient reconstruction and often lack the regular structure necessary for precise frame-theoretic analysis. Our main technical contributions come through the development of properties which ensure that this construction provides for a numerically stable, invertible signal representation. Our primary algorithmic contributions come via the introduction and discussion of specific signal adaptation criteria in deterministic and stochastic settings, based respectively on time-frequency concentration and nonstationarity detection. We conclude with a short speech enhancement example that serves to highlight potential applications of our approach.
    Comment: 16 pages, 6 figures; revised version
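    In the fixed-resolution case, the fast overlap-add reconstruction the abstract refers to reduces to the constant-overlap-add (COLA) property of standard short-time analysis windows. A sketch of that baseline (the adaptive superposition-frame construction itself is more general):

```python
import numpy as np

N, hop = 256, 128
# Periodic Hann window: with 50% overlap its shifted copies sum to 1
# (the COLA property), so overlap-add synthesis is exact.
window = np.hanning(N + 1)[:N]
x = np.random.default_rng(0).standard_normal(hop * 10)

# Analysis: windowed frames at a fixed hop.
frames = [window * x[i:i + N] for i in range(0, len(x) - N + 1, hop)]

# Synthesis: overlap-add the frames back together.
y = np.zeros_like(x)
for k, f in enumerate(frames):
    y[k * hop:k * hop + N] += f

# Interior samples (fully covered by two windows) reconstruct exactly,
# up to floating-point rounding.
err = np.max(np.abs(y[N:-N] - x[N:-N]))
print(err)   # on the order of machine precision
```

    The paper's contribution is to let the window lengths adapt over time while preserving exactly this kind of cheap, stable overlap-add inversion.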