
    Worst Configurations (Instantons) for Compressed Sensing over Reals: a Channel Coding Approach

    We consider the Linear Programming (LP) solution of the Compressed Sensing (CS) problem over reals, also known as the Basis Pursuit (BasP) algorithm. The BasP admits interpretation as a channel-coding problem, and it guarantees error-free reconstruction with a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how the BasP performs on a given measurement matrix and develop an algorithm to discover the sparsest vectors for which the BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error patterns degrading the performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. The BasP fails when its output differs from the actual error pattern. We design a CS-Instanton Search Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that the BasP fails on the CS-instanton, while BasP recovery is successful for any modification of the CS-instanton replacing a nonzero element by zero. We also prove that, given a sufficiently dense random input for the error vector, the CS-ISA converges to an instanton in a small finite number of steps. The performance of the CS-ISA is illustrated on a randomly generated 120×512 matrix. For this example, the CS-ISA outputs the shortest instanton (error-vector) pattern, of length 11.
    Comment: Accepted to be presented at the IEEE International Symposium on Information Theory (ISIT 2010). 5 pages, 2 figures. Minor edits from previous version. Added a new reference.
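    The Basis Pursuit step itself (not the authors' instanton search) is a standard linear program: minimize ‖x‖₁ subject to Ax = y, linearized by splitting x into nonnegative parts. A minimal sketch, assuming SciPy's general-purpose LP solver rather than any solver used in the paper:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, y):
        """Solve min ||x||_1 s.t. A x = y via an LP.

        Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v)
        at the optimum, giving a standard-form linear program."""
        m, n = A.shape
        c = np.ones(2 * n)                # objective: sum(u) + sum(v)
        A_eq = np.hstack([A, -A])         # A(u - v) = y
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
        u, v = res.x[:n], res.x[n:]
        return u - v
    ```

    An instanton search in the paper's spirit would repeatedly run this recovery and shrink the error pattern whenever recovery fails; the sketch above covers only the inner BasP call.
    
    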

    Two-Bit Bit Flipping Decoding of LDPC Codes

    In this paper, we propose a new class of bit flipping algorithms for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC). Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at a variable node to represent its "strength." The introduction of this additional bit increases the guaranteed error correction capability by a factor of at least 2. An additional bit can also be employed at a check node to capture information that is beneficial to decoding. A framework for failure analysis of the proposed algorithms is described. These algorithms outperform the Gallager A/B algorithm and the min-sum algorithm at much lower complexity. Concatenation of two-bit bit flipping algorithms shows a potential to approach the performance of belief propagation (BP) decoding in the error-floor region, also at lower complexity.
    Comment: 6 pages. Submitted to IEEE International Symposium on Information Theory 201
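    For context, the baseline that the two-bit algorithms extend is ordinary parallel bit flipping: each round, every bit touched by a strict majority of unsatisfied checks is flipped. A minimal sketch of that baseline (the two-bit "strength" state and check-side bit from the paper are not modeled here):

    ```python
    import numpy as np

    def bit_flip_decode(H, r, max_iter=50):
        """Parallel bit flipping over the BSC: each round, flip every
        variable node for which a strict majority of its checks fail."""
        x = r.copy().astype(int)
        deg = H.sum(axis=0)            # variable-node degrees
        for _ in range(max_iter):
            synd = (H @ x) % 2         # syndrome: unsatisfied checks
            if not synd.any():
                return x, True         # valid codeword reached
            unsat = H.T @ synd         # failing checks touching each bit
            flip = 2 * unsat > deg     # strict-majority flipping rule
            if not flip.any():
                break                  # stuck: no bit qualifies to flip
            x = (x + flip.astype(int)) % 2
        return x, not ((H @ x) % 2).any()
    ```

    The two-bit variants in the paper replace the binary flip decision with a small per-node state machine, which is what buys the factor-of-2 improvement in guaranteed correction.
    
    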

    DIFFERENTIATING TOP-RANKED MALE TENNIS PLAYERS FROM LOWER-RANKED PLAYERS USING HAWK-EYE DATA: AN INVESTIGATION OF THE 2012–2014 AUSTRALIAN OPEN TOURNAMENTS

    The purpose of this study was to differentiate top- and lower-ranked professional tennis players using Hawk-Eye-derived performance metrics. Eighty players competing at the 2012–2014 Australian Open tournaments were assigned to either a top-ranked (n=40) or lower-ranked (n=40) group, based on their ATP ranking. Hawk-Eye data from one of each player's matches were obtained for analysis and compared between groups. Top-ranked players achieved more success on serve (with respect to aces, accuracy and points won) and possessed a faster first-serve return, compared with lower-ranked players. Top-ranked players also played more groundstrokes from behind the baseline, delivered the ball deeper into their opponent's court, and covered a greater distance during matches. Coaches may be able to use these findings to develop playing style and match tactics.

    Errata to ENERGETIC: Final Report

    During further work on the EXCALiBUR H&ES FPGA Testbed looking at FPGA performance, it has come to light that the approach for monitoring the power of a Xilinx U280 FPGA card only looked at the PCI-express power rail and, regrettably, overlooked the AUX power rail. The only data where this is an issue in the original Final Report is the SGEMM energy consumption, i.e., Figure 6 and its discussion. The correct methodology was applied to the U50 FPGA card (hosted at Newcastle) in all cases. We have developed and applied an alternative approach, whose correctness we have confirmed with AMD/Xilinx. Whilst applying this new method to correct the FPGA energy consumption, we also examined the host CPU energy consumption. We made use of the amd_energy kernel module [1], which allowed us to read the energy counters directly, and discovered significant differences from the previously published data. Section 2 describes our approaches to measuring the energy consumption of the FPGA and host CPU during execution of the SGEMM bitstream. Section 3 gives our updated results (see https://espace.mmu.ac.uk/633613/), with Figure 2 illustrating the observed differences. Section 4 discusses how this new data amends some findings, with Sections 5 and 6 giving our conclusions and plans for further work.
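    Counter-based energy measurement of the kind described above typically samples a monotonically increasing energy counter before and after the workload. A minimal sketch: the amd_energy driver exposes such counters as hwmon `energyN_input` files (microjoules) under `/sys/class/hwmon`, but the exact path is machine-specific, so it is taken as a parameter here; this is an illustrative pattern, not the report's actual tooling:

    ```python
    def read_energy_uj(path):
        """Read one hwmon-style energy counter, reported in microjoules."""
        with open(path) as f:
            return int(f.read().strip())

    def measure_energy_j(path, workload):
        """Energy consumed by `workload` in joules, assuming the counter
        does not wrap during the measurement window."""
        before = read_energy_uj(path)
        workload()
        after = read_energy_uj(path)
        return (after - before) / 1e6
    ```

    For long-running workloads the counter width matters: narrow counters wrap, so periodic sampling and accumulation is safer than a single before/after pair.
    
    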

    On the guaranteed error correction capability of LDPC codes

    We investigate the relation between the girth and the guaranteed error correction capability of γ-left-regular LDPC codes when decoded using the (serial and parallel) bit flipping algorithms. A lower bound on the number of variable nodes which expand by a factor of at least 3γ/4 is found based on the Moore bound. An upper bound on the guaranteed correction capability is established by studying the sizes of the smallest possible trapping sets.
    Comment: 5 pages, submitted to IEEE International Symposium on Information Theory (ISIT), 200
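    The girth in question is that of the code's Tanner graph: the bipartite graph with variable nodes for columns of the parity-check matrix H and check nodes for rows. It can be computed exactly by a BFS from every node, recording the shortest cycle closed by a non-tree edge. A minimal sketch (not the paper's bounds, just the girth computation they are stated in terms of):

    ```python
    import numpy as np
    from collections import deque

    def tanner_girth(H):
        """Girth of the Tanner graph of parity-check matrix H.

        Nodes 0..n-1 are variables, n..n+m-1 are checks. BFS from each
        node; an edge to an already-visited non-parent node closes a
        cycle of length dist[u] + dist[v] + 1."""
        m, n = H.shape
        adj = [[] for _ in range(n + m)]
        for i in range(m):
            for j in range(n):
                if H[i, j]:
                    adj[j].append(n + i)
                    adj[n + i].append(j)
        best = float('inf')
        for s in range(n + m):
            dist, parent = {s: 0}, {s: -1}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v == parent[u]:
                        continue
                    if v in dist:
                        best = min(best, dist[u] + dist[v] + 1)
                    else:
                        dist[v], parent[v] = dist[u] + 1, u
                        q.append(v)
        return best
    ```

    Since the Tanner graph is bipartite, every cycle has even length, so the smallest possible girth is 4, corresponding to two columns of H sharing two rows.
    
    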