
    GPGPU for Difficult Black-box Problems

    Difficult black-box problems arise in many scientific and industrial areas. In this paper, efficient use of a hardware accelerator to implement dedicated solvers for such problems is discussed and studied, based on the example of the Golomb Ruler problem. The actual solution of the problem is presented, based on evolutionary and memetic algorithms accelerated on a GPGPU. The presented results show that the GPGPU outperforms the CPU for some memetic algorithms, which can be used as part of a hybrid algorithm for finding near-optimal solutions to the Golomb Ruler problem. The presented research is part of building a heterogeneous parallel algorithm for the difficult black-box Golomb Ruler problem.
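    As background on the kind of objective such a solver optimizes (a minimal sketch, not the paper's own code), the Python snippet below checks the Golomb ruler property: a ruler is a set of integer marks, and it is a Golomb ruler when all pairwise differences between marks are distinct. An evolutionary or memetic search can use the number of repeated differences as a penalty and the ruler length as the quantity to minimize; all function names here are illustrative.

        from itertools import combinations

        def pairwise_differences(marks):
            """All positive differences between pairs of marks."""
            return [abs(a - b) for a, b in combinations(marks, 2)]

        def golomb_penalty(marks):
            """Number of duplicated pairwise differences; 0 means the marks form a Golomb ruler."""
            diffs = pairwise_differences(marks)
            return len(diffs) - len(set(diffs))

        def ruler_length(marks):
            """Length of the ruler (largest mark minus smallest), the quantity to minimize."""
            return max(marks) - min(marks)

        # Example: [0, 1, 4, 9, 11] is an optimal order-5 Golomb ruler of length 11.
        assert golomb_penalty([0, 1, 4, 9, 11]) == 0
        print(ruler_length([0, 1, 4, 9, 11]))  # 11

    Evaluating this penalty over large populations is embarrassingly parallel, which is what makes GPGPU acceleration attractive for such solvers.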

    Optimized Surface Code Communication in Superconducting Quantum Computers

    Quantum computing (QC) is at the cusp of a revolution. Machines with 100 quantum bits (qubits) are anticipated to be operational by 2020 [googlemachine, gambetta2015building], and several-hundred-qubit machines are around the corner. Machines of this scale have the capacity to demonstrate quantum supremacy, the tipping point where QC is faster than the fastest classical alternative for a particular problem. Because error correction techniques will be central to QC and will be the most expensive component of quantum computation, choosing the lowest-overhead error correction scheme is critical to overall QC success. This paper evaluates two established quantum error correction codes, planar and double-defect surface codes, using a set of compilation, scheduling, and network simulation tools. In considering scalable methods for optimizing both codes, we do so in the context of a full microarchitectural and compiler analysis. Contrary to previous predictions, we find that the simpler planar codes are sometimes more favorable for implementation on superconducting quantum computers, especially under conditions of high communication congestion.
    Comment: 14 pages, 9 figures, The 50th Annual IEEE/ACM International Symposium on Microarchitecture
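    To give a rough sense of why error-correction overhead dominates the cost of quantum computation, the sketch below estimates the surface-code distance and physical-qubit count needed to hit a target logical error rate. It uses the commonly quoted scaling approximation p_L ≈ A (p/p_th)^((d+1)/2) and the roughly 2d^2 - 1 physical qubits of a rotated surface-code patch; the constants (A = 0.1, threshold p_th = 1%) and the patch layout are ballpark assumptions from the surface-code literature, not figures from this paper.

        def logical_error_rate(p_phys, d, p_th=0.01, A=0.1):
            """Commonly used surface-code scaling approximation (assumed constants)."""
            return A * (p_phys / p_th) ** ((d + 1) / 2)

        def distance_for_target(p_phys, p_target, p_th=0.01, A=0.1):
            """Smallest odd code distance whose estimated logical error rate meets the target."""
            d = 3
            while logical_error_rate(p_phys, d, p_th, A) > p_target:
                d += 2
            return d

        def physical_qubits(d):
            """Approximate qubit count of one rotated surface-code patch: d^2 data + d^2 - 1 ancilla."""
            return 2 * d * d - 1

        d = distance_for_target(p_phys=1e-4, p_target=1e-12)
        print(d, physical_qubits(d))  # distance 11, 241 physical qubits per logical qubit under these assumptions

    Even under optimistic assumptions, hundreds of physical qubits back each logical qubit, which is why the choice between planar and double-defect layouts matters so much.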

    Towards Quantum Belief Propagation for LDPC Decoding in Wireless Networks

    We present Quantum Belief Propagation (QBP), a Quantum Annealing (QA) based decoder design for Low Density Parity Check (LDPC) error control codes, which have found many useful applications in Wi-Fi, satellite communications, mobile cellular systems, and data storage systems. QBP reduces LDPC decoding to a discrete optimization problem, then embeds that reduced design onto quantum annealing hardware. QBP's embedding design can support LDPC codes of block length up to 420 bits on real state-of-the-art QA hardware with 2,048 qubits. We evaluate performance on real quantum annealer hardware, performing sensitivity analyses on a variety of parameter settings. Our design achieves a bit error rate of 10^-8 in 20 μs and a 1,500-byte frame error rate of 10^-6 in 50 μs at SNR 9 dB over a Gaussian noise wireless channel. Further experiments measure performance over real-world wireless channels, requiring 30 μs to achieve a 1,500-byte 99.99% frame delivery rate at SNR 15-20 dB. QBP achieves a performance improvement over an FPGA-based soft belief propagation LDPC decoder, reaching a bit error rate of 10^-8 and a frame error rate of 10^-6 at an SNR 2.5-3.5 dB lower. In terms of limitations, QBP currently cannot realize practical protocol-sized (e.g., Wi-Fi, WiMAX) LDPC codes on current QA processors. Our further studies in this work present future cost, throughput, and QA hardware trend considerations.
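    As an illustration of the reduction QBP relies on (a sketch of the general idea, not the paper's actual embedding), the snippet below poses decoding of a tiny LDPC-style code as a discrete optimization: find the bit vector that satisfies the parity checks while staying close to the received soft values. A real QA formulation would express this objective as a QUBO/Ising problem and map it onto the annealer's qubit graph; here a brute-force search stands in for the hardware, and the toy parity-check matrix and names are illustrative.

        from itertools import product

        # Toy parity-check matrix H (3 checks over 6 bits); each row is one parity constraint.
        H = [
            [1, 1, 0, 1, 0, 0],
            [0, 1, 1, 0, 1, 0],
            [1, 0, 1, 0, 0, 1],
        ]

        def objective(bits, received, penalty=4.0):
            """Channel disagreement plus a penalty for every violated parity check."""
            channel_cost = sum((b - r) ** 2 for b, r in zip(bits, received))
            parity_cost = sum(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)
            return channel_cost + penalty * parity_cost

        def brute_force_decode(received):
            """Exhaustive stand-in for the annealer: minimize the same discrete objective."""
            n = len(H[0])
            return min(product((0, 1), repeat=n), key=lambda bits: objective(bits, received))

        # Noisy soft estimates of a transmitted codeword (values near 0 or 1).
        received = [0.9, 0.2, 0.1, 0.8, 0.3, 0.9]
        print(brute_force_decode(received))  # (1, 0, 0, 1, 0, 1)

    The annealer's job is precisely the minimization step: the quadratic objective becomes qubit biases and couplings, and the hardware returns low-energy bit assignments.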

    Computational linear algebra over finite fields

    We present algorithms for the efficient solution of linear algebra problems over finite fields.
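    The abstract gives no further detail, but the flavor of such computations can be conveyed with a short sketch (not the paper's algorithms): Gaussian elimination over a prime field GF(p), where division is replaced by multiplication with a modular inverse.

        def rank_mod_p(matrix, p):
            """Rank of an integer matrix over GF(p) via Gaussian elimination (illustrative sketch)."""
            a = [[x % p for x in row] for row in matrix]
            rows, cols = len(a), len(a[0])
            rank, pivot_row = 0, 0
            for col in range(cols):
                # Find a row with a nonzero entry in this column.
                pivot = next((r for r in range(pivot_row, rows) if a[r][col] != 0), None)
                if pivot is None:
                    continue
                a[pivot_row], a[pivot] = a[pivot], a[pivot_row]
                inv = pow(a[pivot_row][col], -1, p)  # modular inverse; requires p prime
                a[pivot_row] = [(x * inv) % p for x in a[pivot_row]]
                for r in range(rows):
                    if r != pivot_row and a[r][col] != 0:
                        factor = a[r][col]
                        a[r] = [(x - factor * y) % p for x, y in zip(a[r], a[pivot_row])]
                pivot_row += 1
                rank += 1
            return rank

        print(rank_mod_p([[1, 2, 3], [4, 5, 6], [7, 8, 9]], 5))  # 2: the rows are linearly dependent mod 5

    High-performance libraries replace this textbook elimination with blocked, cache-aware, and delayed-reduction variants, but the underlying field arithmetic is the same.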

    Error-correcting codes and applications to large scale classification systems

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 37-39). In this thesis, we study the performance of distributed output coding (DOC) and error-correcting output coding (ECOC) as potential methods for expanding the class of tractable machine-learning problems. Using distributed output coding, we were able to scale a neural-network-based algorithm to handle nearly 10,000 output classes. In particular, we built a prototype OCR engine for Devanagari and Korean texts based upon distributed output coding. We found that the resulting classifiers performed better than existing algorithms, while maintaining small size. Error correction, however, was found to be ineffective at increasing the accuracy of the ensemble. For each language, we also tested the feasibility of automatically finding a good codebook. Unfortunately, the results in this direction were primarily negative. by Jeremy Scott Hurwitz. M.Eng.
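    As a reminder of how output coding turns a many-class problem into binary ones (a generic ECOC sketch, not the thesis's DOC scheme or its codebooks), each class is assigned a binary codeword, one binary classifier is trained per codeword bit, and a test example is decoded to the class whose codeword is nearest in Hamming distance to the predicted bit string. The codebook below is made up for illustration.

        import numpy as np

        # Illustrative codebook: 4 classes, 6-bit codewords (rows); real systems design or learn these.
        CODEBOOK = np.array([
            [0, 0, 0, 1, 1, 1],
            [0, 1, 1, 0, 0, 1],
            [1, 0, 1, 0, 1, 0],
            [1, 1, 0, 1, 0, 0],
        ])

        def encode_labels(y):
            """Map class indices to the bit targets used to train one binary classifier per column."""
            return CODEBOOK[y]

        def decode(bit_predictions):
            """Assign each example to the class whose codeword is closest in Hamming distance."""
            distances = np.abs(CODEBOOK[None, :, :] - bit_predictions[:, None, :]).sum(axis=2)
            return distances.argmin(axis=1)

        # A predicted bit string with one flipped bit still decodes to class 2.
        noisy = np.array([[1, 0, 1, 0, 1, 1]])
        print(decode(noisy))  # [2]

    The redundancy in the codewords is what gives the ensemble its error-correcting capacity; the thesis's negative result concerns how much that redundancy actually helps in practice.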