
    Possibilities for improving the efficiency of linear error-correcting codes

    Results are presented in the form of 14 theorems specifying sufficient conditions under which it is possible to construct new, more efficient single- and multi-error-correcting codes from existing ones when the method used to construct the existing codes is known.

    Sequences of binary irreducible polynomials

    In this paper we construct an infinite sequence of binary irreducible polynomials starting from any irreducible polynomial f_0 ∈ F_2[x]. If f_0 is of degree n = 2^l · m, where m is odd and l is a non-negative integer, then after an initial finite sequence of polynomials f_0, f_1, ..., f_s with s ≤ l + 3, the degree of f_{i+1} is twice the degree of f_i for any i ≥ s. Comment: 7 pages, minor adjustment
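
    A minimal sketch of one degree-doubling iteration consistent with the abstract: the classical Q-transform f(x) -> x^deg(f) · f(x + 1/x) over F_2. Whether this is the exact iteration used in the paper is an assumption; the sketch simply applies the transform repeatedly and reports the degree and irreducibility of each iterate.

        # Hedged sketch: iterate the Q-transform and report degree/irreducibility.
        # Assumption: the paper's construction resembles this transform; it may differ.
        from sympy import symbols, Poly, GF, expand

        x = symbols('x')

        def q_transform(f):
            """Return x^deg(f) * f(x + 1/x) reduced over GF(2)."""
            n = f.degree()
            g = expand(x**n * f.as_expr().subs(x, x + 1/x))
            return Poly(g, x, domain=GF(2))

        f = Poly(x**2 + x + 1, x, domain=GF(2))  # an irreducible starting polynomial f_0
        for i in range(1, 5):
            f = q_transform(f)
            print(i, f.degree(), f.is_irreducible)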

    Multiple Particle Interference and Quantum Error Correction

    The concept of multiple particle interference is discussed, using insights provided by the classical theory of error correcting codes. This leads to a discussion of error correction in a quantum communication channel or a quantum computer. Methods of error correction in the quantum regime are presented, and their limitations assessed. A quantum channel can recover from arbitrary decoherence of x qubits if K bits of quantum information are encoded using n quantum bits, where K/n can be greater than 1 - 2H(2x/n), but must be less than 1 - 2H(x/n). This implies exponential reduction of decoherence with only a polynomial increase in the computing resources required. Therefore quantum computation can be made free of errors in the presence of physically realistic levels of decoherence. The methods also allow isolation of quantum communication from noise and eavesdropping (quantum privacy amplification). Comment: Submitted to Proc. Roy. Soc. Lond. A in November 1995, accepted May 1996. 39 pages, 6 figures. This is now the final version. The changes are some added references, a changed final figure, and a more precise use of the word 'decoherence'. I would like to propose the word 'defection' for a general unknown error of a single qubit (rotation and/or entanglement). It is useful because it captures the nature of the error process, and has a verb form 'to defect'. Random unitary changes (rotations) of a qubit are caused by defects in the quantum computer; to entangle randomly with the environment is to form a treacherous alliance with an enemy of successful quantum computation.
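
    The rate bounds quoted above are stated in terms of the binary entropy function H. The short sketch below simply evaluates both bounds for an illustrative choice of block length n and number of affected qubits x (these particular values are not from the paper).

        # Evaluate the quoted rate bounds for illustrative n and x.
        from math import log2

        def H(p):
            """Binary entropy function H(p)."""
            if p in (0.0, 1.0):
                return 0.0
            return -p * log2(p) - (1 - p) * log2(1 - p)

        n, x = 1000, 20                # arbitrary illustrative values
        lower = 1 - 2 * H(2 * x / n)   # K/n can be greater than this (achievability)
        upper = 1 - 2 * H(x / n)       # K/n must be less than this (converse)
        print(f"K/n can exceed {lower:.4f} but must stay below {upper:.4f}")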

    Accuracy thresholds of topological color codes on the hexagonal and square-octagonal lattices

    Accuracy thresholds of quantum error correcting codes that exploit topological properties of systems, defined on two different arrangements of qubits, are predicted. We study the topological color codes on the hexagonal lattice and on the square-octagonal lattice by mapping them onto spin glass systems. The analysis of the corresponding spin glass systems relies on duality and gauge symmetry, which have succeeded in deriving the locations of special points deeply related to the accuracy thresholds of topological error correcting codes. We predict that the accuracy thresholds for the topological color codes would be 1 - p_c = 0.1096-8 for the hexagonal lattice and 1 - p_c = 0.1092-3 for the square-octagonal lattice, where 1 - p denotes the error probability on each qubit. Hence both of them are expected to be slightly lower than the probability 1 - p_c = 0.110028 for the quantum Gilbert-Varshamov bound with a zero encoding rate. Comment: 6 pages, 4 figures; the previous title was "Threshold of topological color code". This is the published version in Phys. Rev.
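
    The quoted zero-rate quantum Gilbert-Varshamov value 1 - p_c = 0.110028 can be reproduced numerically, assuming (an assumption on our part, consistent with the rate bounds in the Steane abstract above) that it comes from setting the rate expression 1 - 2H(p) to zero, i.e. solving H(p) = 1/2.

        # Solve H(p) = 1/2 by bisection; assumption: the zero-rate quantum GV bound
        # quoted above corresponds to the rate expression 1 - 2H(p) vanishing.
        from math import log2

        def H(p):
            return -p * log2(p) - (1 - p) * log2(1 - p)

        lo, hi = 1e-9, 0.5        # H is increasing on (0, 1/2)
        for _ in range(100):
            mid = (lo + hi) / 2
            if H(mid) < 0.5:
                lo = mid
            else:
                hi = mid
        print(f"p with H(p) = 1/2: {lo:.6f}")  # ~0.110028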

    Partial tests, universal tests and decomposability

    For a property P and a sub-property P', we say that P is P'-partially testable with q queries if there exists an algorithm that distinguishes, with high probability, inputs in P' from inputs ε-far from P, using q queries. Some natural properties require many queries to test, but can be partitioned into a small number of subsets for which they are partially testable with very few queries, sometimes even a number independent of the input size. For properties over {0,1}, the notion of being thus partitionable ties in closely with Merlin-Arthur proofs of Proximity (MAPs) as defined independently in [14]: a partition into r partially-testable properties is the same as a Merlin-Arthur system where the proof consists of the identity of one of the r partially-testable properties, giving a 2-way translation to an O(log r) size proof. Our main result is that for some low complexity properties a partition as above cannot exist, and moreover that for each of our properties there does not exist even a single sub-property featuring both a large size and a query-efficient partial test, in particular improving the lower bound set in [14]. For this we use neither the traditional Yao-type arguments nor the more recent communication complexity method, but open up a new approach for proving lower bounds. First, we use entropy analysis, which allows us to apply our arguments directly to 2-sided tests, thus avoiding the cost of the conversion in [14] from 2-sided to 1-sided tests. Broadly speaking, we use "distinguishing instances" of a supposed test to show that a uniformly random choice of a member of the sub-property has "low entropy areas", ultimately leading to it having a low total entropy and hence a small base set. Additionally, to have our arguments apply to adaptive tests, we use a mechanism of "rearranging" the input bits (through a decision tree that adaptively reads the entire input) to expose the low entropy that would otherwise not be apparent. We also explore the possibility of a connection in the other direction, namely whether the existence of a good partition (or MAP) can lead to a relatively query-efficient standard property test. We provide some preliminary results concerning this question, including a simple lower bound on the possible trade-off. Our second major result is a positive trade-off result for the restricted framework of 1-sided proximity oblivious tests. This is achieved through the construction of a "universal tester" that works the same for all properties admitting the restricted test. Our tester is closely related to the notion of sample-based testing (for a non-constant number of queries) as defined by Goldreich and Ron in [13]. In particular, it partially resolves an open problem raised by [13].

    Cyclic mutually unbiased bases, Fibonacci polynomials and Wiedemann's conjecture

    We relate the construction of a complete set of cyclic mutually unbiased bases, i.e., mutually unbiased bases generated by a single unitary operator, in power-of-two dimensions to the problem of finding a symmetric matrix over F_2 with an irreducible characteristic polynomial that has a given Fibonacci index. For dimensions of the form 2^(2^k) we present a solution that shows an analogy to an open conjecture of Wiedemann in finite field theory. Finally, we discuss the equivalence of mutually unbiased bases. Comment: 11 pages, added chapter on equivalence
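
    As a small illustration of one ingredient of this construction, the sketch below checks that a symmetric 0/1 matrix has an irreducible characteristic polynomial over F_2. The 3x3 matrix is a hypothetical example (not taken from the paper), and the additional Fibonacci-index condition from the paper is not modeled here.

        # Check irreducibility over F_2 of the characteristic polynomial of a symmetric
        # 0/1 matrix. The matrix is a made-up example; the paper's Fibonacci-index
        # requirement is not reproduced in this sketch.
        from sympy import Matrix, Poly, GF, symbols

        x = symbols('x')
        M = Matrix([[0, 1, 0],
                    [1, 0, 1],
                    [0, 1, 1]])                  # symmetric with entries in {0, 1}
        cp = Poly(M.charpoly(x).as_expr(), x, domain=GF(2))
        print(cp, cp.is_irreducible)             # x**3 + x**2 + 1 is irreducible over GF(2)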

    Error Threshold for Color Codes and Random 3-Body Ising Models

    We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random 3-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p_c = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise. Comment: 4 pages, 3 figures, 1 table

    Use of Linear Error-Correcting Subcodes in Flow Watermarking for Channels with Substitution and Deletion Errors

    An invisible flow watermarking QIM scheme based on linear error-correcting subcodes for channels with substitution and deletion errors is proposed in this paper. The evaluation of the scheme demonstrates performance similar to that of a known scheme, but with lower complexity, since its implementation is mainly based on linear decoding operations.

    Spectrum of Sizes for Perfect Deletion-Correcting Codes

    One peculiarity with deletion-correcting codes is that perfect t-deletion-correcting codes of the same length over the same alphabet can have different numbers of codewords, because the balls of radius t with respect to the Levenshteĭn distance may be of different sizes. There is interest, therefore, in determining all possible sizes of a perfect t-deletion-correcting code, given the length n and the alphabet size q. In this paper, we determine completely the spectrum of possible sizes for perfect q-ary 1-deletion-correcting codes of length three for all q, and perfect q-ary 2-deletion-correcting codes of length four for almost all q, leaving only a small finite number of cases in doubt. Comment: 23 pages
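
    A tiny example of the phenomenon driving this: radius-1 deletion balls of words of the same length can have different sizes, so perfect codes need not all have the same number of codewords. The helper below is illustrative only.

        # Radius-1 deletion ball: all distinct words obtained by deleting one symbol.
        def one_deletion_ball(word):
            return {word[:i] + word[i + 1:] for i in range(len(word))}

        print(one_deletion_ball("aaa"))  # {'aa'}              -> ball of size 1
        print(one_deletion_ball("abc"))  # {'ab', 'ac', 'bc'}  -> ball of size 3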

    Finite elements of biquadratic interpolation: standards and alternatives

    Khomchenko, A. N. Finite elements of biquadratic interpolation: standards and alternatives / A. N. Khomchenko, A. V. Varshamov // Zbirnyk naukovykh prats NUK. – Mykolaiv: NUK, 2020. – No. 1 (479). – P. 97–102. Abstract. The paper deals with finite elements (FE) of biquadratic interpolation, which, together with triangular FEs, are considered the most popular in applied problems. These are isoparametric elements known as serendipity-class elements. The main drawback of the standard serendipity elements, according to proponents of mechanical analogies, is the physical inadequacy of the equivalent nodal "loads" produced by a uniform mass force. This phenomenon of "gravitational repulsion" is sometimes called the Zienkiewicz paradox; Zienkiewicz first drew attention to the unnatural spectrum of equivalent nodal forces in 1971 and severely criticized it. It should be noted that Zienkiewicz, as a co-author of the discovery of the standard serendipity FEs, believed that this deficiency could not be eliminated and advised putting up with it. We try to rehabilitate the standard isoparametric FEs through a different interpretation of the integral characteristics of the influence functions of the FE nodes. In addition, we point out the causes of negative nodal loads and propose a way to construct alternative bases that are free of this drawback. Interestingly, the constructive theory of serendipity approximations allows us to generate mathematically sound and physically adequate alternative models in unlimited numbers. The purpose of this work is to constructively prove the existence of alternative models of serendipity finite elements of biquadratic interpolation. The example of the second-order finite element Q8 illustrates the ability to select the spectrum of equivalent nodal loads at the user's request. The mathematical validity of the models (within the framework of the Lagrange interpolation hypothesis) and their physical adequacy are ensured. The technique for constructing new basis functions (influence functions) uses a non-matrix procedure of static condensation (reduction). Unlike the standard procedure (Jordan's recipe, 1970), the new procedure generates many new workable finite element models. The presence of a non-nodal parameter makes it possible to control the shaping of serendipity surfaces. Optimization of the local and integral characteristics of the model is achieved precisely by changing the relief of the shape function surface. This is the scientific novelty of the results. The new approach preserves inter-element continuity, which means that, in practice, standard and alternative models can be assembled together without undesirable consequences. The new finite elements significantly expand the lineup of biquadratic interpolation elements. The practical value lies in the ability to experiment in order to improve the interpolation properties and computational qualities of the model. The research is important for updating the finite element lineup in application software packages.
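
    The negative corner loads ("gravitational repulsion") discussed above can be reproduced by integrating the textbook Q8 serendipity shape functions over the reference square [-1, 1]^2: a uniform unit load is distributed as -1/3 to each corner node and 4/3 to each midside node. The shape functions below are the standard textbook ones; it is an assumption that they coincide with the "standard" element the paper criticizes.

        # Equivalent nodal loads of the standard Q8 serendipity element under a uniform
        # unit load: integrate the textbook shape functions over [-1, 1]^2.
        # Assumption: these textbook shape functions are the "standard" element meant above.
        from sympy import symbols, integrate, Rational

        xi, eta = symbols('xi eta')

        def corner(xi_i, eta_i):
            return Rational(1, 4) * (1 + xi * xi_i) * (1 + eta * eta_i) * (xi * xi_i + eta * eta_i - 1)

        def midside(eta_i):  # node at (xi, eta) = (0, eta_i)
            return Rational(1, 2) * (1 - xi**2) * (1 + eta * eta_i)

        def load(N):
            return integrate(N, (xi, -1, 1), (eta, -1, 1))

        print(load(corner(1, 1)))   # -1/3 : the negative corner load (Zienkiewicz paradox)
        print(load(midside(1)))     #  4/3 : midside node load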