Hamming Codes
We will be looking into the application of matrix algebra in forming Hamming codes. Hamming codes are essential not just in the detection of errors, but also in the linear concurrent correction of these errors. The matrices we will use will have entries that are binary units. Binary units are mathematically convenient, and their simplicity permits the representation of the many open and closed circuits used in communication systems. The entries in the matrices will represent a message that is meant for transmission or reception, akin to the contemporary application of Hamming codes in wireless communication. We will use Hamming (7,4) codes, which are linear subspaces of the 7-dimensional vector space over the base field F2.
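The binary matrix algebra described above can be sketched concretely. The following is a minimal illustration of systematic Hamming (7,4) encoding and syndrome decoding over F2; the particular generator and parity-check matrices shown are one standard systematic choice, not necessarily the ones used in the work itself:

```python
# Generator matrix G (4x7) and parity-check matrix H (3x7) for a
# systematic Hamming (7,4) code; all arithmetic is over F2 (mod 2).
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def encode(msg):
    """Multiply the 4-bit message by G over F2 to get a 7-bit codeword."""
    return [sum(m * g for m, g in zip(msg, col)) % 2
            for col in zip(*G)]

def decode(word):
    """Compute the syndrome H w^T; a nonzero syndrome equals the column
    of H at the error position, so flip that bit, then read the message."""
    w = list(word)
    syndrome = [sum(h * b for h, b in zip(row, w)) % 2 for row in H]
    if any(syndrome):
        for i in range(7):
            if [row[i] for row in H] == syndrome:
                w[i] ^= 1
                break
    return w[:4]   # systematic code: the first four bits are the message

msg = [1, 0, 1, 1]
cw = encode(msg)
cw[5] ^= 1                  # inject a single-bit error
assert decode(cw) == msg    # the error is detected and corrected
```

Because the code has minimum distance 3, any single-bit error produces a syndrome that uniquely identifies the flipped position.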
Graphical Structures for Design and Verification of Quantum Error Correction
We introduce a high-level graphical framework for designing and analysing
quantum error correcting codes, centred on what we term the coherent parity
check (CPC). The graphical formulation is based on the diagrammatic tools of
the zx-calculus of quantum observables. The resulting framework leads to a
construction for stabilizer codes that allows us to design and verify a broad
range of quantum codes based on classical ones, and that gives a means of
discovering large classes of codes using both analytical and numerical methods.
We focus in particular on the smaller codes that will be the first used by
near-term devices. We show how CSS codes form a subset of CPC codes and, more
generally, how to compute stabilizers for a CPC code. As an explicit example of
this framework, we give a method for turning almost any pair of classical
[n,k,3] codes into a [[2n - k + 2, k, 3]] CPC code. Further, we give a simple
technique for machine search which yields thousands of potential codes, and
demonstrate its operation for distance 3 and 5 codes. Finally, we use the
graphical tools to demonstrate how Clifford computation can be performed within
CPC codes. As our framework gives a new tool for constructing small- to
medium-sized codes with relatively high code rates, it provides a new source
for codes that could be suitable for emerging devices, while its zx-calculus
foundations enable natural integration of error correction with graphical
compiler toolchains. It also provides a powerful framework for reasoning about
all stabilizer quantum error correction codes of any size.
Comment: Computer code associated with this paper may be found at https://doi.org/10.15128/r1bn999672
Multiple Particle Interference and Quantum Error Correction
The concept of multiple particle interference is discussed, using insights
provided by the classical theory of error correcting codes. This leads to a
discussion of error correction in a quantum communication channel or a quantum
computer. Methods of error correction in the quantum regime are presented, and
their limitations assessed. A quantum channel can recover from arbitrary
decoherence of x qubits if K bits of quantum information are encoded using n
quantum bits, where K/n can be greater than 1 - 2H(2x/n), but must be less than
1 - 2H(x/n). This implies exponential reduction of decoherence with only a
polynomial increase in the computing resources required. Therefore quantum
computation can be made free of errors in the presence of physically realistic
levels of decoherence. The methods also allow isolation of quantum
communication from noise and eavesdropping (quantum privacy amplification).
Comment: Submitted to Proc. Roy. Soc. Lond. A. in November 1995, accepted May
1996. 39 pages, 6 figures. This is now the final version. The changes are
some added references, changed final figure, and a more precise use of the
word `decoherence'. I would like to propose the word `defection' for a
general unknown error of a single qubit (rotation and/or entanglement). It is
useful because it captures the nature of the error process, and has a verb
form `to defect'. Random unitary changes (rotations) of a qubit are caused by
defects in the quantum computer; to entangle randomly with the environment is
to form a treacherous alliance with an enemy of successful quantum computation.
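The rate bounds quoted in the abstract above (rates above 1 - 2H(2x/n) are achievable, while all rates must stay below 1 - 2H(x/n), with H the binary entropy) can be evaluated numerically. A minimal sketch, with function names chosen for illustration:

```python
from math import log2

def H2(p):
    """Binary entropy function H(p) = -p log2 p - (1-p) log2 (1-p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def rate_upper_bound(x, n):
    """Upper bound on K/n for recovering from decoherence of x of n qubits."""
    return 1 - 2 * H2(x / n)

def rate_lower_bound(x, n):
    """Achievability bound from the abstract: rates above this are attainable."""
    return 1 - 2 * H2(2 * x / n)

# For example, correcting 5 errored qubits out of 100:
print(rate_lower_bound(5, 100), rate_upper_bound(5, 100))
```

For small x/n both bounds stay close to 1, which is the quantitative content of the claim that decoherence can be suppressed exponentially at only polynomial resource cost.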
On the Duality of Probing and Fault Attacks
In this work we investigate the problem of simultaneous privacy and integrity
protection in cryptographic circuits. We consider a white-box scenario with a
powerful, yet limited attacker. A concise metric for the level of probing and
fault security is introduced, which is directly related to the capabilities of
a realistic attacker. In order to investigate the interrelation of probing and
fault security we introduce a common mathematical framework based on the
formalism of information and coding theory. The framework unifies the known
linear masking schemes. We prove a central theorem about the properties of
linear codes which leads to optimal secret sharing schemes. These schemes
provide the lower bound for the number of masks needed to counteract an
attacker with a given strength. The new formalism reveals an intriguing duality
principle between the problems of probing and fault security, and provides a
unified view on privacy and integrity protection using error detecting codes.
Finally, we introduce a new class of linear tamper-resistant codes, which can
preserve security against an attacker mounting simultaneous probing and fault
attacks.
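As a toy instance of the linear masking schemes such a framework unifies, first-order Boolean masking splits a secret into two shares whose XOR recovers it, so probing any single share in isolation reveals only uniformly random data. A minimal sketch (the function names and the 8-bit width are illustrative assumptions, not the paper's construction):

```python
import secrets

def mask(x, bits=8):
    """Split secret x into two shares (m, x XOR m) with m uniformly random."""
    m = secrets.randbits(bits)
    return m, x ^ m

def unmask(share0, share1):
    """Recombine the shares: their XOR recovers the secret."""
    return share0 ^ share1

s0, s1 = mask(0xA7)
assert unmask(s0, s1) == 0xA7
# Each share alone is uniformly distributed, so one probe learns nothing;
# detecting faults additionally requires redundancy from an error-detecting
# code, which is the integrity side of the duality discussed above.
```

This illustrates only the privacy half of the duality; the tamper-resistant codes of the abstract add the integrity half on top.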
All-optical logic circuits based on polarization properties of nondegenerate four-wave mixing
All-optical logic circuits based on the polarization properties of nondegenerate four-wave mixing are proposed. Schemes to perform multiple triple-product logic functions are discussed, and it is shown that higher-level Boolean operations that involve several bits can be implemented without resorting to the standard two-input gates. As a simple illustration of the idea, a circuit that performs error correction on a (3, 1) Hamming code is demonstrated. Error-free performance (bit error rate of <10^(−9)) at 2.5 Gbit/s is achieved after single-error correction on the Hamming word with 50% errors.
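The (3, 1) Hamming code used in that demonstration is the three-bit repetition code, so single-error correction reduces to a majority vote over the received bits. A minimal classical sketch of the logic the circuit realizes all-optically:

```python
def encode_31(bit):
    """(3, 1) Hamming code: repeat the single data bit three times."""
    return [bit, bit, bit]

def correct_31(word):
    """Single-error correction by majority vote over the three bits."""
    return 1 if sum(word) >= 2 else 0

assert correct_31(encode_31(1)) == 1
assert correct_31([1, 0, 1]) == 1   # one flipped bit is corrected
assert correct_31([0, 0, 1]) == 0
```

Any single bit flip leaves two of the three bits intact, so the majority vote always recovers the original data bit.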
The problem with the SURF scheme
There is a serious problem with one of the assumptions made in the security
proof of the SURF scheme. This problem turns out to be easy in the regime of
parameters needed for the SURF scheme to work.
We give the old version of the paper afterwards for the reader's convenience.
Comment: Warning: we found a serious problem in the security proof of the SURF scheme. We explain this problem here and give the old version of the paper afterward.
Design of a fault tolerant airborne digital computer. Volume 1: Architecture
This volume is concerned with the architecture of a fault-tolerant digital computer for an advanced commercial aircraft. All of the computations of the aircraft, including those presently carried out by analogue techniques, are to be carried out in this digital computer. Among the important qualities of the computer are the following: (1) The capacity is to be matched to the aircraft environment. (2) The reliability is to be selectively matched to the criticality and deadline requirements of each of the computations. (3) The system is to be readily expandable and contractible. (4) The design is to be appropriate to post-1975 technology. Three candidate architectures are discussed and assessed in terms of the above qualities. Of the three candidates, a newly conceived architecture, Software Implemented Fault Tolerance (SIFT), provides the best match to the above qualities. In addition, SIFT is particularly simple and believable. The other candidates, the Bus Checker System (BUCS), also newly conceived in this project, and the Hopkins multiprocessor, are potentially more efficient than SIFT in the use of redundancy, but otherwise are not as attractive.
Fault-tolerant computer study
A set of building block circuits is described which can be used with commercially available microprocessors and memories to implement fault-tolerant distributed computer systems. Each building block circuit is intended for VLSI implementation as a single chip. Several building blocks and associated processor and memory chips form a self-checking computer module with self-contained input/output and interfaces to redundant communication buses. Fault tolerance is achieved by connecting self-checking computer modules into a redundant network in which backup buses and computer modules are provided to circumvent failures. The requirements and design methodology which led to the definition of the building block circuits are discussed.
The reliability of single-error protected computer memories
The lifetimes of computer memories protected with single-error-correcting, double-error-detecting (SEC-DED) codes are studied. The authors assume five possible types of memory chip failure (single-cell, row, column, row-column, and whole-chip) and make a simplifying assumption (the Poisson assumption), which they have substantiated experimentally. A simple closed-form expression is derived for the system reliability function. Using this formula and chip reliability data taken from published tables, it is possible to compute the mean time to failure for realistic memory systems.
Command system study for the operation and control of unmanned scientific satellites. task ii closed-loop /feedback/ verification techniques second quarterly progress report, 30 sep. - 31 dec. 1964
Closed-loop, feedback verification techniques for the command system of unmanned scientific satellites.