Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data
We provide formal definitions and efficient secure techniques for
- turning noisy information into keys usable for any cryptographic
application, and, in particular,
- reliably and securely authenticating biometric data.
Our techniques apply not just to biometric information, but to any keying
material that, unlike traditional cryptographic keys, is (1) not reproducible
precisely and (2) not distributed uniformly. We propose two primitives: a
"fuzzy extractor" reliably extracts nearly uniform randomness R from its input;
the extraction is error-tolerant in the sense that R will be the same even if
the input changes, as long as it remains reasonably close to the original.
Thus, R can be used as a key in a cryptographic application. A "secure sketch"
produces public information about its input w that does not reveal w, and yet
allows exact recovery of w given another value that is close to w. Thus, it can
be used to reliably reproduce error-prone biometric inputs without incurring
the security risk inherent in storing them.
We define the primitives to be both formally secure and versatile,
generalizing much prior work. In addition, we provide nearly optimal
constructions of both primitives for various measures of "closeness" of input
data, such as Hamming distance, edit distance, and set difference.
Comment: 47 pp., 3 figures. Preliminary version in Eurocrypt 2004, Springer
LNCS 3027, pp. 523-540.
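The error-tolerance of a secure sketch can be illustrated with the standard code-offset construction in the Hamming metric. The snippet below is a toy sketch using the perfect (7,4) Hamming code, not the paper's exact parameters: the public value s = w XOR c hides w behind a random codeword c, and recovery decodes w' XOR s back to c whenever w' is within one bit flip of w.

```python
import secrets

def hamming7_decode(word):
    # Nearest-codeword decoding for the (7,4) Hamming code with
    # parity-check columns 1..7: the syndrome equals the error position.
    s = 0
    for pos, bit in enumerate(word, start=1):
        if bit:
            s ^= pos
    w = word[:]
    if s:                      # correct the single flipped bit
        w[s - 1] ^= 1
    return w

def random_codeword():
    # The (7,4) code is perfect, so decoding a uniformly random 7-bit
    # word yields a uniformly random codeword.
    return hamming7_decode([secrets.randbits(1) for _ in range(7)])

def sketch(w):
    # Public sketch s = w XOR c for a fresh random codeword c.
    c = random_codeword()
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s):
    # Decode (w' XOR s) back to c, then recover w = c XOR s.
    c = hamming7_decode([wi ^ si for wi, si in zip(w_noisy, s)])
    return [ci ^ si for ci, si in zip(c, s)]
```

Any w' within Hamming distance 1 of the original w recovers w exactly, while the sketch s alone is independent of w up to the code's redundancy.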
The Road From Classical to Quantum Codes: A Hashing Bound Approaching Design Procedure
Powerful Quantum Error Correction Codes (QECCs) are required for stabilizing
and protecting fragile qubits against the undesirable effects of quantum
decoherence. Similar to classical codes, hashing bound approaching QECCs may be
designed by exploiting a concatenated code structure, which invokes iterative
decoding. Therefore, in this paper we provide an extensive step-by-step
tutorial for designing EXtrinsic Information Transfer (EXIT) chart aided
concatenated quantum codes based on the underlying quantum-to-classical
isomorphism. These design lessons are then exemplified in the context of our
proposed Quantum Irregular Convolutional Code (QIRCC), which constitutes the
outer component of a concatenated quantum code. The proposed QIRCC can be
dynamically adapted to match any given inner code using EXIT charts, hence
achieving a performance close to the hashing bound. It is demonstrated that our
QIRCC-based optimized design is capable of operating within 0.4 dB of the noise
limit.
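As a numeric point of reference, the hashing bound for the depolarizing channel gives the achievable rate R(p) = 1 - H({1-p, p/3, p/3, p/3}); the noise limit is the depolarizing probability at which R reaches zero. This is a minimal illustrative computation, not the paper's code design:

```python
from math import log2

def hashing_bound(p):
    # Achievable rate under the hashing bound for depolarizing
    # probability p, Pauli errors X, Y, Z each occurring with p/3.
    if p == 0:
        return 1.0
    return 1 + (1 - p) * log2(1 - p) + p * log2(p / 3)

# Bisect for the noise limit where the achievable rate hits zero.
lo, hi = 0.1, 0.25
for _ in range(60):
    mid = (lo + hi) / 2
    if hashing_bound(mid) > 0:
        lo = mid
    else:
        hi = mid
print(round(lo, 4))  # ≈ 0.1893, the well-known hashing-bound limit
```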
Systematic DFT Frames: Principle, Eigenvalues Structure, and Applications
Motivated by a host of recent applications requiring some amount of
redundancy, frames are becoming a standard tool in the signal processing
toolbox. In this paper, we study a specific class of frames, known as discrete
Fourier transform (DFT) codes, and introduce the notion of systematic frames
for this class. This is encouraged by a new application of frames, namely,
distributed source coding that uses DFT codes for compression. Studying their
extreme eigenvalues, we show that, unlike DFT frames, systematic DFT frames are
not necessarily tight. Then, we come up with conditions for which these frames
can be tight. In either case, the best and worst systematic frames are
established in the minimum mean-squared reconstruction error sense. Eigenvalues
of DFT frames and their subframes play a pivotal role in this work.
Particularly, we derive some bounds on the extreme eigenvalues of DFT subframes
which are used to prove most of the results; these bounds are valuable in their
own right.
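Tightness of a frame is easy to check numerically: a frame is tight exactly when all eigenvalues of its frame operator coincide. The toy check below (not the systematic construction studied in the paper) takes the n rows of the first k columns of the n-point DFT matrix as frame vectors and verifies that the resulting DFT frame is tight:

```python
import numpy as np

n, k = 8, 5
# n x k matrix whose n rows are frame vectors in C^k:
# the first k columns of the n-point DFT matrix.
F = np.fft.fft(np.eye(n))[:, :k]

# Frame operator S = F^H F; distinct DFT columns are orthogonal
# with squared norm n, so every eigenvalue of S equals n.
S = F.conj().T @ F
eig = np.linalg.eigvalsh(S)
print(eig)  # all eigenvalues ≈ 8: the frame is tight
```

Introducing a systematic (identity) part, as the paper does, perturbs this structure, which is why systematic DFT frames need not be tight.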
Succinct Representation of Codes with Applications to Testing
Motivated by questions in property testing, we search for linear
error-correcting codes that have the "single local orbit" property: i.e., they
are specified by a single local constraint and its translations under the
symmetry group of the code. We show that the dual of every "sparse" binary code
whose coordinates are indexed by elements of F_{2^n} for prime n, and whose
symmetry group includes the group of non-singular affine transformations of
F_{2^n}, has the single local orbit property. (A code is said to be "sparse" if
it contains polynomially many codewords in its block length.) In particular
this class includes the dual-BCH codes for whose duals (i.e., for BCH codes)
simple bases were not known. Our result gives the first short (O(n)-bit, as
opposed to the natural exp(n)-bit) description of a low-weight basis for BCH
codes. The interest in the "single local orbit" property comes from the recent
result of Kaufman and Sudan (STOC 2008) that shows that the duals of codes that
have the single local orbit property under the affine symmetry group are
locally testable. When combined with our main result, this shows that all
sparse affine-invariant codes over the coordinates F_{2^n} for prime n are
locally testable. If, in addition to n being prime, 2^n-1 is also prime
(i.e., 2^n-1 is a Mersenne prime), then we get that every sparse cyclic code
also has the single local orbit property. In particular, this implies that BCH
codes of Mersenne prime length are generated by a single low-weight codeword
and its cyclic shifts.
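The Mersenne-prime case can be verified directly for the smallest instance, n = 3 with 2^3 - 1 = 7: the [7,4] BCH (Hamming) code is generated by the weight-3 codeword corresponding to g(x) = x^3 + x + 1 and its cyclic shifts. This is a toy check of that statement, with codewords represented as 7-bit integers:

```python
def gf2_rank(vectors):
    # Gaussian elimination over GF(2); vectors are integer bitmasks.
    pivots = {}          # leading-bit position -> basis vector
    rank = 0
    for v in vectors:
        while v:
            hb = v.bit_length() - 1
            if hb in pivots:
                v ^= pivots[hb]      # eliminate the leading bit
            else:
                pivots[hb] = v
                rank += 1
                break
    return rank

n = 7
g = 0b0001011            # g(x) = x^3 + x + 1, a weight-3 codeword
shifts = []
v = g
for _ in range(n):       # all n cyclic shifts of the codeword
    shifts.append(v)
    v = ((v << 1) | (v >> (n - 1))) & ((1 << n) - 1)

print(gf2_rank(shifts))  # 4: the shifts span the full k = 4 code
```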
Algebraic Codes For Error Correction In Digital Communication Systems
C. Shannon presented theoretical conditions under which error-free
communication is possible in the presence of noise. Subsequently, the notion of
using error correcting codes to mitigate the effects of noise in digital
transmission was introduced by R. Hamming. Algebraic codes, codes described
using powerful tools from algebra, came to the fore early on in the search for
good error correcting codes. Many
classes of algebraic codes now exist and are known to have the best properties of
any known classes of codes. An error correcting code can be described by three
of its most important properties: length, dimension, and minimum distance.
Given codes with the same length and dimension, the one with the largest
minimum distance will provide better error correction. As a result, the
research focuses on finding improved
codes with better minimum distances than any known codes.
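The role of minimum distance can be made concrete with a brute-force check on two illustrative [6,3] binary codes of equal length and dimension (hypothetical generator matrices, not codes from the thesis): the code with minimum distance 3 corrects one error per block, while the code with distance 2 corrects none.

```python
from itertools import product

def min_distance(G):
    # Brute-force minimum distance of a binary linear code: the
    # minimum Hamming weight over all nonzero codewords of G.
    k = len(G)
    best = len(G[0])
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue
        cw = [sum(m * g for m, g in zip(msg, col)) % 2
              for col in zip(*G)]          # encode msg column by column
        best = min(best, sum(cw))
    return best

# Two [6,3] codes: same length and dimension, different distance.
G1 = [[1, 0, 0, 1, 0, 0],
      [0, 1, 0, 0, 1, 0],
      [0, 0, 1, 0, 0, 1]]   # d = 2: detects 1 error, corrects 0
G2 = [[1, 0, 0, 0, 1, 1],
      [0, 1, 0, 1, 0, 1],
      [0, 0, 1, 1, 1, 0]]   # d = 3: corrects (3-1)//2 = 1 error
print(min_distance(G1), min_distance(G2))  # 2 3
```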
Algebraic geometry codes are obtained from curves. They are a culmination of years
of research into algebraic codes and generalise most known algebraic codes.
Additionally, they have exceptional distance properties as their lengths become
arbitrarily
large. Algebraic geometry codes are studied in great detail with special attention
given to their construction and decoding. The practical performance of these codes
is evaluated and compared with previously known codes in different communication
channels. Furthermore, many new codes that have better minimum distances than
the best known codes with the same length and dimension are presented from
a generalised construction of algebraic geometry codes. Goppa codes are also an
important class of algebraic codes. A construction of binary extended Goppa codes
is generalised to codes with nonbinary alphabets and as a result many new codes
are found. This construction is shown to be an efficient way to extend another
well-known class of algebraic codes, BCH codes. A generic method of shortening
codes
whilst increasing the minimum distance is generalised. An analysis of this method
reveals a close relationship with methods of extending codes. Some new codes from
Goppa codes are found by exploiting this relationship. Finally, an extension
method for BCH codes is presented and this method is shown to be as good as a
well-known method of extension in certain cases.
Optical Time-Frequency Packing: Principles, Design, Implementation, and Experimental Demonstration
Time-frequency packing (TFP) transmission provides the highest achievable
spectral efficiency with a constrained symbol alphabet and detector complexity.
In this work, the application of the TFP technique to fiber-optic systems is
investigated and experimentally demonstrated. The main theoretical aspects,
design guidelines, and implementation issues are discussed, focusing on those
aspects which are peculiar to TFP systems. In particular, adaptive compensation
of propagation impairments, matched filtering, and maximum a posteriori
probability detection are obtained by a combination of a butterfly equalizer
and four 8-state parallel Bahl-Cocke-Jelinek-Raviv (BCJR) detectors. A novel
algorithm that ensures adaptive equalization, channel estimation, and a proper
distribution of tasks between the equalizer and BCJR detectors is proposed. A
set of irregular low-density parity-check codes with different rates is
designed to operate at low error rates and approach the spectral efficiency
limit achievable by TFP at different signal-to-noise ratios. An experimental
demonstration of the designed system is finally provided with five
dual-polarization QPSK-modulated optical carriers, densely packed in a 100 GHz
bandwidth, employing a recirculating loop to test the performance of the system
at different transmission distances.
Comment: This paper has been accepted for publication in the IEEE/OSA Journal
of Lightwave Technology.
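The spectral-efficiency gain from packing can be sketched with back-of-the-envelope numbers. The packing factors below are assumptions chosen for illustration, not the experiment's values: shrinking the symbol interval and carrier spacing to fractions tau and nu of the orthogonal (Nyquist) values scales the nominal spectral efficiency by 1/(tau * nu), before coding overhead.

```python
from math import log2

# Nominal bits per symbol of dual-polarization QPSK:
# 2 polarizations x log2(4) bits per QPSK symbol.
bits_per_symbol = 2 * log2(4)

tau, nu = 0.8, 0.9                 # assumed time/frequency packing factors
se_nyquist = bits_per_symbol       # orthogonal signaling, delta_f * T = 1
se_tfp = bits_per_symbol / (tau * nu)
print(se_nyquist, round(se_tfp, 2))  # 4.0 vs 5.56 bit/s/Hz (pre-FEC)
```

The price of packing is the intersymbol and intercarrier interference that the BCJR detectors and LDPC codes described above must absorb.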