Strong key derivation from noisy sources
A shared cryptographic key enables strong authentication. Candidate sources for creating such a shared key include biometrics and physically unclonable functions. However, these sources come with a substantial problem: noise in repeated readings.
A fuzzy extractor produces a stable key from a noisy source. It consists of two stages. At enrollment time, the generate algorithm produces a key from an initial reading of the source. At authentication time, the reproduce algorithm takes a repeated but noisy reading of the source, yielding the same key when the two readings are close. For many sources of practical importance, traditional fuzzy extractors provide no meaningful security guarantee.
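The two-stage interface can be illustrated with a minimal code-offset construction (Juels and Wattenberg's secure sketch plus a hash as a stand-in extractor). This is a toy sketch only: the repetition code, parameter sizes, and use of SHA-256 are illustrative assumptions, not a secure instantiation.

```python
import hashlib
import secrets

K, R = 8, 5          # 8 message bits, each repeated 5 times (toy repetition code)
N = K * R            # 40-bit source reading

def encode(msg_bits):
    # Repetition code: repeat each message bit R times.
    return [b for b in msg_bits for _ in range(R)]

def decode(codeword):
    # Majority vote within each block of R bits corrects up to R//2 flips per block.
    return [1 if sum(codeword[i * R:(i + 1) * R]) > R // 2 else 0 for i in range(K)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def generate(w):
    """Enrollment: derive a (key, public sketch) pair from the initial reading w."""
    msg = [secrets.randbelow(2) for _ in range(K)]
    sketch = xor(encode(msg), w)              # c XOR w: the published helper value
    key = hashlib.sha256(bytes(w)).digest()   # hash stands in for a randomness extractor
    return key, sketch

def reproduce(w_noisy, sketch):
    """Authentication: recover the same key from a nearby noisy reading."""
    c_noisy = xor(sketch, w_noisy)            # equals c XOR e, where e is the noise
    c = encode(decode(c_noisy))               # error-correct back to the codeword c
    w = xor(sketch, c)                        # recover the original reading w
    return hashlib.sha256(bytes(w)).digest()

# Demo: flip two bits of the reading (one per affected block); the key still matches.
w = [secrets.randbelow(2) for _ in range(N)]
key, sketch = generate(w)
w2 = list(w)
w2[3] ^= 1
w2[17] ^= 1
assert reproduce(w2, sketch) == key
```

Note that `reproduce` first recovers the original reading `w` before hashing; this is exactly the secure-sketch approach whose limitations the following observations address.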
This dissertation improves key derivation from noisy sources. These improvements stem from three observations about traditional fuzzy extractors.
First, the only property of a source that standard fuzzy extractors use is the entropy in the original reading. We observe that additional structural information about the source can facilitate key derivation.
Second, most fuzzy extractors work by first recovering the initial reading from the noisy reading (known as a secure sketch). This approach imposes harsh limitations on the length of the derived key. We observe that it is possible to produce a consistent key without recovering the original reading of the source.
Third, traditional fuzzy extractors provide information-theoretic security. However, security against computationally bounded adversaries is sufficient. We observe that fuzzy extractors providing computational security can overcome limitations of traditional approaches.
The above observations are supported by negative results and constructions. As an example, we combine all three observations to construct a fuzzy extractor achieving properties that have eluded prior approaches. The construction remains secure even when the initial enrollment phase is repeated multiple times with noisy readings. Furthermore, for many practical sources, reliability demands that the tolerated noise is larger than the entropy of the original reading. The construction provides security for sources of this type by utilizing additional source structure, producing a consistent key without recovering the original reading, and providing computational security.
Code Offset in the Exponent
Fuzzy extractors derive stable keys from noisy sources. They are a fundamental tool for key derivation from biometric sources. This work introduces a new construction, code offset in the exponent. This construction is the first reusable fuzzy extractor that simultaneously supports structured, low entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications - key derivation from biometrics and physical unclonable functions - which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency.
Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999). A random codeword of a linear error-correcting code is used as a one-time pad for a sampled value from the noisy source. Rather than encoding this directly, code offset in the exponent encodes by exponentiation of a generator in a cryptographically strong group. We introduce and characterize a condition on noisy sources that directly translates to security of our construction in the generic group model. Our condition requires the inner product between the source distribution and all vectors in the null space of the code to be unpredictable.
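The "in the exponent" idea can be sketched in a few lines. The toy below uses the length-n repetition code and a multiplicative group modulo a Mersenne prime as stand-ins for the paper's linear code and cryptographically strong group; these choices, and all parameters, are illustrative assumptions, not a secure instantiation. The point it demonstrates is that decoding happens without ever leaving the exponent: the reproducer learns g^m (the key material) but never the codeword's message m, and the original reading is never recovered.

```python
import hashlib
import secrets
from collections import Counter

P = 2**61 - 1        # toy group modulus (a Mersenne prime)
ORDER = P - 1        # exponents are reduced mod the multiplicative group order
G = 3                # fixed base for the demo
N = 15               # number of source symbols
ALPHABET = 16        # each source symbol is a small integer

def generate(w):
    """Publish g^(c_i + w_i) for a random repetition codeword c = (m, m, ..., m)."""
    m = secrets.randbelow(ORDER)
    helper = [pow(G, (m + wi) % ORDER, P) for wi in w]
    key = hashlib.sha256(str(pow(G, m, P)).encode()).digest()
    return key, helper

def reproduce(w_noisy, helper):
    """Peel off the noisy reading in the exponent, then decode by majority.

    Where w_noisy[i] == w[i], helper[i] * g^(-w_noisy[i]) equals g^m exactly,
    so a majority of agreeing positions reveals g^m without revealing m."""
    candidates = [h * pow(G, (ORDER - wi) % ORDER, P) % P
                  for h, wi in zip(helper, w_noisy)]
    g_m = Counter(candidates).most_common(1)[0][0]
    return hashlib.sha256(str(g_m).encode()).digest()

# Demo: corrupt 4 of 15 symbols; the derived key is unchanged.
w = [secrets.randbelow(ALPHABET) for _ in range(N)]
key, helper = generate(w)
w2 = list(w)
for i in (1, 4, 9, 12):
    w2[i] = (w2[i] + 1) % ALPHABET
assert reproduce(w2, helper) == key
```

In the actual construction, the brute-force majority step is replaced by decoding a structured code in the exponent, which is where confidence information (trying the most reliable symbol positions first) yields efficiency gains.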
Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models
Authentication systems are vulnerable to model inversion attacks where an adversary is able to approximate the inverse of a target machine learning model. Biometric models are a prime candidate for this type of attack, because inverting a biometric model allows the attacker to produce a realistic biometric input to spoof biometric authentication systems.
One of the main constraints in conducting a successful model inversion attack is the amount of training data required. In this work, we focus on iris and facial biometric systems and propose a new technique that drastically reduces the amount of training data necessary. By leveraging the output of multiple models, we are able to conduct model inversion attacks with 1/10th the training set size of Ahmad and Fuller (IJCB 2020) for iris data and 1/1000th the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for facial data. We denote our new attack technique as structured random with alignment loss. Our attacks are black-box, requiring no knowledge of the weights of the target neural network, only the dimension and values of the output vector.
To show the versatility of the alignment loss, we apply our attack framework to the task of membership inference (Shokri et al., IEEE S&P 2017) on biometric data. For the iris, membership inference attacks against classification networks improve from 52% to 62% accuracy.
Comment: This is a major revision of a paper titled "Inverting Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models" by the same authors that appears at IJCB 202
Flavor Changing Supersymmetry Interactions in a Supernova
We consider for the first time R-parity violating interactions of the Minimal Supersymmetric Standard Model involving neutrinos and quarks ("flavor changing neutral currents", FCNCs) in the infall stage of stellar collapse. Our considerations extend to other kinds of flavor changing neutrino reactions as well. We examine non-forward neutrino scattering processes on heavy nuclei and free nucleons in the supernova core. This investigation has led to four principal original discoveries/products: (1) the first calculation of neutrino flavor changing cross sections for spin one half (e.g., free nucleon) and spin zero nuclear targets; (2) discovery of nuclear mass number squared (A squared) coherent amplification of neutrino-quark FCNCs; (3) analysis of FCNC-induced alteration of electron capture and weak/nuclear equilibrium in the collapsing core; and (4) generalization of the calculated cross sections (mentioned in 1) to the case of hot heavy nuclei for use in collapse/supernova and neutrino transport simulations.
The scattering processes that we consider allow electron neutrinos to change flavor during core collapse, thereby opening holes in the electron neutrino sea, which allows electron capture to proceed and results in a lower core electron fraction. A lower electron fraction implies a lower homologous core mass, a lower shock energy, and a greater nuclear photo-disintegration burden for the shock. In addition, unlike the standard supernova model, the core now could have net muon and/or tau lepton numbers. These effects could be significant even for supersymmetric couplings below current experimental bounds.
Comment: 22 pages, 7 figures, typos corrected, abstract modified, minor additions to content
Impossibility of Efficient Information-Theoretic Fuzzy Extraction
Fuzzy extractors convert noisy signals from the physical world into reliable cryptographic keys. Fuzzy min-entropy is an important measure of the ability of a fuzzy extractor to distill keys from a distribution: in particular, it bounds the length of the key that can be derived (Fuller, Reyzin, and Smith, IEEE Transactions on Information Theory 2020).
In general, fuzzy min-entropy that is superlogarithmic in the security parameter is required for a noisy distribution to be suitable for key derivation.
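For concreteness, fuzzy min-entropy (following the definition of Fuller, Reyzin, and Smith) measures an adversary's best chance of naming a point within the error tolerance t of the enrolled reading; the notation below follows common usage in that line of work:

```latex
H^{\mathrm{fuzz}}_{t,\infty}(W) \;=\; -\log\Bigl(\max_{w^{*}}\ \Pr\bigl[W \in B_{t}(w^{*})\bigr]\Bigr),
```

where B_t(w*) denotes the ball of radius t around the point w* in the relevant metric. A derived key of length L forces this quantity to be at least roughly L, which is why superlogarithmic fuzzy min-entropy is the relevant threshold.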
There is a wide gap between what is possible with respect to computational and information-theoretic adversaries. Under the assumption of general-purpose obfuscation, keys can be securely derived from all distributions with superlogarithmic entropy. Against information-theoretic adversaries, however, it is impossible to build a single fuzzy extractor that works for all distributions (Fuller, Reyzin, and Smith, IEEE Transactions on Information Theory 2020).
A weaker information-theoretic goal is to build a fuzzy extractor for each particular probability distribution. This is the approach taken by Woodage et al. (Crypto 2017). Prior approaches use the full description of the probability mass function and are inefficient. We show this is inherent: for a quarter of distributions with fuzzy min-entropy, there is no secure fuzzy extractor that uses fewer bits of information about the distribution. This result rules out the possibility of efficient, information-theoretic fuzzy extractors for many distributions with fuzzy min-entropy.
We show an analogous result with stronger parameters for information-theoretic secure sketches. Secure sketches are frequently used to construct fuzzy extractors.