Formal Computational Unlinkability Proofs of RFID Protocols
We set up a framework for the formal proofs of RFID protocols in the
computational model. We rely on the so-called computationally complete symbolic
attacker model. Our contributions are: i) To design (and prove sound) axioms
reflecting the properties of hash functions (Collision-Resistance, PRF); ii) To
formalize computational unlinkability in the model; iii) To illustrate the
method, providing the first formal proofs of unlinkability of RFID protocols,
in the computational model.
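The collision-resistance property that the paper's axioms reflect is usually stated as a security game. The sketch below is a minimal illustration of that game, not the paper's formalism; the helper names (`cr_game`, `trunc1`, `brute_force_adversary`) are hypothetical, and the truncated hash is included only to show what a winning adversary looks like:

```python
import hashlib

def cr_game(hash_fn, adversary):
    """Collision-resistance game: the adversary wins iff it outputs
    two distinct messages that hash to the same digest."""
    m1, m2 = adversary()
    return m1 != m2 and hash_fn(m1) == hash_fn(m2)

def sha256(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()

# A 1-byte truncation of SHA-256 has only 256 possible digests,
# so it is trivially not collision-resistant:
def trunc1(msg: bytes) -> bytes:
    return sha256(msg)[:1]

def brute_force_adversary():
    """Enumerate messages until two of them share a truncated digest."""
    seen = {}
    i = 0
    while True:
        m = str(i).encode()
        d = trunc1(m)
        if d in seen and seen[d] != m:
            return seen[d], m
        seen[d] = m
        i += 1

assert cr_game(trunc1, brute_force_adversary)   # collision found against the weak hash
```

An adversary that cannot produce two *distinct* colliding messages loses the game by definition, which is the sense in which the full hash is assumed collision-resistant.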
Quantum entropic security and approximate quantum encryption
We present full generalisations of entropic security and entropic
indistinguishability to the quantum world where no assumption but a limit on
the knowledge of the adversary is made. This limit is quantified using the
quantum conditional min-entropy as introduced by Renato Renner. A proof of the
equivalence between the two security definitions is presented. We also provide
proofs of security for two different cyphers in this model and a proof for a
lower bound on the key length required by any such cypher. These cyphers
generalise existing schemes for approximate quantum encryption to the entropic
security model.
Comment: Corrected mistakes in the proofs of Theorems 3 and 6; results unchanged. To appear in IEEE Transactions on Information Theory.
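Renner's quantum conditional min-entropy, which the abstract uses to quantify the adversary's knowledge, is standardly defined as follows (this is the common smooth-entropy-free form; the paper's exact variant may differ):

```latex
H_{\min}(A \mid B)_{\rho} \;=\; \max_{\sigma_B} \, \sup \bigl\{ \lambda \in \mathbb{R} \;:\; \rho_{AB} \,\le\, 2^{-\lambda}\, \mathbb{1}_A \otimes \sigma_B \bigr\}
```

Intuitively, it measures the adversary's maximal guessing advantage about the system $A$ given side information $B$; entropic security then asks only that this quantity be bounded below, with no other restriction on the adversary.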
Chasing diagrams in cryptography
Cryptography is a theory of secret functions. Category theory is a general
theory of functions. Cryptography has reached a stage where its structures
often take several pages to define, and its formulas sometimes run from page to
page. Category theory has some complicated definitions as well, but one of its
specialties is taming the flood of structure. Cryptography seems to be in need
of high level methods, whereas category theory always needs concrete
applications. So why is there no categorical cryptography? One reason may be
that the foundations of modern cryptography are built from probabilistic
polynomial-time Turing machines, and category theory does not have a good
handle on such things. On the other hand, such foundational problems might be
the very reason why cryptographic constructions often resemble low level
machine programming. I present some preliminary explorations towards
categorical cryptography. It turns out that some of the main security concepts
are easily characterized through the categorical technique of *diagram
chasing*, which was first used in Lambek's seminal `Lecture Notes on Rings and
Modules'.
Comment: 17 pages, 4 figures; to appear in: 'Categories in Logic, Language and
Physics. Festschrift on the occasion of Jim Lambek's 90th birthday', Claudia
Casadio, Bob Coecke, Michael Moortgat, and Philip Scott (editors); this
version: fixed typos found by a kind reader
Formal definition of probability on finite and discrete sample space for proving security of cryptographic systems using Mizar
Security proofs for cryptographic systems are very important. The ultimate objective of our study is to prove the security of cryptographic systems using the Mizar proof checker. In this study, we formalize probability on a finite and discrete sample space toward that aim: we introduce a formalization of the probability distribution and prove its correctness using the Mizar proof checking system as a formal verification tool.
Article: Artificial Intelligence Research 2(4):37-48 (2013), journal article
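The object being formalized, a probability distribution on a finite, discrete sample space, can be illustrated outside Mizar as well. The sketch below is a plain Python rendering of the same notion (the class and method names are hypothetical, not the paper's Mizar definitions); exact rational arithmetic stands in for the exactness a proof checker requires:

```python
from fractions import Fraction

class FiniteDistribution:
    """A probability distribution on a finite sample space:
    a map from outcomes to non-negative weights summing to 1."""
    def __init__(self, weights):
        assert all(w >= 0 for w in weights.values()), "probabilities must be non-negative"
        assert sum(weights.values()) == 1, "probabilities must sum to 1"
        self.weights = dict(weights)

    def prob(self, event):
        """P(event) for an event given as a set of outcomes."""
        return sum(self.weights.get(x, Fraction(0)) for x in event)

# Uniform distribution on a 6-sided die, with exact rationals.
die = FiniteDistribution({i: Fraction(1, 6) for i in range(1, 7)})
assert die.prob({2, 4, 6}) == Fraction(1, 2)   # P(even) = 1/2
```

The two assertions in the constructor are exactly the Kolmogorov conditions for the finite case, which is what a formalization must discharge as proof obligations.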
A Survey of Symbolic Methods in Computational Analysis of Cryptographic Systems
Since the 1980s, two approaches have been developed for analyzing security protocols. One approach relies on a computational model that considers issues of complexity and probability. This approach captures a strong notion of security, guaranteed against all probabilistic polynomial-time attacks. The other approach relies on a symbolic model of protocol executions in which cryptographic primitives are treated as black boxes. Since the seminal work of Dolev and Yao, it has been realized that this latter approach enables significantly simpler and often automated proofs. However, the guarantees that it offers have been quite unclear. For more than twenty years, the two approaches have coexisted but evolved mostly independently. Recently, significant research efforts have attempted to develop paradigms for cryptographic systems analysis that combine the best of both worlds. There are two broad directions that have been followed. {\em Computational soundness} aims to establish sufficient conditions under which results obtained using symbolic models imply security under computational models. The {\em direct approach} aims to apply the principles and the techniques developed in the context of symbolic models directly to computational ones. In this paper we survey existing results along both of these directions. Our goal is to provide a rather complete summary that could act as a quick reference for researchers who want to contribute to the field, want to make use of existing results, or just want to get a better picture of what results already exist.
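The symbolic (Dolev-Yao style) treatment of primitives as black boxes can be made concrete with a small term algebra. The sketch below is an illustrative toy, not any tool's actual data model: messages are opaque constructors, and the only rule for encryption is that a ciphertext opens under exactly the key it was built with:

```python
from dataclasses import dataclass

# Symbolic messages: cryptographic operations are opaque term
# constructors, not computations on bitstrings.
@dataclass(frozen=True)
class Name:          # an atomic name (nonce, key, identity)
    label: str

@dataclass(frozen=True)
class Pair:          # message concatenation
    left: object
    right: object

@dataclass(frozen=True)
class Enc:           # symbolic encryption of msg under key
    msg: object
    key: object

def decrypt(ct, key):
    """Black-box rule: Enc(m, k) opens only with exactly k."""
    if isinstance(ct, Enc) and ct.key == key:
        return ct.msg
    return None      # otherwise the attacker learns nothing

k, n = Name("k"), Name("n")
ct = Enc(Pair(n, Name("A")), k)
assert decrypt(ct, k) == Pair(n, Name("A"))
assert decrypt(ct, Name("k2")) is None
```

Computational soundness results are precisely about when security proved against this idealized deduction relation carries over to real probabilistic polynomial-time attackers operating on bitstrings.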
Computational Soundness for Dalvik Bytecode
Automatically analyzing information flow within Android applications that
rely on cryptographic operations with their computational security guarantees
imposes formidable challenges that existing approaches for understanding an
app's behavior struggle to meet. These approaches do not distinguish
cryptographic and non-cryptographic operations, and hence do not account for
cryptographic protections: f(m) is considered sensitive for a sensitive message
m irrespective of potential secrecy properties offered by a cryptographic
operation f. These approaches consequently provide a safe approximation of the
app's behavior, but they mistakenly classify a large fraction of apps as
potentially insecure and thus yield overly pessimistic results.
In this paper, we show how cryptographic operations can be faithfully
included into existing approaches for automated app analysis. To this end, we
first show how cryptographic operations can be expressed as symbolic
abstractions within the comprehensive Dalvik bytecode language. These
abstractions are accessible to automated analysis, and they can be conveniently
added to existing app analysis tools using minor changes in their semantics.
Second, we show that our abstractions are faithful by providing the first
computational soundness result for Dalvik bytecode, i.e., the absence of
attacks against our symbolically abstracted program entails the absence of any
attacks against a suitable cryptographic program realization. We cast our
computational soundness result in the CoSP framework, which makes the result
modular and composable.
Comment: Technical report for the ACM CCS 2016 conference paper
Quantum Fully Homomorphic Encryption With Verification
Fully-homomorphic encryption (FHE) enables computation on encrypted data
while maintaining secrecy. Recent research has shown that such schemes exist
even for quantum computation. Given the numerous applications of classical FHE
(zero-knowledge proofs, secure two-party computation, obfuscation, etc.) it is
reasonable to hope that quantum FHE (or QFHE) will lead to many new results in
the quantum setting. However, a crucial ingredient in almost all applications
of FHE is circuit verification. Classically, verification is performed by
checking a transcript of the homomorphic computation. Quantumly, this strategy
is impossible due to no-cloning. This leads to an important open question: can
quantum computations be delegated and verified in a non-interactive manner? In
this work, we answer this question in the affirmative, by constructing a scheme
for QFHE with verification (vQFHE). Our scheme provides authenticated
encryption, and enables arbitrary polynomial-time quantum computations without
the need of interaction between client and server. Verification is almost
entirely classical; for computations that start and end with classical states,
it is completely classical. As a first application, we show how to construct
quantum one-time programs from classical one-time programs and vQFHE.
Comment: 30 pages
Characterizing notions of omniprediction via multicalibration
A recent line of work shows that notions of multigroup fairness imply
surprisingly strong notions of omniprediction: loss minimization guarantees
that apply not just for a specific loss function, but for any loss belonging to
a large family of losses. While prior work has derived various notions of
omniprediction from multigroup fairness guarantees of varying strength, it was
unknown whether the connection goes in both directions.
In this work, we answer this question in the affirmative, establishing
equivalences between notions of multicalibration and omniprediction. The new
definitions that hold the key to this equivalence are new notions of swap
omniprediction, which are inspired by swap regret in online learning. We show
that these can be characterized exactly by a strengthening of multicalibration
that we refer to as swap multicalibration. One can go from standard to swap
multicalibration by a simple discretization; moreover all known algorithms for
standard multicalibration in fact give swap multicalibration. In the context of
omniprediction though, introducing the notion of swapping results in provably
stronger notions, which require a predictor to minimize expected loss at least
as well as an adaptive adversary who can choose both the loss function and
hypothesis based on the value predicted by the predictor.
Building on these characterizations, we paint a complete picture of the
relationship between the various omniprediction notions in the literature by
establishing implications and separations between them. Our work deepens our
understanding of the connections between multigroup fairness, loss minimization
and outcome indistinguishability and establishes new connections to classic
notions in online learning.
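Multicalibration, the condition this equivalence strengthens, is commonly formulated as follows (a standard formulation; the paper's swap variant refines it, and the exact parameterization may differ):

```latex
\Bigl|\, \mathbb{E}_{(x,y)\sim \mathcal{D}} \bigl[ (y - p(x))\, c(x)\, \mathbf{1}\{p(x) = v\} \bigr] \,\Bigr| \;\le\; \alpha
\qquad \text{for all } c \in \mathcal{C},\ v \in \operatorname{range}(p)
```

That is, the predictor $p$ must be (approximately) calibrated not just on average but conditioned on every level set of its own predictions, simultaneously for every membership test $c$ in the class $\mathcal{C}$; the discretization mentioned above acts on $\operatorname{range}(p)$.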