Reducing Time Complexity in RFID Systems
Radio frequency identification systems based on low-cost computing devices are the new plaything that every company would like to adopt. The goal can be either to improve productivity or to strengthen security. Specific identification protocols based on symmetric challenge-response have been developed in order to assure the privacy of the device bearers. Although these protocols fit the devices' constraints, they always suffer from a large time complexity: existing protocols require O(n) cryptographic operations to identify one device among n. Molnar and Wagner suggested a method to reduce this complexity to O(log n). We show that their technique could degrade privacy if the attacker has the possibility to tamper with at least one device. Because low-cost devices are not tamper-resistant, such an attack could be feasible. We give a detailed analysis of their protocol and evaluate the threat. Next, we extend an approach based on time-memory trade-offs whose goal is to improve Ohkubo, Suzuki, and Kinoshita's protocol. We show that in practice this approach reaches the same performance as Molnar and Wagner's method, without degrading privacy.
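A rough Python sketch of the tree-walking idea behind Molnar and Wagner's O(log n) identification (illustrative only, with HMAC-SHA256 standing in for the tags' symmetric primitive and assumed tree parameters; not the authors' exact protocol): each tag holds the keys on its root-to-leaf path in a key tree, so the reader can identify it branch by branch with work proportional to the tree depth instead of the number of tags. The inner-node keys are shared by many tags, which is exactly why tampering with one tag can degrade the privacy of others.

    import hashlib, hmac, secrets

    BRANCH, DEPTH = 2, 10                 # binary key tree for 2**10 = 1024 tags

    def prf(key, challenge):
        return hmac.new(key, challenge, hashlib.sha256).digest()

    keys = {}                             # shared key database (system/reader side)
    def node_key(prefix):
        if prefix not in keys:
            keys[prefix] = secrets.token_bytes(16)
        return keys[prefix]

    def path(tag_id):                     # root-to-leaf node prefixes of a tag
        bits = [(tag_id >> (DEPTH - 1 - i)) & 1 for i in range(DEPTH)]
        return [tuple(bits[:i + 1]) for i in range(DEPTH)]

    def tag_respond(tag_id, challenge):   # one PRF answer per tree level
        return [prf(node_key(p), challenge) for p in path(tag_id)]

    def reader_identify(response, challenge):
        prefix = ()
        for answer in response:           # BRANCH trials per level, not n trials
            for child in range(BRANCH):
                cand = prefix + (child,)
                if prf(node_key(cand), challenge) == answer:
                    prefix = cand
                    break
            else:
                return None               # unknown tag
        return int("".join(map(str, prefix)), 2)

    challenge = secrets.token_bytes(16)
    assert reader_identify(tag_respond(731, challenge), challenge) == 731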
Semantically Secure Anonymity: Foundations of Re-encryption
The notion of universal re-encryption is an established primitive
used in the design of many anonymity protocols. It allows anyone
to randomize a ciphertext without changing its size, without first
decrypting it, and without knowing who the receiver is (i.e., not
knowing the public key used to create it).
By design it prevents the randomized ciphertext from being
correlated with the original ciphertext.
We revisit and analyze the security
foundation of universal re-encryption and show a subtlety in it,
namely, that it does not require that the encryption function
achieve key anonymity. Recall that the encryption function is
different from the re-encryption function.
We demonstrate this subtlety by constructing a cryptosystem that satisfies the
established definition of a universal cryptosystem but that has an encryption
function that does not achieve key anonymity, thereby instantiating the gap in
the definition of security of universal re-encryption. We note that the
gap in the definition carries over to a set of applications
that rely on universal re-encryption, applications in the original
paper on universal re-encryption and also follow-on work.
This shows that the original definition needs to be corrected
and it shows that it had a knock-on
effect that negatively impacted security in later work.
We then introduce a new definition that includes
the properties that are needed for a re-encryption cryptosystem to achieve
key anonymity in both the encryption function and the re-encryption
function, building on Goldwasser and Micali's semantic security and
the original key anonymity notion of Bellare, Boldyreva, Desai, and Pointcheval.
Omitting any of the properties in our definition leads to a problem.
We also introduce a new generalization of the Decision
Diffie-Hellman (DDH) random self-reduction and use it, in turn, to prove
that the original ElGamal-based universal cryptosystem of Golle et al.
is secure under our revised security definition.
We apply our new DDH reduction
technique to give the first proof in the standard model that ElGamal-based
incomparable public keys achieve key anonymity under DDH.
We present a novel secure Forward-Anonymous Batch Mix
as a new application.
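For concreteness, here is a minimal Python sketch of the ElGamal-based universal cryptosystem in the style of Golle et al. (toy group parameters chosen purely for illustration): a ciphertext carries an encryption of the message together with an encryption of 1, and anyone can re-randomize both parts, without the public key, by exponentiating and multiplying the two components.

    import secrets

    # Toy safe-prime group: p = 2q + 1, g generates the order-q subgroup.
    p, q, g = 2039, 1019, 4

    def keygen():
        x = secrets.randbelow(q - 1) + 1
        return x, pow(g, x, p)                       # secret x, public y = g^x

    def encrypt(m, y):
        k0, k1 = (secrets.randbelow(q - 1) + 1 for _ in range(2))
        return ((m * pow(y, k0, p) % p, pow(g, k0, p)),   # encryption of m
                (pow(y, k1, p), pow(g, k1, p)))           # encryption of 1

    def reencrypt(ct):                               # no public key needed
        (a0, b0), (a1, b1) = ct
        k0, k1 = (secrets.randbelow(q - 1) + 1 for _ in range(2))
        return ((a0 * pow(a1, k0, p) % p, b0 * pow(b1, k0, p) % p),
                (pow(a1, k1, p), pow(b1, k1, p)))

    def decrypt(ct, x):
        (a0, b0), (a1, b1) = ct
        if a1 * pow(pow(b1, x, p), -1, p) % p != 1:  # reject malformed ciphertexts
            return None
        return a0 * pow(pow(b0, x, p), -1, p) % p

    x, y = keygen()
    m = pow(g, 7, p)                                 # message encoded in the subgroup
    assert decrypt(reencrypt(encrypt(m, y)), x) == m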
Strengthening Access Control Encryption
Access control encryption (ACE) was proposed by Damgård et al. to enable the control of information flow between several parties according to a given policy specifying which parties are, or are not, allowed to communicate. By involving a special party, called the sanitizer, policy-compliant communication is enabled while policy-violating communication is prevented, even if sender and receiver are dishonest. To allow outsourcing of the sanitizer, the secrecy of the message contents and the anonymity of the involved communication partners are guaranteed.
This paper shows that in order to be resilient against realistic attacks, the security definition of ACE must be considerably strengthened in several ways. A new, substantially stronger security definition is proposed, and an ACE scheme is constructed which provably satisfies the strong definition under standard assumptions.
Three aspects in which the security of ACE is strengthened are as follows. First, CCA security (rather than only CPA security) is guaranteed, which is important since senders can be dishonest in the considered setting. Second, the revealing of an (unsanitized) ciphertext (e.g., by a faulty sanitizer) cannot be exploited to communicate more in a policy-violating manner than the information contained in the ciphertext. We illustrate that this is not only a definitional subtlety by showing how in known ACE schemes, a single leaked unsanitized ciphertext allows for an arbitrary amount of policy-violating communication. Third, it is enforced that parties specified to receive a message according to the policy cannot be excluded from receiving it, even by a dishonest sender.
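As a reference point for the setting, a minimal Python interface sketch of ACE (the five-algorithm syntax follows Damgård et al.; the bodies are placeholders, not a construction):

    from typing import Any, Callable, Tuple

    Policy = Callable[[int, int], bool]   # P(i, j): may sender i talk to receiver j?

    class ACE:
        """Interface sketch of access control encryption (no construction given)."""

        def setup(self, security: int, policy: Policy) -> Tuple[Any, Any]:
            """Returns the master secret key and the sanitizer parameters."""
            raise NotImplementedError

        def gen(self, msk: Any, party: int, role: str) -> Any:
            """Derives an encryption key (role='sender') or decryption key (role='receiver')."""
            raise NotImplementedError

        def enc(self, ek: Any, message: bytes) -> bytes:
            """Sender encrypts; dishonest senders are why CCA (not just CPA) security matters."""
            raise NotImplementedError

        def san(self, sp: Any, ciphertext: bytes) -> bytes:
            """Sanitizer re-randomizes without learning message or identities; a leaked
            unsanitized ciphertext should not enable extra policy-violating communication."""
            raise NotImplementedError

        def dec(self, dk: Any, sanitized: bytes) -> bytes:
            """Receiver decrypts; entitled receivers must not be excludable by the sender."""
            raise NotImplementedError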
The re-identification risk of Canadians from longitudinal demographics
Background: The public is less willing to allow their personal health information to be disclosed for research purposes if they do not trust researchers and how researchers manage their data. However, the public is more comfortable with their data being used for research if the risk of re-identification is low. There are few studies on the risk of re-identification of Canadians from their basic demographics, and no studies on their risk from their longitudinal data. Our objective was to estimate the risk of re-identification from the basic cross-sectional and longitudinal demographics of Canadians.
Methods: Uniqueness is a common measure of re-identification risk. Demographic data on a 25% random sample of the population of Montreal were analyzed to estimate population uniqueness on postal code, date of birth, and gender, as well as their generalizations, for periods ranging from 1 year to 11 years.
Results: Almost 98% of the population was unique on full postal code, date of birth and gender: these three variables are effectively a unique identifier for Montrealers. Uniqueness increased for longitudinal data. Considerable generalization was required to reach acceptably low uniqueness levels, especially for longitudinal data. Detailed guidelines and disclosure policies on how to ensure that the re-identification risk is low are provided.
Conclusions: A large percentage of Montreal residents are unique on basic demographics. For non-longitudinal data sets, the three character postal code, gender, and month/year of birth represent sufficiently low re-identification risk. Data custodians need to generalize their demographic information further for longitudinal data sets.
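A minimal Python sketch of the uniqueness measure used as the re-identification risk estimate (illustrative; the field names are hypothetical): the risk is the fraction of records whose quasi-identifier combination occurs exactly once, and generalizing the quasi-identifiers (e.g., three-character postal code, month/year of birth) lowers it.

    from collections import Counter

    def uniqueness(records, quasi_identifiers):
        """Fraction of records whose quasi-identifier combination occurs exactly once."""
        keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
        counts = Counter(keys)
        return sum(1 for k in keys if counts[k] == 1) / len(records)

    # Full precision versus a generalized version of the same fields:
    # uniqueness(population, ["postal_code", "date_of_birth", "gender"])
    # uniqueness(population, ["postal_code_3char", "birth_month_year", "gender"])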
Optimizing Mixing in Pervasive Networks: A Graph-Theoretic Perspective
One major concern in pervasive wireless applications is location privacy, where malicious eavesdroppers, based on static device identifiers, can continuously track users. As a commonly adopted countermeasure to prevent such identifier-based tracking, devices regularly and simultaneously change their identifiers in special areas called mix-zones. Although mix-zones provide spatio-temporal de-correlations between old and new identifiers, pseudonym changes, depending on the position of the mix-zone, can incur a substantial cost on the network due to lost communications and additional resources such as energy. In this paper, we address this trade-off by studying the problem of determining an optimal set of mix-zones such that the degree of mixing in the network is maximized while the overall network-wide mixing cost is minimized. We follow a graph-theoretic approach and model the optimal mixing problem as a novel generalization of the vertex cover problem, called the Mix Cover (MC) problem. We propose three bounded-ratio approximation algorithms for the MC problem and validate them by an empirical evaluation of their performance on real data. The combinatorics-based approach followed here enables us to study the feasibility of determining optimal mix-zones regularly and under dynamic network conditions.
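Because the Mix Cover problem generalizes weighted vertex cover, the textbook pricing-method 2-approximation gives a feel for the kind of bounded-ratio algorithms involved. The Python sketch below is that baseline on a toy road graph whose vertices are candidate mix-zone locations with deployment costs; it is not one of the paper's three MC algorithms.

    def vertex_cover_pricing(edges, cost):
        """Classic 2-approximation for minimum-weight vertex cover (pricing method)."""
        remaining = dict(cost)            # budget left before a vertex becomes tight
        cover = set()
        for u, v in edges:
            if u in cover or v in cover:
                continue                  # road segment already sees a mix-zone
            charge = min(remaining[u], remaining[v])
            remaining[u] -= charge
            remaining[v] -= charge
            if remaining[u] == 0:
                cover.add(u)
            if remaining[v] == 0:
                cover.add(v)
        return cover

    # Tiny example: intersections a-d; each edge is a road segment to be covered.
    roads = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
    deploy_cost = {"a": 1, "b": 3, "c": 1, "d": 3}
    print(vertex_cover_pricing(roads, deploy_cost))   # e.g. {'a', 'c'}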
Topology-Hiding Computation Beyond Semi-Honest Adversaries
Topology-hiding communication protocols allow a set of parties,
connected by an incomplete network with unknown communication graph,
where each party only knows its neighbors, to construct a complete
communication network such that the network topology remains hidden
even from a powerful adversary who can corrupt parties. This
communication network can then be used to perform arbitrary tasks, for
example secure multi-party computation, in a topology-hiding manner.
Previously proposed protocols could only tolerate passive
corruption. This paper proposes protocols that can also tolerate
fail-corruption (i.e., the adversary can crash any party at
any point in time) and so-called semi-malicious corruption (i.e., the
adversary can control a corrupted party's randomness), without leaking
more than an arbitrarily small fraction of a bit of information about
the topology. A small-leakage protocol was recently proposed by Ball et al. [Eurocrypt '18], but only under the unrealistic set-up assumption that each party has a trusted hardware module containing secret correlated pre-set keys, and with the further two restrictions that only passively corrupted parties can be crashed by the adversary, and semi-malicious corruption is not tolerated. Since leaking a small
amount of information is unavoidable, as is the need to abort the
protocol in case of failures, our protocols seem to achieve the best
possible goal in a model with fail-corruption.
Further contributions of the paper are applications of the protocol to
obtain secure MPC protocols, which requires a way to bound the
aggregated leakage when multiple small-leakage protocols are
executed in parallel or sequentially. Moreover, while previous
protocols are based on the DDH assumption, a new so-called PKCR
public-key encryption scheme based on the LWE assumption is proposed,
allowing topology-hiding computation to be based on LWE. Furthermore, a
protocol using fully-homomorphic encryption that achieves very low round
complexity is proposed.
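To make the PKCR ingredient concrete, here is a minimal ElGamal-based Python sketch (toy group, illustration only) of the kind of rerandomizable, key-layerable public-key encryption used by earlier DDH-based topology-hiding protocols; the LWE-based scheme proposed in the paper offers the same interface but is not reproduced here.

    import secrets

    p, q, g = 2039, 1019, 4               # small safe-prime group, toy parameters

    def rand(): return secrets.randbelow(q - 1) + 1

    def keygen():
        x = rand()
        return x, pow(g, x, p)

    def enc(m, pk):
        r = rand()
        return pow(g, r, p), m * pow(pk, r, p) % p

    def add_layer(ct, x_new):
        # Re-key a ciphertext from pk to pk * g^x_new without decrypting it.
        c1, c2 = ct
        return c1, c2 * pow(c1, x_new, p) % p

    def del_layer(ct, x_new):
        c1, c2 = ct
        return c1, c2 * pow(pow(c1, x_new, p), -1, p) % p

    def rerandomize(ct, pk):
        c1, c2 = ct
        r = rand()
        return c1 * pow(g, r, p) % p, c2 * pow(pk, r, p) % p

    x1, pk1 = keygen()
    x2, _ = keygen()
    m = pow(g, 5, p)
    ct = add_layer(enc(m, pk1), x2)        # now under the combined key pk1 * g^x2
    ct = rerandomize(del_layer(ct, x2), pk1)
    assert ct[1] * pow(pow(ct[0], x1, p), -1, p) % p == m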
Chosen-Ciphertext Secure Proxy Re-Encryption without Pairing
Office of Research, Singapore Management University
New Techniques in Replica Encodings with Client Setup
A proof of replication system is a cryptographic primitive that allows
a server (or group of servers) to prove to a client that it is
dedicated to storing multiple copies or replicas of a file. Until
recently, all such protocols required fine-grained timing assumptions
on the amount of time it takes for a server to produce such replicas.
Damgård, Ganesh, and Orlandi (CRYPTO '19) proposed a novel notion
that we will call proof of replication with client setup. Here, a
client first operates with secret coins to generate the replicas for a
file. Such systems do not inherently have to require fine-grained
timing assumptions. At the core of their solution to building proofs
of replication with client setup is an abstraction called replica
encodings. Briefly, these comprise a private-coin scheme in which a client
algorithm, given a file, can produce encodings of it. The encodings have the
property that, given any one of them, one can decode and retrieve the original
file. Secondly, if a server stores significantly fewer bits than the combined
size of the encodings, it cannot reproduce them. The authors give a
construction of encodings from ideal permutations and trapdoor functions.
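To convey the shape of such an encoding, here is a toy Python sketch under stated assumptions (textbook RSA stands in for the trapdoor permutation, a public affine map on Z_N stands in for the ideal permutation, and the parameters are illustrative; this is not the DGO19 scheme): the client alternates the trapdoor inverse, which only it can compute, with the public permutation for several rounds, while anyone can decode by running the public directions backwards.

    P, Q = 61, 53
    N = P * Q                             # textbook RSA modulus (toy size)
    E, D = 17, 2753                       # public exponent / secret trapdoor

    def public_perm(x):     return (7 * x + 11) % N        # public, invertible
    def public_perm_inv(y): return (y - 11) * pow(7, -1, N) % N

    ROUNDS = 5

    def encode(block, replica, trapdoor=D):
        # Only the client, holding the trapdoor, can produce replica encodings.
        x = (block + replica) % N
        for _ in range(ROUNDS):
            x = public_perm(pow(x, trapdoor, N))   # trapdoor inverse, then permutation
        return x

    def decode(enc, replica):
        # Decoding uses only public information.
        x = enc
        for _ in range(ROUNDS):
            x = pow(public_perm_inv(x), E, N)
        return (x - replica) % N

    assert decode(encode(1234, replica=3), replica=3) == 1234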
In this work, we make three central contributions:
1) Our first contribution is that we discover and demonstrate that
the security argument put forth by DGO19 is fundamentally flawed.
Briefly, the security argument makes assumptions on the attacker's storage
behavior that do not capture general attacker strategies. We demonstrate
this issue by constructing a trapdoor permutation which, assuming
indistinguishability obfuscation, is secure yet serves as a counterexample to
their claim (for the stated parameterization).
2) In our second contribution we show that the DGO19 construction is actually secure in the ideal
permutation model from any trapdoor permutation when parameterized correctly,
in particular when the number of rounds in the construction is set as an
appropriate function of the security parameter, the number of replicas, and
the number of blocks. To do so we build up a proof approach from the ground up
that accounts for general attacker storage behavior, where we create an
analysis technique that we call "sequence-then-switch".
3) Finally, we show a new construction that is provably secure in the random oracle (or random
function) model, thus requiring less structure on the ideal function …