Simple Proofs of Space-Time and Rational Proofs of Storage
We introduce a new cryptographic primitive, Proofs of Space-Time (PoSTs), and construct an extremely simple, practical protocol for implementing these proofs. A PoST allows a prover to convince a verifier that she spent a "space-time" resource (storing data---space---over a period of time).
Formally, we define the PoST resource as a trade-off between CPU work and space-time (under reasonable cost assumptions, a rational user will prefer to use the lower-cost space-time resource over CPU work).
Compared to a proof-of-work, a PoST requires less energy use, as the "difficulty" can be increased by extending the time period over which data is stored without increasing computation costs.
Our definition is very similar to "Proofs of Space" [ePrint 2013/796, 2013/805] but, unlike the previous definitions, takes into account amortization attacks and storage duration. Moreover, our protocol uses a very different (and much simpler) technique, making use of the fact that we explicitly allow a space-time tradeoff, and doesn't require any non-standard assumptions (beyond random oracles). Unlike previous constructions, our protocol allows incremental difficulty adjustment, which can gracefully handle increases in the price of storage compared to CPU work. In addition, we show how, in a cryptocurrency context, the parameters of the scheme can be adjusted using a market-based mechanism, similar in spirit to the difficulty adjustment for PoW protocols.
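To make the trade-off concrete, here is a minimal illustrative sketch (not the paper's protocol; every name and parameter below is an assumption of this sketch): a prover fills storage with a pseudorandom table derived from a public seed, keeps it for the agreed period, and then answers random index challenges; a prover that deleted the table must redo the derivation work, which is exactly the CPU-work versus space-time choice described above.

    import hashlib, os, secrets

    # Toy illustration of the space-time resource (not the paper's protocol).
    # The prover derives a pseudorandom table from a public seed, stores it for
    # the agreed period, then answers random index challenges from the verifier.

    N = 1 << 16  # number of table entries; a real deployment would use far more

    def derive_entry(seed: bytes, i: int) -> bytes:
        # Each entry is a single hash here; a real scheme would make re-derivation
        # deliberately expensive so that discarding the table is unattractive.
        return hashlib.sha256(seed + i.to_bytes(8, "big")).digest()

    def prover_fill(seed: bytes):
        return [derive_entry(seed, i) for i in range(N)]   # stored over the period

    def verifier_challenge(k: int = 32):
        return [secrets.randbelow(N) for _ in range(k)]

    def prover_respond(table, challenge):
        return [table[i] for i in challenge]

    def verifier_check(seed: bytes, challenge, response) -> bool:
        return all(derive_entry(seed, i) == r for i, r in zip(challenge, response))

    seed = os.urandom(16)
    table = prover_fill(seed)       # ... time passes while the table is stored ...
    challenge = verifier_challenge()
    assert verifier_check(seed, challenge, prover_respond(table, challenge))

A real PoST additionally has to bind the proof to the elapsed storage period and make re-derivation costly enough that a rational prover prefers storing the data, which is what the paper's definition and market-based parameter adjustment address.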
Semantically Secure Anonymity: Foundations of Re-encryption
The notion of universal re-encryption is an established primitive
used in the design of many anonymity protocols. It allows anyone
to randomize a ciphertext without changing its size, without first
decrypting it, and without knowing who the receiver is (i.e., not
knowing the public key used to create it).
By design it prevents the randomized ciphertext from being
correlated with the original ciphertext.
We revisit and analyze the security
foundation of universal re-encryption and show a subtlety in it,
namely, that it does not require that the encryption function
achieve key anonymity. Recall that the encryption function is
different from the re-encryption function.
We demonstrate this subtlety by constructing a cryptosystem that satisfies the
established definition of a universal cryptosystem but that has an encryption
function that does not achieve key anonymity, thereby instantiating the gap in
the definition of security of universal re-encryption. We note that the
gap in the definition carries over to a set of applications
that rely on universal re-encryption, including applications in the original
paper on universal re-encryption and in follow-on work.
This shows that the original definition needs to be corrected, and that the
gap had a knock-on effect that negatively impacted security in later work.
We then introduce a new definition that includes
the properties that are needed for a re-encryption cryptosystem to achieve
key anonymity in both the encryption function and the re-encryption
function, building on Goldwasser and Micali's semantic security and
the original key anonymity notion of Bellare, Boldyreva, Desai, and Pointcheval.
Omitting any of the properties in our definition leads to a problem.
We also introduce a new generalization of the Decision
Diffie-Hellman (DDH) random self-reduction and use it, in turn, to prove
that the original ElGamal-based universal cryptosystem of Golle et al.
is secure under our revised security definition.
We apply our new DDH reduction
technique to give the first proof in the standard model that ElGamal-based
incomparable public keys achieve key anonymity under DDH.
We present a novel secure Forward-Anonymous Batch Mix
as a new application.
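For concreteness, the following is a hedged toy sketch of ElGamal-based universal re-encryption in the spirit of Golle et al.: a ciphertext carries an ordinary ElGamal pair plus an encryption of the identity element, so anyone can re-randomize it without knowing the public key. The tiny group and message encoding are assumptions of this sketch and provide no real security.

    import secrets

    # Toy safe-prime group: p = 2q + 1 with q prime; g = 4 generates the order-q
    # subgroup of quadratic residues. Illustrative parameters only.
    p, q = 1019, 509
    g = 4

    def keygen():
        x = secrets.randbelow(q - 1) + 1            # secret key
        return x, pow(g, x, p)                      # (sk, pk)

    def enc(pk, m):
        # Universal ciphertext: an ElGamal pair for m plus an encryption of 1.
        k0 = secrets.randbelow(q - 1) + 1
        k1 = secrets.randbelow(q - 1) + 1
        return ((m * pow(pk, k0, p) % p, pow(g, k0, p)),
                (pow(pk, k1, p), pow(g, k1, p)))

    def reenc(ct):
        # Anyone can re-randomize without pk: fold fresh powers of the
        # identity pair into both components.
        (a0, b0), (a1, b1) = ct
        r0 = secrets.randbelow(q - 1) + 1
        r1 = secrets.randbelow(q - 1) + 1
        return ((a0 * pow(a1, r0, p) % p, b0 * pow(b1, r0, p) % p),
                (pow(a1, r1, p), pow(b1, r1, p)))

    def dec(sk, ct):
        (a0, b0), (a1, b1) = ct
        assert a1 == pow(b1, sk, p)                 # identity pair must decrypt to 1
        return a0 * pow(pow(b0, sk, p), -1, p) % p

    sk, pk = keygen()
    m = 16                                          # toy message (a quadratic residue)
    assert dec(sk, reenc(enc(pk, m))) == m

The subtlety analysed above lives precisely in what such a scheme must guarantee about enc itself (key anonymity), not only about reenc.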
Strengthening Access Control Encryption
Access control encryption (ACE) was proposed by Damgård et al. to enable the control of information flow between several parties according to a given policy specifying which parties are, or are not, allowed to communicate. By involving a special party, called the sanitizer, policy-compliant communication is enabled while policy-violating communication is prevented, even if sender and receiver are dishonest. To allow outsourcing of the sanitizer, the secrecy of the message contents and the anonymity of the involved communication partners are guaranteed.
This paper shows that in order to be resilient against realistic attacks, the security definition of ACE must be considerably strengthened in several ways. A new, substantially stronger security definition is proposed, and an ACE scheme is constructed which provably satisfies the strong definition under standard assumptions.
Three aspects in which the security of ACE is strengthened are as follows. First, CCA security (rather than only CPA security) is guaranteed, which is important since senders can be dishonest in the considered setting. Second, the revealing of an (unsanitized) ciphertext (e.g., by a faulty sanitizer) cannot be exploited to communicate more in a policy-violating manner than the information contained in the ciphertext. We illustrate that this is not only a definitional subtlety by showing how, in known ACE schemes, a single leaked unsanitized ciphertext allows for an arbitrary amount of policy-violating communication. Third, it is enforced that parties specified to receive a message according to the policy cannot be excluded from receiving it, even by a dishonest sender.
Reducing Time Complexity in RFID Systems
Radio frequency identification systems based on low-cost computing devices are the new plaything that every company would like to adopt. Their goal can be either to improve productivity or to strengthen security. Specific identification protocols based on symmetric challenge-response have been developed in order to assure the privacy of the device bearers. Although these protocols fit the devices' constraints, they always suffer from a large time complexity. Existing protocols require O(n) cryptographic operations to identify one device among n. Molnar and Wagner suggested a method to reduce this complexity to O(log n). We show that their technique could degrade the privacy if the attacker has the possibility to tamper with at least one device. Because low-cost devices are not tamper-resistant, such an attack could be feasible. We give a detailed analysis of their protocol and evaluate the threat. Next, we extend an approach based on time-memory trade-offs whose goal is to improve Ohkubo, Suzuki, and Kinoshita's protocol. We show that in practice this approach achieves the same performance as Molnar and Wagner's method, without degrading privacy.
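For concreteness, the sketch below illustrates OSK-style hash-chain identification together with the coarse idea of trading memory for lookup time by precomputing all possible tag answers; the time-memory trade-off developed in the paper is considerably more refined, and every parameter here is an assumption of the sketch.

    import hashlib

    # Toy OSK-style identification: a tag holds a secret state s and, when queried,
    # answers G(s) and updates s <- H(s). The back-end precomputes G(H^j(s_i)) for
    # every tag i and chain position j, so identification becomes a table lookup
    # instead of O(n * depth) online hashing.

    H = lambda s: hashlib.sha256(b"H" + s).digest()   # state-update hash
    G = lambda s: hashlib.sha256(b"G" + s).digest()   # response hash

    class Tag:
        def __init__(self, secret: bytes):
            self.s = secret
        def respond(self) -> bytes:
            out = G(self.s)
            self.s = H(self.s)                        # forward state update
            return out

    def precompute(tag_secrets, depth):
        table = {}
        for i, s in enumerate(tag_secrets):
            for _ in range(depth):
                table[G(s)] = i                       # map every possible answer to tag i
                s = H(s)
        return table

    tag_secrets = [bytes([i]) * 16 for i in range(8)] # toy secrets for 8 tags
    reader_table = precompute(tag_secrets, depth=100)
    tag = Tag(tag_secrets[5])
    tag.respond(); tag.respond()                      # tag was read a couple of times
    assert reader_table[tag.respond()] == 5           # constant-time lookup identifies it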
New Techniques in Replica Encodings with Client Setup
A proof of replication system is a cryptographic primitive that allows
a server (or group of servers) to prove to a client that it is
dedicated to storing multiple copies or replicas of a file. Until
recently, all such protocols required fine-grained timing assumptions
on the amount of time it takes for a server to produce such replicas.
Damgård, Ganesh, and Orlandi (CRYPTO '19) proposed a novel notion
that we will call proof of replication with client setup. Here, a
client first operates with secret coins to generate the replicas for a
file. Such systems do not inherently have to require fine-grained
timing assumptions. At the core of their solution to building proofs
of replication with client setup is an abstraction called replica
encodings. Briefly, these comprise a private coin scheme where a
client algorithm, given a file m, can produce an encoding of m. The
encodings have the property that, given any encoding, one can decode
and retrieve the original file m. Secondly, if a server has
significantly less than n|m| bits of storage, it cannot reproduce
n encodings. The authors give a construction of
encodings from ideal permutations and trapdoor functions.
In this work, we make three central contributions:
1) Our first contribution is that we discover and demonstrate that
the security argument put forth by DGO19 is fundamentally flawed.
Briefly, the security argument makes assumptions on the attacker's storage
behavior that do not capture general attacker strategies. We demonstrate
this issue by constructing a trapdoor permutation, secure assuming
indistinguishability obfuscation, that serves as a counterexample to their claim
(for the parameterization stated).
2) In our second contribution we show that the DGO19 construction is actually secure in the ideal
permutation model from any trapdoor permutation
when parameterized correctly; in particular, when the number of rounds in the construction is
chosen suitably as a function of the security parameter, the number of replicas, and
the number of blocks. To do so we build up a proof approach from the
ground up that accounts for general attacker storage behavior, where
we create an analysis technique that we call "sequence-then-switch".
3) Finally, we show a new construction that is provably secure in the random oracle (or random
function) model, thus requiring less structure on the ideal function.
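The following is a highly simplified structural sketch, under stated assumptions, of the round structure used by such constructions: the client interleaves a public permutation with inverses of a trapdoor permutation, and anyone holding only the public key can decode by running the rounds backwards. It is not the DGO19 scheme; in particular the toy "public permutation" below is a trivial affine map standing in for the ideal permutation and gives no security.

    # Structural sketch only: RSA as the trapdoor permutation, a toy affine map as
    # the "public permutation". Illustrative parameters; no security is claimed.
    p, q = 10007, 10009
    N, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))            # client's trapdoor (secret coins)

    C = 123456789 % N                            # public constant for the toy permutation
    perm = lambda x: (x + C) % N                 # stand-in for an ideal public permutation
    perm_inv = lambda x: (x - C) % N

    ROUNDS = 8                                   # the analysis ties the round count to the
                                                 # security parameter, #replicas and #blocks

    def encode_block(m: int, replica: int) -> int:
        x = (m + replica) % N                    # toy way of making replicas differ
        for _ in range(ROUNDS):
            x = pow(perm(x), d, N)               # public permutation, then TDP inverse
        return x

    def decode_block(y: int, replica: int) -> int:
        x = y
        for _ in range(ROUNDS):
            x = perm_inv(pow(x, e, N))           # TDP forward (public), then inverse perm
        return (x - replica) % N

    m = 424242
    y0, y1 = encode_block(m, 0), encode_block(m, 1)
    assert decode_block(y0, 0) == m and decode_block(y1, 1) == m and y0 != y1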
The re-identification risk of Canadians from longitudinal demographics
Background: The public is less willing to allow their personal health information to be disclosed for research purposes if they do not trust researchers and how researchers manage their data. However, the public is more comfortable with their data being used for research if the risk of re-identification is low. There are few studies on the risk of re-identification of Canadians from their basic demographics, and no studies on their risk from their longitudinal data. Our objective was to estimate the risk of re-identification from the basic cross-sectional and longitudinal demographics of Canadians.
Methods: Uniqueness is a common measure of re-identification risk. Demographic data on a 25% random sample of the population of Montreal were analyzed to estimate population uniqueness on postal code, date of birth, and gender as well as their generalizations, for periods ranging from 1 year to 11 years.
Results: Almost 98% of the population was unique on full postal code, date of birth and gender: these three variables are effectively a unique identifier for Montrealers. Uniqueness increased for longitudinal data. Considerable generalization was required to reach acceptably low uniqueness levels, especially for longitudinal data. Detailed guidelines and disclosure policies on how to ensure that the re-identification risk is low are provided.
Conclusions: A large percentage of Montreal residents are unique on basic demographics. For non-longitudinal data sets, the three-character postal code, gender, and month/year of birth represent sufficiently low re-identification risk. Data custodians need to generalize their demographic information further for longitudinal data sets.
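A short sketch of the population-uniqueness measure the study relies on, with illustrative field names: the proportion of individuals whose quasi-identifier combination occurs exactly once in the population.

    from collections import Counter

    # Population uniqueness: fraction of records whose quasi-identifier combination
    # (e.g., postal code, date of birth, gender) appears exactly once.
    def population_uniqueness(records, quasi_identifiers):
        key = lambda r: tuple(r[q] for q in quasi_identifiers)
        counts = Counter(key(r) for r in records)
        return sum(1 for r in records if counts[key(r)] == 1) / len(records)

    population = [
        {"postal_code": "H2X 1Y4", "dob": "1980-03-15", "gender": "F"},
        {"postal_code": "H2X 1Y4", "dob": "1980-03-15", "gender": "F"},
        {"postal_code": "H3A 0G4", "dob": "1975-11-02", "gender": "M"},
    ]
    # Only the third record is unique on all three quasi-identifiers: 1/3.
    print(population_uniqueness(population, ["postal_code", "dob", "gender"]))

Generalizing the quasi-identifiers (e.g., month/year of birth instead of the full date, or a three-character postal code) shrinks the number of singleton combinations and thus the measured risk.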
Revisiting Single-server Algorithms for Outsourcing Modular Exponentiation
We investigate the problem of securely outsourcing modular exponentiations to a single, malicious computational resource. We revisit recently proposed single-server schemes and analyse them against two fundamental security properties, namely privacy of inputs and verifiability of outputs. Interestingly, we observe that the chosen schemes do not meet both security properties. In fact, we present a simple polynomial-time attack on each algorithm, allowing the malicious server either to recover a secret input or to convincingly fool the client with wrong outputs.
Then we provide a fix to the identified problem in the ExpSOS scheme. With our fix and without pre-processing, the improved scheme becomes the best outsourcing scheme to date for the single-server case. Finally, we present the first precomputation-free single-server algorithm, πExpSOS, for simultaneous exponentiations.
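As a generic illustration of the privacy-of-inputs property (this is not ExpSOS or any scheme analysed in the paper), a client can hide a secret exponent from a single untrusted server by additively splitting it across two queries and recombining the answers; verifiability of the replies needs additional machinery that is not shown here.

    import secrets

    # Hiding a secret exponent from an outsourcing server by additive splitting.
    # Toy prime; real schemes use standardized groups and also address verifiability.
    p = 2**127 - 1
    order = p - 1                         # exponents live modulo the group order

    def server_pow(base, exp):            # untrusted server (behaves honestly here)
        return pow(base, exp, p)

    def outsourced_pow(u, a):
        a1 = secrets.randbelow(order)
        a2 = (a - a1) % order             # each query sees only a random share of a
        return (server_pow(u, a1) * server_pow(u, a2)) % p

    u, a = 1234567, secrets.randbelow(order)
    assert outsourced_pow(u, a) == pow(u, a, p)

Note that this hides only the exponent, not the base, and a lying server would go undetected; as the attacks discussed above indicate, achieving input privacy and output verifiability simultaneously is the hard part.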
cMix: Mixing with Minimal Real-Time Asymmetric Cryptographic Operations
We introduce cMix, a new approach to anonymous communications.
Through a precomputation, the core cMix protocol eliminates all expensive real-time
public-key operations --- at the senders, recipients and mixnodes --- thereby
decreasing real-time cryptographic latency and lowering computational costs for
clients. The core real-time phase performs only a few fast modular multiplications.
In these times of surveillance and extensive profiling there is a great need for an
anonymous communication system that resists global attackers.
One widely recognized
solution to the challenge of traffic analysis is a mixnet, which anonymizes
a batch of messages by sending the batch through a fixed cascade of mixnodes.
Mixnets can offer excellent privacy guarantees, including unlinkability of sender
and receiver, and resistance to many traffic-analysis attacks that undermine many
other approaches including onion routing. Existing mixnet designs, however, suffer
from high latency in part because of the need for real-time public-key operations.
Precomputation greatly improves the real-time performance of cMix, while
its fixed cascade of mixnodes yields the strong anonymity guarantees of mixnets.
cMix is unique in not requiring any real-time public-key operations by users.
Consequently, cMix is the first mixing scheme suitable for low-latency chat
on lightweight devices.
Our presentation includes a specification of cMix, security arguments, anonymity
analysis, and a performance comparison with selected other approaches. We also
give benchmarks from our prototype.
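To illustrate why the real-time phase can consist of only permutations and modular multiplications, here is a drastically simplified toy (not the actual cMix protocol; sender keys and the encrypted precomputation are omitted): the values needed to strip the nodes' blinding factors at the exit are produced ahead of time, simulated here in the clear by running the cascade on a batch of ones, whereas cMix performs this precomputation under encryption.

    import secrets

    p = 2**127 - 1                                   # toy prime modulus
    BATCH, NODES = 4, 3

    class MixNode:
        def __init__(self):
            self.perm = list(range(BATCH))
            secrets.SystemRandom().shuffle(self.perm)
            self.r = [secrets.randbelow(p - 1) + 1 for _ in range(BATCH)]
        def process(self, batch):                     # real time: permute + multiply only
            return [batch[self.perm[a]] * self.r[a] % p for a in range(BATCH)]

    nodes = [MixNode() for _ in range(NODES)]

    # Precomputation: run the cascade on all-ones to learn, per output slot, the
    # accumulated blinding factor, and store its inverse for the real-time phase.
    acc = [1] * BATCH
    for node in nodes:
        acc = node.process(acc)
    strip = [pow(x, -1, p) for x in acc]

    # Real-time phase: only permutations and modular multiplications.
    messages = [11, 22, 33, 44]
    batch = messages[:]
    for node in nodes:
        batch = node.process(batch)
    output = [batch[a] * strip[a] % p for a in range(BATCH)]
    assert sorted(output) == sorted(messages)         # same messages, permuted by the cascade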
A Systematic Review of Re-Identification Attacks on Health Data
Privacy legislation in most jurisdictions allows the disclosure of health data for secondary purposes without patient consent if it is de-identified. Some recent articles in the medical, legal, and computer science literature have argued that de-identification methods do not provide sufficient protection because they are easy to reverse. Should this be the case, it would have significant and important implications on how health information is disclosed, including: (a) potentially limiting its availability for secondary purposes such as research, and (b) resulting in more identifiable health information being disclosed. Our objectives in this systematic review were to: (a) characterize known re-identification attacks on health data and contrast that to re-identification attacks on other kinds of data, (b) compute the overall proportion of records that have been correctly re-identified in these attacks, and (c) assess whether these demonstrate weaknesses in current de-identification methods. Searches were conducted in IEEE Xplore, ACM Digital Library, and PubMed. After screening, fourteen eligible articles representing distinct attacks were identified. On average, approximately a quarter of the records were re-identified across all studies (0.26 with 95% CI 0.046-0.478) and 0.34 for attacks on health data (95% CI 0-0.744). There was considerable uncertainty around the proportions as evidenced by the wide confidence intervals, and the mean proportion of records re-identified was sensitive to unpublished studies. Two of fourteen attacks were performed with data that was de-identified using existing standards. Only one of these attacks was on health data, which resulted in a success rate of 0.00013. The current evidence shows a high re-identification rate but is dominated by small-scale studies on data that was not de-identified according to existing standards. This evidence is insufficient to draw conclusions about the efficacy of de-identification methods.