Semantically Secure Anonymity: Foundations of Re-encryption
The notion of universal re-encryption is an established primitive
used in the design of many anonymity protocols. It allows anyone
to randomize a ciphertext without changing its size, without first
decrypting it, and without knowing who the receiver is (i.e., not
knowing the public key used to create it).
By design it prevents the randomized ciphertext from being
correlated with the original ciphertext.
We revisit and analyze the security
foundation of universal re-encryption and show a subtlety in it,
namely, that it does not require that the encryption function
achieve key anonymity. Recall that the encryption function is
different from the re-encryption function.
We demonstrate this subtlety by constructing a cryptosystem that satisfies the
established definition of a universal cryptosystem but whose encryption
function does not achieve key anonymity, thereby exhibiting the gap in
the definition of security of universal re-encryption. We note that this
gap carries over to a set of applications that rely on universal
re-encryption, both applications in the original paper on universal
re-encryption and follow-on work. This shows that the original definition
needs to be corrected, and that the flaw had a knock-on effect that
negatively impacted security in later work.
We then introduce a new definition that includes
the properties that are needed for a re-encryption cryptosystem to achieve
key anonymity in both the encryption function and the re-encryption
function, building on Goldwasser and Micali's semantic security and
the original key anonymity notion of Bellare, Boldyreva, Desai, and Pointcheval.
Omitting any of the properties in our definition leads to a problem.
We also introduce a new generalization of the Decision
Diffie-Hellman (DDH) random self-reduction and use it, in turn, to prove
that the original ElGamal-based universal cryptosystem of Golle et al.
is secure under our revised security definition.
We apply our new DDH reduction
technique to give the first proof in the standard model that ElGamal-based
incomparable public keys achieve key anonymity under DDH.
We present a novel secure Forward-Anonymous Batch Mix as a new application.
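To make the primitive concrete, here is a minimal Python sketch of the ElGamal-based universal cryptosystem of Golle et al.: a ciphertext pairs an encryption of the message with an encryption of the identity element, and the latter lets anyone re-randomize the whole ciphertext without knowing the public key. The tiny group parameters and function names are illustrative assumptions, not the paper's exact formulation.

```python
# Toy sketch of ElGamal-based universal re-encryption (demo parameters only)
import secrets

p, q, g = 1019, 509, 4   # tiny safe-prime group: p = 2q + 1 (NOT secure sizes)

def keygen():
    x = secrets.randbelow(q - 1) + 1              # secret key in [1, q-1]
    return x, pow(g, x, p)

def encrypt(pk, m):
    k0 = secrets.randbelow(q - 1) + 1
    k1 = secrets.randbelow(q - 1) + 1
    # pair an encryption of m with an encryption of the identity 1
    return ((m * pow(pk, k0, p) % p, pow(g, k0, p)),
            (pow(pk, k1, p), pow(g, k1, p)))

def re_encrypt(ct):
    # randomize WITHOUT knowing pk: the encryption-of-1 component
    # supplies fresh randomness homomorphically
    (a0, b0), (a1, b1) = ct
    r0 = secrets.randbelow(q - 1) + 1
    r1 = secrets.randbelow(q - 1) + 1
    return ((a0 * pow(a1, r0, p) % p, b0 * pow(b1, r0, p) % p),
            (pow(a1, r1, p), pow(b1, r1, p)))

def decrypt(sk, ct):
    (a0, b0), (a1, b1) = ct
    if a1 * pow(pow(b1, sk, p), -1, p) % p != 1:
        return None                               # reject malformed ciphertext
    return a0 * pow(pow(b0, sk, p), -1, p) % p

sk, pk = keygen()
m = pow(g, 42, p)                                 # message encoded in the subgroup
ct = re_encrypt(encrypt(pk, m))                   # anyone can re-randomize
assert decrypt(sk, ct) == m
```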
Online Collaboration for South-North Historic Site Recording Training of Emerging Professionals
This contribution offers insights into delivering a Historic Site Recording course entirely over the Internet using video conferencing and sharing tools. The opportunities and challenges are described, along with the approaches used to meet realistic learning outcomes and offer a meaningful student experience through digital tools and cloud services. The classroom was staged at the students' homes and their immediate surroundings in their countries in Latin America (Argentina, Bolivia, Chile, Guatemala, Peru, and Mexico), while the teachers were based in Santiago (Chile), Ibagué (Colombia), Barcelona (Spain), and Ottawa (Canada); video conferencing, collaboration tools, and social media made the connections. Two 13-week introductory courses were delivered, followed by an advanced course in heritage recording tools. At the end of the introductory course, students provided a heritage recording proposal for a site in their own countries.
Strengthening Access Control Encryption
Access control encryption (ACE) was proposed by Damgård et al. to enable the control of information flow between several parties according to a given policy specifying which parties are, or are not, allowed to communicate. By involving a special party, called the sanitizer, policy-compliant communication is enabled while policy-violating communication is prevented, even if sender and receiver are dishonest. To allow outsourcing of the sanitizer, the secrecy of the message contents and the anonymity of the involved communication partners are guaranteed.
This paper shows that in order to be resilient against realistic attacks, the security definition of ACE must be considerably strengthened in several ways. A new, substantially stronger security definition is proposed, and an ACE scheme is constructed which provably satisfies the strong definition under standard assumptions.
Three aspects in which the security of ACE is strengthened are as follows. First, CCA security (rather than only CPA security) is guaranteed, which is important since senders can be dishonest in the considered setting. Second, the revealing of an (unsanitized) ciphertext (e.g., by a faulty sanitizer) cannot be exploited to communicate more in a policy-violating manner than the information contained in the ciphertext. We illustrate that this is not only a definitional subtlety by showing how, in known ACE schemes, a single leaked unsanitized ciphertext allows for an arbitrary amount of policy-violating communication. Third, it is enforced that parties specified to receive a message according to the policy cannot be excluded from receiving it, even by a dishonest sender.
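The sanitizer's role can be illustrated with re-randomizable ElGamal. In the toy sketch below, the sanitizer drops policy-violating traffic and injects fresh randomness into everything it forwards, so a dishonest sender cannot use the ciphertext's randomness as a covert channel. For simplicity the sanitizer here is handed the receiver's public key; hiding keys and identities from the sanitizer is what full ACE constructions add, so all names, the policy map, and parameters are illustrative assumptions.

```python
# Toy sketch of the sanitize-and-forward idea behind ACE (demo only)
import secrets

p, q, g = 1019, 509, 4            # tiny safe-prime group (NOT secure sizes)
POLICY = {("alice", "bob")}       # toy policy: alice may talk to bob

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def encrypt(pk, m):
    r = secrets.randbelow(q - 1) + 1
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def sanitize(sender, receiver, pk, ct):
    if (sender, receiver) not in POLICY:
        return None               # policy-violating traffic is dropped
    c1, c2 = ct
    s = secrets.randbelow(q - 1) + 1
    # fresh randomness s wipes any covert channel hidden in (c1, c2)
    return (c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p)

def decrypt(sk, ct):
    c1, c2 = ct
    return c2 * pow(pow(c1, sk, p), -1, p) % p

sk_bob, pk_bob = keygen()
m = pow(g, 5, p)
ct = sanitize("alice", "bob", pk_bob, encrypt(pk_bob, m))
assert decrypt(sk_bob, ct) == m                                # allowed flow
assert sanitize("alice", "carol", pk_bob, encrypt(pk_bob, m)) is None
```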
New Techniques in Replica Encodings with Client Setup
A proof of replication system is a cryptographic primitive that allows
a server (or group of servers) to prove to a client that it is
dedicated to storing multiple copies or replicas of a file. Until
recently, all such protocols required fine-grained timing assumptions
on the amount of time it takes for a server to produce such replicas.
Damgård, Ganesh, and Orlandi (CRYPTO '19) proposed a novel notion
that we will call proof of replication with client setup. Here, a
client first operates with secret coins to generate the replicas for a
file. Such systems do not inherently have to require fine-grained
timing assumptions. At the core of their solution to building proofs
of replication with client setup is an abstraction called replica
encodings. Briefly, these comprise a private-coin scheme where a client
algorithm, given a file m, can produce an encoding. The encodings have the
property that, given any encoding, one can decode and retrieve the original
file m. Secondly, if a server has significantly less than n·|m| bits of
storage, it cannot reproduce n encodings. The authors give a construction of
encodings from ideal permutations and trapdoor functions.
In this work, we make three central contributions:
1) Our first contribution is to discover and demonstrate that
the security argument put forth by DGO19 is fundamentally flawed.
Briefly, the security argument makes assumptions on the attacker's storage
behavior that do not capture general attacker strategies. We demonstrate
this issue by constructing a trapdoor permutation, secure assuming
indistinguishability obfuscation, that serves as a counterexample to their
claim (for the parameterization stated).
2) In our second contribution, we show that the DGO19 construction is actually secure in the ideal
permutation model from any trapdoor permutation when parameterized correctly,
i.e., when the number of rounds in the construction is set appropriately as a
function of the security parameter, the number of replicas, and the number of
blocks. To do so, we build up a proof approach from the ground up that
accounts for general attacker storage behavior, creating an analysis
technique that we call "sequence-then-switch".
3) Finally, we show a new construction that is provably secure in the random
oracle (or random function) model, thus requiring less structure on the ideal function.
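The round structure at the heart of such encodings can be sketched in a few lines. The toy Python below alternates the inverse of a trapdoor permutation (available only to the client, who holds the trapdoor) with a public permutation, so anyone can decode by running the rounds forward, while producing a fresh encoding requires the trapdoor. RSA stands in for the trapdoor permutation and addition of a hash-derived constant stands in for the ideal permutation; both substitutions and all sizes are illustrative assumptions, not the DGO19 construction itself.

```python
# Toy round-based replica encoding sketch (demo-sized RSA primes)
import hashlib

P, Q, E = 10007, 10009, 65537
N = P * Q
D = pow(E, -1, (P - 1) * (Q - 1))         # trapdoor, held by the client

def round_const(i):
    # stand-in public permutation of Z_N: add a hash-derived constant
    return int.from_bytes(hashlib.sha256(b"round-%d" % i).digest(), "big") % N

def encode(block, rounds):
    # client-side: alternates the TDP *inverse* with the public permutation
    x = block
    for i in range(rounds):
        x = pow(x, D, N)                  # needs the trapdoor D
        x = (x + round_const(i)) % N
    return x

def decode(y, rounds):
    # public: anyone can peel the rounds using the forward TDP
    x = y
    for i in reversed(range(rounds)):
        x = (x - round_const(i)) % N
        x = pow(x, E, N)
    return x

m = 1234567
assert decode(encode(m, rounds=8), rounds=8) == m
```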
Topology-Hiding Computation Beyond Semi-Honest Adversaries
Topology-hiding communication protocols allow a set of parties,
connected by an incomplete network with unknown communication graph,
where each party only knows its neighbors, to construct a complete
communication network such that the network topology remains hidden
even from a powerful adversary who can corrupt parties. This
communication network can then be used to perform arbitrary tasks, for
example secure multi-party computation, in a topology-hiding manner.
Previously proposed protocols could only tolerate passive
corruption. This paper proposes protocols that can also tolerate
fail-corruption (i.e., the adversary can crash any party at
any point in time) and so-called semi-malicious corruption (i.e., the
adversary can control a corrupted party's randomness), without leaking
more than an arbitrarily small fraction of a bit of information about
the topology. A small-leakage protocol was recently proposed by Ball et al. [Eurocrypt '18], but only under the unrealistic set-up assumption that each party has a trusted hardware module containing secret correlated pre-set keys, and with the further two restrictions that only passively corrupted parties can be crashed by the adversary, and semi-malicious corruption is not tolerated. Since leaking a small
amount of information is unavoidable, as is the need to abort the
protocol in case of failures, our protocols seem to achieve the best
possible goal in a model with fail-corruption.
Further contributions of the paper are applications of the protocol to
obtain secure MPC protocols, which requires a way to bound the
aggregated leakage when multiple small-leakage protocols are
executed in parallel or sequentially. Moreover, while previous
protocols are based on the DDH assumption, a new so-called PKCR
public-key encryption scheme based on the LWE assumption is proposed,
allowing topology-hiding computation to be based on LWE. Furthermore, a
protocol using fully homomorphic encryption that achieves very low round
complexity is proposed.
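The key property a PKCR-style scheme provides can be sketched with plain ElGamal: public keys multiply together, and a key holder can add or remove its own layer on a ciphertext without decrypting. The toy Python below shows only this layering mechanic; the function names and tiny group are our assumptions, and the paper's LWE-based PKCR scheme is structured quite differently.

```python
# Toy sketch of layered ("PKCR-style") ElGamal (demo parameters only)
import secrets

p, q, g = 1019, 509, 4                  # tiny safe-prime group (NOT secure)

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def encrypt(pk, m):
    r = secrets.randbelow(q - 1) + 1
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def add_layer(ct, x):
    # re-key (c1, c2) from pk to pk * g^x without decrypting
    c1, c2 = ct
    return (c1, c2 * pow(c1, x, p) % p)

def del_layer(ct, x):
    # inverse of add_layer; once all layers are removed, c2 is the message
    c1, c2 = ct
    return (c1, c2 * pow(pow(c1, x, p), -1, p) % p)

x1, pk1 = keygen()
x2, pk2 = keygen()
m = pow(g, 7, p)
ct = add_layer(encrypt(pk1, m), x2)     # now encrypted under pk1 * pk2
_, c2 = del_layer(del_layer(ct, x2), x1)
assert c2 == m
```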
Supporting Publication and Subscription Confidentiality in Pub/Sub Networks
The publish/subscribe model offers a loosely-coupled communication paradigm where applications interact indirectly and asynchronously. Publisher applications generate events that are sent to interested applications through a network of brokers. Subscriber applications express their interest by specifying filters that brokers can use for routing the events. Supporting confidentiality of messages being exchanged is still challenging. First of all, it is desirable that any scheme used for protecting the confidentiality of both the events and filters should not require the publishers and subscribers to share secret keys. In fact, such a restriction is against the loose-coupling of the model. Moreover, such a scheme should not restrict the expressiveness of filters and should allow the broker to perform event filtering to route the events to the interested parties. Existing solutions do not fully address those issues. In this paper, we provide a novel scheme that supports (i) confidentiality for both events and filters; (ii) expressive filters that can impose complex constraints on events, even though brokers cannot access any information about either events or filters; and (iii) communication that does not require publishers and subscribers to share keys.
The re-identification risk of Canadians from longitudinal demographics
Background: The public is less willing to allow their personal health information to be disclosed for research purposes if they do not trust researchers and how researchers manage their data. However, the public is more comfortable with their data being used for research if the risk of re-identification is low. There are few studies on the risk of re-identification of Canadians from their basic demographics, and no studies on their risk from their longitudinal data. Our objective was to estimate the risk of re-identification from the basic cross-sectional and longitudinal demographics of Canadians.
Methods: Uniqueness is a common measure of re-identification risk. Demographic data on a 25% random sample of the population of Montreal were analyzed to estimate population uniqueness on postal code, date of birth, and gender, as well as their generalizations, for periods ranging from 1 year to 11 years.
Results: Almost 98% of the population was unique on full postal code, date of birth, and gender: these three variables are effectively a unique identifier for Montrealers. Uniqueness increased for longitudinal data. Considerable generalization was required to reach acceptably low uniqueness levels, especially for longitudinal data. Detailed guidelines and disclosure policies on how to ensure that the re-identification risk is low are provided.
Conclusions: A large percentage of Montreal residents are unique on basic demographics. For non-longitudinal data sets, the three-character postal code, gender, and month/year of birth represent sufficiently low re-identification risk. Data custodians need to generalize their demographic information further for longitudinal data sets.
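The uniqueness measure the study relies on is simple to compute: it is the share of records whose quasi-identifier combination occurs exactly once. A minimal pandas sketch, with hypothetical column names and records, might look like this:

```python
# Population uniqueness on quasi-identifiers (hypothetical toy data)
import pandas as pd

df = pd.DataFrame({
    "postal_code": ["H2X 1Y4", "H2X 1Y4", "H3A 0G4"],
    "birth_date":  ["1980-03-02", "1975-11-20", "1980-03-02"],
    "gender":      ["F", "M", "F"],
})

def uniqueness(frame, quasi_identifiers):
    # fraction of records whose quasi-identifier combination is unique
    counts = frame.groupby(quasi_identifiers).size()
    return counts.eq(1).sum() / len(frame)

# full quasi-identifier vs. a generalized one (3-character postal code,
# month/year of birth), in the spirit of the paper's recommendation
print(uniqueness(df, ["postal_code", "birth_date", "gender"]))
df["fsa"] = df["postal_code"].str[:3]
df["birth_ym"] = df["birth_date"].str[:7]
print(uniqueness(df, ["fsa", "birth_ym", "gender"]))
```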
Crowd computing as a cooperation problem: an evolutionary approach
Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain, not very restrictive, conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point at which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further. This work is supported by the Cyprus Research Promotion Foundation grant TE/HPO/0609(BE)/05, the National Science Foundation (CCF-0937829, CCF-1114930), Comunidad de Madrid grant S2009TIC-1692 and MODELICO-CM, Spanish MOSAICO, PRODIEVO and RESINEE grants and MICINN grant TEC2011-29688-C02-01, and National Natural Science Foundation of China grant 61020106002.
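The flavor of the master-worker dynamics can be conveyed by a toy reinforcement simulation. In the sketch below, workers either compute honestly (at a cost) or guess, the master audits with some probability, and both sides nudge their behavior in the direction of whatever paid off; all payoffs, rates, and update rules are our illustrative assumptions, not the paper's exact model.

```python
# Toy master-worker reinforcement simulation (illustrative assumptions only)
import random

N_WORKERS, ROUNDS = 9, 2000
WORKER_COST, REWARD, PENALTY = 0.1, 1.0, 2.0

p_audit = 0.5                                # master's audit probability
p_honest = [0.5] * N_WORKERS                 # each worker's prob. of computing

def clamp(x):
    return min(1.0, max(0.01, x))

for _ in range(ROUNDS):
    audited = random.random() < p_audit
    n_honest = 0
    for i in range(N_WORKERS):
        honest = random.random() < p_honest[i]
        n_honest += honest
        if audited:
            payoff = (REWARD - WORKER_COST) if honest else -PENALTY
        else:
            payoff = (REWARD - WORKER_COST) if honest else REWARD
        # reinforce the action taken in proportion to its payoff
        step = 0.01 * payoff
        p_honest[i] = clamp(p_honest[i] + (step if honest else -step))
    # master relaxes auditing once the majority looks reliable
    reliable = n_honest > N_WORKERS // 2
    p_audit = clamp(p_audit + (-0.005 if reliable and audited else 0.005))

print("mean honesty:", sum(p_honest) / N_WORKERS, "audit rate:", p_audit)
```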