Simple Proofs of Space-Time and Rational Proofs of Storage
We introduce a new cryptographic primitive: Proofs of Space-Time (PoSTs), and construct an extremely simple, practical protocol for implementing these proofs. A PoST allows a prover to convince a verifier that she spent a "space-time" resource (storing data---space---over a period of time).
Formally, we define the PoST resource as a trade-off between CPU work and space-time (under reasonable cost assumptions, a rational user will prefer to use the lower-cost space-time resource over CPU work).
Compared to a proof-of-work, a PoST requires less energy use, as the "difficulty" can be increased by extending the time period over which data is stored without increasing computation costs.
Our definition is very similar to "Proofs of Space" [ePrint 2013/796, 2013/805] but, unlike the previous definitions, takes into account amortization attacks and storage duration. Moreover, our protocol uses a very different (and much simpler) technique, making use of the fact that we explicitly allow a space-time tradeoff, and doesn't require any non-standard assumptions (beyond random oracles). Unlike previous constructions, our protocol allows incremental difficulty adjustment, which can gracefully handle increases in the price of storage compared to CPU work. In addition, we show how, in a cryptocurrency context, the parameters of the scheme can be adjusted using a market-based mechanism, similar in spirit to the difficulty adjustment for PoW protocols.
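To make the work/space-time trade-off concrete, here is a minimal Python sketch of a generic storage-audit interaction (not the paper's protocol; all names and parameters are illustrative): the prover fills a table derived from a challenge, keeps it for the audit period, and answers random spot-checks; a prover who discards the table must redo the CPU work of deriving it.

```python
import hashlib
import os
import random

def derive_table(challenge: bytes, num_entries: int) -> list:
    """Fill a table from the verifier's challenge; filling it costs CPU work."""
    return [hashlib.sha256(challenge + i.to_bytes(8, "big")).digest()
            for i in range(num_entries)]

def prover_respond(table, indices):
    """A prover who kept the table answers instantly; one who discarded it
    would have to rerun derive_table (the CPU-work side of the trade-off)."""
    return [table[i] for i in indices]

def verifier_check(challenge, indices, answers) -> bool:
    """Spot-check the claimed entries by recomputing only the queried positions."""
    return all(hashlib.sha256(challenge + i.to_bytes(8, "big")).digest() == ans
               for i, ans in zip(indices, answers))

challenge = os.urandom(16)
table = derive_table(challenge, 1 << 12)    # prover commits space
indices = random.sample(range(1 << 12), 8)  # verifier audits after the storage period
print(verifier_check(challenge, indices, prover_respond(table, indices)))
```

In such a setup, lengthening the period between audits increases the space-time spent without adding computation, which is the energy argument made above.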
New Techniques in Replica Encodings with Client Setup
A proof of replication system is a cryptographic primitive that allows a server (or group of servers) to prove to a client that it is dedicated to storing multiple copies or replicas of a file. Until recently, all such protocols required fine-grained timing assumptions on the amount of time it takes for a server to produce such replicas. Damgård, Ganesh, and Orlandi (CRYPTO '19) proposed a novel notion that we will call proof of replication with client setup. Here, a client first operates with secret coins to generate the replicas for a file. Such systems do not inherently have to require fine-grained timing assumptions. At the core of their solution to building proofs of replication with client setup is an abstraction called replica encodings. Briefly, these comprise a private-coin scheme where a client algorithm, given a file, can produce an encoding. The encodings have the property that, given any encoding, one can decode and retrieve the original file. Secondly, if a server has significantly less storage than the combined size of the encodings, it cannot reproduce all of them. The authors give a construction of encodings from ideal permutations and trapdoor functions.
In this work, we make three central contributions:
1) Our first contribution is that we discover and demonstrate that the security argument put forth by DGO19 is fundamentally flawed. Briefly, the security argument makes assumptions on the attacker's storage behavior that do not capture general attacker strategies. We demonstrate this issue by constructing a trapdoor permutation that is secure assuming indistinguishability obfuscation and yet serves as a counterexample to their claim (for the parameterization stated).
2) In our second contribution, we show that the DGO19 construction is actually secure in the ideal permutation model from any trapdoor permutation when parameterized correctly; in particular, when the number of rounds in the construction is a suitable function of the security parameter, the number of replicas, and the number of blocks. To do so, we build up a proof approach from the ground up that accounts for general attacker storage behavior, introducing an analysis technique that we call "sequence-then-switch".
3) Finally, we show a new construction that is provably secure in the random oracle (or random function) model, thus requiring less structure on the ideal function.
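As a rough illustration of the replica-encoding interface only (this toy is not the DGO19 construction nor the new one, carries none of their analysis, and, unlike the schemes above, needs the client key to decode), one can picture the client's secret coins as a key that turns one file into n distinct pseudorandom-looking replicas:

```python
import hashlib
import os

def _keystream(key: bytes, replica_idx: int, length: int) -> bytes:
    # Hash-counter keystream; a crude stand-in for the ideal-permutation machinery.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + replica_idx.to_bytes(4, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encode(key: bytes, file_data: bytes, n: int) -> list:
    # Client setup: secret coins (key) produce n distinct replicas of one file.
    return [bytes(a ^ b for a, b in zip(file_data, _keystream(key, i, len(file_data))))
            for i in range(n)]

def decode(key: bytes, replica: bytes, replica_idx: int) -> bytes:
    # Any single replica decodes back to the original file (here, using the key).
    return bytes(a ^ b for a, b in zip(replica, _keystream(key, replica_idx, len(replica))))

key = os.urandom(32)
f = b"example file contents" * 10
replicas = encode(key, f, n=3)
assert all(decode(key, r, i) == f for i, r in enumerate(replicas))
```

The property the abstract is after is the storage bound: because the replicas look unrelated to a server that lacks the client's coins, reproducing all of them should require storing roughly their combined size, and establishing this against arbitrary attacker storage strategies is exactly what the (repaired) security analysis has to do.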
Supporting Publication and Subscription Confidentiality in Pub/Sub Networks
The publish/subscribe model offers a loosely coupled communication paradigm where applications interact indirectly and asynchronously. Publisher applications generate events that are sent to interested applications through a network of brokers. Subscriber applications express their interest by specifying filters that brokers can use for routing the events. Supporting confidentiality of the messages being exchanged is still challenging. First of all, it is desirable that any scheme used for protecting the confidentiality of both the events and the filters should not require the publishers and subscribers to share secret keys; such a restriction would be against the loose coupling of the model. Moreover, such a scheme should not restrict the expressiveness of filters and should allow the broker to perform event filtering in order to route the events to the interested parties. Existing solutions do not fully address these issues. In this paper, we provide a novel scheme that (i) supports confidentiality for both events and filters; (ii) allows filters to express very complex constraints on events, even though brokers are not able to access any information on either the events or the filters; and (iii) does not require publishers and subscribers to share keys.
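For readers unfamiliar with the model, the following minimal Python sketch shows the plaintext pub/sub routing that the scheme must preserve; the paper's actual contribution, performing this matching while the broker learns nothing about events or filters, is not attempted here, and all names are illustrative.

```python
from typing import Callable, Dict, List, Tuple

Event = Dict[str, float]
Filter = Callable[[Event], bool]

class Broker:
    """Minimal plaintext broker: routes events to subscribers whose filters match."""

    def __init__(self) -> None:
        self.subscriptions: List[Tuple[str, Filter]] = []

    def subscribe(self, subscriber: str, flt: Filter) -> None:
        self.subscriptions.append((subscriber, flt))

    def publish(self, event: Event) -> List[str]:
        return [name for name, flt in self.subscriptions if flt(event)]

broker = Broker()
broker.subscribe("alice", lambda e: e.get("price", 0) > 100)  # filters may be complex
broker.subscribe("bob", lambda e: e.get("volume", 0) < 50)
print(broker.publish({"price": 120.0, "volume": 30.0}))  # ['alice', 'bob']
```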
The re-identification risk of Canadians from longitudinal demographics
Background: The public is less willing to allow their personal health information to be disclosed for research purposes if they do not trust researchers and how researchers manage their data. However, the public is more comfortable with their data being used for research if the risk of re-identification is low. There are few studies on the risk of re-identification of Canadians from their basic demographics, and no studies on their risk from their longitudinal data. Our objective was to estimate the risk of re-identification from the basic cross-sectional and longitudinal demographics of Canadians. Methods: Uniqueness is a common measure of re-identification risk. Demographic data on a 25% random sample of the population of Montreal were analyzed to estimate population uniqueness on postal code, date of birth, and gender, as well as their generalizations, for periods ranging from 1 year to 11 years. Results: Almost 98% of the population was unique on full postal code, date of birth, and gender: these three variables are effectively a unique identifier for Montrealers. Uniqueness increased for longitudinal data. Considerable generalization was required to reach acceptably low uniqueness levels, especially for longitudinal data. Detailed guidelines and disclosure policies on how to ensure that the re-identification risk is low are provided. Conclusions: A large percentage of Montreal residents are unique on basic demographics. For non-longitudinal data sets, the three-character postal code, gender, and month/year of birth represent sufficiently low re-identification risk. Data custodians need to generalize their demographic information further for longitudinal data sets.
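Population uniqueness, the risk measure used above, is straightforward to compute; the following sketch (with hypothetical column names and toy records, not the study's data) counts the fraction of records whose quasi-identifier combination occurs exactly once.

```python
import pandas as pd

def population_uniqueness(df: pd.DataFrame, quasi_identifiers: list) -> float:
    """Fraction of records unique on the given quasi-identifiers."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return float((group_sizes == 1).sum()) / len(df)

# Toy records for illustration only; the study used a 25% sample of Montreal residents.
toy = pd.DataFrame({
    "postal_code": ["H2X 1Y4", "H2X 1Y4", "H3A 0G4", "H4B 2J9"],
    "birth_date": ["1980-01-01", "1975-06-30", "1980-01-01", "1990-12-12"],
    "gender": ["F", "M", "F", "M"],
})
print(population_uniqueness(toy, ["postal_code", "birth_date", "gender"]))
```

Generalizing the quasi-identifiers (for example, truncating the postal code to three characters or the birth date to month/year) shrinks the number of size-one groups and hence the uniqueness estimate, which is the mitigation the conclusions recommend.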
Crowd computing as a cooperation problem: an evolutionary approach
Cooperation is one of the socio-economic issues that have received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain, not very restrictive, conditions, the master can ensure the reliability of the answer resulting from the process. We then study the model by numerical simulations, finding that convergence, meaning that the system reaches a point at which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further. This work is supported by the Cyprus Research Promotion Foundation grant TE/HPO/0609(BE)/05, the National Science Foundation (CCF-0937829, CCF-1114930), Comunidad de Madrid grant S2009TIC-1692 and MODELICO-CM, Spanish MOSAICO, PRODIEVO and RESINEE grants and MICINN grant TEC2011-29688-C02-01, and National Natural Science Foundation of China grant 61020106002.
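A toy payoff-driven simulation in the spirit of the model (the parameters and update rules here are illustrative choices, not the paper's) can make the master/worker dynamics concrete: workers adjust a propensity to cheat, the master adjusts its audit probability, and both are nudged by realized payoffs.

```python
import random

def simulate(workers=5, rounds=2000, lr=0.05,
             effort_cost=0.5, reward=1.0, fine=2.0, audit_cost=0.1):
    """Simple reinforcement dynamics for one master and several workers."""
    cheat_prob = [0.5] * workers  # each worker's propensity to report junk
    audit_prob = 0.5              # master's probability of verifying the round
    for _ in range(rounds):
        audited = random.random() < audit_prob
        any_cheat = False
        for i in range(workers):
            cheated = random.random() < cheat_prob[i]
            any_cheat = any_cheat or cheated
            honest_payoff = reward - effort_cost
            cheat_payoff = reward - (fine if audited else 0.0)
            if cheated:
                # Reinforce cheating only if it actually beat honest work this round.
                step = lr if cheat_payoff > honest_payoff else -lr
                cheat_prob[i] = min(1.0, max(0.0, cheat_prob[i] + step))
        # Auditing is reinforced when it catches defectors, discouraged by its cost.
        audit_prob += lr if (audited and any_cheat) else -lr * audit_cost
        audit_prob = min(1.0, max(0.0, audit_prob))
    return audit_prob, cheat_prob

final_audit, final_cheat = simulate()
print(round(final_audit, 2), [round(p, 2) for p in final_cheat])
```

Varying the fine and the audit cost in such a toy gives a feel for why, under mild conditions, the master can drive cheating propensities down, and why convergence observed in simulation can be faster than worst-case bounds.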
A Systematic Review of Re-Identification Attacks on Health Data
Privacy legislation in most jurisdictions allows the disclosure of health data for secondary purposes without patient consent if it is de-identified. Some recent articles in the medical, legal, and computer science literature have argued that de-identification methods do not provide sufficient protection because they are easy to reverse. Should this be the case, it would have significant implications for how health information is disclosed, including: (a) potentially limiting its availability for secondary purposes such as research, and (b) resulting in more identifiable health information being disclosed. Our objectives in this systematic review were to: (a) characterize known re-identification attacks on health data and contrast them with re-identification attacks on other kinds of data, (b) compute the overall proportion of records that have been correctly re-identified in these attacks, and (c) assess whether these attacks demonstrate weaknesses in current de-identification methods. Searches were conducted in IEEE Xplore, ACM Digital Library, and PubMed. After screening, fourteen eligible articles representing distinct attacks were identified. On average, approximately a quarter of the records were re-identified across all studies (0.26 with 95% CI 0.046-0.478), and the proportion was 0.34 for attacks on health data (95% CI 0-0.744). There was considerable uncertainty around these proportions, as evidenced by the wide confidence intervals, and the mean proportion of records re-identified was sensitive to unpublished studies. Two of the fourteen attacks were performed with data that had been de-identified using existing standards. Only one of these attacks was on health data, and it resulted in a success rate of 0.00013. The current evidence shows a high re-identification rate but is dominated by small-scale studies on data that was not de-identified according to existing standards. This evidence is insufficient to draw conclusions about the efficacy of de-identification methods.
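The pooled figures above are mean proportions with 95% confidence intervals; as a minimal sketch of that kind of computation (a normal-approximation CI on a mean, using made-up per-study proportions rather than the review's data):

```python
import math
import statistics

def mean_with_ci(proportions, z=1.96):
    """Mean proportion across studies with a normal-approximation 95% CI."""
    m = statistics.mean(proportions)
    se = statistics.stdev(proportions) / math.sqrt(len(proportions))
    return m, max(0.0, m - z * se), min(1.0, m + z * se)

# Hypothetical per-study re-identification proportions, for illustration only.
print(mean_with_ci([0.02, 0.10, 0.35, 0.60, 0.25]))
```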
How Global are Global Brands? An Empirical Brand Equity Analysis
The term 'global brand' has become widely used by the media and by consumers. BusinessWeek publishes annually its widely known ranking of the 'Best Global Brands' (with Coca-Cola as number 1 in past years), and consumers on summer vacations purchase brands such as Heineken or Marlboro that they are familiar with from their home country. Although media and consumers call these brands 'global' and centralized marketing departments manage these brands globally, are these 'global brands' really global? Are they really perceived everywhere in the same way by customers? Can we talk about truly global brand equity? And if there were brand image differences between countries, which factors would cause them? The authors conducted empirical research during May and June 2009 with similarly aged university students (bachelor students at business schools) in Germany (n=426) and Mexico (n=296). The goal was to identify whether brand awareness rates differ between Germans and Mexicans, whether the brand image of the Apple iPod is perceived in the same way in Germany and in Mexico, and what influencing factors might have an impact on any brand image discrepancy between the countries. The results show that brand recall rates differ between the two countries (with higher rates in Mexico) and that brand image attributes vary significantly (28 out of 34 brand image attributes are significantly different between Germany and Mexico), with Mexico showing higher levels of favorable brand image attributes. Key influencing factors on the different brand image perceptions are perceived quality, satisfaction, and the influence of reference groups (such as friends and family). The results suggest that so-called 'global brands' are not perceived the same way in Germany and Mexico. As a consequence, brand management using standardized marketing instruments for its presumably 'global brands' might be better off with a more differentiated approach that takes into account the specific local brand image.