Prochlo: Strong Privacy for Analytics in the Crowd
The large-scale monitoring of computer users' software activities has become
commonplace, e.g., for application telemetry, error reporting, or demographic
profiling. This paper describes a principled systems architecture---Encode,
Shuffle, Analyze (ESA)---for performing such monitoring with high utility while
also protecting user privacy. The ESA design, and its Prochlo implementation,
are informed by our practical experiences with an existing, large deployment of
privacy-preserving software monitoring.
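The Encode-Shuffle-Analyze split can be illustrated with a minimal sketch, assuming a trivially simple encoder and an in-memory shuffler. All names here are illustrative, not the actual Prochlo implementation, which additionally encrypts payloads for the analyzer and runs the shuffler inside trusted hardware:

```python
import random
from collections import Counter

# Toy Encode-Shuffle-Analyze (ESA) pipeline (illustrative only).

def encode(user_id, value):
    # Encoder drops direct identifiers before anything leaves the client;
    # a real encoder would also encrypt the payload for the analyzer.
    return {"value": value}

def shuffle_reports(reports, rng=random):
    # Shuffler breaks linkability by batching reports and randomly
    # permuting them, so the analyzer cannot tie a report to its sender.
    batch = list(reports)
    rng.shuffle(batch)
    return batch

def analyze(reports):
    # Analyzer sees only the anonymized, reordered batch.
    return Counter(r["value"] for r in reports)

reports = [encode(uid, v) for uid, v in
           [("alice", "crash"), ("bob", "ok"), ("carol", "crash")]]
stats = analyze(shuffle_reports(reports))
print(stats["crash"])  # 2
```

The point of the middle stage is that aggregate utility (the crash count) survives while the association between user and report does not.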
Time-Space Tradeoffs and Short Collisions in Merkle-Damgård Hash Functions
We study collision-finding against Merkle-Damgård hashing in the random-oracle model by adversaries with an arbitrary S-bit auxiliary advice input about the random oracle and T queries. Recent work showed that such adversaries can find collisions (with respect to a random IV) with advantage Ω(ST²/2ⁿ), where n is the output length, beating the birthday bound by a factor of S. These attacks were shown to be optimal.
We observe that the collisions produced are very long, on the order of T blocks, which would limit their practical relevance. We prove several results related to improving these attacks to find short collisions. We first exhibit a simple attack for finding B-block-long collisions achieving advantage Ω(STB/2ⁿ). We then study whether this attack is optimal. We show that the prior technique based on the bit-fixing model (used for the ST²/2ⁿ bound) provably cannot reach this bound, and towards a general result we prove there are qualitative jumps in the optimal attacks for finding length-1, length-2, and unbounded-length collisions. Namely, the optimal attacks achieve (up to logarithmic factors) advantage on the order of (S+T)/2ⁿ, ST/2ⁿ, and ST²/2ⁿ, respectively. We also give an upper bound on the advantage of a restricted class of short-collision-finding attacks via a new analysis of the growth of trees in random functional graphs that may be of independent interest.
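For context, Merkle-Damgård hashing iterates a compression function over the message blocks, so a collision's "length" is the number of blocks in the colliding messages. A toy sketch of the iteration, where the compression function is an arbitrary arithmetic stand-in, not a cryptographic one:

```python
# Toy Merkle-Damgård iteration over a compression function h.
# h here is an illustrative stand-in with a 16-bit state, chosen so the
# structure is visible; it has no cryptographic strength.

def h(chaining, block):
    # stand-in compression function: new state from (state, message block)
    return (chaining * 31 + block) % (2 ** 16)

def md_hash(iv, blocks):
    state = iv
    for b in blocks:          # one call to h per message block
        state = h(state, b)
    return state

iv = 7
# A collision is two distinct block sequences hashing to the same value
# from the same IV. In this weak toy h, a 1-block collision is immediate:
assert md_hash(iv, [0]) == md_hash(iv, [65536])
print(md_hash(iv, [1, 2, 3]))
```

A B-block collision in the paper's sense is exactly two distinct sequences of at most B blocks that drive `md_hash` to the same output.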
Robust, low-cost, auditable random number generation for embedded system security
This paper presents an architecture for a discrete, high-entropy hardware random number generator. Because it is constructed out of simple hardware components, its operation is transparent and auditable. Using avalanche noise, a nondeterministic physical phenomenon, the circuit is inherently probabilistic and resists adversarial control. Furthermore, because it compares the outputs from two matched noise sources, it rejects environmental disturbances like power supply ripple. The resulting hardware produces more than 0.98 bits of entropy per sample, is inexpensive, has a small footprint, and can be disabled to conserve power when not in use.
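The "bits of entropy per sample" figure is the Shannon entropy of the sampler's output distribution. A short sketch of how one might estimate it empirically from a captured bit stream (an illustration of the metric, not the paper's analysis, which must also account for inter-sample correlations):

```python
import math
from collections import Counter

def shannon_entropy_per_sample(bits):
    # Empirical Shannon entropy H = -sum p * log2(p) over observed symbols.
    counts = Counter(bits)
    total = len(bits)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A perfectly unbiased bit stream gives 1 bit/sample; a slightly biased
# sampler (as a real noise source might be) gives a little less.
balanced = [0, 1] * 500
biased = [0] * 520 + [1] * 480
print(round(shannon_entropy_per_sample(balanced), 3))  # 1.0
print(round(shannon_entropy_per_sample(biased), 3))
```

A figure like 0.98 bits/sample thus corresponds to a source whose per-sample bias and structure cost only about 2% of the ideal entropy.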
Secure Single-Server Aggregation with (Poly)Logarithmic Overhead
Secure aggregation is a cryptographic primitive that enables a server to learn the sum of the vector inputs of many clients. Bonawitz et al. (CCS 2017)
presented a construction that incurs computation and communication for each client that are linear in the number of parties. While this functionality
enables a broad range of privacy-preserving computational tasks, scaling concerns limit its scope of use.
We present the first constructions for secure aggregation that achieve polylogarithmic communication and computation per client.
Our constructions provide security in the semi-honest and the semi-malicious setting where the adversary controls the server and a γ-fraction of the clients, and correctness with up to a δ-fraction of dropouts among the clients. Our constructions show how to replace the
complete communication graph of Bonawitz et al., which entails the linear overheads, with a k-regular graph of logarithmic degree while maintaining the security guarantees.
Beyond improving the known asymptotics for secure aggregation, our constructions also achieve very efficient concrete parameters. The semi-honest secure aggregation can handle a billion clients at the per-client cost of the protocol of Bonawitz et al. for a thousand clients. In the semi-malicious setting, each client needs to communicate with only a logarithmic number of the other clients to have a guarantee that its input has been added together with the inputs of sufficiently many other clients, while withstanding a constant fraction of corrupt clients and dropouts.
We also show an application of secure aggregation to the task of secure shuffling, which enables the first cryptographically secure instantiation of the shuffle model of differential privacy.
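The complete-graph baseline of Bonawitz et al. rests on pairwise masking: every pair of clients shares a random mask that one adds and the other subtracts, so all masks cancel in the sum. A toy semi-honest, no-dropout sketch of that cancellation (names illustrative; real protocols derive the masks from pairwise key agreement rather than sharing them directly, and this paper's contribution is running the same idea over a logarithmic-degree graph):

```python
import random

# Toy pairwise-masking secure aggregation (semi-honest, no dropouts).
# Each pair (i, j) with i < j shares a random mask m; client i adds it,
# client j subtracts it, so every mask cancels in the server's sum.

MOD = 2 ** 32

def masked_inputs(inputs, rng):
    n = len(inputs)
    masked = list(inputs)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(MOD)
            masked[i] = (masked[i] + m) % MOD
            masked[j] = (masked[j] - m) % MOD
    return masked

rng = random.Random(0)
inputs = [3, 5, 11, 2]
server_view = masked_inputs(inputs, rng)   # individual values look random
total = sum(server_view) % MOD
print(total)  # 21, the true sum, with all masks cancelled
```

The quadratic pair loop is exactly the complete communication graph whose linear per-client overhead the paper's sparse-graph constructions avoid.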
Ciphertext Expansion in Limited-Leakage Order-Preserving Encryption: A Tight Computational Lower Bound
Order-preserving encryption emerged as a key ingredient underlying the security of practical database management systems. Boldyreva et al. (EUROCRYPT '09) initiated the study of its security by introducing two natural notions of security. They proved that their first notion, a "best-possible" relaxation of semantic security allowing ciphertexts to reveal the ordering of their corresponding plaintexts, is not realizable. Later on, Boldyreva et al. (CRYPTO '11) proved that any scheme satisfying their second notion, indistinguishability from a random order-preserving function, leaks about half of the bits of a random plaintext.
This unsettling state of affairs was recently changed by Chenette et al. (FSE '16), who relaxed the above "best-possible" notion and constructed a scheme satisfying it based on any pseudorandom function. In addition to revealing the ordering of any two encrypted plaintexts, ciphertexts in their scheme reveal only the position of the most significant bit on which the plaintexts differ. A significant drawback of their scheme, however, is its substantial ciphertext expansion: Encrypting plaintexts of length n bits results in ciphertexts of length n·λ bits, where λ determines the level of security (e.g., λ = 80 in practice).
In this work we prove a lower bound on the ciphertext expansion of any order-preserving encryption scheme satisfying the "limited-leakage" notion of Chenette et al. with respect to non-uniform polynomial-time adversaries, matching the ciphertext expansion of their scheme up to lower-order terms. This improves a recent result of Cash and Zhang (ePrint '17), who proved such a lower bound for schemes satisfying this notion with respect to computationally unbounded adversaries (capturing, for example, schemes whose security can be proved in the random-oracle model without relying on cryptographic assumptions). Our lower bound applies, in particular, to schemes whose security is proved in the standard model.
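The "limited-leakage" profile can be made concrete: comparing two encrypted plaintexts reveals their order plus the index of the most significant bit on which they differ (the msdb). A toy function computing exactly that leakage (this models the leakage only, not the encryption scheme; in the order-preserving construction, roughly λ ciphertext bits are devoted to each of the n plaintext bit positions, which is where the n·λ expansion arises):

```python
# Leakage profile of a Chenette et al.-style limited-leakage scheme on
# n-bit plaintexts: order of x and y, plus the index of the most
# significant differing bit (msdb). Illustrative model, not a cipher.

def leakage(x, y, n):
    if x == y:
        return ("equal", None)
    for i in range(n - 1, -1, -1):        # scan from the most significant bit
        bx, by = (x >> i) & 1, (y >> i) & 1
        if bx != by:
            order = "lt" if bx < by else "gt"
            return (order, i)             # order + msdb position leaks

print(leakage(0b1010, 0b1001, 4))  # ('gt', 1): first differ at bit index 1
```

The lower bound in this paper says that any scheme leaking no more than this must pay essentially the same per-bit ciphertext expansion as the known construction.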
Lower Bounds on the Time/Memory Tradeoff of Function Inversion
We study time/memory tradeoffs of function inversion: an algorithm, i.e., an inverter, equipped with an s-bit advice string on a randomly chosen function f and using q oracle queries to f, tries to invert a randomly chosen output y of f, i.e., to find x such that f(x) = y. Much progress has been made regarding adaptive function inversion - the inverter is allowed to make adaptive oracle queries. Hellman [IEEE Transactions on Information Theory '80] presented an adaptive inverter that inverts with high probability a random f. Fiat and Naor [SICOMP '00] proved that for any s, q with s³q = n³ (ignoring low-order terms), an s-advice, q-query variant of Hellman's algorithm inverts a constant fraction of the image points of any function. Yao [STOC '90] proved a lower bound of sq ≥ n for this problem. Closing the gap between the above lower and upper bounds is a long-standing open question.
Very little is known for the non-adaptive variant of the question - the inverter chooses its queries in advance. The only known upper bounds, i.e., inverters, are the trivial ones (with s + q = n), and the only lower bound is the above bound of Yao. In a recent work, Corrigan-Gibbs and Kogan [TCC '19] partially justified the difficulty of finding lower bounds on non-adaptive inverters, showing that a lower bound on the time/memory tradeoff of non-adaptive inverters implies a lower bound on low-depth Boolean circuits - bounds that, for a strong enough choice of parameters, are notoriously hard to prove.
We make progress on the above intriguing question, both for the adaptive and the non-adaptive case, proving the following lower bounds on restricted families of inverters:
- Linear-advice (adaptive inverter): If the advice string is a linear function of f (e.g., A × f, for some matrix A, viewing f as a vector in [n]ⁿ), then s + q ∈ Ω(n). The bound generalizes to the case where the advice string of f₁ + f₂, i.e., the coordinate-wise addition of the truth tables of f₁ and f₂, can be computed from the description of f₁ and f₂ by a low-communication protocol.
- Affine non-adaptive decoders: If the non-adaptive inverter has an affine decoder - it outputs a linear function, determined by the advice string and the element to invert, of the query answers - then s ∈ Ω(n) (regardless of q).
- Affine non-adaptive decision trees: If the non-adaptive inversion algorithm is a d-depth affine decision tree - it outputs the evaluation of a decision tree whose nodes compute a linear function of the answers to the queries - and q < cn for some universal constant c > 0, then s ∈ Ω(n/d).
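The advice/query paradigm behind these results can be illustrated with a single-table Hellman-style inverter: offline, with unrestricted access to f, build small advice from iteration chains; online, invert y using a bounded number of oracle queries. A toy sketch (one table, no rehashing, parameters chosen for readability rather than any optimized tradeoff):

```python
import random

# Toy Hellman-style inverter. Offline: build "advice" mapping chain
# endpoints to chain starts for a random f on [n]. Online: invert y by
# walking forward and replaying any chain whose endpoint we reach.
# Illustrative single table; Hellman inverters are probabilistic and
# may fail on some points.

n = 256
rng = random.Random(1)
table = [rng.randrange(n) for _ in range(n)]   # a random f: [n] -> [n]

def f(x):
    return table[x]

CHAIN_LEN = 16

def preprocess(num_chains, rng):
    advice = {}
    for _ in range(num_chains):
        start = rng.randrange(n)
        x = start
        for _ in range(CHAIN_LEN):
            x = f(x)
        advice[x] = start                      # endpoint -> start
    return advice

def invert(advice, y):
    x = y
    for _ in range(CHAIN_LEN):
        if x in advice:
            z = advice[x]                      # replay the stored chain
            for _ in range(CHAIN_LEN):
                if f(z) == y:
                    return z                   # found a preimage of y
                z = f(z)
        x = f(x)
    return None

advice = preprocess(64, rng)
y = f(123)
x = invert(advice, y)
print(x is None or f(x) == y)  # True
```

The advice dictionary plays the role of the s-bit preprocessed string, and each call to f is one oracle query; the paper's lower bounds constrain how small both can simultaneously be for restricted inverter families.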
Quantum Random Oracle Model with Auxiliary Input
The random oracle model (ROM) is an idealized model where hash functions are
modeled as random functions that are only accessible as oracles. Although the
ROM has been used for proving many cryptographic schemes, it has (at least)
two problems. First, the ROM does not capture quantum adversaries. Second, it
does not capture non-uniform adversaries that perform preprocessings. To deal
with these problems, Boneh et al. (Asiacrypt '11) proposed using the quantum
ROM (QROM) to argue post-quantum security, and Unruh (CRYPTO '07) proposed the
ROM with auxiliary input (ROM-AI) to argue security against preprocessing
attacks. However, to the best of our knowledge, no work has dealt with the
above two problems simultaneously.
In this paper, we consider a model that we call the QROM with (classical)
auxiliary input (QROM-AI) that deals with the above two problems
simultaneously and study security of cryptographic primitives in the model.
That is, we give security bounds for one-way functions, pseudorandom
generators, (post-quantum) pseudorandom functions, and (post-quantum) message
authentication codes in the QROM-AI.
We also study security bounds in the presence of quantum auxiliary inputs.
Specifically, we show a security bound for one-wayness of random permutations
(instead of random functions) in the presence of quantum auxiliary inputs.
This resolves an open problem posed by Nayebi et al. (QIC '15). In the context of
complexity theory, this implies NP ∩ coNP ⊄ BQP/qpoly relative to a random permutation oracle, which also
answers an open problem posed by Aaronson (ToC '05).
The Distinction Between Fixed and Random Generators in Group-Based Assumptions
There is surprisingly little consensus on the precise role of the generator g in group-based assumptions such as DDH. Some works consider g to be a fixed part of the group description, while others take it to be random. We study this subtle distinction from a number of angles.
- In the generic group model, we demonstrate the plausibility of groups in which random-generator DDH (resp. CDH) is hard but fixed-generator DDH (resp. CDH) is easy. We observe that such groups have interesting cryptographic applications.
- We find that seemingly tight generic lower bounds for the Discrete-Log and CDH problems with preprocessing (Corrigan-Gibbs and Kogan, Eurocrypt 2018) are not tight in the sub-constant success probability regime if the generator is random. We resolve this by proving tight lower bounds for the random generator variants; our results formalize the intuition that using a random generator will reduce the effectiveness of preprocessing attacks.
- We observe that DDH-like assumptions in which exponents are drawn from low-entropy distributions are particularly sensitive to the fixed- vs. random-generator distinction. Most notably, we discover that the Strong Power DDH assumption of Komargodski and Yogev (Eurocrypt 2018) used for non-malleable point obfuscation is in fact false precisely because it requires a fixed generator. In response, we formulate an alternative fixed-generator assumption that suffices for a new construction of non-malleable point obfuscation, and we prove the assumption holds in the generic group model. We also give a generic group proof for the security of fixed-generator, low-entropy DDH (Canetti, Crypto 1997).
Effects of fluoxetine on functional outcomes after acute stroke (FOCUS): a pragmatic, double-blind, randomised, controlled trial
Background
Results of small trials indicate that fluoxetine might improve functional outcomes after stroke. The FOCUS trial aimed to provide a precise estimate of these effects.
Methods
FOCUS was a pragmatic, multicentre, parallel group, double-blind, randomised, placebo-controlled trial done at 103 hospitals in the UK. Patients were eligible if they were aged 18 years or older, had a clinical stroke diagnosis, were enrolled and randomly assigned between 2 days and 15 days after onset, and had focal neurological deficits. Patients were randomly allocated fluoxetine 20 mg or matching placebo orally once daily for 6 months via a web-based system by use of a minimisation algorithm. The primary outcome was functional status, measured with the modified Rankin Scale (mRS), at 6 months. Patients, carers, health-care staff, and the trial team were masked to treatment allocation. Functional status was assessed at 6 months and 12 months after randomisation. Patients were analysed according to their treatment allocation. This trial is registered with the ISRCTN registry, number ISRCTN83290762.
Findings
Between Sept 10, 2012, and March 31, 2017, 3127 patients were recruited. 1564 patients were allocated fluoxetine and 1563 allocated placebo. mRS data at 6 months were available for 1553 (99·3%) patients in each treatment group. The distribution across mRS categories at 6 months was similar in the fluoxetine and placebo groups (common odds ratio adjusted for minimisation variables 0·951 [95% CI 0·839–1·079]; p=0·439). Patients allocated fluoxetine were less likely than those allocated placebo to develop new depression by 6 months (210 [13·43%] patients vs 269 [17·21%]; difference 3·78% [95% CI 1·26–6·30]; p=0·0033), but they had more bone fractures (45 [2·88%] vs 23 [1·47%]; difference 1·41% [95% CI 0·38–2·43]; p=0·0070). There were no significant differences in any other event at 6 or 12 months.
Interpretation
Fluoxetine 20 mg given daily for 6 months after acute stroke does not seem to improve functional outcomes. Although the treatment reduced the occurrence of depression, it increased the frequency of bone fractures. These results do not support the routine use of fluoxetine either for the prevention of post-stroke depression or to promote recovery of function.
Funding
UK Stroke Association and NIHR Health Technology Assessment Programme