Secret-Sharing for NP
A computational secret-sharing scheme is a method that enables a dealer who
has a secret to distribute this secret among a set of parties such that a
"qualified" subset of parties can efficiently reconstruct the secret while any
"unqualified" subset of parties cannot efficiently learn anything about the
secret. The collection of "qualified" subsets is defined by a Boolean function.
It has been a major open problem to understand which (monotone) functions can
be realized by a computational secret-sharing scheme. Yao suggested a method
for secret-sharing for any function that has a polynomial-size monotone circuit
(a class which is strictly smaller than the class of monotone functions in P).
Around 1990 Rudich raised the possibility of obtaining secret-sharing for all
monotone functions in NP: In order to reconstruct the secret a set of parties
must be "qualified" and provide a witness attesting to this fact.
Recently, Garg et al. (STOC 2013) put forward the concept of witness
encryption, where the goal is to encrypt a message relative to a statement "x
in L" for a language L in NP such that anyone holding a witness to the
statement can decrypt the message, however, if x is not in L, then it is
computationally hard to decrypt. Garg et al. showed how to construct several
cryptographic primitives from witness encryption and gave a candidate
construction.
One can show that computational secret-sharing implies witness encryption for
the same language. Our main result is the converse: we give a construction of a
computational secret-sharing scheme for any monotone function in NP assuming
witness encryption for NP and one-way functions. As a consequence we get a
completeness theorem for secret-sharing: computational secret-sharing scheme
for any single monotone NP-complete function implies a computational
secret-sharing scheme for every monotone function in NP.
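For context, the classical formula-based approach that Yao's scheme generalizes can be sketched in a few lines. This is a toy, information-theoretic illustration (Benaloh-Leichter-style sharing along a monotone formula), not the paper's witness-encryption-based construction; the formula encoding and helper names are ours.

```python
import secrets

# Toy secret sharing along a monotone formula: an AND-gate splits the
# secret additively (XOR), an OR-gate hands the same value to both
# children, and a leaf assigns the value to that party.
def share(secret: int, formula):
    shares = {}  # party index -> list of 32-bit shares

    def go(s, f):
        if isinstance(f, int):            # leaf: give the value to party f
            shares.setdefault(f, []).append(s)
        elif f[0] == "or":                # either child alone suffices
            go(s, f[1])
            go(s, f[2])
        elif f[0] == "and":               # both children are needed
            r = secrets.randbits(32)
            go(s ^ r, f[1])
            go(r, f[2])

    go(secret, formula)
    return shares

# Access structure: party 1 AND (party 2 OR party 3)
sh = share(0xDEADBEEF, ("and", 1, ("or", 2, 3)))
assert sh[1][0] ^ sh[2][0] == 0xDEADBEEF   # {1,2} is qualified
assert sh[1][0] ^ sh[3][0] == 0xDEADBEEF   # {1,3} is qualified
# Parties {2,3} together hold only the random value r: unqualified.
```

Share size here grows with the formula, which is exactly why going beyond polynomial-size monotone formulas and circuits, as the paper does for monotone NP, is nontrivial.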
Modulus Computational Entropy
The so-called {\em leakage-chain rule} is a very important tool used in many
security proofs. It gives an upper bound on the entropy loss of a random
variable in the case where the adversary, having already learned some random
variables correlated with it, obtains some further information about it.
Analogously to the information-theoretic case, one might expect that also for
the \emph{computational} variants of entropy the loss depends only on the
actual leakage. Surprisingly, Krenn et al.\ have shown recently that for the
most commonly used definitions of computational entropy this holds only if
the computational quality of the entropy deteriorates exponentially in the
length of the leakage observed so far. This means that the current standard
definitions of computational entropy do not allow one to fully capture leakage
that occurred "in the past", which severely limits the applicability of this
notion.
As a remedy for this problem we propose a slightly stronger definition of the
computational entropy, which we call the \emph{modulus computational entropy},
and use it as a technical tool that allows us to prove a desired chain rule
that depends only on the actual leakage and not on its history. Moreover, we
show that the modulus computational entropy unifies other, sometimes seemingly
unrelated, notions already studied in the literature in the context of
information leakage and chain rules. Our results indicate that the modulus
entropy is, up to now, the weakest restriction that guarantees that the chain
rule for the computational entropy works. As an example of application we
demonstrate a few interesting cases where our restricted definition is
fulfilled and the chain rule holds.
Comment: Accepted at ICITS 2013
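The information-theoretic baseline can be checked numerically: conditioning on t bits of leakage costs at most t bits of average min-entropy. This only illustrates the information-theoretic chain rule; the paper's point is that the computational analogue fails under standard definitions. The distributions below are our own toy example.

```python
import math
from collections import defaultdict

def min_entropy(pX):
    # H_inf(X) = -log2 max_x P(X = x)
    return -math.log2(max(pX.values()))

def avg_cond_min_entropy(pXZ):
    # Average conditional min-entropy:
    # Htilde_inf(X|Z) = -log2( sum_z max_x P(X = x, Z = z) )
    best = defaultdict(float)
    for (x, z), p in pXZ.items():
        best[z] = max(best[z], p)
    return -math.log2(sum(best.values()))

# X uniform on 3 bits; the leakage Z is the low bit of X (1 bit).
pX = {x: 1 / 8 for x in range(8)}
pXZ = {(x, x % 2): 1 / 8 for x in range(8)}

hx = min_entropy(pX)             # 3.0 bits
hxz = avg_cond_min_entropy(pXZ)  # 2.0 bits
assert hxz >= hx - 1             # one bit of leakage costs at most one bit
```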
Modifications of Gait as Predictors of Natural Osteoarthritis Progression in STR/Ort Mice
OBJECTIVE: Osteoarthritis (OA) is a common chronic disease for which disease-modifying therapies are not currently available. Studies to seek new targets for slowing the progress of OA rely on mouse models, but these do not allow for longitudinal monitoring of disease development. This study was undertaken to determine whether gait can be used to measure disease severity in the STR/Ort mouse model of spontaneous OA and whether gait changes are related to OA joint pain. METHODS: Gait was monitored using a treadmill-based video system. Correlations between OA severity and gait at 3 treadmill speeds were assessed in STR/Ort mice. Gait and pain behaviors of STR/Ort mice and control CBA mice were analyzed longitudinally, with monthly assessments. RESULTS: The best speed to identify paw area changes associated with OA severity in STR/Ort mice was found to be 17 cm/second. Paw area was modified with age in CBA and STR/Ort mice, but this began earlier in STR/Ort mice and correlated with the onset of OA at 20 weeks of age. In addition, task noncompliance appeared at 20 weeks. Surprisingly, STR/Ort mice did not show any signs of pain with OA development, even when treated with the opioid antagonist naloxone, but did exhibit normal pain behaviors in response to complete Freund's adjuvant-induced arthritis. CONCLUSION: The present results identify an animal model in which OA severity and OA pain can be studied in isolation from one another. The findings suggest that paw area and treadmill noncompliance may be useful tools to longitudinally monitor nonpainful OA development in STR/Ort mice. This will help in providing a noninvasive means of assessing new therapies to slow the progression of OA.
Private Outsourcing of Polynomial Evaluation and Matrix Multiplication using Multilinear Maps
{\em Verifiable computation} (VC) allows a computationally weak client to
outsource the evaluation of a function on many inputs to a powerful but
untrusted server. The client invests a large amount of off-line computation and
gives an encoding of its function to the server. The server returns both an
evaluation of the function on the client's input and a proof such that the
client can verify the evaluation using substantially less effort than doing the
evaluation on its own. We consider how to privately outsource computations
using {\em privacy preserving} VC schemes whose executions reveal no
information on the client's input or function to the server. We construct VC
schemes with {\em input privacy} for univariate polynomial evaluation and
matrix multiplication and then extend them such that the {\em function privacy}
is also achieved. Our main tool is the recently developed {\em multilinear
maps}. The
proposed VC schemes can be used in outsourcing {private information retrieval
(PIR)}.
Comment: 23 pages. A preliminary version appears in the 12th International
Conference on Cryptology and Network Security (CANS 2013).
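The algebraic identity underlying many verifiable polynomial-evaluation schemes is that y = p(x) exactly when (X - x) divides p(X) - y, so the server can return the quotient q(X) and the client checks this identity at a secret random point. The sketch below shows only that arithmetic over a prime field; in a real scheme (including multilinear-map-based ones) q would be checked inside a commitment so the client's online work stays small. All names and parameters here are our own illustration, not the paper's scheme.

```python
import random

P = (1 << 61) - 1  # a Mersenne prime modulus

def poly_eval(coeffs, x):
    # Horner's rule; coeffs are low-to-high degree.
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def quotient_and_eval(coeffs, x):
    # Synthetic division: returns q with p(X) - p(x) = q(X) * (X - x),
    # together with y = p(x).
    q, acc = [], 0
    for c in reversed(coeffs):
        q.append(acc)
        acc = (acc * x + c) % P
    return list(reversed(q[1:])), acc

# Client precomputes p at a secret random point r (off-line phase).
coeffs = [3, 0, 2, 5]  # p(X) = 3 + 2X^2 + 5X^3
r = random.randrange(P)
p_r = poly_eval(coeffs, r)

# Server evaluates at the client's input x and returns (y, q).
x = 7
q, y = quotient_and_eval(coeffs, x)

# Client accepts iff q(r) * (r - x) + y == p(r) (mod P).
assert (poly_eval(q, r) * ((r - x) % P) + y) % P == p_r
```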
Statistically-secure ORAM with Õ(log² n) Overhead
We demonstrate a simple, statistically secure ORAM with computational
overhead Õ(log² n); previous ORAM protocols achieve only
computational security (under computational assumptions) or require
Ω(log³ n) overhead. An additional benefit of our ORAM is its
conceptual simplicity, which makes it easy to implement in both software and
(commercially available) hardware.
Our construction is based on recent ORAM constructions due to Shi, Chan,
Stefanov, and Li (Asiacrypt 2011) and Stefanov and Shi (ArXiv 2012), but with
some crucial modifications that simplify the ORAM and enable
our analysis. A central component in our analysis is reducing the analysis of
our algorithm to a "supermarket" problem; of independent interest (and of
importance to our analysis), we provide an upper bound on the rate of "upset"
customers in the "supermarket" problem.
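The "supermarket" process mentioned above is the power-of-d-choices model: each arriving customer samples d queues and joins the shortest. A short simulation (our own toy parameters and "upset" threshold, not the paper's analysis) shows how quickly long queues, and hence upset customers, vanish when d = 2:

```python
import random

def supermarket(n=1000, m=1000, d=2, threshold=3, seed=1):
    # m customers arrive at n queues; each samples d queues uniformly and
    # joins the least loaded one. A customer is "upset" if that queue
    # already holds at least `threshold` customers.
    rng = random.Random(seed)
    load = [0] * n
    upset = 0
    for _ in range(m):
        choices = [rng.randrange(n) for _ in range(d)]
        i = min(choices, key=lambda j: load[j])
        if load[i] >= threshold:
            upset += 1
        load[i] += 1
    return upset

# Two choices drive the upset rate far below the single-choice baseline.
assert supermarket(d=2, seed=7) < supermarket(d=1, seed=7)
```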
Quantum Lightning Never Strikes the Same State Twice
Public key quantum money can be seen as a version of the quantum no-cloning
theorem that holds even when the quantum states can be verified by the
adversary. In this work, we investigate quantum lightning, a formalization of
"collision-free quantum money" defined by Lutomirski et al. [ICS'10], where
no-cloning holds even when the adversary herself generates the quantum state to
be cloned. We then study quantum money and quantum lightning, showing the
following results:
- We demonstrate the usefulness of quantum lightning by showing several
potential applications, ranging from generating random strings with a proof
of entropy to a completely decentralized cryptocurrency without a blockchain,
where transactions are instant and local.
- We give win-win results for quantum money/lightning, showing that either
signatures/hash functions/commitment schemes meet very strong recently proposed
notions of security, or they yield quantum money or lightning.
- We construct quantum lightning under the assumed multi-collision resistance
of random degree-2 systems of polynomials.
- We show that instantiating the quantum money scheme of Aaronson and
Christiano [STOC'12] with indistinguishability obfuscation that is secure
against quantum computers yields a secure quantum money scheme.
Evidence for a long-lived superheavy nucleus with atomic mass number A=292 and atomic number Z=~122 in natural Th
Evidence for the existence of a superheavy nucleus with atomic mass number
A=292 and abundance (1-10)x10^(-12) relative to 232Th has been found in a study
of natural Th using inductively coupled plasma-sector field mass spectrometry.
The measured mass matches the predictions [1,2] for the mass of an isotope with
atomic number Z=122 or a nearby element. Its estimated half-life of t1/2 >=
10^8 y suggests that a long-lived isomeric state exists in this isotope. The
possibility that it might belong to a new class of long-lived high-spin super-
and hyperdeformed isomeric states is discussed [3-6].
Comment: 14 pages, 5 figures
Predicate Encryption for Circuits from LWE
In predicate encryption, a ciphertext is associated with descriptive attribute values x in addition to a plaintext μ, and a secret key is associated with a predicate f. Decryption returns plaintext μ if and only if f(x)=1. Moreover, security of predicate encryption guarantees that an adversary learns nothing about the attribute x or the plaintext μ from a ciphertext, given arbitrarily many secret keys that are not individually authorized to decrypt the ciphertext.
We construct a leveled predicate encryption scheme for all circuits, assuming the hardness of the subexponential learning with errors (LWE) problem. That is, for any polynomial function d=d(λ), we construct a predicate encryption scheme for the class of all circuits with depth bounded by d(λ), where λ is the security parameter.
Funding: Microsoft Corporation (PhD Fellowship); Northrop Grumman Cybersecurity Research Consortium; United States Defense Advanced Research Projects Agency (Grant FA8750-11-2-0225); National Science Foundation (U.S.) (Awards CNS-1350619 and CNS-1413920); Alfred P. Sloan Foundation (Fellowship); Microsoft (Faculty Fellowship).
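The decryption contract can be made concrete with a functionality-only mock: it captures the API (decryption succeeds iff f(x) = 1) and none of the security, since the toy ciphertext hides nothing. Class and method names are ours, not the paper's.

```python
# Functionality-only mock of predicate encryption; NOT secure: a real
# scheme hides both x and mu, whereas here they sit in the ciphertext
# in the clear.
class ToyPredicateEncryption:
    def encrypt(self, x, mu):
        return {"attr": x, "msg": mu}

    def keygen(self, f):
        return f  # a real scheme embeds f in a secret key

    def decrypt(self, sk_f, ct):
        # Decryption returns the plaintext iff the predicate accepts.
        return ct["msg"] if sk_f(ct["attr"]) else None

pe = ToyPredicateEncryption()
ct = pe.encrypt(x=(1, 0, 1), mu="payload")
assert pe.decrypt(pe.keygen(lambda x: x[0] == 1), ct) == "payload"
assert pe.decrypt(pe.keygen(lambda x: x[1] == 1), ct) is None
```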
Asymptotically Tight Bounds for Composing ORAM with PIR
Oblivious RAM (ORAM) is a cryptographic primitive that allows a trusted client to outsource storage to an untrusted server while hiding the client's memory access patterns from the server. The last three decades of research on ORAMs have reduced the bandwidth blowup of ORAM schemes from O(log³ N) to O(log N). However, all schemes that achieve a bandwidth blowup smaller than O(log N) use expensive computations such as homomorphic encryptions. In this paper, we achieve a sub-logarithmic bandwidth blowup of O(log_d N) (where d is a free parameter) without using expensive computation. We do so by using a d-ary tree and a two-server private information retrieval (PIR) protocol based on inexpensive XOR operations at the servers. We also show a lower bound on bandwidth blowup in the modified model involving PIR operations, in terms of the number of blocks stored by the client and the number of blocks on which PIR operations are performed. Our construction matches this lower bound, implying that the lower bound is tight for certain parameter ranges. Finally, we show that C-ORAM (CCS'15) and CHf-ORAM violate the lower bound. Combined with concrete attacks on C-ORAM/CHf-ORAM, we claim that there exist security flaws in these constructions.
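The inexpensive two-server PIR ingredient is easy to exhibit: the client sends a random subset of indices to one server and the same subset with the target index flipped to the other; XORing the two one-block answers recovers the target, while each server individually sees only a uniformly random subset. This is the classical two-server XOR scheme in a toy form, not the paper's full construction.

```python
import secrets

def server_answer(db, subset):
    # Each server XORs together the requested blocks.
    ans = 0
    for j in subset:
        ans ^= db[j]
    return ans

def pir_read(db, i):
    n = len(db)
    s1 = {j for j in range(n) if secrets.randbits(1)}  # random subset
    s2 = s1 ^ {i}  # symmetric difference: flip membership of block i
    # The two answers agree on every block except i, so XOR isolates db[i].
    return server_answer(db, s1) ^ server_answer(db, s2)

db = [secrets.randbits(64) for _ in range(16)]
assert all(pir_read(db, i) == db[i] for i in range(16))
```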
Large FHE Gates from tensored homomorphic accumulator
The main bottleneck of all known Fully Homomorphic Encryption schemes lies in the bootstrapping procedure invented by Gentry (STOC'09). The cost of this procedure can be mitigated either by using Homomorphic SIMD techniques, or by performing larger computation per bootstrapping procedure. In this work, we propose new techniques that allow performing more operations per bootstrapping in FHEW-type schemes (EUROCRYPT'15). While maintaining the quasi-quadratic Õ(n²) complexity of the whole cycle, our new scheme allows the evaluation of gates with Ω(log n) input bits, which constitutes a quasi-linear speed-up. Our scheme is also very well adapted to large threshold gates, natively admitting up to Ω(n) inputs. This could be helpful for homomorphic evaluation of neural networks. Our theoretical contribution is backed by a preliminary prototype implementation, which can perform 6-to-6 bit gates in less than 10s on a single core, as well as threshold gates over 63 input bits even faster.