502 research outputs found
LIPIcs, Volume 251, ITCS 2023, Complete Volume
TreePIR: Sublinear-Time and Polylog-Bandwidth Private Information Retrieval from DDH
In Private Information Retrieval (PIR), a client wishes to retrieve the value of an index from a public database of values without leaking information about the index. In their recent seminal work, Corrigan-Gibbs and Kogan (EUROCRYPT 2020) introduced the first two-server PIR protocol with sublinear amortized server time and sublinear bandwidth. In a follow-up work, Shi et al. (CRYPTO 2021) reduced the bandwidth to polylogarithmic by proposing a construction based on privately puncturable pseudorandom functions, a primitive whose only constructions known to date rely on heavy cryptographic machinery. Partly because of this, their PIR protocol does not achieve concrete efficiency.
In this paper we propose TreePIR, a two-server PIR protocol with sublinear amortized server time and polylogarithmic bandwidth whose security can be based on just the DDH assumption. TreePIR can be partitioned into two phases, both sublinear: the first phase is remarkably simple and only requires pseudorandom generators; the second phase is a single-server PIR protocol on \emph{only} indices, for which we can use the protocol by Döttling et al. (CRYPTO 2019) based on DDH, or, for practical purposes, the most concretely efficient single-server PIR protocol. Not only does TreePIR achieve better asymptotics than previous approaches while resting on weaker cryptographic assumptions, but it also outperforms existing two-server PIR protocols in practice. The crux of our protocol is a new cryptographic primitive that we call weak privately puncturable pseudorandom functions, which we believe can have further applications
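To make the two-server PIR setting concrete, the following is a minimal sketch of the classic information-theoretic two-server construction (linear bandwidth, not TreePIR itself; all function names are illustrative): the client sends each server a random-looking subset of indices, the subsets differ only in the queried index, and XORing the two answers recovers the desired bit.

```python
import secrets

def make_queries(db_size, index):
    """Client: build two subset queries. Each server alone sees a uniformly
    random subset of indices, so neither learns `index`."""
    q0 = {i for i in range(db_size) if secrets.randbits(1)}  # uniform subset
    q1 = q0 ^ {index}  # symmetric difference: flips membership of `index` only
    return q0, q1

def answer(db, q):
    """Server: XOR together the requested database bits."""
    acc = 0
    for i in q:
        acc ^= db[i]
    return acc

def recover(a0, a1):
    """Client: every index outside {index} appears in both or neither query,
    so its contribution cancels; only db[index] survives the XOR."""
    return a0 ^ a1
```

TreePIR and its predecessors improve on this baseline precisely where it is weakest: here each query is as large as the database, whereas the protocols above achieve polylogarithmic bandwidth.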
Randomness Tests for Binary Sequences
Cryptography is vital for securing sensitive information and maintaining privacy in today's digital world. Though sometimes underestimated, randomness plays a key role in cryptography, generating unpredictable keys and other related material.
Hence, high-quality random number generators are a crucial element in building a secure cryptographic system. In dealing with randomness, two key capabilities are essential. First, creating strong random generators, that is, systems able to produce unpredictable and statistically independent numbers. Second, constructing validation systems to verify the quality of the generators.
In this dissertation, we focus on the second capability, specifically analyzing the concept of the hypothesis test, a statistical inference model that is a basic tool for the statistical characterization of random processes. In the hypothesis testing framework, a central idea is the p-value, a numerical measure assigned to each sample generated from the random process under analysis, allowing one to assess the plausibility of a hypothesis, usually referred to as the null hypothesis, about the random process on the basis of the observed data.
P-values are determined by the probability distribution associated with the null hypothesis. In the context of random number generators, this distribution is inherently discrete, but in the literature it is commonly approximated by continuous distributions for ease of handling. However, by analyzing the discrete setting in detail, we show that this approximation can lead to errors. As an example, we thoroughly examine the testing strategy for random number generators proposed by the National Institute of Standards and Technology (NIST) and demonstrate some inaccuracies in the suggested approach. Motivated by this finding, we define a new simple hypothesis test as a use case to propose and validate a methodology for assessing the definitional and implementation correctness of hypothesis tests. Additionally, we present an abstract analysis of the hypothesis test model, which proves valuable in providing a more accurate conceptual framework within the discrete setting.
We believe that the results presented in this dissertation can contribute to a better understanding of how hypothesis tests operate in discrete cases, such as analyzing random number generators. In the demanding field of cryptography, even slight discrepancies between the expected and actual behavior of random generators can, in fact, have significant implications for data security
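The discrete-versus-continuous distinction discussed above can be illustrated with the simplest randomness test, the monobit (frequency) test. The sketch below, under the assumption of i.i.d. fair bits as the null hypothesis, computes both the exact discrete p-value from the Binomial(n, 1/2) distribution and the continuous normal-tail approximation used in practice (as in NIST SP 800-22's frequency test); for short sequences the two can differ noticeably, which is the kind of discrepancy the dissertation analyzes.

```python
import math

def exact_pvalue(bits):
    """Exact (discrete) p-value: probability, under the null hypothesis of
    i.i.d. fair bits, of a ones-count at least as far from n/2 as observed."""
    n, k = len(bits), sum(bits)
    dev = abs(k - n / 2)
    return sum(math.comb(n, j) for j in range(n + 1)
               if abs(j - n / 2) >= dev) / 2 ** n

def approx_pvalue(bits):
    """Continuous approximation via the normal tail (complementary error
    function), as in the NIST frequency test."""
    n = len(bits)
    s = abs(sum(2 * b - 1 for b in bits)) / math.sqrt(n)  # normalized excess
    return math.erfc(s / math.sqrt(2))
```

For example, with 12 ones among 16 bits the exact p-value is 5034/65536 ≈ 0.077 while the normal approximation gives ≈ 0.046, so a decision threshold of 0.05 would reject under one computation and accept under the other.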
Evolving Secret Sharing Made Short
Evolving secret sharing (Komargodski, Naor, and Yogev, TCC'16) generalizes the notion of secret sharing to the setting of evolving access structures, in which the share holders are added to the system in an online manner, and where the dealer knows neither the access structure nor the maximum number of parties in advance. Here, the main difficulty is to distribute shares to the new players without updating the shares of old players; moreover, one would like to minimize the share size as a function of the number of players.
In this paper, we initiate a systematic study of evolving secret sharing in the computational setting, where the maximum number of parties is polynomial in the security parameter, but the dealer still does not know this value, nor does it know the access structure in advance. Moreover, the privacy guarantee only holds against computationally bounded adversaries corrupting an unauthorized subset of the players.
Our main result is that for many interesting, and practically relevant, evolving access structures (including graph access structures, DNF and CNF formula access structures, monotone circuit access structures, and threshold access structures), under standard hardness assumptions, there exist efficient secret sharing schemes with computational privacy and in which the shares are succinct (i.e., much smaller compared to the size of a natural computational representation of the evolving access structure)
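For readers unfamiliar with the baseline being generalized, the following is a sketch of classic (non-evolving) Shamir t-out-of-n secret sharing over a prime field; it is shown only to make the notions of "share" and "share size" concrete, since the evolving schemes studied in the paper must issue shares without knowing n in advance. The field modulus is a toy parameter chosen here for illustration.

```python
import secrets

P = 2**61 - 1  # Mersenne prime; toy field modulus for illustration

def share(secret, t, n):
    """Sample a random degree-(t-1) polynomial with constant term `secret`
    and hand party x the evaluation (x, p(x))."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(P): any t shares determine p(0)."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s
```

Note that `share` needs n (and the access structure, a threshold) up front; removing exactly that requirement, while keeping shares short, is the problem the paper addresses.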
FLUTE: Fast and Secure Lookup Table Evaluations (Full Version)
The concept of using Lookup Tables (LUTs) instead of Boolean circuits is well known and has been widely applied in a variety of applications, including FPGAs, image processing, and database management systems. In cryptography, using such LUTs instead of conventional gates like AND and XOR results in more compact circuits and has been shown to substantially improve online performance when evaluated with secure multi-party computation. Several recent works on secure floating-point computations and privacy-preserving machine learning inference rely heavily on existing LUT techniques. However, they suffer from either large overhead in the setup phase or subpar online performance.
We propose FLUTE, a novel protocol for secure LUT evaluation with good setup and online performance. In a two-party setting, we show that FLUTE matches or even outperforms the online performance of all prior approaches, while being competitive in terms of overall performance with the best prior LUT protocols. In addition, we provide an open-source implementation of FLUTE written in the Rust programming language, and implementations of the Boolean secure two-party computation protocols of ABY2.0 and silent OT. We find that FLUTE outperforms the state of the art by two orders of magnitude in the online phase while retaining similar overall communication
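A common building block behind secure LUT evaluation, sketched below under simplifying assumptions (it is not FLUTE's actual protocol), is to view a lookup into a public table as an inner product with a one-hot selection vector: if two parties hold XOR shares of the one-hot indicator of a secret index, each can locally XOR together the table entries its share selects, and the two local results XOR to the table entry, with neither party learning the index.

```python
import secrets

def share_onehot(n, x):
    """Dealer/preprocessing: XOR-share the one-hot indicator of a secret
    index x between two parties. Each share alone is uniformly random."""
    mask = [secrets.randbits(1) for _ in range(n)]
    onehot = [int(j == x) for j in range(n)]
    return mask, [o ^ m for o, m in zip(onehot, mask)]

def local_eval(table, sh):
    """One party's local step: XOR of the public table bits selected by its
    share. AND distributes over XOR, so the two local outputs XOR to T[x]."""
    acc = 0
    for t, s in zip(table, sh):
        acc ^= t & s
    return acc
```

This only shows the linear, communication-free online step; the hard part, which protocols like FLUTE optimize, is generating the one-hot shares securely from shares of the input bits.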
Composable Definitions of Long-Term Security for Commitment Schemes and their Applications
What happens if a cryptographic assumption is no longer considered secure, and in what way does this affect the security of cryptographic protocols?
In this regard, one may consider updating the security assumption and proving the security of the updated protocol, including the update procedure. But how can the security of the updated protocol and of the update procedure be proven?
Eine Möglichkeit wĂ€re zu beweisen, dass das gegebene Protokoll nachweisbar langfristig UC-sicher ist, ein Sicherheitsbegriff bei dem angenommen wird dass der Angreifer nach ProtokollausfĂŒhrung unbeschrĂ€nkt ist und daher nach ProtokollausfĂŒhrung keine KomplexitĂ€tsannahmen gelten. Zudem wurden Unmöglichkeitsresultate gezeigt, insbesondere fĂŒr Commitmentprotokolle. Daher kann der Begriff der langfristigen UC-Sicherheit etwas zu stark sein, wenn man die Sicherheit gegenĂŒber Angreifern nachweisen möchte, die zwar wĂ€hrend der ProtokollausfĂŒhrung die RechenkapazitĂ€t erhöht, diese aber limitiert bleibt, auch nach der AusfĂŒhrung des Protkolls.
In this thesis, we define a relaxed notion of long-term UC security, which we call F^{post} security.
Furthermore, we show how an F^{post}-secure commitment scheme can be used to update the common reference string (CRS) of another commitment
Immunizing Backdoored PRGs
A backdoored Pseudorandom Generator (PRG) is a PRG which looks pseudorandom to the outside world, but a saboteur can break PRG security by planting a backdoor into a seemingly honest choice of public parameters for the system. Backdoored PRGs became increasingly important due to revelations about NIST's backdoored Dual EC PRG, and later results about its practical exploitability.
Motivated by this, at Eurocrypt'15 Dodis et al. [21] initiated the question of immunizing backdoored PRGs. A k-immunization scheme repeatedly applies a post-processing function to the output of k backdoored PRGs, to render any (unknown) backdoors provably useless. For k = 1, [21] showed that no deterministic immunization is possible, but then constructed a seeded 1-immunizer either in the random oracle model, or under strong non-falsifiable assumptions. As our first result, we show that no seeded 1-immunization scheme can be black-box reduced to any efficiently falsifiable assumption.
This motivates studying k-immunizers for k >= 2, which have an additional advantage of being deterministic (i.e., seedless). Indeed, prior work at CCS'17 [37] and CRYPTO'18 [7] gave supporting evidence that simple k-immunizers might exist, albeit in slightly different settings. Unfortunately, we show that simple standard-model proposals of [37, 7] (including the XOR function [7]) provably do not work in our setting. On the positive side, we confirm the intuition of [37] that a (seedless) random oracle is a provably secure 2-immunizer. On the negative side, no (seedless) 2-immunization scheme can be black-box reduced to any efficiently falsifiable assumption, at least for a large class of natural 2-immunizers which includes all cryptographic hash functions.
In summary, our results show that 2-immunizers occupy a peculiar place in the cryptographic world. While they likely exist, and can be made practical and efficient, it is unlikely one can reduce their security to a clean standard-model assumption
Certified Hardness vs. Randomness for Log-Space
Let L be a language that can be decided in linear space and let ε > 0 be any constant. Let A be the exponential hardness assumption that, for every n, membership in L for inputs of length n cannot be decided by circuits of size smaller than 2^{εn}.
We prove that for every function f: {0,1}^* → {0,1}, computable by a randomized logspace algorithm R, there exists a deterministic logspace algorithm D (attempting to compute f), such that on every input x of length n, the algorithm D outputs one of the following:
1: The correct value f(x).
2: The string: ``I am unable to compute f(x) because the hardness assumption A is false'', followed by a (provenly correct) circuit C of size smaller than 2^{εn'} for membership in L for inputs of length n', for some n' = Θ(log n); that is, a circuit that refutes A.
Our next result is a universal derandomizer for BPL: We give a deterministic algorithm U that takes as input a randomized logspace algorithm R and an input x and simulates the computation of R on x, deterministically. Under the widely believed assumption BPL = L, the space used by U is at most C_R · log n (where C_R is a constant depending on R). Moreover, for every constant c >= 1, if BPL ⊆ SPACE[(log n)^c] then the space used by U is at most C_R · (log n)^c.
Finally, we prove that if optimal hitting sets for ordered branching programs exist, then there is a deterministic logspace algorithm that, given black-box access to an ordered branching program B of size n, estimates the probability that B accepts on a uniformly random input. This extends the result of Cheng and Hoza (CCC 2020), who proved that an optimal hitting set implies a white-box two-sided derandomization
On the Power of Regular and Permutation Branching Programs
We give new upper and lower bounds on the power of several restricted classes of arbitrary-order read-once branching programs (ROBPs) and standard-order ROBPs (SOBPs) that have received significant attention in the literature on pseudorandomness for space-bounded computation.
- Regular SOBPs of length n and width ⌊w(n+1)/2⌋ can exactly simulate general SOBPs of length n and width w, and moreover an n/2-o(n) blow-up in width is necessary for such a simulation. Our result extends and simplifies prior average-case simulations (Reingold, Trevisan, and Vadhan (STOC 2006), Bogdanov, Hoza, Prakriya, and Pyne (CCC 2022)), in particular implying that weighted pseudorandom generators (Braverman, Cohen, and Garg (SICOMP 2020)) for regular SOBPs of width poly(n) or larger automatically extend to general SOBPs. Furthermore, our simulation also extends to general (even read-many) oblivious branching programs.
- There exist natural functions computable by regular SOBPs of constant width that are average-case hard for permutation SOBPs of exponential width. Indeed, we show that Inner-Product mod 2 is average-case hard for arbitrary-order permutation ROBPs of exponential width.
- There exist functions computable by constant-width arbitrary-order permutation ROBPs that are worst-case hard for exponential-width SOBPs.
- Read-twice permutation branching programs of subexponential width can simulate polynomial-width arbitrary-order ROBPs
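The branching-program model in the results above can be made concrete with a tiny evaluator. The sketch below, with illustrative function names, represents an oblivious branching program as one pair of state-transition maps per layer (one map for input bit 0, one for bit 1); the program is a *permutation* BP when every map is a bijection on the states. Parity, for instance, is computed by a width-2 permutation ROBP in which reading a 1 swaps the two states.

```python
def eval_robp(program, bits):
    """Evaluate a read-once branching program. `program` is a list of layers;
    each layer is a pair (t0, t1) of transition maps (lists indexed by the
    current state), selected by the input bit read at that layer."""
    state = 0  # designated start state
    for (t0, t1), b in zip(program, bits):
        state = (t1 if b else t0)[state]
    return state  # here the final state is taken as the output

def parity_bp(n):
    """Width-2 permutation ROBP for parity: bit 0 is the identity map,
    bit 1 swaps states 0 and 1. Both maps are bijections."""
    return [([0, 1], [1, 0])] * n
```

A general (non-regular) BP would be allowed transition maps that merge states, e.g. `[0, 0]`; the first result above quantifies exactly how much width is needed to avoid such merging.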
LIPIcs, Volume 261, ICALP 2023, Complete Volume
- …