Verified Correctness and Security of mbedTLS HMAC-DRBG
We have formalized the functional specification of HMAC-DRBG (NIST 800-90A),
and we have proved its cryptographic security (that its output is
pseudorandom) using a hybrid game-based proof. We have also proved that the
mbedTLS implementation (C program) correctly implements this functional
specification. That proof composes with an existing C compiler correctness
proof to guarantee, end-to-end, that the machine language program gives strong
pseudorandomness. All proofs (hybrid games, C program verification, compiler,
and their composition) are machine-checked in the Coq proof assistant. Our
proofs are modular: the hybrid game proof holds on any implementation of
HMAC-DRBG that satisfies our functional specification. Therefore, our
functional specification can serve as a high-assurance reference.
Comment: Appearing in CCS '1
On optimal language compression for sets in PSPACE/poly
We show that if DTIME[2^O(n)] is not included in DSPACE[2^o(n)], then, for
every set B in PSPACE/poly, all strings x in B of length n can be represented
by a string compressed(x) of length at most log(|B^{=n}|)+O(log n), such that a
polynomial-time algorithm, given compressed(x), can distinguish x from all the
other strings in B^{=n}. Modulo the O(log n) additive term, this achieves the
information-theoretic optimum for string compression. We also observe that
optimal compression is not possible for sets more complex than PSPACE/poly
because for any time-constructible superpolynomial function t, there is a set A
computable in space t(n) such that at least one string x of length n requires
compressed(x) to be of length 2 log(|A^{=n}|).
Comment: submitted to Theory of Computing System
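The information-theoretic optimum the abstract refers to is rank-based indexing: if the decompressor can enumerate B^{=n}, then any member can be named by its index in ceil(log2 |B^{=n}|) bits. A toy Python sketch (helper names ours) shows the counting argument; the entire difficulty of the theorem, which this sketch ignores, is making the distinguisher run in polynomial time without enumerating B:

```python
from math import ceil, log2

def compress(x: str, B_n: set) -> str:
    """Name x by its rank in a canonical enumeration of B^{=n}.

    Length is ceil(log2 |B^{=n}|) bits -- the information-theoretic
    optimum -- assuming the decompressor can reconstruct the set.
    """
    members = sorted(B_n)
    width = max(1, ceil(log2(len(members))))
    return format(members.index(x), f"0{width}b")

def decompress(code: str, B_n: set) -> str:
    # Invert: re-enumerate the set and pick out the ranked element.
    return sorted(B_n)[int(code, 2)]
```

For example, the set of length-4 strings with even parity has 8 members, so 3 bits suffice to name each one.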
The Taming of QCD by Fortran 90
We implement lattice QCD using the Fortran 90 language. We have designed
machine independent modules that define fields (gauge, fermions, scalars,
etc.) and have defined overloaded operators for all possible operations
between fields, matrices and numbers. With these modules it is very simple to
write QCD programs. We have also created a useful compression standard for
storing the lattice configurations, a parallel implementation of the random
generators, an assignment that does not require temporaries, and a machine
independent precision definition. We have tested our program on parallel and
single-processor supercomputers, obtaining excellent performance.
Comment: Talk presented at LATTICE96 (algorithms), 3 pages, no figures, LaTeX
file with ESPCRC2 style. More information available at:
http://hep.bu.edu/~leviar/qcdf90.htm
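The overloaded-operator design the abstract describes can be illustrated, in Python rather than Fortran 90, with a toy field type; the class name and array shapes below are invented for illustration and are not the qcdf90 modules themselves:

```python
import numpy as np

class GaugeField:
    """Toy analogue of an overloaded lattice field type: a lattice of
    link matrices, with arithmetic defined so whole-field expressions
    read like the mathematics (field * field, field + field,
    field * scalar), as the Fortran 90 modules do for QCD fields.
    """

    def __init__(self, links):
        # links: array of shape (volume, 4 directions, N, N)
        self.links = np.asarray(links)

    def __mul__(self, other):
        if isinstance(other, GaugeField):
            # sitewise, linkwise matrix product
            return GaugeField(self.links @ other.links)
        return GaugeField(self.links * other)   # field * number

    def __add__(self, other):
        return GaugeField(self.links + other.links)
```

With such a type, a plaquette-style expression is a one-liner instead of nested loops over sites, directions, and color indices, which is the convenience the abstract claims.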
Algorithmic Information Theory and Foundations of Probability
The use of algorithmic information theory (Kolmogorov complexity theory) to
explain the relation between mathematical probability theory and the `real
world' is discussed.
Conspiracies between learning algorithms, circuit lower bounds, and pseudorandomness
We prove several results giving new and stronger connections between learning theory, circuit
complexity and pseudorandomness. Let C be any typical class of Boolean circuits, and C[s(n)]
denote n-variable C-circuits of size ≤ s(n). We show:

Learning Speedups. If C[poly(n)] admits a randomized weak learning algorithm under the
uniform distribution with membership queries that runs in time 2^n/n^{ω(1)}, then for every
k ≥ 1 and ε > 0 the class C[n^k] can be learned to high accuracy in time O(2^{n^ε}). There is
ε > 0 such that C[2^{n^ε}] can be learned in time 2^n/n^{ω(1)} if and only if C[poly(n)] can be
learned in time 2^{(log n)^{O(1)}}.

Equivalences between Learning Models. We use learning speedups to obtain equivalences
between various randomized learning and compression models, including sub-exponential
time learning with membership queries, sub-exponential time learning with membership and
equivalence queries, probabilistic function compression and probabilistic average-case function
compression.

A Dichotomy between Learnability and Pseudorandomness. In the non-uniform setting,
there is non-trivial learning for C[poly(n)] if and only if there are no exponentially secure
pseudorandom functions computable in C[poly(n)].

Lower Bounds from Nontrivial Learning. If for each k ≥ 1, (depth-d)-C[n^k] admits a
randomized weak learning algorithm with membership queries under the uniform distribution
that runs in time 2^n/n^{ω(1)}, then for each k ≥ 1, BPE ⊄ (depth-d)-C[n^k]. If for some ε > 0
there are P-natural proofs useful against C[2^{n^ε}], then ZPEXP ⊄ C[poly(n)].

Karp-Lipton Theorems for Probabilistic Classes. If there is a k > 0 such that
BPE ⊆ i.o.Circuit[n^k], then BPEXP ⊆ i.o.EXP/O(log n). If ZPEXP ⊆ i.o.Circuit[2^{n/3}], then
ZPEXP ⊆ i.o.ESUBEXP.

Hardness Results for MCSP. All functions in non-uniform NC^1 reduce to the Minimum
Circuit Size Problem via truth-table reductions computable by TC^0 circuits. In particular, if
MCSP ∈ TC^0, then NC^1 = TC^0. …
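The Minimum Circuit Size Problem in the last item can be made concrete with a brute-force decider. The sketch below (function name ours, not from the paper) asks whether a given truth table has a circuit of at most s AND/OR/NOT gates; it runs in exponential time, and the point of the paper's reductions is evidence about whether anything much faster exists:

```python
def has_small_circuit(target, n, s):
    """Brute-force MCSP: does the n-variable Boolean function with
    truth table `target` (a tuple of 2^n bits, indexed by input
    assignment) have a circuit of at most s AND/OR/NOT gates?

    Toy exponential-time search, viable only for tiny n and s.
    """
    # Truth-table column of each input variable x_i across all rows.
    inputs = tuple(tuple((row >> i) & 1 for row in range(2 ** n))
                   for i in range(n))

    def search(wires, gates_left):
        if target in wires:
            return True
        if gates_left == 0:
            return False
        seen = set(wires)
        for i, a in enumerate(wires):
            candidates = [tuple(1 - x for x in a)]                     # NOT
            for b in wires[i:]:
                candidates.append(tuple(x & y for x, y in zip(a, b)))  # AND
                candidates.append(tuple(x | y for x, y in zip(a, b)))  # OR
            for g in candidates:
                # Skip wires we already have: a duplicate gate is wasted.
                if g not in seen and search(wires + (g,), gates_left - 1):
                    return True
        return False

    return search(inputs, s)
```

For instance, XOR of two variables needs four gates in this basis (e.g. OR, AND, NOT, AND), so the decider answers yes at s = 4 and no at s = 3.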