Better Pseudorandom Generators from Milder Pseudorandom Restrictions
We present an iterative approach to constructing pseudorandom generators,
based on the repeated application of mild pseudorandom restrictions. We use
this template to construct pseudorandom generators for combinatorial rectangles
and read-once CNFs and a hitting set generator for width-3 branching programs,
all of which achieve near-optimal seed-length even in the low-error regime: we
get seed-length O(log(n/epsilon)) for error epsilon. Previously, only
constructions with seed-length O(log^{3/2} n) or O(log^2 n) were known for
these classes with polynomially small error.
The (pseudo)random restrictions we use are milder than those typically used
for proving circuit lower bounds in that we only set a constant fraction of the
bits at a time. While such restrictions do not simplify the functions
drastically, we show that they can be derandomized using small-bias spaces.
Comment: To appear in FOCS 201
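The iterative template described above can be illustrated by a toy sketch: repeatedly fix a constant fraction of the still-unset coordinates until every input bit is assigned. In the actual construction both the choice of coordinates and their values come from small-bias spaces; here a seeded PRNG stands in, and the function names are illustrative, not from the paper.

```python
import random

def mild_restriction_round(unset, frac, rng):
    """Pick a `frac` fraction of the still-unset coordinates and assign
    them uniform bits.  (In the paper, the selection and assignment are
    drawn pseudorandomly from a small-bias space; a PRNG stands in here.)"""
    k = max(1, int(frac * len(unset)))
    chosen = rng.sample(sorted(unset), k)
    return {i: rng.randint(0, 1) for i in chosen}

def iterated_restrictions(n, frac=0.5, rng=None):
    """Iteratively fix all n input bits, a constant fraction per round,
    returning the final full assignment and the number of rounds used."""
    rng = rng or random.Random(0)
    assignment, unset = {}, set(range(n))
    rounds = 0
    while unset:
        step = mild_restriction_round(unset, frac, rng)
        assignment.update(step)
        unset -= set(step)
        rounds += 1
    return assignment, rounds

bits, rounds = iterated_restrictions(64)
# With frac = 1/2, the number of rounds is about log2(n), which is why
# the seeds of the per-round restrictions can be combined cheaply.
```

Because each round only sets a constant fraction of the bits, only logarithmically many rounds are needed, and the seed cost of derandomizing each round adds up to the near-optimal total.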
Improved Pseudorandom Generators from Pseudorandom Multi-Switching Lemmas
We give the best known pseudorandom generators for two touchstone classes in
unconditional derandomization: an $\varepsilon$-PRG for the class of size-$M$
depth-$d$ $AC^0$ circuits with seed length $\log(M)^{d+O(1)} \cdot \log(1/\varepsilon)$,
and an $\varepsilon$-PRG for the class of $S$-sparse $\mathbb{F}_2$
polynomials with seed length $2^{O(\sqrt{\log S})} \cdot \log(1/\varepsilon)$.
These results bring the state of the art for
unconditional derandomization of these classes into sharp alignment with the
state of the art for computational hardness for all parameter settings:
improving on the seed lengths of either PRG would require breakthrough progress
on longstanding and notorious circuit lower bounds.
The key enabling ingredient in our approach is a new \emph{pseudorandom
multi-switching lemma}. We derandomize recently-developed
\emph{multi}-switching lemmas, which are powerful generalizations of
H{\aa}stad's switching lemma that deal with \emph{families} of depth-two
circuits. Our pseudorandom multi-switching lemma---a randomness-efficient
algorithm for sampling restrictions that simultaneously simplify all circuits
in a family---achieves the parameters obtained by the (full randomness)
multi-switching lemmas of Impagliazzo, Matthews, and Paturi [IMP12] and
H{\aa}stad [H{\aa}s14]. This optimality of our derandomization translates into
the optimality (given current circuit lower bounds) of our PRGs for $AC^0$
circuits and sparse polynomials.
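The object being derandomized above is the random restriction itself. As a point of reference, the fully random $R_p$ distribution that H{\aa}stad-style switching lemmas analyse can be sketched as follows; the multi-switching lemma's contribution is that a restriction with the same simplifying power can be sampled from a short seed, simultaneously for a whole family of circuits. The function names here are illustrative.

```python
import random

def p_restriction(n, p, rng=None):
    """Sample a restriction rho in {0, 1, '*'}^n: each coordinate stays
    free ('*') with probability p and is otherwise fixed to a uniform bit.
    This is the fully random R_p distribution; the pseudorandom
    multi-switching lemma replaces it with a seed-efficient distribution
    achieving the same simplification guarantees."""
    rng = rng or random.Random(1)
    return ['*' if rng.random() < p else rng.randint(0, 1) for _ in range(n)]

def apply_restriction(f, rho):
    """Restrict f: return the induced function on the free coordinates,
    together with the number of surviving variables."""
    free = [i for i, v in enumerate(rho) if v == '*']
    def g(free_bits):
        x = list(rho)
        for i, b in zip(free, free_bits):
            x[i] = b
        return f(x)
    return g, len(free)
```

The "multi" aspect is that one and the same restriction must leave every depth-two circuit in a family with small canonical decision-tree depth, which is what makes the derandomization delicate.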
Linear Transformations for Randomness Extraction
Information-efficient approaches for extracting randomness from imperfect
sources have been extensively studied, but simpler and faster ones are required
in high-speed random number generation. In this paper, we focus on linear
constructions, namely, applying linear transformations for randomness
extraction. We show that linear transformations based on sparse
randomness extraction. We show that linear transformations based on sparse
random matrices are asymptotically optimal to extract randomness from
independent sources and bit-fixing sources, and that they are efficient (though
possibly not optimal) at extracting randomness from hidden Markov sources.
Further study
demonstrates the flexibility of such constructions on source models as well as
their excellent information-preserving capabilities. Since linear
transformations based on sparse random matrices are computationally fast and
easy to implement in hardware such as FPGAs, they are very attractive for
high-speed applications. In addition, we explore explicit constructions of
transformation matrices. We show that the generator matrices of primitive BCH
codes are good choices, but linear transformations based on such matrices
require more computational time due to their high densities.
Comment: 2 columns, 14 page
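The linear construction discussed above amounts to computing y = Mx over GF(2), where M is a sparse binary matrix: each output bit is the parity of a few input bits, which is what makes the scheme fast in hardware. A minimal sketch, with illustrative parameter choices (row weight, dimensions) not taken from the paper:

```python
import random

def sparse_random_matrix(m, n, ones_per_row, rng=None):
    """An m x n binary matrix with a fixed small number of ones per row,
    standing in for the sparse random matrices discussed above."""
    rng = rng or random.Random(2)
    rows = []
    for _ in range(m):
        cols = set(rng.sample(range(n), ones_per_row))
        rows.append([1 if j in cols else 0 for j in range(n)])
    return rows

def extract(matrix, x):
    """Linear extraction y = M x over GF(2): each output bit is the
    parity of the input bits selected by one sparse row."""
    return [sum(r * b for r, b in zip(row, x)) % 2 for row in matrix]

# Compress 32 raw bits down to 16 output bits using rows of weight 4.
M = sparse_random_matrix(16, 32, ones_per_row=4)
y = extract(M, [1, 0] * 16)
```

Sparsity keeps each output parity cheap (a few XORs); the trade-off noted in the abstract is that structured alternatives such as BCH generator matrices are denser, so each parity costs more to compute.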
The Data Quality Concept of Accuracy in the Context of Public Use Data Sets
Like other data quality dimensions, the concept of accuracy is often adopted to characterise a particular data set. However, its common specification basically refers to statistical properties of estimators, which can hardly be proved by means of a single survey at hand. This ambiguity can be resolved by assigning 'accuracy' to survey processes that are known to affect these properties. In this contribution, we consider the sub-process of imputation as one important step in setting up a data set and argue that the so-called 'hit-rate' criterion, which is intended to measure the accuracy of a data set by some distance function between 'true' but unobserved values and imputed values, is neither required nor desirable. In contrast, the so-called 'inference' criterion allows for valid inferences based on a suitably completed data set under rather general conditions. The underlying theoretical concepts are illustrated by means of a simulation study. It is emphasised that the same principal arguments apply to other survey processes that introduce uncertainty into an edited data set.
Keywords: Survey Quality, Survey Processes, Accuracy, Assessment of Imputation Methods, Multiple Imputation
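The tension between the two criteria can be seen in a toy simulation (not the paper's own study): conditional-mean imputation minimizes the distance to the true values, and so wins under the hit-rate criterion, but it shrinks the variance of the completed data and thereby invalidates downstream inference; drawing imputations from the estimated distribution loses on distance yet keeps the variance about right. All names and parameters below are illustrative.

```python
import random
import statistics

def completed_data(n=10000, miss=0.5, seed=3):
    """Generate standard-normal data, delete a share of it completely at
    random, and complete it twice: by mean imputation (optimal under the
    'hit-rate' distance criterion) and by random draws from the estimated
    distribution (what the 'inference' criterion calls for)."""
    rng = random.Random(seed)
    data = [rng.gauss(0, 1) for _ in range(n)]
    missing = [rng.random() < miss for _ in range(n)]
    obs = [v for v, m in zip(data, missing) if not m]
    mu, sd = statistics.fmean(obs), statistics.stdev(obs)
    by_mean = [mu if m else v for v, m in zip(data, missing)]
    by_draw = [rng.gauss(mu, sd) if m else v for v, m in zip(data, missing)]
    return data, missing, by_mean, by_draw

data, missing, by_mean, by_draw = completed_data()
# by_mean: smaller distance to the true values, but roughly half the
# variance of the original data.  by_draw: larger distance, variance
# (and hence variance-based inference) approximately preserved.
```

This is the qualitative pattern the abstract argues for: optimizing the hit-rate distance systematically distorts the distribution of the completed data, which is exactly what the inference criterion is designed to detect.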