Separation Results for Boolean Function Classes
We show (almost) separation between certain important classes of Boolean functions. The technique we use is to show that the total influence of functions in one class is smaller than the total influence of functions in the other class. In particular, we show (almost) separation of several classes of Boolean functions that have been studied in coding theory and cryptography from classes that have been studied in combinatorics and complexity theory.
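For reference, the quantity the abstract refers to is the standard total influence (average sensitivity); in our notation, not necessarily the paper's, for $f : \{0,1\}^n \to \{0,1\}$,
\[
  I(f) \;=\; \sum_{i=1}^{n} \Pr_{x \sim \{0,1\}^n}\bigl[f(x) \ne f(x \oplus e_i)\bigr],
\]
where $x \oplus e_i$ flips the $i$-th bit of $x$. A separation of this kind follows by exhibiting a threshold $t$ such that every function in one class satisfies $I(f) \le t$ while the functions of interest in the other class satisfy $I(f) > t$.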
Randomness Extraction in AC0 and with Small Locality
Randomness extractors, which extract high-quality (almost-uniform) random bits from biased random sources, are important objects both in theory and in practice. While there has been significant progress in obtaining near-optimal constructions of randomness extractors in various settings, the computational complexity of randomness extractors is still much less studied. In particular, it is not clear whether randomness extractors with good parameters can be computed in several interesting complexity classes that are much weaker than P.

In this paper we study randomness extractors in the following two models of computation: (1) constant-depth circuits (AC0), and (2) the local computation model. Previous work in these models, such as [Vio05a], [GVW15] and [BG13], only achieves constructions with weak parameters. In this work we give explicit constructions of randomness extractors with much better parameters. As an application, we use our AC0 extractors to study pseudorandom generators in AC0, and show that we can construct both cryptographic pseudorandom generators (under reasonable computational assumptions) and unconditional pseudorandom generators for space-bounded computation with very good parameters.

Our constructions combine several previous techniques in randomness extractors and introduce new techniques to reduce or preserve the complexity of extractors, which may be of independent interest. These include (1) a general way to reduce the error of strong seeded extractors while preserving the AC0 property and small locality, and (2) a seeded randomness condenser with small locality.
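As context for what a strong seeded extractor computes, here is a minimal Python sketch of a classical construction, Toeplitz hashing via the leftover hash lemma. This is a standard textbook extractor, not the paper's AC0 or low-locality construction; indeed each output bit below is a parity of up to $n$ source bits, so its locality is large, which is exactly the kind of cost the paper works to avoid.

    import random

    def toeplitz_extract(x, seed, m):
        """Strong seeded extractor Ext(x, seed) via Toeplitz hashing.

        x:    list of n source bits (assumed to have enough min-entropy)
        seed: list of n + m - 1 uniform bits defining an m x n Toeplitz
              matrix T with T[i][j] = seed[i - j + n - 1]
        Returns the m bits of T.x (mod 2), which are close to uniform
        by the leftover hash lemma.
        """
        n = len(x)
        assert len(seed) == n + m - 1
        out = []
        for i in range(m):
            row = seed[i:i + n][::-1]  # row i of the Toeplitz matrix
            out.append(sum(r & b for r, b in zip(row, x)) % 2)
        return out

    # Toy usage: a source with about n/2 bits of min-entropy
    # (half the bits are fixed, half are uniform).
    n, m = 64, 16
    x = [0] * (n // 2) + [random.randint(0, 1) for _ in range(n // 2)]
    seed = [random.randint(0, 1) for _ in range(n + m - 1)]
    print(toeplitz_extract(x, seed, m))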
Top-Down Induction of Decision Trees: Rigorous Guarantees and Inherent Limitations
Consider the following heuristic for building a decision tree for a function $f$: place the most influential variable $x_i$ of $f$ at the root, recurse on the subfunctions $f_{x_i=0}$ and $f_{x_i=1}$ on the left and right subtrees respectively, and terminate once the tree is an $\varepsilon$-approximation of $f$. We analyze the quality of this heuristic, obtaining near-matching upper and lower bounds:

Upper bound: For every $f$ with decision tree size $s$ and every $\varepsilon \in (0, \frac{1}{2})$, this heuristic builds a decision tree of size at most $s^{O(\log(s/\varepsilon)\log(1/\varepsilon))}$.

Lower bound: For every $\varepsilon$ and $s$, there is an $f$ with decision tree size $s$ such that this heuristic builds a decision tree of size $s^{\tilde{\Omega}(\log s)}$.
We also obtain upper and lower bounds for monotone functions: $s^{O(\sqrt{\log s}/\varepsilon)}$ and $s^{\tilde{\Omega}(\sqrt[4]{\log s})}$ respectively. The lower bound disproves conjectures of Fiat and Pechyony (2004) and Lee (2009).
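A brute-force Python sketch may help make the heuristic concrete. It is exponential in $n$ (it enumerates all inputs to compute influences exactly), and its stopping rule, making each leaf's subfunction $\varepsilon$-close to constant, is a simplification of the criterion that the whole tree be an $\varepsilon$-approximation of $f$; all names here are ours.

    import itertools

    def influence(g, n, i):
        """Influence of variable i on g: the fraction of inputs x for
        which flipping bit i changes g(x). Enumerates all 2^n inputs."""
        flips = 0
        for x in itertools.product([0, 1], repeat=n):
            y = list(x)
            y[i] ^= 1
            flips += g(x) != g(tuple(y))
        return flips / 2 ** n

    def build_tree(f, n, eps, fixed=None):
        """Top-down heuristic: query the most influential variable of the
        current subfunction; stop when the subfunction is eps-close to a
        constant. Returns a leaf bit or a triple (variable, left, right)."""
        fixed = fixed or {}

        def sub(x):  # subfunction of f with the queried variables fixed
            z = list(x)
            for i, b in fixed.items():
                z[i] = b
            return f(tuple(z))

        p = sum(sub(x) for x in itertools.product([0, 1], repeat=n)) / 2 ** n
        if min(p, 1 - p) <= eps:       # eps-close to constant: make a leaf
            return int(p > 0.5)
        free = [i for i in range(n) if i not in fixed]
        i = max(free, key=lambda j: influence(sub, n, j))
        return (i,
                build_tree(f, n, eps, {**fixed, i: 0}),
                build_tree(f, n, eps, {**fixed, i: 1}))

    # Toy usage: majority of 3 bits is recovered exactly with eps = 0.
    maj3 = lambda x: int(sum(x) >= 2)
    print(build_tree(maj3, 3, 0.0))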
Our upper bounds yield new algorithms for properly learning decision trees
under the uniform distribution. We show that these algorithms---which are
motivated by widely employed and empirically successful top-down decision tree
learning heuristics such as ID3, C4.5, and CART---achieve provable guarantees
that compare favorably with those of the current fastest algorithm (Ehrenfeucht
and Haussler, 1989). Our lower bounds shed new light on the limitations of
these heuristics.
Finally, we revisit the classic work of Ehrenfeucht and Haussler. We extend
it to give the first uniform-distribution proper learning algorithm that
achieves polynomial sample and memory complexity, while matching its
state-of-the-art quasipolynomial runtime.
Coin flipping from a cosmic source: On error correction of truly random bits
We study a problem related to coin flipping, coding theory, and noise sensitivity. Consider a source of truly random bits $x \in \{0,1\}^n$, and $k$ parties, who have noisy versions of the source bits $y^i \in \{0,1\}^n$, where for all $i$ and $j$ it holds that $\Pr[y^i_j = x_j] = 1 - \varepsilon$, independently for all $i$ and $j$. That is, each party sees each bit correctly with probability $1 - \varepsilon$, and incorrectly (flipped) with probability $\varepsilon$, independently for all bits and all parties. The parties, who cannot communicate, wish to agree beforehand on balanced functions $f_i : \{0,1\}^n \to \{0,1\}$ such that $\Pr[f_1(y^1) = \cdots = f_k(y^k)]$ is maximized. In other words, each party wants to toss a fair coin so that the probability that all parties have the same coin is maximized. The functions $f_i$ may be thought of as an error-correcting procedure for the source $x$.

When $k = 2, 3$, no error correction is possible, as the optimal protocol is given by $f_i(y^i) = y^i_1$. On the other hand, for large values of $k$, better protocols exist. We study general properties of the optimal protocols and the asymptotic behavior of the problem with respect to $k$, $n$, and $\varepsilon$. Our analysis uses tools from probability, discrete Fourier analysis, convexity, and discrete symmetrization.
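A small Monte Carlo experiment (ours, not the paper's) makes the setup easy to play with: it estimates the agreement probability $\Pr[f_1(y^1) = \cdots = f_k(y^k)]$ for a given protocol, for instance the first-bit protocol $f(y) = y_1$ versus majority.

    import random

    def agree_prob(k, n, eps, f, trials=20000, rng=random):
        """Estimate Pr[all k parties output the same bit] when each party
        applies the balanced function f to its own eps-noisy copy of x."""
        hits = 0
        for _ in range(trials):
            x = [rng.randint(0, 1) for _ in range(n)]
            outputs = {f([b ^ (rng.random() < eps) for b in x])
                       for _ in range(k)}
            hits += len(outputs) == 1
        return hits / trials

    first_bit = lambda y: y[0]                     # optimal for k = 2, 3
    majority = lambda y: int(sum(y) * 2 > len(y))  # balanced for odd n

    # First-bit agreement is exactly (1-eps)^k + eps^k; the simulation
    # lets one compare it against other balanced protocols as k grows.
    for k in (2, 5, 10):
        print(k, agree_prob(k, 5, 0.1, first_bit),
              agree_prob(k, 5, 0.1, majority))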
Algebraic Methods in Computational Complexity
Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test
are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas rely on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about the latest techniques.
The Cryptographic Hardness of Random Local Functions -- Survey
Constant parallel-time cryptography makes it possible to perform complex cryptographic tasks at an ultimate level of parallelism, namely, by local functions in which each output bit depends on a constant number of input bits. A natural way to obtain local cryptographic constructions is to use random local functions, in which each output bit is computed by applying some fixed $d$-ary predicate to a randomly chosen $d$-size subset of the input bits.

In this work, we will study the cryptographic hardness of random local functions. In particular, we will survey known attacks and hardness results, discuss different flavors of hardness (one-wayness, pseudorandomness, collision resistance, public-key encryption), and mention applications to other problems in cryptography and computational complexity. We also present some open questions in the hope of developing a systematic study of the cryptographic hardness of local functions.
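A minimal Python sketch of the object being surveyed, under our own naming: sampling a random local function in which every output bit applies a fixed $d$-ary predicate to its own random $d$-subset of the inputs. The particular predicate below, an XOR of three bits plus an AND of two, is only an illustrative choice of the kind considered in this literature.

    import random

    def sample_local_function(n, m, d, predicate, rng=random):
        """Sample F: {0,1}^n -> {0,1}^m where each output bit applies
        `predicate` to its own randomly chosen d-subset of input bits."""
        subsets = [rng.sample(range(n), d) for _ in range(m)]

        def F(x):
            return [predicate([x[i] for i in S]) for S in subsets]

        return F, subsets

    # Illustrative 5-ary predicate (our choice, for demonstration only).
    P = lambda b: b[0] ^ b[1] ^ b[2] ^ (b[3] & b[4])

    F, _ = sample_local_function(n=128, m=256, d=5, predicate=P)
    x = [random.randint(0, 1) for _ in range(128)]
    print(F(x)[:16])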
LIPIcs, Volume 251, ITCS 2023, Complete Volume
High-Dimensional Function Approximation: Breaking the Curse with Monte Carlo Methods
In this dissertation we study the tractability of $d$-variate function approximation problems in the setting of information-based complexity. In the deterministic setting, the curse of dimensionality holds for many unweighted problems; that is, for some fixed error tolerance $\varepsilon > 0$, the complexity grows exponentially in $d$. For integration problems one can usually break the curse with the standard Monte Carlo method. For function approximation problems, however, similar effects of randomization have been unknown so far.
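The integration claim here is the classical dimension-independent Monte Carlo error bound: with $N$ i.i.d. uniform samples $X_1, \dots, X_N$ and the standard estimator,
\[
  M_N(f) \;=\; \frac{1}{N} \sum_{i=1}^{N} f(X_i), \qquad
  \mathbb{E}\,\Bigl|\int_{[0,1]^d} f(x)\,\mathrm{d}x - M_N(f)\Bigr| \;\le\; \frac{\sigma(f)}{\sqrt{N}},
\]
where $\sigma(f)$ is the standard deviation of $f$ under the uniform distribution. The rate $N^{-1/2}$ does not deteriorate with $d$, which is how randomization escapes the curse for integration.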
The thesis contains results on three more or less stand-alone topics. For an extended five-page abstract, see the section "Introduction and Results".

Chapter 2 is concerned with lower bounds for the Monte Carlo error for general linear problems via Bernstein numbers. This technique is applied to the $L_\infty$-approximation of certain classes of $C^\infty$-functions, where it turns out that randomization does not affect the tractability classification of the problem.

Chapter 3 studies the $L_\infty$-approximation of functions from Hilbert spaces with methods that may use arbitrary linear functionals as information. For certain classes of periodic functions from unweighted periodic tensor product spaces, in particular Korobov spaces, we observe the curse of dimensionality in the deterministic setting, while with randomized methods we achieve polynomial tractability.

Chapter 4 deals with the approximation of monotone functions via function values. It is known that this problem suffers from the curse in the deterministic setting. An improved lower bound shows that the problem is still intractable in the randomized setting. However, Monte Carlo breaks the curse: for any fixed error tolerance $\varepsilon$, the complexity grows exponentially in $\sqrt{d}$ only.