A composition theorem for parity kill number
In this work, we study the parity complexity measures
$C^{\oplus}_{\min}[f]$ and $D^{\oplus}[f]$.
$C^{\oplus}_{\min}[f]$ is the \emph{parity kill number} of $f$: the
minimum number of parities on the input variables one has to fix in order to
"kill" $f$, i.e. to make it constant. $D^{\oplus}[f]$ is the depth
of the shortest \emph{parity decision tree} which computes $f$. These
complexity measures have in recent years become increasingly important in the
fields of communication complexity \cite{ZS09, MO09, ZS10, TWXZ13} and
pseudorandomness \cite{BK12, Sha11, CT13}.
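As a concrete illustration of the parity kill number, here is a brute-force sketch (our own, not from the paper; the function names and the restriction to tiny $n$ are ours) that finds the fewest parity constraints whose fixing makes a Boolean function constant on a nonempty set of inputs:

```python
from itertools import combinations, product

def parity_kill_number(f, n):
    """Fewest parity constraints (mask, bit) whose simultaneous fixing
    makes f constant on a nonempty subset of {0,1}^n. Brute force."""
    points = list(product([0, 1], repeat=n))
    # a constraint (s, b) demands: parity of the bits selected by mask s equals b
    constraints = [(s, b) for s in range(1, 2 ** n) for b in (0, 1)]

    def satisfies(x, s, b):
        return sum(x[i] for i in range(n) if s >> i & 1) % 2 == b

    for k in range(n + 1):
        for chosen in combinations(constraints, k):
            live = [x for x in points
                    if all(satisfies(x, s, b) for s, b in chosen)]
            if live and len({f(x) for x in live}) == 1:
                return k
    return n  # fixing all n variables always kills a total function

and2 = lambda x: x[0] & x[1]
xor2 = lambda x: x[0] ^ x[1]
maj3 = lambda x: int(sum(x) >= 2)
```

For example, fixing the single parity $x_1 \oplus x_2 = 0$ already kills `xor2`, and fixing $x_1 = 0, x_2 = 0$ (two constraints) is needed to kill `maj3`.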
Our main result is a composition theorem for $C^{\oplus}_{\min}$.
The $k$-th power of $f$, denoted $f^{\circ k}$, is the function which results
from composing $f$ with itself $k$ times. We prove that if $f$ is not a parity
function, then $C^{\oplus}_{\min}[f^{\circ k}] \geq C_{\min}[f]^{\Omega(k)}$. In other words, the parity kill number of
$f^{\circ k}$ is essentially supermultiplicative in the \emph{normal} kill number $C_{\min}[f]$
(also known as the minimum certificate complexity).
As an application of our composition theorem, we show lower bounds on the
parity complexity measures of $\mathsf{Sort}^{\circ k}$ and $\mathsf{HI}^{\circ k}$. Here $\mathsf{Sort}$ is the sort function due to Ambainis \cite{Amb06},
and $\mathsf{HI}$ is Kushilevitz's hemi-icosahedron function \cite{NW95}. In
doing so, we disprove a conjecture of Montanaro and Osborne \cite{MO09} which
had applications to communication complexity and computational learning theory.
In addition, we give new lower bounds for conjectures of \cite{MO09, ZS10} and
\cite{TWXZ13}.
Top-Down Induction of Decision Trees: Rigorous Guarantees and Inherent Limitations
Consider the following heuristic for building a decision tree for a function
$f : \{0,1\}^n \to \{0,1\}$. Place the most influential variable $x_i$ of $f$
at the root, and recurse on the subfunctions $f_{x_i=0}$ and $f_{x_i=1}$ on the
left and right subtrees respectively; terminate once the tree is an
$\varepsilon$-approximation of $f$. We analyze the quality of this heuristic,
obtaining near-matching upper and lower bounds:
Upper bound: For every $f$ with decision tree size $s$ and every
$\varepsilon \in (0, \frac{1}{2})$, this heuristic builds a decision tree of size
at most $s^{O(\log(s/\varepsilon)\log(1/\varepsilon))}$.
Lower bound: For every $\varepsilon \in (0, \frac{1}{2})$ and $s \leq 2^{\tilde{O}(\sqrt{n})}$, there is an $f$ with decision tree size $s$ such that
this heuristic builds a decision tree of size $s^{\tilde{\Omega}(\log s)}$.
We also obtain upper and lower bounds for monotone functions:
$s^{\tilde{O}(\sqrt{\log s})}$ and $s^{\tilde{\Omega}(\sqrt[4]{\log s})}$
respectively. The lower bound disproves conjectures of Fiat and Pechyony (2004)
and Lee (2009).
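The top-down heuristic above can be sketched as follows. This is our own simplified illustration, not the paper's algorithm verbatim: in particular, it stops as soon as the current subfunction is $\varepsilon$-close to constant, rather than tracking the global approximation error of the whole tree:

```python
from itertools import product

def influence(f, n, i):
    # Inf_i[f] = Pr_x[f(x) != f(x with bit i flipped)] for uniform x in {0,1}^n
    pts = list(product([0, 1], repeat=n))
    return sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for x in pts) / len(pts)

def build_topdown(f, n, eps):
    # Leaf: the majority value of f, once f is eps-close to a constant.
    pts = list(product([0, 1], repeat=n))
    p = sum(f(x) for x in pts) / len(pts)          # fraction of 1-inputs
    if n == 0 or min(p, 1 - p) <= eps:
        return int(p >= 0.5)
    # Root: the most influential variable; recurse on both restrictions.
    i = max(range(n), key=lambda j: influence(f, n, j))
    f0 = lambda y: f(y[:i] + (0,) + y[i:])         # subfunction f_{x_i = 0}
    f1 = lambda y: f(y[:i] + (1,) + y[i:])         # subfunction f_{x_i = 1}
    return (i, build_topdown(f0, n - 1, eps), build_topdown(f1, n - 1, eps))

def evaluate(tree, x):
    # Internal nodes store the queried index relative to the remaining variables.
    while isinstance(tree, tuple):
        i, low, high = tree
        tree, x = (high if x[i] else low), x[:i] + x[i + 1:]
    return tree

maj3 = lambda x: int(sum(x) >= 2)
tree = build_topdown(maj3, 3, eps=0.0)   # with eps = 0 the tree computes f exactly
```

With `eps = 0.0` the recursion only halts on genuinely constant subfunctions, so the resulting tree agrees with `maj3` on every input.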
Our upper bounds yield new algorithms for properly learning decision trees
under the uniform distribution. We show that these algorithms---which are
motivated by widely employed and empirically successful top-down decision tree
learning heuristics such as ID3, C4.5, and CART---achieve provable guarantees
that compare favorably with those of the current fastest algorithm (Ehrenfeucht
and Haussler, 1989). Our lower bounds shed new light on the limitations of
these heuristics.
Finally, we revisit the classic work of Ehrenfeucht and Haussler. We extend
it to give the first uniform-distribution proper learning algorithm that
achieves polynomial sample and memory complexity, while matching its
state-of-the-art quasipolynomial runtime.
On the Computational Power of Radio Channels
Radio networks can be a challenging platform for which to develop distributed algorithms, because the network nodes must contend for a shared channel. In some cases, though, the shared medium is an advantage rather than a disadvantage: for example, many radio network algorithms cleverly use the shared channel to approximate the degree of a node, or to estimate the contention. In this paper we ask how far the inherent power of a shared radio channel goes, and whether it can efficiently compute "classically hard" functions such as Majority, Approximate Sum, and Parity.
Using techniques from circuit complexity, we show that in many cases the answer is "no". We show that simple radio channels, such as the beeping model or the channel with collision detection, can be approximated by a low-degree polynomial, which makes them subject to known lower bounds on functions such as Parity and Majority; we obtain round lower bounds of the form Omega(n^{delta}) on these functions, for delta in (0,1). Next, we use the technique of random restrictions, used to prove AC^0 lower bounds, to prove a tight lower bound of Omega(1/epsilon^2) on computing a (1 +/- epsilon)-approximation to the sum of the nodes' inputs. Our techniques are general, and apply to many types of radio channels studied in the literature.
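The degree-based argument in miniature (our own toy illustration, not the paper's construction): no degree-1 real polynomial approximates even the 2-bit Parity to error below 1/2, because the alternating sum p(0,0) - p(0,1) - p(1,0) + p(1,1) vanishes for every affine p yet equals -2 for XOR, forcing some input to have error at least 1/2:

```python
import random

def max_err(a, b, c):
    # largest deviation of the affine polynomial a + b*x + c*y
    # from XOR(x, y) over the four Boolean inputs
    return max(abs(a + b * x + c * y - (x ^ y)) for x in (0, 1) for y in (0, 1))

# Sample many affine polynomials; none beats error 1/2.
random.seed(0)
worst = min(max_err(random.uniform(-2, 2), random.uniform(-2, 2),
                    random.uniform(-2, 2))
            for _ in range(10000))
```

The constant polynomial 1/2 attains error exactly 1/2 on every input, so the bound is tight.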
On the parity complexity measures of Boolean functions
The parity decision tree model extends the decision tree model by allowing
the computation of a parity function in one step. We prove that the
deterministic parity decision tree complexity of any Boolean function is
polynomially related to the non-deterministic complexity of the function or its
complement. We also show that they are polynomially related to an analogue of
the block sensitivity. We further study parity decision trees in their
relations with an intermediate variant of the decision trees, as well as with
communication complexity.
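To make the model concrete, here is a minimal sketch (our own illustration, not from the paper) of a parity decision tree: each internal node queries the parity of a set of variables, so the n-bit XOR, which requires depth n in the ordinary decision tree model, is computed at depth 1:

```python
from itertools import product

def eval_pdt(tree, x):
    # Internal node: (S, low, high) -- query the parity of the bits indexed
    # by S, then descend. A leaf is just the output bit.
    while isinstance(tree, tuple):
        S, low, high = tree
        tree = high if sum(x[i] for i in S) % 2 else low
    return tree

n = 4
# Depth-1 parity decision tree for XOR of n bits: one parity query answers it.
xor_tree = ([*range(n)], 0, 1)
```

An ordinary decision tree must read every bit before knowing the XOR, so allowing parity queries collapses the depth from n to 1 for this function.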
Optimal Direct Sum Results for Deterministic and Randomized Decision Tree Complexity
A Direct Sum Theorem holds in a model of computation when solving k
input instances together is k times as expensive as solving one. We show that
Direct Sum Theorems hold in the models of deterministic and randomized decision
trees for all relations. We also note that a near-optimal Direct Sum Theorem
holds for quantum decision trees for Boolean functions.