A lower bound on the quantum query complexity of read-once functions
We establish a lower bound of $\Omega(\sqrt{n})$ on the bounded-error
quantum query complexity of read-once Boolean functions on $n$ variables, providing evidence for
the conjecture that $\Omega(\sqrt{D(f)})$ is a lower bound for all Boolean
functions $f$, where $D(f)$ denotes deterministic decision-tree complexity. Our technique extends a result of Ambainis, based on the idea that
successful computation of a function requires ``decoherence'' of initially
coherently superposed inputs in the query register that have different values of
the function. The number of queries is bounded by comparing the required total
amount of decoherence of a judiciously selected set of input-output pairs to an
upper bound on the amount achievable in a single query step. We use an
extension of this result to general weights on input pairs and general
superpositions of inputs.
Comment: 12 pages, LaTeX
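For orientation, the Ambainis adversary bound that the abstract extends can be recalled in its standard unweighted form (a standard statement from the literature, not quoted from this paper): given a relation $R \subseteq X \times Y$ between $0$-inputs and $1$-inputs of $f$ in which every $x \in X$ is related to at least $m$ elements of $Y$, every $y \in Y$ is related to at least $m'$ elements of $X$, and for every coordinate $i$ at most $\ell$ related $y$'s differ from a given $x$ in position $i$ and at most $\ell'$ related $x$'s differ from a given $y$ in position $i$, the bounded-error quantum query complexity satisfies
$$Q_2(f) \;=\; \Omega\!\left(\sqrt{\frac{m\,m'}{\ell\,\ell'}}\right).$$
The weighted extension referred to above replaces these combinatorial counts with general weights on input pairs.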
A Polynomial Time Algorithm for Lossy Population Recovery
We give a polynomial time algorithm for the lossy population recovery
problem. In this problem, the goal is to approximately learn an unknown
distribution on binary strings of length $n$ from lossy samples: for some
parameter $\mu > 0$, each coordinate of the sample is preserved with probability
$\mu$ and otherwise is replaced by a `?'. The running time and number of
samples needed for our algorithm is polynomial in $n$ and $1/\epsilon$, where
$\epsilon$ is the desired accuracy, for each fixed $\mu > 0$. This improves on the
algorithm of Wigderson and Yehudayoff that runs in quasi-polynomial time for any
$\mu > 0$ and the polynomial time algorithm of Dvir et al., which was shown by
Batman et al. to work for $\mu$ above a fixed constant. In fact, our algorithm
also works in the more general framework of Batman et al. in which there is no
a priori bound on the size of the support of the distribution. The algorithm we
analyze is implicit in previous work; our main contribution is to analyze the
algorithm by showing (via linear programming duality and connections to complex
analysis) that a certain matrix associated with the problem has a robust local
inverse even though its condition number is exponentially small. A corollary of
our result is the first polynomial time algorithm for learning DNFs in the
restriction access model of Dvir et al.
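To make the sampling model concrete, here is a minimal sketch of the lossy channel described above (Python, with hypothetical names; it illustrates only how lossy samples are generated, not the recovery algorithm):

```python
import random

def lossy_sample(x, mu):
    """Lossy channel: each bit of the hidden string x is kept with
    probability mu and replaced by '?' otherwise.
    (Illustrative sketch of the sampling model only; the function name
    and string representation are assumptions, not from the paper.)"""
    return ''.join(b if random.random() < mu else '?' for b in x)

# Example: draw a hidden string from a small support and observe one lossy sample.
support = {'1010': 0.7, '0110': 0.3}   # hypothetical unknown distribution
x = random.choices(list(support), weights=support.values())[0]
print(lossy_sample(x, mu=0.2))          # most coordinates become '?'
```

The recovery task is then to estimate the unknown distribution from many such partially erased observations.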
Clustering is difficult only when it does not matter
Numerous papers ask how difficult it is to cluster data. We suggest that the
more relevant and interesting question is how difficult it is to cluster data
sets {\em that can be clustered well}. More generally, despite the ubiquity and
the great importance of clustering, we still do not have a satisfactory
mathematical theory of clustering. In order to properly understand clustering,
it is clearly necessary to develop a solid theoretical basis for the area. For
example, from the perspective of computational complexity theory the clustering
problem seems very hard. Numerous papers introduce various criteria and
numerical measures to quantify the quality of a given clustering. The resulting
conclusions are pessimistic, since it is computationally difficult to find an
optimal clustering of a given data set, if we go by any of these popular
criteria. In contrast, the practitioners' perspective is much more optimistic.
Our explanation for this disparity of opinions is that complexity theory
concentrates on the worst case, whereas in reality we only care about data sets
that can be clustered well.
We introduce a theoretical framework of clustering in metric spaces that
revolves around a notion of "good clustering". We show that if a good
clustering exists, then in many cases it can be efficiently found. Our
conclusion is that contrary to popular belief, clustering should not be
considered a hard task.
Noisy population recovery in polynomial time
In the noisy population recovery problem of Dvir et al., the goal is to learn
an unknown distribution $f$ on binary strings of length $n$ from noisy samples.
For some parameter $\mu$, a noisy sample is generated by flipping
each coordinate of a sample from $f$ independently with probability
$(1-\mu)/2$. We assume an upper bound $k$ on the size of the support of the
distribution, and the goal is to estimate the probability of any string to
within some given error $\epsilon$. It is known that the algorithmic
complexity and sample complexity of this problem are polynomially related to
each other.
We show that for $\mu > 0$, the sample complexity (and hence the algorithmic
complexity) is bounded by a polynomial in $k$, $n$ and $1/\epsilon$,
improving upon the previous best bound of $k^{O(\log\log k)}\,\mathrm{poly}(n, 1/\epsilon)$ due to Lovett and Zhang.
Our proof combines ideas from Lovett and Zhang with a \emph{noise attenuated}
version of M\"{o}bius inversion. In turn, the latter crucially uses the
construction of a \emph{robust local inverse} due to Moitra and Saks.
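Analogously to the lossy model above, the noise model can be illustrated with a minimal sketch (Python, hypothetical names; this shows only how noisy samples arise, not the recovery procedure):

```python
import random

def noisy_sample(x, mu):
    """Noise model: each bit of the hidden string x is flipped independently
    with probability (1 - mu) / 2, so mu = 1 means no noise and mu = 0 means
    a uniformly random output.
    (Illustrative sketch of the sampling model only; names are assumptions.)"""
    p_flip = (1.0 - mu) / 2.0
    return ''.join(('1' if b == '0' else '0') if random.random() < p_flip else b
                   for b in x)

# Example: one noisy observation of a hidden string at noise parameter mu = 0.4.
print(noisy_sample('10110', mu=0.4))   # each bit flipped with probability 0.3
```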
- …