4 research outputs found

    New bounds on classical and quantum one-way communication complexity

    In this paper we provide new bounds on classical and quantum distributional communication complexity in the two-party, one-way model of communication. In the classical model, our bound extends the well-known upper bound of Kremer, Nisan and Ron to include non-product distributions. We show that for a boolean function f: X x Y -> {0,1}, a non-product distribution mu on X x Y, and a constant epsilon in (0,1/2): D_{epsilon}^{1, mu}(f) = O((I(X:Y)+1) vc(f)), where D_{epsilon}^{1, mu}(f) represents the one-way distributional communication complexity of f with error at most epsilon under mu; vc(f) represents the Vapnik-Chervonenkis dimension of f; and I(X:Y) represents the mutual information, under mu, between the random inputs of the two parties. For a non-boolean function f: X x Y -> [k], we show a similar upper bound on D_{epsilon}^{1, mu}(f) in terms of k, I(X:Y) and the pseudo-dimension of f' = f/k. In the quantum one-way model we provide a lower bound on the distributional communication complexity, under product distributions, of a function f, in terms of the well-studied complexity measure of f referred to as the rectangle bound or the corruption bound of f. We show that for a non-boolean total function f: X x Y -> Z and a product distribution mu on X x Y, Q_{epsilon^3/8}^{1, mu}(f) = Omega(rec_epsilon^{1, mu}(f)), where Q_{epsilon^3/8}^{1, mu}(f) represents the quantum one-way distributional communication complexity of f with error at most epsilon^3/8 under mu and rec_epsilon^{1, mu}(f) represents the one-way rectangle bound of f with error at most epsilon under mu. Similarly, for a non-boolean partial function f: X x Y -> Z U {*} and a product distribution mu on X x Y, we show Q_{epsilon^6/(2 x 15^4)}^{1, mu}(f) = Omega(rec_epsilon^{1, mu}(f)). Comment: ver 1, 19 pages.
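    The two quantities appearing in the classical upper bound above are easy to compute for small instances. Below is a minimal illustrative sketch (our own, not code from the paper; the function names are made up) that evaluates vc(f), viewing the rows f(x, .) as a concept class over Y, and the mutual information I(X:Y) of an input distribution mu given as a probability matrix:

from itertools import combinations
import math

def vc_dimension(f):
    """Brute-force VC dimension of the concept class {f[x, :] : x in X}, f a 0/1 matrix."""
    n_cols = len(f[0])
    best = 0
    for d in range(1, n_cols + 1):
        shattered = False
        for cols in combinations(range(n_cols), d):
            patterns = {tuple(row[c] for c in cols) for row in f}
            if len(patterns) == 2 ** d:   # this set of columns is shattered by the rows
                shattered = True
                break
        if not shattered:                 # if no size-d set is shattered, none larger is
            return best
        best = d
    return best

def mutual_information(mu):
    """I(X:Y) in bits for a joint input distribution mu, given as a probability matrix."""
    px = [sum(row) for row in mu]
    py = [sum(col) for col in zip(*mu)]
    return sum(p * math.log2(p / (px[i] * py[j]))
               for i, row in enumerate(mu)
               for j, p in enumerate(row) if p > 0)

# Toy example: f(x, y) = 1 iff x == y, with a correlated (non-product) input distribution.
f = [[1, 0], [0, 1]]
mu = [[0.4, 0.1], [0.1, 0.4]]
print(vc_dimension(f), mutual_information(mu))   # 1 and ~0.28 bits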

    Optimal Quantum Sample Complexity of Learning Algorithms

    In learning theory, the VC dimension of a concept class $C$ is the most common way to measure its "richness." In the PAC model, $\Theta\big(\frac{d}{\varepsilon} + \frac{\log(1/\delta)}{\varepsilon}\big)$ examples are necessary and sufficient for a learner to output, with probability $1-\delta$, a hypothesis $h$ that is $\varepsilon$-close to the target concept $c$. In the related agnostic model, where the samples need not come from a $c \in C$, we know that $\Theta\big(\frac{d}{\varepsilon^2} + \frac{\log(1/\delta)}{\varepsilon^2}\big)$ examples are necessary and sufficient to output an hypothesis $h \in C$ whose error is at most $\varepsilon$ worse than the best concept in $C$. Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson, who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atici and Servedio, improved by Zhang, showed that in the PAC setting, quantum examples cannot be much more powerful: the required number of quantum examples is $\Omega\big(\frac{d^{1-\eta}}{\varepsilon} + d + \frac{\log(1/\delta)}{\varepsilon}\big)$ for all $\eta > 0$. Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds and yields the same bounds for quantum sample complexity up to a $\log(d/\varepsilon)$ factor. We then give a second approach that avoids the log-factor loss, based on analyzing the behavior of the "Pretty Good Measurement" on the quantum state identification problems that correspond to learning. This shows classical and quantum sample complexity are equal up to constant factors. Comment: 31 pages LaTeX. arXiv abstract shortened to fit in their 1920-character limit. Version 3: many small changes, no change in result.
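    For concreteness, the Θ(·) expressions above translate into sample counts as in the following sketch; this is only an order-of-magnitude illustration of the stated bounds (the constants hidden by Θ(·) and the base of the logarithm are ignored), not code from the paper:

import math

def sample_complexity(d, eps, delta, agnostic=False):
    """Leading-order number of examples from the Theta(.) bounds above:
    d/eps + log(1/delta)/eps in the PAC model, with eps^2 in the agnostic model
    (hidden constant factors are ignored)."""
    power = 2 if agnostic else 1
    return math.ceil(d / eps ** power + math.log(1 / delta) / eps ** power)

# VC dimension 10, error eps = 0.05, confidence 1 - delta = 0.99:
print(sample_complexity(10, 0.05, 0.01))                 # PAC: ~293 examples
print(sample_complexity(10, 0.05, 0.01, agnostic=True))  # agnostic: ~5843 examples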

    Correlation in Hard Distributions in Communication Complexity

    We study the effect that the amount of correlation in a bipartite distribution has on the communication complexity of a problem under that distribution. We introduce a new family of complexity measures that interpolates between the two previously studied extreme cases: the (standard) randomised communication complexity and the case of distributional complexity under product distributions. We give a tight characterisation of the randomised complexity of Disjointness under distributions with mutual information $k$, showing that it is $\Theta(\sqrt{n(k+1)})$ for all $0 \leq k \leq n$. This smoothly interpolates between the lower bounds of Babai, Frankl and Simon for the product distribution case ($k = 0$), and the bound of Razborov for the randomised case. The upper bounds improve and generalise what was known for product distributions, and imply that any tight bound for Disjointness needs $\Omega(n)$ bits of mutual information in the corresponding distribution. We study the same question in the distributional quantum setting, and show a lower bound of $\Omega((n(k+1))^{1/4})$, and an upper bound, matching up to a logarithmic factor. We show that there are total Boolean functions $f_d$ on $2n$ inputs that have distributional communication complexity $O(\log n)$ under all distributions of information up to $o(n)$, while the (interactive) distributional complexity maximised over all distributions is $\Theta(\log d)$ for $6n \leq d \leq 2^{n/100}$. We show that in the setting of one-way communication under product distributions, the dependence of communication cost on the allowed error $\epsilon$ is multiplicative in $\log(1/\epsilon)$ -- the previous upper bounds had a dependence of more than $1/\epsilon$.
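    To make the interpolation concrete, the sketch below (our own illustration, constants suppressed) evaluates the Θ(√(n(k+1))) characterisation for Disjointness at a few values of the mutual information k, showing how it moves from roughly √n at k = 0 to roughly n at k = n:

import math

def disjointness_bound(n, k):
    """Leading-order randomised communication for Disjointness under a distribution
    with mutual information k, per the Theta(sqrt(n(k+1))) characterisation
    (hidden constants ignored)."""
    assert 0 <= k <= n
    return math.sqrt(n * (k + 1))

n = 10_000
for k in (0, 100, n // 2, n):
    print(f"k = {k:>5}: ~{disjointness_bound(n, k):,.0f} bits")
# k = 0 gives ~sqrt(n) (the Babai-Frankl-Simon product-distribution regime);
# k = n gives ~n (Razborov's bound for the worst case).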

    Optimal quantum sample complexity of learning algorithms

    In learning theory, the VC dimension of a concept class C is the most common way to measure its “richness.” A fundamental result says that the number of examples needed to learn an unknown target concept c∈C under an unknown distribution D, is tightly determined by the VC dimension d of the concept class C. Specifically, in the PAC model Θ(d/ϵ + log(1/δ)/ϵ) examples are necessary and sufficient for a learner to output, with probability 1−δ, a hypothesis h that is ϵ-close to the target concept c (measured under D). In the related agnostic model, where the samples need not come from a c∈C, we know that Θ(d/ϵ² + log(1/δ)/ϵ²) examples are necessary and sufficient to output an hypothesis h∈C whose error is at most ϵ worse than the error of the best concept in C. Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson (1999), who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atıcı and Servedio (2005), improved by Zhang (2010), showed that in the PAC setting (where the learner has to succeed for every distribution), quantum examples cannot be much more powerful: the required number of quantum examples is Ω(d^(1−η)/ϵ + d + log(1/δ)/ϵ) for arbitrarily small constant η>0. Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two proof approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds and yields the same bounds for quantum sample complexity up to a log(d/ϵ) factor. We then give a second approach that avoids the log-factor loss, based on analyzing the behavior of the “Pretty Good Measurement” on the quantum state-identification problems that correspond to learning. This shows classical and quantum sample complexity are equal up to constant factors for every concept class C.
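    The “Pretty Good Measurement” used in the second proof approach has a simple closed form: for an ensemble {(p_i, ρ_i)} it measures with POVM elements E_i = ρ^{-1/2} p_i ρ_i ρ^{-1/2}, where ρ = Σ_i p_i ρ_i. Below is a minimal numerical sketch of that measurement on a toy state-identification problem (our own illustration, assuming numpy; it is not the paper's analysis):

import numpy as np

def pgm_success_probability(priors, states):
    """Average success probability of the Pretty Good Measurement on an ensemble
    of density matrices `states` with prior probabilities `priors`."""
    rho = sum(p * s for p, s in zip(priors, states))
    # Inverse square root of rho on its support (pseudo-inverse if rho is singular).
    vals, vecs = np.linalg.eigh(rho)
    inv_sqrt = np.array([v ** -0.5 if v > 1e-12 else 0.0 for v in vals])
    r = vecs @ np.diag(inv_sqrt) @ vecs.conj().T
    povm = [p * (r @ s @ r) for p, s in zip(priors, states)]   # E_i = rho^{-1/2} p_i rho_i rho^{-1/2}
    return float(np.real(sum(p * np.trace(E @ s)
                             for p, E, s in zip(priors, povm, states))))

# Toy example: distinguish |0> from |+> given with equal priors. For two pure states
# the PGM is optimal, so this prints ~0.854, the Helstrom success probability.
ket0 = np.array([[1.0], [0.0]])
ket_plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
rho0, rho_plus = ket0 @ ket0.T, ket_plus @ ket_plus.T
print(pgm_success_probability([0.5, 0.5], [rho0, rho_plus]))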