4,708 research outputs found

    A lower bound on the quantum query complexity of read-once functions

    We establish a lower bound of $\Omega(\sqrt{n})$ on the bounded-error quantum query complexity of read-once Boolean functions, providing evidence for the conjecture that $\Omega(\sqrt{D(f)})$ is a lower bound for all Boolean functions. Our technique extends a result of Ambainis, based on the idea that successful computation of a function requires ``decoherence'' of initially coherently superposed inputs in the query register that have different values of the function. The number of queries is bounded by comparing the required total amount of decoherence over a judiciously selected set of input-output pairs to an upper bound on the amount achievable in a single query step. We use an extension of this result to general weights on input pairs, and general superpositions of inputs.

    Comment: 12 pages, LaTeX
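    For context, a standard formulation of the Ambainis adversary bound that this work extends (paraphrased from the literature, not quoted from the paper) reads as follows in LaTeX:

        % Ambainis's (unweighted) adversary bound -- a standard statement
        % from the literature; the paper extends it to general weights on
        % input pairs and general superpositions of inputs.
        Let $X \subseteq f^{-1}(0)$, $Y \subseteq f^{-1}(1)$, and let
        $R \subseteq X \times Y$ be a relation such that every $x \in X$ is
        related to at least $m$ elements of $Y$, every $y \in Y$ to at least
        $m'$ elements of $X$, and for every $x$ (resp. $y$) and every index
        $i$, at most $\ell$ related $y$ (resp. $\ell'$ related $x$) differ
        from it in coordinate $i$. Then
        \[
          Q_2(f) \;=\; \Omega\!\left(\sqrt{\frac{m\,m'}{\ell\,\ell'}}\right).
        \]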

    A Polynomial Time Algorithm for Lossy Population Recovery

    We give a polynomial time algorithm for the lossy population recovery problem. In this problem, the goal is to approximately learn an unknown distribution on binary strings of length $n$ from lossy samples: for some parameter $\mu$, each coordinate of the sample is preserved with probability $\mu$ and otherwise is replaced by a `?'. The running time and number of samples needed for our algorithm are polynomial in $n$ and $1/\varepsilon$ for each fixed $\mu > 0$. This improves on the algorithm of Wigderson and Yehudayoff, which runs in quasi-polynomial time for any $\mu > 0$, and on the polynomial time algorithm of Dvir et al., which was shown by Batman et al. to work for $\mu \gtrapprox 0.30$. In fact, our algorithm also works in the more general framework of Batman et al., in which there is no a priori bound on the size of the support of the distribution. The algorithm we analyze is implicit in previous work; our main contribution is to analyze it by showing (via linear programming duality and connections to complex analysis) that a certain matrix associated with the problem has a robust local inverse even though its condition number is exponentially small. A corollary of our result is the first polynomial time algorithm for learning DNFs in the restriction access model of Dvir et al.
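    To make the sampling model concrete, here is a minimal Python sketch of how one lossy sample is generated (the function name and the toy distribution are illustrative, not from the paper): each coordinate of a drawn string survives with probability mu and is otherwise replaced by '?'.

        import random

        def lossy_sample(distribution, mu):
            """Draw a string from `distribution` (a dict mapping bit-strings
            to probabilities), keep each coordinate with probability mu, and
            replace it with '?' otherwise."""
            strings, probs = zip(*distribution.items())
            x = random.choices(strings, weights=probs)[0]
            return "".join(b if random.random() < mu else "?" for b in x)

        # Toy example: strings of length n = 5, erasure parameter mu = 0.3.
        dist = {"01101": 0.5, "11000": 0.5}
        print(lossy_sample(dist, mu=0.3))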

    Poverty in Belgium

    The EU Statistics on Income and Living Conditions (SILC) surveys give a harmonised source of data, making it possible to get a good idea of inequality and poverty at both the Belgian and European levels. The disposable income distribution appears to be slightly more egalitarian in Belgium than the EU15 average, and around 15 p.c. of the population lives below the poverty line in our country, compared with 16 p.c. in the EU15 as a whole.

    Poverty can be defined in many different ways. The rate of monetary poverty corresponds to the percentage of the population with an income below the poverty line; the European Union has conventionally set this threshold at 60 p.c. of the median income. Other approaches (such as that based on material deprivation, and the subjective approach, which relies on the personal perception of the people being surveyed) contribute to a better understanding of the true nature of poverty, but they are not a perfect match: the perceived rate of poverty is higher in Belgium and France than the poverty rate based on relative income, whereas the reverse is true in the United Kingdom. The monetary poverty indicators calculated on the basis of the SILC surveys are given preference in this article, even though they are not immune to problems. In particular, disposable income as calculated from the SILC surveys does not take account of several components, including the imputed rent for households that own their home.

    For households with members of working age, employment offers good protection against poverty, provided a high enough number of hours are worked at an adequate wage level. In Belgium, the minimum wage tends to limit the number of working poor. Thus, households with a full 100 p.c. work intensity rate in our country enjoy the lowest poverty rate in the EU15, regardless of whether or not they have children in the home. Single parents make up the category of households at the highest risk of poverty. The proportion of retirees living below the poverty line is also higher than that among the population of working age. The situation of the elderly nevertheless needs to be put into perspective, because proportionally more of them own their home than in the rest of the population.

    Education is a key factor for employment. A high level of education goes hand in hand with a lower likelihood both of falling into poverty and of remaining poor for long periods of time. Ensuring access to quality education for all is thus essential for promoting equal opportunities. Longitudinal data show that, at any given moment, a large number of people are falling into or getting out of poverty. By comparison with other European countries, Belgium has a very low poverty entry rate, but it also has a fairly low poverty exit rate.

    Keywords: poverty, SILC, Belgium, EU
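    As a minimal illustration of the monetary poverty definition used above (a threshold at 60 p.c. of the median income), the following Python sketch computes the poverty line and the headcount poverty rate; the income figures are made up for illustration only.

        from statistics import median

        # EU convention: the poverty line is 60% of the median disposable
        # income; the monetary poverty rate is the share below that line.
        incomes = [9_000, 14_000, 18_000, 21_000, 24_000, 27_000, 31_000, 45_000]

        poverty_line = 0.6 * median(incomes)
        poverty_rate = sum(y < poverty_line for y in incomes) / len(incomes)

        print(f"poverty line: {poverty_line:.0f}")   # 13500
        print(f"poverty rate: {poverty_rate:.1%}")   # 12.5%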

    Nondeterministic graph property testing

    A property of finite graphs is called nondeterministically testable if it has a "certificate" such that, once the certificate is specified, its correctness can be verified by random local testing. In this paper we study certificates that consist of one or more unary and/or binary relations on the nodes, in the case of dense graphs. Using the theory of graph limits, we prove that nondeterministically testable properties are also deterministically testable.

    Comment: Version 2: 11 pages; we allow orientation in the certificate, describe new applications
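    In the dense-graph model referenced here, "random local testing" means sampling a small set of nodes and inspecting only the subgraph they induce. A minimal Python sketch of that access pattern (the sampling routine and the triangle-freeness check are illustrative assumptions, not the paper's construction):

        import random

        def local_test(adj, k, local_check):
            """Sample k random nodes and run `local_check` on the induced
            subgraph only -- the dense-model notion of local testing."""
            sample = random.sample(range(len(adj)), k)
            induced = [[adj[u][v] for v in sample] for u in sample]
            return local_check(induced)

        def triangle_free(sub):
            # Illustrative local check: no triangle among the sampled nodes.
            m = len(sub)
            return not any(sub[i][j] and sub[j][k] and sub[i][k]
                           for i in range(m)
                           for j in range(i + 1, m)
                           for k in range(j + 1, m))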

    Clustering is difficult only when it does not matter

    Numerous papers ask how difficult it is to cluster data. We suggest that the more relevant and interesting question is how difficult it is to cluster data sets {\em that can be clustered well}. More generally, despite the ubiquity and the great importance of clustering, we still do not have a satisfactory mathematical theory of clustering. In order to properly understand clustering, it is clearly necessary to develop a solid theoretical basis for the area. For example, from the perspective of computational complexity theory the clustering problem seems very hard. Numerous papers introduce various criteria and numerical measures to quantify the quality of a given clustering. The resulting conclusions are pessimistic, since it is computationally difficult to find an optimal clustering of a given data set by any of these popular criteria. In contrast, the practitioners' perspective is much more optimistic. Our explanation for this disparity of opinions is that complexity theory concentrates on the worst case, whereas in reality we only care about data sets that can be clustered well. We introduce a theoretical framework of clustering in metric spaces that revolves around a notion of "good clustering". We show that if a good clustering exists, then in many cases it can be efficiently found. Our conclusion is that, contrary to popular belief, clustering should not be considered a hard task.

    Noisy population recovery in polynomial time

    In the noisy population recovery problem of Dvir et al., the goal is to learn an unknown distribution $f$ on binary strings of length $n$ from noisy samples. For some parameter $\mu \in [0,1]$, a noisy sample is generated by flipping each coordinate of a sample from $f$ independently with probability $(1-\mu)/2$. We assume an upper bound $k$ on the size of the support of the distribution, and the goal is to estimate the probability of any string to within some given error $\varepsilon$. It is known that the algorithmic complexity and sample complexity of this problem are polynomially related to each other. We show that for $\mu > 0$, the sample complexity (and hence the algorithmic complexity) is bounded by a polynomial in $k$, $n$ and $1/\varepsilon$, improving upon the previous best result of $\mathsf{poly}(k^{\log\log k}, n, 1/\varepsilon)$ due to Lovett and Zhang. Our proof combines ideas from Lovett and Zhang with a \emph{noise attenuated} version of Möbius inversion. In turn, the latter crucially uses the construction of a \emph{robust local inverse} due to Moitra and Saks.
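    To make the noise model concrete, here is a minimal Python sketch of how a noisy sample is generated (the function name and the toy distribution are illustrative, not from the paper): draw a string from f, then flip each coordinate independently with probability (1 - mu)/2.

        import random

        def noisy_sample(distribution, mu):
            """Draw a string from `distribution` (a dict mapping bit-strings
            to probabilities), then flip each coordinate independently with
            probability (1 - mu) / 2."""
            strings, probs = zip(*distribution.items())
            x = random.choices(strings, weights=probs)[0]
            flip_p = (1 - mu) / 2
            return "".join(str(1 - int(b)) if random.random() < flip_p else b
                           for b in x)

        # Toy example: support size k = 2, string length n = 5, mu = 0.8.
        dist = {"00000": 0.7, "11111": 0.3}
        print(noisy_sample(dist, mu=0.8))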