
    Active classification with comparison queries

    We study an extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label-class. For example, in a recommendation-system application (say, for restaurants), the annotator may be asked whether she liked or disliked a specific restaurant (a label query), or which of two restaurants she liked more (a comparison query). We focus on the class of halfspaces and show that, under natural assumptions such as a large margin or a bounded bit-description of the input examples, it is possible to reveal all the labels of a sample of size n using approximately O(log n) queries. This is an exponential improvement over classical active learning, where only label queries are allowed. We complement these results by showing that if any of these assumptions is removed, then Ω(n) queries are required in the worst case. Our results follow from a new general framework of active learning with additional queries. We identify a combinatorial dimension, called the inference dimension, that captures the query complexity when each additional query is determined by O(1) examples (such as comparison queries, each of which is determined by the two compared examples). Our results for halfspaces follow by bounding the inference dimension in the cases discussed above.
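    To make the query model concrete, here is a minimal Python sketch simulating the two oracle types for a homogeneous halfspace sign(w·x). It is not the paper's algorithm; the setup, the oracle names, and the chosen example pair are all illustrative assumptions. It shows the kind of inference comparison queries enable: some labels follow logically from previous answers and need not be queried.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: a homogeneous halfspace sign(w . x) over R^d.
d, n = 5, 20
w = rng.normal(size=d)
X = rng.normal(size=(n, d))

def label_query(i):
    """Annotator reveals the label of example i (liked / disliked)."""
    return 1 if X[i] @ w > 0 else -1

def comparison_query(i, j):
    """Annotator reveals which of the two examples she 'liked more',
    i.e. which has the larger value of w . x."""
    return i if X[i] @ w > X[j] @ w else j

# Inference without an extra label query: if example a is labeled +1 and
# the annotator prefers b over a, then w . x_b > w . x_a > 0, so b is +1 too.
a, b = 0, 1
if label_query(a) == 1 and comparison_query(a, b) == b:
    print("label of b inferred as +1 without a label query")
else:
    print("no inference in this direction; more queries would be needed")
```

    Roughly, the paper's O(log n) bound rests on showing that, under the margin or bit-complexity assumptions, a bounded number of such queries always suffices to infer the answers for a constant fraction of the remaining sample, so the unlabeled set shrinks geometrically.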

    Multiple Random Oracles Are Better Than One

    We study the problem of learning k-juntas given access to examples drawn from a number of different product distributions. Thus we wish to learn a function f: {−1, 1}^n → {−1, 1} that depends on k (unknown) coordinates. While the best-known algorithms for the general problem of learning a k-junta require running time n^k · poly(n, 2^k), we show that, given access to k different product distributions with biases separated by γ > 0, the functions may be learned in time poly(n, 2^k, γ^−k). More generally, given access to t ≤ k different product distributions, the functions may be learned in time n^{k/t} · poly(n, 2^k, γ^−k). Our techniques involve novel results in Fourier analysis, relating Fourier expansions with respect to different biases, and a generalization of Russo's formula.
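    As a rough illustration of biased Fourier analysis, the Python sketch below estimates degree-1 Fourier coefficients of a junta with respect to a p-biased product distribution, using the standardized basis φ(x) = (x − μ)/σ with μ = 2p − 1 and σ = 2√(p(1−p)). The junta f, its hidden coordinates, and the single-bias estimator are all assumptions for the demo; the paper's algorithm combines several biases, which this toy does not attempt.

```python
import numpy as np

rng = np.random.default_rng(1)

n, k, m = 10, 3, 20000
relevant = [0, 3, 7]  # hidden junta coordinates (assumed for the demo)

def f(x):
    # A 3-junta: majority vote of the three relevant coordinates.
    return np.sign(x[:, relevant].sum(axis=1)).astype(int)

def biased_sample(p, size):
    """Draw x in {-1,1}^n with P[x_i = 1] = p independently."""
    return np.where(rng.random((size, n)) < p, 1, -1)

p = 0.7
mu, sigma = 2 * p - 1, 2 * np.sqrt(p * (1 - p))
X = biased_sample(p, m)
y = f(X)

# Empirical degree-1 coefficients in the p-biased basis phi(x) = (x - mu)/sigma;
# irrelevant coordinates concentrate near 0, relevant ones stand out for majority.
coeffs = (y[:, None] * (X - mu) / sigma).mean(axis=0)
print("largest |coeff| coordinates:", np.argsort(-np.abs(coeffs))[:k])
```

    For a majority junta the relevant coordinates already show up at degree 1 under a single bias; functions like parity have no degree-1 signal, which is one reason combining distributions with different biases is needed in general.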

    Artificial Intelligence in Hungary - The First 20 Years

    A 2006 update of the 1996 overview of the history of artificial intelligence research in Hungary, with an extensive bibliography.