    New Bounds for Learning Intervals with Implications for Semi-Supervised Learning

    We study learning of initial intervals in the prediction model. We show that for each distribution D over the domain, there is an algorithm A_D whose probability of a mistake in round m is at most (1/2 + o(1)) · 1/m. We also show that the best possible bound that can be achieved in the case in which the same algorithm A must be applied for all distributions D is at least (1 − 1/√e − o(1)) · 1/m > (0.39 − o(1)) · 1/m. Informally, "knowing" the distribution D …
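The setting above can be illustrated with a small Monte Carlo sketch. The code below is an assumption-laden toy, not the paper's algorithm A_D: it learns an initial interval [0, t] from uniform samples using a simple midpoint rule (predict "inside" iff the point is left of the midpoint between the largest positive and smallest negative example seen), and estimates the probability of a mistake in round m. The decay of that probability like a constant times 1/m is what the abstract's bounds quantify; the threshold t, the midpoint rule, and the uniform distribution are all illustrative choices.

```python
import random

def simulate(m, trials=50000, t=0.37, seed=0):
    """Estimate the probability of a mistake in round m when learning the
    initial interval [0, t] under the uniform distribution on [0, 1].
    Illustrative midpoint rule only -- not the algorithm from the paper."""
    rng = random.Random(seed)
    mistakes = 0
    for _ in range(trials):
        lo, hi = 0.0, 1.0  # lo: largest positive seen, hi: smallest negative seen
        for _ in range(m - 1):
            x = rng.random()
            if x <= t:
                lo = max(lo, x)
            else:
                hi = min(hi, x)
        x = rng.random()              # the round-m point to be predicted
        pred = x <= (lo + hi) / 2.0   # predict "inside" iff left of the midpoint
        mistakes += pred != (x <= t)  # mistake if prediction disagrees with label
    return mistakes / trials

for m in (10, 20, 40):
    p = simulate(m)
    # p * m should hover near a constant, reflecting the Theta(1/m) rate
    print(m, round(p * m, 3))
```

Doubling m roughly halves the estimated mistake probability, which is the 1/m behaviour that both bounds in the abstract share; the bounds differ only in the constant in front of 1/m.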