
    From average case complexity to improper learning complexity

    The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems, and the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish hardness of improper learning (a.k.a. representation-independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from NP-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning. It was initiated by Kearns and Valiant (1989) and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption (Feige 2002) about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular:
    1. Learning DNFs is hard.
    2. Agnostically learning halfspaces with a constant approximation ratio is hard.
    3. Learning an intersection of ω(1) halfspaces is hard.
    Comment: 34 pages
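    For orientation, Feige's assumption can be phrased roughly as follows; this is a standard paraphrase of the refutation problem, not the paper's exact (stronger) generalization. A random 3-CNF formula φ over n variables with Δn clauses is unsatisfiable with high probability once the constant Δ is large enough, and an algorithm 𝒜 is said to refute random 3-SAT at density Δ if

        \[
          \Pr_{\phi \sim F_{n,\Delta n}}\bigl[\mathcal{A}(\phi) = \mathsf{unsat}\bigr] \;\ge\; \tfrac{1}{2}
          \qquad\text{and}\qquad
          \phi \in \mathrm{SAT} \;\Longrightarrow\; \mathcal{A}(\phi) \neq \mathsf{unsat}.
        \]

    Feige's hypothesis asserts that for every constant Δ, no polynomial-time algorithm satisfies both conditions; the paper's assumption extends this to a broader family of random constraint satisfaction problems.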

    Learning Max-CSPs via Active Constraint Acquisition

    Constraint acquisition can assist non-expert users in modelling their problems as constraint networks. In active constraint acquisition, this is achieved through an interaction between the learner, who posts examples, and the user, who classifies them as solutions or non-solutions. Although there has been recent progress in active constraint acquisition, the focus has so far been on learning satisfaction problems with hard constraints. In this paper, we deal with the problem of learning soft constraints in optimization problems via active constraint acquisition, specifically in the context of Max-CSP. To this end, we first introduce a new type of query for constraint acquisition, the partial preference query, and then present a novel algorithm that uses such queries to learn soft constraints in Max-CSPs. We also give experimental results.
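    The abstract does not spell out the algorithm, so the following is only a minimal self-contained sketch of the query type it introduces: a partial preference query shows the user two partial assignments over the same scope and asks which one is preferred. All names, the unit-cost assumption, and the exhaustive query loop are illustrative, not the paper's method.

        import itertools

        DOMAIN = [0, 1]
        SCOPES = list(itertools.combinations(range(3), 2))

        # Hypothetical hidden target: soft constraints (scope, forbidden tuple),
        # each with unit cost, that the learner must recover.
        TRUE = {((0, 1), (1, 1)), ((1, 2), (0, 1))}

        def partial_cost(scope, values):
            """Cost the hidden network assigns to a partial assignment."""
            return sum(1 for s, t in TRUE if s == scope and t == values)

        def prefer(scope, v1, v2):
            """Simulated partial preference query: 1 if v1 is worse than v2
            (higher hidden cost), -1 if better, 0 on a tie."""
            c1, c2 = partial_cost(scope, v1), partial_cost(scope, v2)
            return (c1 > c2) - (c1 < c2)

        learned = set()
        for scope in SCOPES:
            tuples = list(itertools.product(DOMAIN, repeat=len(scope)))
            for t in tuples:
                # A tuple strictly less preferred than some other tuple on the
                # same scope must violate a soft constraint; record it.
                if any(prefer(scope, t, u) == 1 for u in tuples if u != t):
                    learned.add((scope, t))

        print(learned == TRUE)  # True for this unit-cost example

    A real acquisition algorithm would choose informative queries rather than enumerating all tuple pairs, and would have to handle non-unit and overlapping costs; the sketch only shows the shape of the interaction.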

    Bayesian Network Structure Learning with Permutation Tests

    In the literature there are several studies on the performance of Bayesian network structure learning algorithms. The focus of these studies is almost always the heuristics the learning algorithms are based on, i.e. the maximisation algorithms (in score-based algorithms) or the techniques for learning the dependencies of each variable (in constraint-based algorithms). In this paper we investigate how the use of permutation tests instead of parametric ones affects the performance of Bayesian network structure learning from discrete data. Shrinkage tests are also covered, to provide a broad overview of the techniques developed in the current literature.
    Comment: 13 pages, 4 figures. Presented at the Conference 'Statistics for Complex Problems', Padova, June 15, 201
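    The core idea, replacing the asymptotic null distribution of an independence test with one generated by permutation, can be sketched for the simplest unconditional two-variable case. The G² (log-likelihood ratio) statistic is standard in constraint-based structure learning; the function names and Monte Carlo setup below are illustrative, not the paper's implementation.

        import numpy as np

        def g2_statistic(x, y):
            """G^2 (log-likelihood ratio) statistic for independence of two
            integer-coded discrete variables."""
            obs = np.zeros((x.max() + 1, y.max() + 1))
            np.add.at(obs, (x, y), 1)
            expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()
            nz = obs > 0
            return 2.0 * np.sum(obs[nz] * np.log(obs[nz] / expected[nz]))

        def permutation_pvalue(x, y, n_perm=1000, seed=0):
            """p-value with the null distribution obtained by permuting y,
            rather than from the asymptotic chi-squared approximation."""
            rng = np.random.default_rng(seed)
            observed = g2_statistic(x, y)
            null = (g2_statistic(x, rng.permutation(y)) for _ in range(n_perm))
            return (1 + sum(s >= observed for s in null)) / (1 + n_perm)

        rng = np.random.default_rng(1)
        x = rng.integers(0, 3, size=200)
        y = (x + rng.integers(0, 2, size=200)) % 3   # dependent on x
        print(permutation_pvalue(x, y))              # small: dependence detected

    Permuting y breaks any association with x while preserving both marginals, which is exactly the null hypothesis being tested; for conditional independence, one permutes within each configuration of the conditioning set.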