
    Pac-Learning Recursive Logic Programs: Efficient Algorithms

    We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional "basecase" oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs.
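    To make the equivalence-query protocol concrete, here is a minimal generic learner/oracle loop in Python. The names `equivalence_oracle`, `initial_hypothesis`, and `refine` are hypothetical placeholders used only for illustration; the paper's actual algorithm operates on determinate recursive clauses and is not reproduced here.

    ```python
    # Minimal sketch of exact learning from equivalence queries.
    # All function arguments are hypothetical placeholders, not the
    # paper's algorithm for recursive logic programs.

    def learn_with_equivalence_queries(equivalence_oracle, initial_hypothesis,
                                       refine, max_queries=1000):
        """Generic exact-learning loop.

        equivalence_oracle(h) returns None if h is equivalent to the target,
        otherwise a counterexample (x, label) on which h is wrong.
        refine(h, counterexample) returns an updated hypothesis.
        """
        h = initial_hypothesis
        for _ in range(max_queries):
            counterexample = equivalence_oracle(h)
            if counterexample is None:
                return h                     # exact identification
            h = refine(h, counterexample)    # must make measurable progress
        raise RuntimeError("query budget exceeded")
    ```

    By the standard argument, any class exactly learnable from polynomially many equivalence queries is also PAC-learnable, which is the implication the abstract invokes.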

    General Routing Algorithms for Star Graphs

    In designing algorithms for a specific parallel architecture, a programmer has to cope with topological and cardinality variations. Both of these problems increase the programmer's effort. However, an ideal shared-memory abstract parallel model called the parallel random access machine (PRAM) [KRUS86, KRUS88], which avoids these problems and is also simple to program, has been proposed. Unfortunately, the PRAM does not seem to be realizable with present or even foreseeable technologies. On the other hand, a packet routing technique can be employed to simulate the PRAM on a feasible parallel architecture without significant loss of efficiency. The routing problem is also important because of its intrinsic significance in distributed processing and its role in simulations among parallel models. The routing problem is defined as follows: given a specific network and a set of packets of information, where a packet is an (origin, destination) pair, the packets are initially placed on their origins, one per node, and must be routed in parallel to their destinations such that at most one packet passes through any link of the network at any time and all packets arrive at their destinations as quickly as possible. We are interested in a special case of the general routing problem called permutation routing, in which the destinations form a permutation of the origins. A routing algorithm is said to be oblivious if the path taken by each packet depends only on its source and destination. An oblivious routing strategy is preferable since it leads to a simple control structure for the individual processing elements, and oblivious routing algorithms can also be used in a distributed environment. In this paper we are concerned only with oblivious routing strategies.
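    To illustrate what "oblivious" means in practice, the following sketch computes a dimension-order (e-cube) route on a hypercube, where the path is a function of the source and destination alone. This is an assumption-laden illustration of obliviousness on a simpler network, not the star-graph routing algorithm developed in the paper.

    ```python
    # Sketch of an *oblivious* route computation: dimension-order (e-cube)
    # routing on a d-dimensional hypercube.  Illustration of obliviousness
    # only; not the star-graph algorithm of the paper.

    def ecube_path(src: int, dst: int, dimensions: int):
        """Return the node sequence from src to dst, correcting address bits
        from the lowest dimension to the highest."""
        path = [src]
        node = src
        for d in range(dimensions):
            if (node ^ dst) & (1 << d):      # bit d still differs
                node ^= (1 << d)             # traverse the edge in dimension d
                path.append(node)
        return path

    # Example: routing the packet (origin=0b000, destination=0b101) in a 3-cube.
    print(ecube_path(0b000, 0b101, 3))       # [0, 1, 5]
    ```

    Because the path depends only on the (origin, destination) pair, each processing element can compute its packet's next hop locally, which is exactly the property the abstract highlights.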

    Polynomial Learnability of Semilinear Sets

    We characterize learnability and non-learnability of subsets of N^m called 'semilinear sets', with respect to the distribution-free learning model of Valiant. In formal language terms, semilinear sets are exactly the class of 'letter-counts' (or Parikh images) of regular sets. We show that the class of semilinear sets of dimensions 1 and 2 is learnable, when the integers are encoded in unary. We complement this result with negative results of several different sorts, relying on hardness assumptions of varying degrees - from P ≠ NP and RP ≠ NP to the hardness of learning DNF. We show that the minimal consistent concept problem is NP-complete for this class, verifying the non-triviality of our learnability result. We also show that with respect to the binary encoding of integers, the corresponding 'prediction' problem is already as hard as that of DNF, for a class of subsets of N^m much simpler than semilinear sets. The present work represents an interesting class of countably infinite concepts for which the questions of learnability have been nearly completely characterized. In doing so, we demonstrate how various proof techniques developed by Pitt and Valiant [14], Blumer et al. [3], and Pitt and Warmuth [16] can be fruitfully applied in the context of formal languages.
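    For intuition, a linear set in N^m is given by an offset c and period vectors p_1, ..., p_k, and a semilinear set is a finite union of linear sets. The snippet below is a small brute-force membership test for dimension 2, written only to make the definition concrete; it is not the learning algorithm of the paper, and the search bound is an arbitrary illustrative choice.

    ```python
    # Illustrative only: brute-force membership test for a linear set
    # L(c; p1, ..., pk) = { c + sum_i lam_i * p_i : lam_i in N } in N^2.
    # Not the learning algorithm of the paper.
    from itertools import product

    def in_linear_set(v, c, periods, bound=50):
        """Check whether v lies in L(c; periods) by searching small coefficients."""
        k = len(periods)
        for lams in product(range(bound + 1), repeat=k):
            x = (c[0] + sum(l * p[0] for l, p in zip(lams, periods)),
                 c[1] + sum(l * p[1] for l, p in zip(lams, periods)))
            if x == tuple(v):
                return True
        return False

    # The Parikh image (letter-count set) of the regular language (ab)* over {a, b}
    # is the linear set L((0, 0); (1, 1)) = {(n, n) : n >= 0}.
    print(in_linear_set((3, 3), (0, 0), [(1, 1)]))   # True
    print(in_linear_set((2, 3), (0, 0), [(1, 1)]))   # False
    ```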

    On metric entropy, Vapnik-Chervonenkis dimension, and learnability for a class of distributions

    Cover title. Includes bibliographical references (p. 13-14). Research supported by the U.S. Army Research Office (DAAL03-86-K-0171) and by the Department of the Navy for SDIO. Author: Sanjeev R. Kulkarni.

    Complexity Results on Learning by Neural Nets

    We consider the computational complexity of learning by neural nets. We are interested in how hard it is to design appropriate neural net architectures and to train neural nets for general and specialized learning tasks. Our main result shows that the training problem for 2-cascade neural nets (which have only two non-input nodes, one of which is hidden) is NP-complete, which implies that finding an optimal net (in terms of the number of non-input units) that is consistent with a set of examples is also NP-complete. This result also demonstrates a surprising gap between the computational complexities of one-node (perceptron) and two-node neural net training problems, since the perceptron training problem can be solved in polynomial time by linear programming techniques. We conjecture that training a k-cascade neural net, which is a classical threshold network training problem, is also NP-complete, for each fixed k ≥ 2. We also show that the problem of finding an optimal perceptron (in terms of the number of non-zero weights) consistent with a set of training examples is NP-hard. Our neural net learning model encapsulates the idea of modular neural nets, which is a popular approach to overcoming the scaling problem in training neural nets. We investigate how much easier the training problem becomes if the class of concepts to be learned is known a priori and the net architecture is allowed to be sufficiently non-optimal. Finally, we classify several neural net optimization problems within the polynomial-time hierarchy.
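    The polynomial-time claim for single perceptrons rests on the fact that finding a linear threshold function consistent with a sample is a linear-programming feasibility problem. The sketch below illustrates that reduction with SciPy, assuming a strictly separable sample with a unit margin; it is an illustration of the standard reduction, not the construction used in the paper.

    ```python
    # Sketch: perceptron (single linear threshold unit) training as LP feasibility.
    # Find w, b with y_i * (w . x_i + b) >= 1 for all examples; assumes the
    # sample is linearly separable.  Illustration only.
    import numpy as np
    from scipy.optimize import linprog

    def train_perceptron_lp(X, y):
        """X: (n, d) array, y: labels in {-1, +1}. Returns (w, b) or None."""
        n, d = X.shape
        # Variables z = (w_1..w_d, b); constraints -y_i*(x_i.w + b) <= -1.
        A_ub = -(y[:, None] * np.hstack([X, np.ones((n, 1))]))
        b_ub = -np.ones(n)
        res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (d + 1))
        return (res.x[:d], res.x[d]) if res.success else None

    # Tiny example: AND of two Boolean inputs (linearly separable).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, -1, -1, 1], dtype=float)
    print(train_perceptron_lp(X, y))
    ```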

    Learning DFA for Simple Examples

    We present a framework for learning DFA from simple examples. We show that efficient PAC learning of DFA is possible if the class of distributions is restricted to simple distributions where a teacher might choose examples based on the knowledge of the target concept. This answers an open research question posed in Pitt's seminal paper: Are DFAs PAC-identifiable if examples are drawn from the uniform distribution, or some other known simple distribution? Our approach uses the RPNI algorithm for learning DFA from labeled examples. In particular, we describe an efficient learning algorithm for exact learning of the target DFA with high probability when a bound on the number of states (N) of the target DFA is known in advance. When N is not known, we show how this algorithm can be used for efficient PAC learning of DFAs.
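    As a rough illustration of the RPNI starting point, the snippet below builds the prefix tree acceptor (PTA) from a labeled sample. RPNI then merges PTA states in a canonical order, rejecting any merge that makes the automaton accept a negative example; that merging phase is only indicated in the comments, and the sample language is an arbitrary illustrative choice.

    ```python
    # Sketch: building the prefix tree acceptor (PTA) that RPNI starts from.
    # RPNI subsequently merges states in a fixed order, discarding merges
    # that cause a negative example to be accepted; that phase is omitted.

    def build_pta(positives, negatives):
        """Return (transitions, accepting, rejecting) for the PTA of the sample.
        States are integers; state 0 is the initial state."""
        transitions = {}            # (state, symbol) -> state
        accepting, rejecting = set(), set()
        next_state = 1

        def insert(word):
            nonlocal next_state
            state = 0
            for symbol in word:
                if (state, symbol) not in transitions:
                    transitions[(state, symbol)] = next_state
                    next_state += 1
                state = transitions[(state, symbol)]
            return state

        for w in positives:
            accepting.add(insert(w))
        for w in negatives:
            rejecting.add(insert(w))
        return transitions, accepting, rejecting

    # Sample for the language of strings over {a, b} ending in 'a'.
    trans, acc, rej = build_pta(positives=["a", "ba", "aa"], negatives=["", "b", "ab"])
    print(len(set(trans.values()) | {0}), "states;", "accepting:", acc)
    ```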

    Learning in Parallel

    In this paper, we extend Valiant's sequential model of concept learning from examples [Valiant 1984] and introduce models for the efficient learning of concept classes from examples in parallel. We say that a concept class is NC-learnable if it can be learned in polylog time with a polynomial number of processors. We show that several concept classes which are polynomial-time learnable are NC-learnable in constant time. Some other classes can be shown to be NC-learnable in logarithmic time, but not in constant time. Our main result shows that other classes, such as s-fold unions of geometrical objects in Euclidean space, which are polynomial-time learnable by a greedy set cover technique, are NC-learnable using a non-greedy technique. We also show that (unless P ⊆ RNC) several polynomial-time learnable concept classes related to linear programming are not NC-learnable. Equivalence of various parallel learning models and issues of fault-tolerance are also discussed.
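    One concrete instance of a class that is learnable in constant parallel time is monotone conjunctions: each positive example independently rules out the variables it falsifies, so the per-example work can be assigned to separate processors and combined with a logical-AND reduction. The vectorised sketch below stands in for that parallel step; it is an illustration under that assumption, not the set-cover construction described in the abstract.

    ```python
    # Sketch: learning a monotone conjunction from positive examples.
    # Each example can be processed independently (one processor per example)
    # and the results combined with an AND reduction, which is the pattern
    # behind constant-time NC-learnability of this class.  Illustration only.
    import numpy as np

    def learn_monotone_conjunction(positive_examples):
        """positive_examples: (n, d) boolean array.  Keep variable x_j in the
        hypothesis iff x_j is true in every positive example."""
        return np.logical_and.reduce(positive_examples, axis=0)

    # Target x1 AND x3 over 4 variables; three positive examples.
    P = np.array([[1, 0, 1, 1],
                  [1, 1, 1, 0],
                  [1, 0, 1, 0]], dtype=bool)
    print(learn_monotone_conjunction(P))   # [ True False  True False]
    ```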