Prediction-hardness of acyclic conjunctive queries
Abstract: A conjunctive query problem is the problem of determining whether a tuple belongs to the answer of a conjunctive query over a database. In this paper, a tuple, a conjunctive query, and a database in relational database theory are regarded, in inductive logic programming terminology, as a ground atom, a nonrecursive function-free definite clause, and a finite set of ground atoms, respectively. An acyclic conjunctive query problem is a conjunctive query problem satisfying an acyclicity condition. For this problem, we present hardness results for predicting acyclic conjunctive queries from an instance with a j-database, i.e., a database whose predicate symbols are at most j-ary. We also deal with two kinds of instances: a simple instance, which is a set of ground atoms, and an extended instance, which is a set of pairs of a ground atom and a description. We mainly show that, from both simple and extended instances, acyclic conjunctive queries are not polynomial-time predictable with j-databases (j ⩾ 3) under standard cryptographic assumptions, and that predicting acyclic conjunctive queries with 2-databases is as hard as predicting DNF formulas. Hence, acyclic conjunctive queries are a natural example for which the equivalence between subsumption-efficiency and efficient pac-learnability collapses, for both simple and extended instances.
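The membership problem the abstract starts from can be sketched as a brute-force search over variable assignments. This is a minimal illustration, not the paper's formalism: the representation (atoms as predicate/argument pairs, uppercase strings as variables) and the function name `cq_member` are assumptions made for the example.

```python
# Illustrative sketch: deciding whether a tuple belongs to the answer of a
# conjunctive query over a finite database of ground atoms, by exhaustively
# trying assignments of constants to variables.
from itertools import product

def cq_member(head_vars, body, database, answer_tuple):
    """Return True iff answer_tuple is in the answer of the query
    head(head_vars) :- body, evaluated over `database`.

    body     -- list of (predicate, args) atoms; args are variable names
                (uppercase strings) or constants.
    database -- set of ground (predicate, args) atoms, constants only.
    """
    variables = sorted({a for _, args in body for a in args if a.isupper()})
    constants = sorted({c for _, args in database for c in args})
    for assignment in product(constants, repeat=len(variables)):
        theta = dict(zip(variables, assignment))
        ground = lambda a: theta.get(a, a)
        # Every body atom must be satisfied and the head must hit the tuple.
        if all((p, tuple(ground(a) for a in args)) in database
               for p, args in body) \
                and tuple(ground(v) for v in head_vars) == answer_tuple:
            return True
    return False

# Database: edges of a small graph; query: "X reaches Z in two steps".
db = {("edge", ("a", "b")), ("edge", ("b", "c"))}
query_body = [("edge", ("X", "Y")), ("edge", ("Y", "Z"))]
print(cq_member(("X", "Z"), query_body, db, ("a", "c")))  # True
```

The exhaustive search makes plain why the general problem is expensive: the number of assignments grows exponentially with the number of variables, which is exactly what the acyclicity restriction in the paper is meant to tame.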
Pac-Learning Recursive Logic Programs: Efficient Algorithms
We present algorithms that learn certain classes of function-free recursive
logic programs in polynomial time from equivalence queries. In particular, we
show that a single k-ary recursive constant-depth determinate clause is
learnable. Two-clause programs consisting of one learnable recursive clause and
one constant-depth determinate non-recursive clause are also learnable, if an
additional ``basecase'' oracle is assumed. These results immediately imply the
pac-learnability of these classes. Although these classes of learnable
recursive programs are very constrained, it is shown in a companion paper that
they are maximally general, in that generalizing either class in any natural
way leads to a computationally difficult learning problem. Thus, taken together
with its companion paper, this paper establishes a boundary of efficient
learnability for recursive logic programs.
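The equivalence-query model used above can be illustrated on a much simpler concept class. The sketch below learns monotone conjunctions over boolean variables from equivalence queries; it is only a demonstration of the query model, not the paper's algorithm for recursive clauses, and all names in it are made up for the example.

```python
# Illustrative sketch of learning with equivalence queries: the learner
# proposes a hypothesis, and an oracle returns a counterexample (an input
# on which target and hypothesis disagree) or None if they are equivalent.
from itertools import product

def equivalence_oracle(target, hypothesis, n):
    for x in product([0, 1], repeat=n):
        if target(x) != hypothesis(x):
            return x
    return None

def learn_monotone_conjunction(target, n):
    relevant = set(range(n))  # start with the most specific conjunction
    def hyp(x):
        return all(x[i] for i in relevant)
    while True:
        cex = equivalence_oracle(target, hyp, n)
        if cex is None:
            return relevant
        # Counterexamples to a most-specific-so-far hypothesis are positive:
        # drop the variables that are 0 in the counterexample.
        relevant = {i for i in relevant if cex[i] == 1}

# Target concept: x0 AND x2 over 4 variables.
print(sorted(learn_monotone_conjunction(lambda x: x[0] and x[2], 4)))  # [0, 2]
```

Each counterexample removes at least one variable, so the learner asks at most n + 1 equivalence queries; polynomial query bounds of this flavor are what "learnable from equivalence queries" means, and they imply pac-learnability as the abstract notes.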
Conjunctive Queries: Unique Characterizations and Exact Learnability
We answer the question of which conjunctive queries are uniquely characterized by polynomially many positive and negative examples, and how to construct such examples efficiently. As a consequence, we obtain a new efficient exact learning algorithm for a class of conjunctive queries. At the core of our contributions lie two new polynomial-time algorithms for constructing frontiers in the homomorphism lattice of finite structures. We also discuss implications for the unique characterizability and learnability of schema mappings and of description logic concepts.
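The basic operation underlying the homomorphism lattice mentioned above is testing whether one finite structure maps homomorphically into another. A brute-force sketch, under an assumed representation (relations as dicts from relation name to sets of tuples), looks like this:

```python
# Illustrative sketch: exhaustive test for a homomorphism between two finite
# relational structures. A homomorphism is a map on domains that preserves
# every tuple of every relation.
from itertools import product

def homomorphism_exists(dom_a, rels_a, dom_b, rels_b):
    """True iff some h: dom_a -> dom_b preserves every relation in rels_a."""
    for images in product(dom_b, repeat=len(dom_a)):
        h = dict(zip(dom_a, images))
        if all(tuple(h[x] for x in t) in rels_b.get(r, set())
               for r, tuples in rels_a.items() for t in tuples):
            return True
    return False

# A directed 3-cycle maps homomorphically onto a self-loop,
# but not onto a single loop-free directed edge.
cycle = {"E": {(0, 1), (1, 2), (2, 0)}}
loop = {"E": {(9, 9)}}
edge = {"E": {(0, 1)}}
print(homomorphism_exists([0, 1, 2], cycle, [9], loop))    # True
print(homomorphism_exists([0, 1, 2], cycle, [0, 1], edge))  # False
```

Since conjunctive queries correspond to finite structures and query containment to homomorphism existence, positive and negative examples for a query are exactly structures that lie above or off it in this lattice, which is why frontier constructions yield unique characterizations.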
Pac-learning Recursive Logic Programs: Negative Results
In a companion paper it was shown that the class of constant-depth
determinate k-ary recursive clauses is efficiently learnable. In this paper we
present negative results showing that any natural generalization of this class
is hard to learn in Valiant's model of pac-learnability. In particular, we show
that the following program classes are cryptographically hard to learn:
programs with an unbounded number of constant-depth linear recursive clauses;
programs with one constant-depth determinate clause containing an unbounded
number of recursive calls; and programs with one linear recursive clause of
constant locality. These results immediately imply the non-learnability of any
more general class of programs. We also show that learning a constant-depth
determinate program with either two linear recursive clauses or one linear
recursive clause and one non-recursive clause is as hard as learning boolean
DNF. Together with positive results from the companion paper, these negative
results establish a boundary of efficient learnability for recursive
function-free clauses.
Pac-Learning Non-Recursive Prolog Clauses
Abstract: Recently there has been an increasing amount of research on learning concepts expressed in subsets of Prolog; the term inductive logic programming (ILP) has been used to describe this growing body of research. This paper seeks to expand the theoretical foundations of ILP by investigating the pac-learnability of logic programs. We focus on programs consisting of a single function-free non-recursive clause, and on generalizations of a language known to be pac-learnable: namely, the language of determinate function-free clauses of constant depth. We demonstrate that a number of syntactic generalizations of this language are hard to learn, but that the language can be generalized to clauses of constant locality while still allowing pac-learnability. More specifically, we first show that determinate clauses of log depth are not pac-learnable, regardless of the language used to represent hypotheses. We then investigate the effect of allowing indeterminacy in a clause, and show that clauses with k indeterminate variables are as hard to learn as DNF. We next show that a more restricted language of clauses with bounded indeterminacy is learnable using k-CNF to represent hypotheses, and that restricting the "locality" of a clause to a constant allows pac-learnability even if an arbitrary amount of indeterminacy is allowed. This last result is also shown to be a strict generalization of the previous result for determinate function-free clauses of constant depth. Finally, we present some extensions of these results to logic programs with multiple clauses.
Learning hierarchical decomposition rules for planning : an inductive logic programming approach
Artificial Intelligence (AI) planning techniques have been central to automating a gamut of tasks, from mundane route planning and beer production to the ethereal image processing of spaceship images. Of all the planning techniques, hierarchical-decomposition planning has been the technique most employed in industrial-strength planners. Hierarchical-decomposition planning is performed by recursively decomposing a planning task into its subtasks, until the decomposition results in primitive tasks which can be directly achieved by executing primitive actions. Hierarchical-decomposition planning is knowledge intensive; it exploits knowledge of the structure and the constraints of a planning domain to decompose a task into subtasks. Because dependence on human experts for this knowledge leads to a knowledge-acquisition bottleneck, machine learning of this domain-specific knowledge becomes important. There exist two opportunities for learning in the context of hierarchical-decomposition planning. One is to learn how a planning task decomposes into subtasks. The other is to learn control knowledge to choose among various decompositions for a task, depending upon the situation. In this dissertation, the focus is on the former; more specifically, we focus on learning rules for task or goal decompositions. Goal-decomposition rules (d-rules) decompose goals into a sequence of subgoals under certain conditions. These are a special case of hierarchical task networks (HTNs). The methodology we used for learning d-rules is to map d-rules to Horn clauses, and thus transform the problem of learning d-rules into learning Horn clauses. We developed provably correct algorithms for learning Horn clauses. Our algorithms are based on a generalize-and-test method, where inductive least-general generalization of positive examples is followed by pruning of irrelevant literals by asking queries or performing self-testing.
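The least-general generalization step in the generalize-and-test method above can be sketched concretely. The following is a minimal version of Plotkin's lgg on first-order terms, not the dissertation's algorithm; the term representation (nested tuples with the functor first) is an assumption made for the example.

```python
# Illustrative sketch of Plotkin's least-general generalization (lgg):
# the most specific term that subsumes both inputs, with mismatched
# subterm pairs consistently replaced by fresh variables.
def lgg(t1, t2, table=None):
    """Terms are constants (strings) or (functor, arg, ...) tuples."""
    if table is None:
        table = {}
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same functor and arity: generalize argument-wise.
        return (t1[0],) + tuple(lgg(a, b, table) for a, b in zip(t1[1:], t2[1:]))
    # Mismatch: the same pair of subterms always maps to the same variable.
    if (t1, t2) not in table:
        table[(t1, t2)] = "V%d" % len(table)
    return table[(t1, t2)]

# lgg(p(a, f(a)), p(b, f(b))) = p(V0, f(V0))
print(lgg(("p", "a", ("f", "a")), ("p", "b", ("f", "b"))))
```

Note that the same mismatched pair is generalized to the same variable in both argument positions; preserving such co-occurrences is what makes the result least general, and it is the starting point that the pruning of irrelevant literals then refines.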
We implemented systems that are founded in the theoretical algorithms, and tested the applicability of the systems in two planning domains: a robot navigation domain and an air-traffic control domain. One of these systems, ExEL, learned from solved problems and expert-answered queries. The other, LeXer, learned from unsolved but ordered problems, or exercises, and self-testing. The applicability of the theoretical algorithms developed for learning Horn clauses, however, transcends the learning of d-rules and even the learning of the more general HTNs.
Keywords: Decomposition method, Rule-based programming, Machine learning