277 research outputs found
Prediction-hardness of acyclic conjunctive queries
A conjunctive query problem asks whether a tuple belongs to the answer of a conjunctive query over a database. In this paper, the tuple, the conjunctive query and the database of relational database theory are regarded, in inductive logic programming terminology, as a ground atom, a nonrecursive function-free definite clause and a finite set of ground atoms, respectively. An acyclic conjunctive query problem is a conjunctive query problem with the additional requirement of acyclicity. For this problem, we present hardness results for predicting acyclic conjunctive queries from an instance with a j-database, i.e., a database whose predicate symbols are at most j-ary. We also deal with two kinds of instances: a simple instance, which is a set of ground atoms, and an extended instance, which is a set of pairs of a ground atom and a description. We show that, from both simple and extended instances, acyclic conjunctive queries are not polynomial-time predictable with j-databases (j⩾3) under cryptographic assumptions, and that predicting acyclic conjunctive queries with 2-databases is as hard as predicting DNF formulas. Hence, acyclic conjunctive queries are a natural example for which the equivalence between subsumption-efficiency and efficient PAC-learnability, from both simple and extended instances, collapses.
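The membership problem the abstract describes can be illustrated with a minimal sketch: a database is a finite set of ground atoms, a query body is a list of atoms with variables, and a tuple belongs to the answer when some substitution grounds every body atom into the database. All names below (`answers`, the `edge` relation) are illustrative, not from the paper; this brute-force search ignores the acyclicity structure that the paper exploits.

```python
from itertools import product

def substitute(atom, theta):
    """Apply substitution theta to an atom represented as (predicate, args)."""
    pred, args = atom
    return (pred, tuple(theta.get(a, a) for a in args))

def answers(head_vars, body, database):
    """Enumerate answer tuples: substitutions mapping every body atom
    to a ground atom of the database (variables are uppercase names)."""
    variables = sorted({a for _, args in body for a in args if a.isupper()})
    constants = sorted({c for _, args in database for c in args})
    for values in product(constants, repeat=len(variables)):
        theta = dict(zip(variables, values))
        if all(substitute(atom, theta) in database for atom in body):
            yield tuple(theta[v] for v in head_vars)

# q(X, Z) :- edge(X, Y), edge(Y, Z)  -- an acyclic conjunctive query
db = {("edge", ("a", "b")), ("edge", ("b", "c"))}
body = [("edge", ("X", "Y")), ("edge", ("Y", "Z"))]
print(("a", "c") in set(answers(["X", "Z"], body, db)))  # True
```

The exhaustive substitution search is exponential in the number of variables; for acyclic queries, Yannakakis-style join ordering evaluates the same problem in polynomial time, which is what makes their prediction-hardness notable.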
Parameter Learning of Logic Programs for Symbolic-Statistical Modeling
We propose a logical/mathematical framework for statistical parameter
learning of parameterized logic programs, i.e. definite clause programs
containing probabilistic facts with a parameterized distribution. It extends
the traditional least Herbrand model semantics in logic programming to
distribution semantics, possible world semantics with a probability
distribution which is unconditionally applicable to arbitrary logic programs
including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM
algorithm, the graphical EM algorithm, that runs for a class of parameterized
logic programs representing sequential decision processes where each decision
is exclusive and independent. It runs on a new data structure called support
graphs describing the logical relationship between observations and their
explanations, and learns parameters by computing inside and outside probability
generalized for logic programs. The complexity analysis shows that when
combined with OLDT search for all explanations for observations, the graphical
EM algorithm, despite its generality, has the same time complexity as existing
EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside
algorithm for PCFGs, and the one for singly connected Bayesian networks that
have been developed independently in each research field. Learning experiments
with PCFGs using two corpora of moderate size indicate that the graphical EM
algorithm can significantly outperform the Inside-Outside algorithm.
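The inside computation on a support graph can be sketched minimally: each goal's inside probability is a sum over its explanations (disjunction), each explanation contributing the product of its subgoals' and probabilistic facts' probabilities (conjunction). The graph and parameter values below are hypothetical, and the sketch omits the exclusiveness and independence conditions, the outside pass, and the EM parameter updates.

```python
def inside(node, graph, prob):
    """Inside probability of a goal on an acyclic support graph:
    sum over explanations (OR), product within each explanation (AND)."""
    if node in prob:                  # a probabilistic fact with a known parameter
        return prob[node]
    total = 0.0
    for explanation in graph[node]:   # each explanation is a list of subgoals/facts
        p = 1.0
        for sub in explanation:
            p *= inside(sub, graph, prob)
        total += p
    return total

# Toy support graph: g is explained by {f1, h} or by {f2}; h by {f3}.
graph = {"g": [["f1", "h"], ["f2"]], "h": [["f3"]]}
prob = {"f1": 0.4, "f2": 0.3, "f3": 0.5}
print(inside("g", graph, prob))  # 0.4*0.5 + 0.3 = 0.5
```

On an HMM's support graph this recursion reduces to the forward computation of Baum-Welch, which is why the graphical EM algorithm matches the time complexity of the specialized algorithms named above.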
A Knowledge Compilation Map
We propose a perspective on knowledge compilation which calls for analyzing
different compilation approaches according to two key dimensions: the
succinctness of the target compilation language, and the class of queries and
transformations that the language supports in polytime. We then provide a
knowledge compilation map, which analyzes a large number of existing target
compilation languages according to their succinctness and their polytime
transformations and queries. We argue that such analysis is necessary for
placing new compilation approaches within the context of existing ones. We also
go beyond classical, flat target compilation languages based on CNF and DNF,
and consider a richer, nested class based on directed acyclic graphs (such as
OBDDs), which we show to include a relatively large number of target
compilation languages.
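A concrete instance of the query/succinctness trade-off the map charts: consistency (satisfiability) is a polytime query for DNF but not for CNF, while validity is the reverse. A minimal sketch of the polytime DNF consistency check, with literals encoded as signed integers (an encoding chosen here for illustration):

```python
def dnf_consistent(dnf):
    """A DNF is satisfiable iff some term contains no complementary pair
    of literals -- checkable in time linear in the formula's size.
    Literals are ints: v for a variable, -v for its negation."""
    return any(all(-lit not in term for lit in term)
               for term in (set(t) for t in dnf))

# (x1 & ~x1) | (x2 & x3): the second term is contradiction-free.
print(dnf_consistent([[1, -1], [2, 3]]))    # True
print(dnf_consistent([[1, -1], [2, -2]]))   # False
```

Compiling into a more tractable language such as an OBDD buys additional polytime queries (e.g., model counting and equivalence checking), at the possible cost of a blow-up in size; cataloguing exactly these trades is the point of the map.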
Ontology-Mediated Query Answering over Log-Linear Probabilistic Data: Extended Version
Large-scale knowledge bases are at the heart of modern information systems. Their knowledge is inherently uncertain, and hence they are often materialized as probabilistic databases. However, probabilistic database management systems typically lack the capability to incorporate implicit background knowledge and, consequently, fail to capture some intuitive query answers. Ontology-mediated query answering is a popular paradigm for encoding commonsense knowledge, which can provide more complete answers to user queries. We propose a new data model that integrates the paradigm of ontology-mediated query answering with probabilistic databases, employing a log-linear probability model. We compare our approach to existing proposals, and provide supporting computational results.
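The log-linear ingredient can be sketched in isolation: each possible world is a subset of weighted facts, and its probability is the exponentiated sum of the weights of the facts it contains, normalized over all worlds. The facts and weights below are hypothetical, and the sketch enumerates worlds exhaustively rather than reasoning over an ontology as the paper's data model does.

```python
from itertools import chain, combinations
from math import exp

def world_probabilities(weights):
    """Log-linear distribution over possible worlds (subsets of weighted facts):
    P(w) = exp(sum of weights of facts in w) / Z."""
    facts = list(weights)
    worlds = list(chain.from_iterable(
        combinations(facts, r) for r in range(len(facts) + 1)))
    scores = {w: exp(sum(weights[f] for f in w)) for w in worlds}
    Z = sum(scores.values())           # partition function
    return {w: s / Z for w, s in scores.items()}

# Hypothetical weighted facts; larger weight -> worlds containing the fact
# receive more probability mass.
p = world_probabilities({"bird(tweety)": 2.0, "flies(tweety)": 1.0})
print(max(p, key=p.get))  # the world containing both facts is most probable
```

A query's probability is then the total mass of the worlds in which it holds; the enumeration above is exponential in the number of facts, which is why practical systems need the structured inference the paper develops.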
Learning hierarchical decomposition rules for planning: an inductive logic programming approach
Artificial Intelligence (AI) planning techniques have been central to automating a gamut of tasks, from mundane route planning and beer production to the ethereal image processing of spaceship images. Of all the planning techniques, hierarchical-decomposition planning has been the one most employed in industrial-strength planners. Hierarchical-decomposition planning works by recursively decomposing a planning task into subtasks until the decomposition yields primitive tasks that can be achieved directly by executing primitive actions. Hierarchical-decomposition planning is knowledge intensive; it exploits knowledge of the structure and the constraints of a planning domain to decompose a task into subtasks. Because dependence on human experts for this knowledge leads to a knowledge-acquisition bottleneck, machine learning of this domain-specific knowledge becomes important. There are two opportunities for learning in the context of hierarchical-decomposition planning. One is to learn how a planning task decomposes into subtasks. The other is to learn control knowledge for choosing among the various decompositions of a task, depending on the situation. In this dissertation, the focus is on the former; more specifically, we focus on learning rules for task or goal decompositions. Goal-decomposition rules (d-rules) decompose goals into a sequence of subgoals under certain conditions. These are a special case of hierarchical task networks (HTNs). Our methodology for learning d-rules is to map d-rules to Horn clauses, thereby transforming the problem of learning d-rules into that of learning Horn clauses. We developed provably correct algorithms for learning Horn clauses. Our algorithms are based on a generalize-and-test method, in which an inductive least-general generalization of positive examples is followed by pruning of irrelevant literals through asking queries or performing self-testing.
We implemented systems founded on the theoretical algorithms and tested their applicability in two planning domains: a robot navigation domain and an air-traffic control domain. One of these systems, ExEL, learned from solved problems and expert-answered queries. The other, LeXer, learned from unsolved but ordered problems, or exercises, and self-testing. The applicability of the theoretical algorithms developed for learning Horn clauses, however, transcends the learning of d-rules and even the learning of the more general HTNs.
Keywords: Decomposition method, Rule-based programming, Machine learning
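The generalize step of the generalize-and-test method rests on least-general generalization (lgg) of positive examples; a minimal Plotkin-style sketch for atoms is shown below. Terms are encoded as nested tuples (functor followed by arguments), with anything else treated as a constant; the `decompose`/`deliver` atoms are hypothetical examples, not from the dissertation.

```python
def lgg(t1, t2, table):
    """Least general generalization of two terms.
    Mismatched subterm pairs are replaced by variables, with the same
    pair always mapped to the same variable (recorded in `table`)."""
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, table) for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in table:
        table[(t1, t2)] = f"X{len(table)}"  # fresh variable for a new mismatch
    return table[(t1, t2)]

# Two positive examples generalize by abstracting the differing argument:
g = lgg(("decompose", ("deliver", "a", "b")),
        ("decompose", ("deliver", "c", "b")), {})
print(g)  # ('decompose', ('deliver', 'X0', 'b'))
```

The resulting clause subsumes both examples but may carry irrelevant literals, which is exactly what the pruning-by-queries and self-testing phases described above are designed to remove.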