Specific-to-General Learning for Temporal Events with Application to Learning Event Definitions from Video
We develop, analyze, and evaluate a novel, supervised, specific-to-general
learner for a simple temporal logic and use the resulting algorithm to learn
visual event definitions from video sequences. First, we introduce a simple,
propositional, temporal, event-description language called AMA that is
sufficiently expressive to represent many events yet sufficiently restrictive
to support learning. We then give algorithms, along with lower and upper
complexity bounds, for the subsumption and generalization problems for AMA
formulas. We present a positive-examples-only specific-to-general learning
method based on these algorithms. We also present a polynomial-time-computable
"syntactic" subsumption test that implies semantic subsumption without being
equivalent to it. A generalization algorithm based on syntactic subsumption can
be used in place of semantic generalization to improve the asymptotic
complexity of the resulting learning algorithm. Finally, we apply this
algorithm to the task of learning relational event definitions from video and
show that it yields definitions that are competitive with hand-coded ones.
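The specific-to-general idea can be illustrated with a toy sketch: treat each positive example as a sequence of propositional states and generalize by intersecting aligned states. The event, the propositions, and the step-by-step alignment below are invented for illustration; the paper's AMA language and its generalization algorithm are substantially richer.

```python
# Toy specific-to-general learner over propositional state sequences.
# Illustrative sketch only, not the paper's AMA algorithm.

def generalize_pair(seq_a, seq_b):
    """Generalize two equal-length state sequences by intersecting
    the propositions that hold at each aligned step."""
    assert len(seq_a) == len(seq_b)
    return [a & b for a, b in zip(seq_a, seq_b)]

def learn_from_positives(examples):
    """Fold pairwise generalization over all positive examples."""
    hypothesis = examples[0]
    for ex in examples[1:]:
        hypothesis = generalize_pair(hypothesis, ex)
    return hypothesis

# Two hypothetical observations of a "pick up" event, each a sequence
# of the propositions true at that step.
e1 = [{"hand_empty", "obj_on_table"},
      {"hand_touching_obj"},
      {"holding_obj"}]
e2 = [{"hand_empty", "obj_on_table", "obj_red"},
      {"hand_touching_obj", "obj_red"},
      {"holding_obj", "obj_red"}]

# The learned definition keeps only what all examples share,
# dropping the incidental "obj_red" propositions.
print(learn_from_positives([e1, e2]))
```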
An Overview of Backtrack Search Satisfiability Algorithms
Propositional Satisfiability (SAT) is often used as the underlying model for a significant…
A first empirical study of direct combination in a ubiquitous environment
In dynamic ubiquitous environments, end users may need to create services by causing two or more devices or resources to interoperate in ad-hoc circumstances. In general, users can find this kind of process hard to manage. At the same time, existing UI architectures are not well suited to supporting such activities. It is proposed that a good basis for addressing these and related problems in a principled, scalable way is the principle of Direct Combination (DC). The principle is summarized, and analytical arguments are presented that predict that DC can reduce the amount of search required by the user. Other things being equal, such a reduction in search would be expected to offer interactions that are faster and less frustrating, and that impose less mental load on the user. We present a proof-of-concept implementation and a small-scale evaluation of a DC interface. Within the limitations of a preliminary evaluation, consistent support is offered across several measures for the analytical predictions.
Direct combination: a new user interaction principle for mobile and ubiquitous HCI
Direct Combination (DC) is a recently introduced user interaction principle. The principle (previously applied to desktop computing) can greatly reduce the degree of search, time, and attention required to operate user interfaces. We argue that Direct Combination applies particularly aptly to mobile computing devices, given appropriate interaction techniques, examples of which are presented here. The reduction in search afforded to users can be applied to address several issues in mobile and ubiquitous user interaction, including: limited feedback bandwidth; minimal-attention situations; and the need for ad-hoc spontaneous interoperation and dynamic reconfiguration of multiple devices. When Direct Combination is extended and adapted to fit the demands of mobile and ubiquitous HCI, we refer to it as Ambient Combination (AC). Direct Combination allows the user to exploit objects in the environment to narrow down the range of interactions that need to be considered (by system and user). When the DC technique of pairwise or n-fold combination is applicable, it can greatly lessen the demands on users for memorisation and interface navigation. Direct Combination also appears to offer a new way of applying context-aware information. In this paper, we present Direct Combination as applied ambiently through a series of interaction scenarios, using an implemented prototype system.
Logical Reduction of Metarules
Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules, so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called derivation reduction, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and theoretically show whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules. In general, derivation-reduced sets of metarules outperform subsumption- and entailment-reduced sets, both in terms of predictive accuracy and learning time.
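Subsumption, one of the two classical reductions the abstract mentions, can be sketched for function-free first-order clauses by brute-force search over variable substitutions. This is an illustrative toy, not the paper's method, which operates on second-order metarules and also covers entailment and derivation reduction.

```python
from itertools import product

# Brute-force theta-subsumption between function-free clauses: clause C
# subsumes clause D if some substitution of C's variables makes C a
# subset of D. Clauses are sets of literals, each a (predicate, args...)
# tuple; uppercase argument strings are variables, lowercase constants.

def subsumes(c, d):
    vars_c = sorted({a for lit in c for a in lit[1:] if a.isupper()})
    terms_d = sorted({a for lit in d for a in lit[1:]})
    # Try every mapping of C's variables to terms occurring in D.
    for binding in product(terms_d, repeat=len(vars_c)):
        theta = dict(zip(vars_c, binding))
        applied = {(lit[0],) + tuple(theta.get(a, a) for a in lit[1:])
                   for lit in c}
        if applied <= set(d):
            return True
    return False

c = {("parent", "X", "Y")}
d = {("parent", "ann", "bob"), ("male", "bob")}
print(subsumes(c, d))  # True: X -> ann, Y -> bob
```

Brute force is exponential in the number of variables, which is exactly why efficient (and incomplete) subsumption tests are a recurring theme in the ILP literature.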
Inductive Logic Programming in Databases: from Datalog to DL+log
In this paper we address an issue that has been brought to the attention of
the database community with the advent of the Semantic Web, i.e. the issue of
how ontologies (and the semantics they convey) can help solve typical
database problems, through a better understanding of KR aspects related to
databases. In particular, we investigate this issue from the ILP perspective by
considering two database problems, (i) the definition of views and (ii) the
definition of constraints, for a database whose schema is represented also by
means of an ontology. Both can be reformulated as ILP problems and can benefit
from the expressive and deductive power of the KR framework DL+log. We
illustrate the application scenarios by means of examples.
Keywords: Inductive Logic Programming, Relational Databases, Ontologies, Description Logics, Hybrid Knowledge Representation and Reasoning Systems.
Note: to appear in Theory and Practice of Logic Programming (TPLP); 30 pages, 3 figures, 2 tables.
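The notion of a database view defined by a logic rule, the kind of target such an ILP reformulation would learn, can be sketched as follows. The relations and the rule are hypothetical examples, not taken from the paper.

```python
# Hypothetical base relations of a toy database.
employee = {("ann", "sales"), ("bob", "hr")}   # employee(Person, Dept)
manager = {("sales", "carol"), ("hr", "dan")}  # manager(Dept, Boss)

# A view defined by the Datalog-style rule
#   boss(Person, Boss) :- employee(Person, Dept), manager(Dept, Boss).
# evaluated here as a join over the two base relations.
boss = {(p, b)
        for p, d1 in employee
        for d2, b in manager
        if d1 == d2}

print(sorted(boss))  # [('ann', 'carol'), ('bob', 'dan')]
```

In the ILP reformulation, the rule body would not be written by hand but induced from examples of the view's extension.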
Efficient Learning and Evaluation of Complex Concepts in Inductive Logic Programming
Inductive Logic Programming (ILP) is a subfield of Machine Learning with foundations in logic
programming. In ILP, logic programming, a subset of first-order logic, is used as a uniform
representation language for the problem specification and induced theories. ILP has been
successfully applied to many real-world problems, especially in the biological domain (e.g. drug
design, protein structure prediction), where relational information is of particular importance.
The expressiveness of logic programs grants flexibility in specifying the learning task and understandability
to the induced theories. However, this flexibility comes at a high computational
cost, constraining the applicability of ILP systems. Constructing and evaluating complex concepts
remain two of the main issues that prevent ILP systems from tackling many learning
problems. These learning problems are interesting both from a research perspective, as they
raise the standards for ILP systems, and from an application perspective, where these target
concepts naturally occur in many real-world applications. Such complex concepts cannot
be constructed or evaluated by parallelizing existing top-down ILP systems or improving the
underlying Prolog engine. Novel search strategies and cover algorithms are needed.
The main focus of this thesis is on how to efficiently construct and evaluate complex hypotheses
in an ILP setting. In order to construct such hypotheses we investigate two approaches.
The first, the Top Directed Hypothesis Derivation framework, implemented in the ILP system
TopLog, involves the use of a top theory to constrain the hypothesis space. In the second approach
we revisit the bottom-up search strategy of Golem, lifting its restriction on determinate
clauses which had rendered Golem inapplicable to many key areas. These developments led to
the bottom-up ILP system ProGolem. A challenge that arises with a bottom-up approach is the
coverage computation of long, non-determinate clauses. Prolog's SLD-resolution is no longer
adequate. We developed a new, Prolog-based, theta-subsumption engine which is significantly
more efficient than SLD-resolution in computing the coverage of such complex clauses.
We provide evidence that ProGolem achieves the goal of learning complex concepts by presenting
a protein-hexose binding prediction application. The theory ProGolem induced has
statistically significantly better predictive accuracy than that of other learners. More importantly,
the biological insights ProGolem’s theory provided were judged by domain experts to
be relevant and, in some cases, novel.
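The bottom-up generalisation underlying Golem-style systems is Plotkin's least general generalisation (lgg); a minimal sketch for function-free ground clauses is below. The family-relations clauses are a stock illustration, and ProGolem's actual operators and theta-subsumption engine are considerably more involved than this.

```python
# Plotkin's least general generalisation (lgg) for function-free clauses,
# the core operation of Golem-style bottom-up search. Atoms are
# (predicate, args...) tuples; clauses are sets of atoms.

def lgg_atoms(a, b, table):
    """lgg of two atoms, or None if their predicates differ. The shared
    table maps each pair of differing terms to one fresh variable, so the
    same pair always generalises to the same variable."""
    if a[0] != b[0] or len(a) != len(b):
        return None
    args = []
    for s, t in zip(a[1:], b[1:]):
        if s == t:
            args.append(s)
        else:
            args.append(table.setdefault((s, t), f"V{len(table)}"))
    return (a[0], *args)

def lgg_clauses(c, d):
    """lgg of two clauses: generalise every compatible pair of atoms."""
    table = {}
    out = set()
    for la in c:
        for lb in d:
            g = lgg_atoms(la, lb, table)
            if g is not None:
                out.add(g)
    return out

c1 = {("daughter", "mary", "ann"), ("female", "mary"), ("parent", "ann", "mary")}
c2 = {("daughter", "eve", "tom"), ("female", "eve"), ("parent", "tom", "eve")}
print(lgg_clauses(c1, c2))
```

On these two clauses the lgg recovers the familiar definition daughter(X, Y) :- female(X), parent(Y, X), with consistent variables across all three literals.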
Schema Independent Relational Learning
Learning novel concepts and relations from relational databases is an
important problem with many applications in database systems and machine
learning. Relational learning algorithms learn the definition of a new relation
in terms of existing relations in the database. Nevertheless, the same data set
may be represented under different schemas for various reasons, such as
efficiency, data quality, and usability. Unfortunately, the output of current
relational learning algorithms tends to vary quite substantially over the
choice of schema, both in terms of learning accuracy and efficiency. This
variation complicates their off-the-shelf application. In this paper, we
introduce and formalize the property of schema independence of relational
learning algorithms, and study both the theoretical and empirical dependence of
existing algorithms on the common class of (de)composition schema
transformations. We study both sample-based learning algorithms, which learn
from sets of labeled examples, and query-based algorithms, which learn by
asking queries to an oracle. We prove that current relational learning
algorithms are generally not schema independent. For query-based learning
algorithms, we show that the (de)composition transformations influence their
query complexity. We propose Castor, a sample-based relational learning
algorithm that achieves schema independence by leveraging data dependencies. We
support the theoretical results with an empirical study that demonstrates the
schema dependence/independence of several algorithms on existing benchmark and
real-world datasets under (de)compositions.
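A vertical (de)composition transformation of the kind studied here can be sketched directly: the same facts stored as one wide relation or as two narrower relations joined on a key. The relation and attribute names are invented for illustration.

```python
# One wide relation: student(id, name, advisor).
composed = {
    (1, "ann", "smith"),
    (2, "bob", "jones"),
}

# Vertical decomposition on the key `id` into two narrower relations.
student_name = {(sid, name) for sid, name, _ in composed}
student_advisor = {(sid, adv) for sid, _, adv in composed}

# A natural join on `id` recovers the original relation, so both schemas
# carry the same information; a schema-independent learner should produce
# equivalent definitions from either representation.
rejoined = {(sid, n, a)
            for sid, n in student_name
            for sid2, a in student_advisor
            if sid == sid2}

print(rejoined == composed)  # True
```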