Low Size-Complexity Inductive Logic Programming: The East-West Challenge Considered as a Problem in Cost-Sensitive Classification
The Inductive Logic Programming community has considered
proof-complexity and model-complexity, but size-complexity has
received little attention until recently, when a challenge was
issued "to the international computing community"
to discover low size-complexity Prolog programs for classifying
trains. The challenge was based on a problem first proposed by
Ryszard Michalski 20 years ago. We interpreted the challenge
as a problem in cost-sensitive classification and we applied a
recently developed cost-sensitive classifier to the competition.
Our algorithm was relatively successful (we won a prize). This
paper presents our algorithm and analyzes the results of the
competition.
Learning to Understand by Evolving Theories
In this paper, we describe an approach that enables an autonomous system to
infer the semantics of a command (i.e. a symbol sequence representing an
action) in terms of the relations between changes in the observations and the
action instances. We present a method for inducing a theory (i.e. a semantic
description) of the meaning of a command in terms of a minimal set of
background knowledge. The only input is a sequence of observations, from which
we extract the effects caused by performing the command. In this way, we obtain
a description of the semantics of the action and, hence, a definition.
Comment: KRR Workshop at ICLP 201
On Cognitive Preferences and the Plausibility of Rule-based Models
It is conventional wisdom in machine learning and data mining that logical
models such as rule sets are more interpretable than other models, and that
among such rule-based models, simpler models are more interpretable than more
complex ones. In this position paper, we question this latter assumption by
focusing on one particular aspect of interpretability, namely the plausibility
of models. Roughly speaking, we equate the plausibility of a model with the
likelihood that a user accepts it as an explanation for a prediction. In
particular, we argue that, all other things being equal, longer explanations
may be more convincing than shorter ones, and that the predominant bias for
shorter models, which is typically necessary for learning powerful
discriminative models, may not be suitable when it comes to user acceptance of
the learned models. To that end, we first recapitulate evidence for and against
this postulate, and then report the results of an evaluation in a
crowd-sourcing study based on about 3,000 judgments. The results do not reveal
a strong preference for simple rules, whereas we can observe a weak preference
for longer rules in some domains. We then relate these results to well-known
cognitive biases such as the conjunction fallacy, the representativeness
heuristic, and the recognition heuristic, and investigate their relation to
rule length and plausibility.
Comment: V4: Another rewrite of the section on interpretability to clarify the
focus on plausibility and its relation to interpretability, comprehensibility,
and justifiability
Using inductive types for ensuring correctness of neuro-symbolic computations
- …