Learning programs by learning from failures
We describe an inductive logic programming (ILP) approach called learning
from failures. In this approach, an ILP system (the learner) decomposes the
learning problem into three separate stages: generate, test, and constrain. In
the generate stage, the learner generates a hypothesis (a logic program) that
satisfies a set of hypothesis constraints (constraints on the syntactic form of
hypotheses). In the test stage, the learner tests the hypothesis against
training examples. A hypothesis fails when it does not entail all the positive
examples or entails a negative example. If a hypothesis fails, then, in the
constrain stage, the learner learns constraints from the failed hypothesis to
prune the hypothesis space, i.e. to constrain subsequent hypothesis generation.
For instance, if a hypothesis is too general (entails a negative example), the
constraints prune generalisations of the hypothesis. If a hypothesis is too
specific (does not entail all the positive examples), the constraints prune
specialisations of the hypothesis. This loop repeats until either (i) the
learner finds a hypothesis that entails all the positive and none of the
negative examples, or (ii) there are no more hypotheses to test. We introduce
Popper, an ILP system that implements this approach by combining answer set
programming and Prolog. Popper supports infinite problem domains, reasoning
about lists and numbers, learning textually minimal programs, and learning
recursive programs. Our experimental results on three domains (toy game
problems, robot strategies, and list transformations) show that (i) constraints
drastically improve learning performance, and (ii) Popper can outperform
existing ILP systems, both in terms of predictive accuracies and learning
times. Comment: Accepted for the Machine Learning journal
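The generate, test, and constrain loop described above can be sketched over a toy hypothesis space. In this sketch (my own illustration, not Popper's implementation), integer intervals stand in for logic programs: supersets play the role of generalisations and subsets the role of specialisations.

```python
def learn(pos, neg, domain):
    """Learn-from-failures sketch. Hypotheses are intervals (lo, hi);
    an interval 'entails' x iff lo <= x <= hi. Supersets stand in for
    generalisations, subsets for specialisations."""
    # Generate stage: enumerate candidates satisfying the (trivial)
    # hypothesis constraint lo <= hi.
    candidates = [(lo, hi) for lo in domain for hi in domain if lo <= hi]
    pruned = set()
    for lo, hi in candidates:
        if (lo, hi) in pruned:
            continue
        # Test stage: check the hypothesis against the examples.
        covers = lambda x: lo <= x <= hi
        too_specific = not all(covers(x) for x in pos)  # misses a positive
        too_general = any(covers(x) for x in neg)       # covers a negative
        if not too_specific and not too_general:
            return (lo, hi)               # consistent hypothesis found
        # Constrain stage: learn constraints from the failed hypothesis.
        if too_general:                   # prune all generalisations
            pruned |= {(l, u) for (l, u) in candidates if l <= lo and hi <= u}
        if too_specific:                  # prune all specialisations
            pruned |= {(l, u) for (l, u) in candidates if lo <= l and u <= hi}
    return None                           # no more hypotheses to test
```

For example, `learn([3, 4, 5], [1, 8], range(10))` returns the first consistent interval it reaches, `(2, 5)`, having pruned large parts of the space from earlier failures.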
Inductive logic programming at 30
Inductive logic programming (ILP) is a form of logic-based machine learning.
The goal of ILP is to induce a hypothesis (a logic program) that generalises
given training examples and background knowledge. As ILP turns 30, we survey
recent work in the field. In this survey, we focus on (i) new meta-level search
methods, (ii) techniques for learning recursive programs that generalise from
few examples, (iii) new approaches for predicate invention, and (iv) the use of
different technologies, notably answer set programming and neural networks. We
conclude by discussing some of the current limitations of ILP and directions
for future research. Comment: Extension of the IJCAI20 survey paper. arXiv
admin note: substantial text overlap with arXiv:2002.11002, arXiv:2008.0791
Logical Reduction of Metarules
Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules, so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called derivation reduction, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and theoretically show whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules. In general, derivation reduced sets of metarules outperform subsumption and entailment reduced sets, both in terms of predictive accuracies and learning times.
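As an illustration of subsumption reduction, one of the two traditional reductions the abstract mentions, the sketch below encodes metarules as sets of literal tuples, with uppercase symbols as variables that may also range over predicate symbols. The encoding and helper names are my own, not the paper's.

```python
from itertools import product

def apply_sub(clause, theta):
    """Apply a substitution (dict) to every symbol in every literal."""
    return {tuple(theta.get(t, t) for t in lit) for lit in clause}

def subsumes(c, d):
    """Clause c subsumes d iff some substitution theta makes c*theta a
    subset of d. Brute force over mappings from c's variables to d's symbols."""
    vars_ = sorted({t for lit in c for t in lit if t.isupper()})
    terms = sorted({t for lit in d for t in lit})
    return any(apply_sub(c, dict(zip(vars_, combo))) <= d
               for combo in product(terms, repeat=len(vars_)))

def subsumption_reduce(clauses):
    """Drop any clause subsumed by another remaining clause."""
    kept = list(clauses)
    i = 0
    while i < len(kept):
        if any(subsumes(o, kept[i]) for j, o in enumerate(kept) if j != i):
            kept.pop(i)
        else:
            i += 1
    return kept

# Literals are ('+'/'-', predicate, args...); uppercase symbols are variables.
identity = frozenset({('+', 'P', 'A', 'B'), ('-', 'Q', 'A', 'B')})  # P(A,B):-Q(A,B)
precon   = frozenset({('+', 'P', 'A', 'B'), ('-', 'Q', 'A'),
                      ('-', 'R', 'A', 'B')})           # P(A,B):-Q(A),R(A,B)
```

Here `subsumption_reduce([identity, precon])` keeps only `identity`: mapping the predicate variable `Q` to `R` embeds the identity metarule inside `precon`, so `precon` is redundant under subsumption.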
Judicial decision-making and extra-legal influences: Neurolinguistic Programming as a candidate framework to understand persuasion in the legal context
Trial advocates seek to influence the outcomes of judicial decision-making processes using persuasion, but the existing literature regarding persuasion in the courtroom is surprisingly piecemeal, focusing on individual techniques in isolation; no comprehensive frameworks for integrating these techniques, or for systematically analyzing advocates' attempts to enact persuasion in the courtroom, have been developed.
We propose a popular commercial technology for persuasion, Neurolinguistic Programming (NLP), as a candidate framework that might be modified and adapted to fill this gap. First we present a wide-ranging, discursive analysis of judicial decision-making processes and extra-legal factors that influence them. Next, core aspects of NLP theory are subjected to careful examination. Finally, these threads are synthesized into a multifaceted assessment of NLP's potential utility as a comprehensive and integrative framework for understanding and describing how litigators enact persuasion in the courtroom. We argue that NLP can describe these behaviors and strategies both by way of a self-reflexive logic resulting from its popular influence, but also as a more general, context-independent model by virtue of a large number of correspondences between NLP concepts and findings from the scholarly literature. Although these correspondences are superficial, the fact that NLP integrates its simplified, folk concepts into a coherent framework spanning argumentative and presentational dimensions of persuasion suggests that it might readily be adapted into a useful descriptive model for understanding persuasion in the courtroom. Further scholarly attention is indicated.
Learning programs with magic values
A magic value in a program is a constant symbol that is essential for the
execution of the program but has no clear explanation for its choice. Learning
programs with magic values is difficult for existing program synthesis
approaches. To overcome this limitation, we introduce an inductive logic
programming approach to efficiently learn programs with magic values. Our
experiments on diverse domains, including program synthesis, drug design, and
game playing, show that our approach can (i) outperform existing approaches in
terms of predictive accuracies and learning times, (ii) learn magic values from
infinite domains, such as the value of pi, and (iii) scale to domains with
millions of constant symbols.
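One way to see why magic values defeat naive enumeration: rather than testing every constant in a large or infinite domain, a learner can restrict candidate constants to values computed from the positive examples themselves. The toy sketch below (my own illustration, not the paper's algorithm) learns a rule of the form `feature(x) == c`.

```python
def learn_magic_value(pos, neg, feature):
    """Toy sketch: bind candidate magic values from the positive examples
    instead of enumerating an entire (possibly infinite) constant domain."""
    candidates = {feature(x) for x in pos}      # values observed in positives
    for c in sorted(candidates):
        rule = lambda x, c=c: feature(x) == c   # hypothesis: feature(x) == c
        if all(rule(x) for x in pos) and not any(rule(x) for x in neg):
            return c                            # consistent magic value
    return None
```

For instance, `learn_magic_value(['abcd', 'wxyz'], ['ab', 'abcdef'], len)` returns the magic value `4` after testing a single candidate, rather than every possible length.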
Cloud computing and context-awareness: A study of the adapted user experience
This thesis was submitted for the degree of Doctor of Philosophy and awarded
by Brunel University. Today, mobile technology is part of everyday life and
activities, and the mobile ecosystems are blossoming, with smartphones and
tablets being the major growth drivers. Mobile phones are no longer just
another device; we rely on their capabilities at work and in private. We look
to our mobile phones for timely and updated information, and we rely on this
being provided at any time of any day, at any place. Nevertheless, no matter
how much you trust and love your mobile phone, the quality of the information
and the user experience is directly associated with the sources and
presentation of information. In this perspective, our activities, interactions
and preferences help shape the quality of the services, content and products
we use. Context-aware systems use such information about end-users as input
mechanisms for producing applications based on mobile, location, social, cloud
and customized content services. This represents new possibilities for
extracting aggregated user-centric information and includes novel sources for
context-aware applications. Accordingly, a Design Research-based approach has
been taken to further investigate the creation, presentation and tailoring of
user-centric information. Findings from user-evaluated experiments show how
multi-dimensional context-aware information can be used to create adaptive
solutions tailoring the user experience to the users' needs. Research findings
in this work highlight possible architectures for the integration of cloud
computing services in a heterogeneous mobile environment in future
context-aware solutions. When it comes to combining context-aware results from
local computations with those of cloud-based services, the results show that
users receive tailored and adapted experiences based on the collective efforts
of the two.
Inductive logic programming at 30: a new introduction
Inductive logic programming (ILP) is a form of machine learning. The goal of
ILP is to induce a hypothesis (a set of logical rules) that generalises
training examples. As ILP turns 30, we provide a new introduction to the field.
We introduce the necessary logical notation and the main learning settings;
describe the building blocks of an ILP system; compare several systems on
several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol);
highlight key application areas; and, finally, summarise current limitations
and directions for future research. Comment: Paper under review
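One building block such an introduction covers is testing whether a hypothesis, together with background knowledge, entails an example. A minimal sketch for a single Datalog-style rule follows; the representation and names are illustrative assumptions, not taken from any of the systems named above.

```python
def derive(facts, head, body):
    """One forward-chaining step for a single Datalog-style rule.
    Facts and literals are tuples (predicate, arg, ...); uppercase
    arguments are variables."""
    derived = set()

    def match(lit, fact, theta):
        # Try to unify a body literal with a ground fact under theta.
        if lit[0] != fact[0] or len(lit) != len(fact):
            return None
        theta = dict(theta)
        for t, v in zip(lit[1:], fact[1:]):
            if t.isupper():                      # variable
                if theta.setdefault(t, v) != v:  # clashing earlier binding
                    return None
            elif t != v:                         # mismatched constant
                return None
        return theta

    def search(i, theta):
        if i == len(body):                       # all body literals satisfied
            derived.add((head[0],) + tuple(theta[t] for t in head[1:]))
            return
        for f in facts:
            th = match(body[i], f, theta)
            if th is not None:
                search(i + 1, th)

    search(0, {})
    return derived
```

With background facts `{('parent','ann','bob'), ('parent','bob','carol')}` and the rule grandparent(X,Z) :- parent(X,Y), parent(Y,Z), `derive` returns `{('grandparent','ann','carol')}`, so this hypothesis entails the positive example grandparent(ann, carol).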