Efficient instance and hypothesis space revision in Meta-Interpretive Learning
Inductive Logic Programming (ILP) is a form of Machine Learning. The goal of ILP is to induce hypotheses, as logic programs, that generalise training examples. ILP is characterised by high expressivity, strong generalisation ability and interpretability. Meta-Interpretive Learning (MIL) is a state-of-the-art sub-field of ILP. However, current MIL approaches have limited efficiency: the sample complexity and the learning complexity are, respectively, polynomial and exponential in the number of clauses. My thesis is that improvements over the sample and learning complexity can be achieved in MIL through instance and hypothesis space revision. Specifically, we investigate 1) methods that revise the instance space, 2) methods that revise the hypothesis space and 3) methods that revise both the instance and the hypothesis spaces to achieve more efficient MIL.
First, we introduce a method for building training sets with active learning in Bayesian MIL. Instances are selected by maximising entropy. We demonstrate that this method can reduce the sample complexity and supports efficient learning of agent strategies. Second, we introduce a new method for revising the MIL hypothesis space with predicate invention. Our method generates predicates bottom-up from the background knowledge related to the training examples. We demonstrate that this method is complete and can reduce the learning and sample complexity. Finally, we introduce a new MIL system called MIGO for learning optimal two-player game strategies. MIGO learns from playing: its training sets are built from the sequence of actions it chooses. Moreover, MIGO revises its hypothesis space with Dependent Learning: it first solves simpler tasks and can reuse any learned solution for solving more complex tasks. We demonstrate that MIGO significantly outperforms both classical and deep reinforcement learning. The methods presented in this thesis open exciting perspectives for efficiently learning theories with MIL in a wide range of applications including robotics, modelling of agent strategies and game playing.
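The entropy-based instance selection described above can be illustrated with a minimal sketch. This is not the thesis's actual Bayesian MIL implementation; the function names and the toy hypotheses are illustrative assumptions. The idea is simply to query the unlabelled instance on which the current hypotheses disagree most, as measured by the Shannon entropy of their predictions.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy (in bits) of the empirical distribution over labels.
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_instance(candidates, hypotheses):
    # Pick the unlabelled instance whose predictions across the current
    # hypothesis sample disagree the most (maximum entropy).
    return max(candidates,
               key=lambda x: entropy([h(x) for h in hypotheses]))

# Toy example (illustrative only): three hypotheses classify integers.
hypotheses = [lambda x: x % 2 == 0,   # "even"
              lambda x: x > 2,        # "greater than 2"
              lambda x: x % 2 == 0]   # duplicate of the first
candidates = [1, 2, 3, 4]
# Instances 2 and 3 both split the hypotheses 2-vs-1 (maximal entropy here);
# max() returns the first such instance.
picked = select_instance(candidates, hypotheses)
```

Labelling the selected instance then prunes the hypotheses that misclassify it, which is why high-entropy queries can shrink the number of examples needed.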
Generative Linguistics Meets Normative Inferentialism: Part 1
This is the first installment of a two-part essay. Limitations of space prevented the publication of the full essay in the present issue of the Journal. The second installment will appear in the next issue, 2021 (1). My overall goal is to outline a strategy for integrating generative linguistics with a broadly pragmatist approach to meaning and communication. Two immensely useful guides in this venture are Robert Brandom and Paul Pietroski. Squarely in the Chomskyan tradition, Pietroski’s recent book, Conjoining Meanings, offers an approach to natural-language semantics that rejects foundational assumptions widely held amongst philosophers and linguists. In particular, he argues against extensionalism—the view that meanings are (or determine) truth and satisfaction conditions. Having arrived at the same conclusion by way of Brandom’s deflationist account of truth and reference, I’ll argue that both theorists have important contributions to make to a broader anti-extensionalist approach to language. What appears here as Part 1 of the essay is largely exegetical, laying out what I see as the core aspects of Brandom’s normative inferentialism (§1) and Pietroski’s naturalistic semantics (§2). In Part 2 (next issue), I argue that there are many convergences between these two theoretical frameworks and, contrary to first appearances, very few points of substantive disagreement between them. If the integration strategy that I propose is correct, then what appear to be sharply contrasting commitments are better seen as interrelated verbal differences that come down to different—but complementary—explanatory goals. The residual disputes are, however, stubborn. I end by discussing how to square Pietroski’s commitment to predicativism with Brandom’s argument that a predicativist language is in principle incapable of expressing ordinary conditionals.
What Kind of Necessary Being Could God Be?
A logically impossible sentence is one which entails a contradiction, a logically necessary sentence is one whose negation entails a contradiction, and a logically possible sentence is one which does not entail a contradiction. Metaphysically impossible, necessary and possible sentences are ones which become logically impossible, necessary, or possible by substituting what I call informative rigid designators for uninformative ones. It does seem very strongly that a negative existential sentence cannot entail a contradiction, and so 'there is a God' cannot be a metaphysically necessary truth. If it were such a truth, innumerable other sentences which seem paradigm examples of logically possible sentences, such as 'no one knows everything', would turn out to be logically impossible. The only way in which God could be a logically necessary being is if there were eternal necessary propositions independent of human language or God’s will, such that the proposition that there is no God would entail -- via propositions inaccessible to us -- a contradiction. But if there were such propositions, God would have less control over the universe than he would have otherwise.
Quantum Non-Objectivity from Performativity of Quantum Phenomena
We analyze the logical foundations of quantum mechanics (QM) by stressing the non-objectivity of quantum observables, which is a consequence of the absence of logical atoms in QM. We argue that the crux of quantum non-objectivity is that, on the one hand, the formalism of QM constructed as a mathematical theory is self-consistent, but, on the other hand, quantum phenomena as results of the experimenter's performances are not self-consistent. This self-inconsistency arises because the language of QM differs greatly from the language of human performances. The first is the language of a mathematical theory which makes certain Aristotelian and Russellian assumptions (e.g., the assumption that there are logical atoms). The second language consists of performative propositions which are self-inconsistent only from the viewpoint of conventional mathematical theory; they satisfy another logic which is non-Aristotelian. Hence, the representation of quantum reality in linguistic terms may vary: from a mathematical theory to a logic of performative propositions. To resolve this quantum self-inconsistency, we apply the formalism of non-classical self-referential logics.
Logic Programs as Declarative and Procedural Bias in Inductive Logic Programming
Machine Learning is necessary for the development of Artificial Intelligence, as pointed out by Turing in his 1950 article ``Computing Machinery and Intelligence''. It is in the same article that Turing suggested the use of computational logic and background knowledge for learning. This thesis follows a logic-based machine learning approach called Inductive Logic Programming (ILP), which has advantages over other machine learning approaches in relational learning and in utilising background knowledge. ILP uses logic programs as a uniform representation for hypotheses, background knowledge and examples, but its declarative bias is usually encoded using metalogical statements. This thesis advocates the use of logic programs to represent declarative and procedural bias, which results in a framework of single-language representation.
We show in this thesis that using a logic program called the top theory as declarative bias leads to a sound and complete multi-clause learning system, MC-TopLog. It overcomes the entailment-incompleteness of Progol, and thus outperforms Progol in terms of predictive accuracy on learning grammars and strategies for playing the game of Nim. MC-TopLog has been applied to two real-world applications funded by Syngenta, an agricultural company.
A higher-order extension of top theories results in meta-interpreters, which allow the introduction of new predicate symbols. Thus the resulting ILP system, Metagol, can do predicate invention, which is an intrinsically higher-order logic operation. Metagol also leverages the procedural semantics of Prolog to encode procedural bias, so that it can outperform both its ASP version and ILP systems without an equivalent procedural bias in terms of efficiency and accuracy. This is demonstrated by experiments on learning Regular, Context-free and Natural grammars. Metagol is also applied to non-grammar learning tasks involving recursion and predicate invention, such as learning a definition of staircases and robot strategy learning. Both MC-TopLog and Metagol are based on a ⊤-directed (top-theory-directed) framework, which is different from other multi-clause learning systems based on Inverse Entailment, such as CF-Induction, XHAIL and IMPARO. Compared to another ⊤-directed multi-clause learning system, TAL, Metagol allows the explicit form of higher-order assumption to be encoded in the form of meta-rules.
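The meta-rules mentioned above can be made concrete with a small sketch. Metagol itself is a Prolog meta-interpreter; the Python below is only an illustrative stand-in showing how a single meta-rule, the chain rule P(A,B) :- Q(A,C), R(C,B), constrains the hypothesis space: learning reduces to searching for background predicates Q and R whose relational composition covers the positive examples and none of the negatives. The kinship data and function names are assumptions for the example.

```python
def compose(q_pairs, r_pairs):
    # Relational composition: {(a, b) | q(a, c) and r(c, b)}.
    return {(a, b) for (a, c) in q_pairs for (c2, b) in r_pairs if c == c2}

def learn_chain(pos, neg, background):
    # Instantiate the chain meta-rule p(A,B) :- q(A,C), r(C,B) with every
    # pair of background predicates; return the first instantiation that
    # covers all positive examples and no negative ones.
    for q, q_pairs in background.items():
        for r, r_pairs in background.items():
            covered = compose(q_pairs, r_pairs)
            if pos <= covered and not (neg & covered):
                return (q, r)
    return None

# Toy kinship task: learn grandparent/2 from parent/2 facts.
background = {
    "parent": {("ann", "bob"), ("bob", "carl"), ("carl", "dee")},
}
pos = {("ann", "carl"), ("bob", "dee")}
neg = {("ann", "dee")}
clause = learn_chain(pos, neg, background)
# clause == ("parent", "parent"), read as:
# grandparent(A,B) :- parent(A,C), parent(C,B).
```

Predicate invention extends this search by allowing the body literals to name a fresh predicate, itself defined by a further meta-rule instantiation, which is where the higher-order character of Metagol comes in.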
IDEF3 formalization report
The Process Description Capture Method (IDEF3) is one of several Integrated Computer-Aided Manufacturing (ICAM) DEFinition methods developed by the Air Force to support systems engineering activities, and in particular, to support information systems development. These methods have evolved as a distillation of 'good practice' experience by information system developers and are designed to raise the performance level of the novice practitioner to one comparable with that of an expert. IDEF3 is meant to serve as a knowledge acquisition and requirements definition tool that structures, around process descriptions, the user's understanding of how a given process, event, or system works. A special-purpose graphical language accompanying the method serves to highlight temporal precedence and causality relationships relative to the process or event being described.