
    Logic Programs as Declarative and Procedural Bias in Inductive Logic Programming

    Machine Learning is necessary for the development of Artificial Intelligence, as pointed out by Turing in his 1950 article "Computing Machinery and Intelligence". In the same article, Turing suggested the use of computational logic and background knowledge for learning. This thesis follows a logic-based machine learning approach called Inductive Logic Programming (ILP), which has advantages over other machine learning approaches in relational learning and in the use of background knowledge. ILP uses logic programs as a uniform representation for hypotheses, background knowledge and examples, but its declarative bias is usually encoded using metalogical statements. This thesis advocates the use of logic programs to represent both declarative and procedural bias, which results in a single-language representation framework. We show that using a logic program called the top theory as declarative bias leads to MC-TopLog, a sound and complete multi-clause learning system. It overcomes the entailment incompleteness of Progol and thus outperforms Progol in predictive accuracy when learning grammars and strategies for playing the game of Nim. MC-TopLog has been applied to two real-world applications funded by Syngenta, an agricultural company. A higher-order extension of top theories results in meta-interpreters, which allow the introduction of new predicate symbols. The resulting ILP system, Metagol, can therefore perform predicate invention, an intrinsically higher-order operation. Metagol also exploits the procedural semantics of Prolog to encode procedural bias, allowing it to outperform both its ASP version and ILP systems without an equivalent procedural bias in terms of efficiency and accuracy. This is demonstrated by experiments on learning regular, context-free and natural grammars. Metagol is also applied to non-grammar learning tasks involving recursion and predicate invention, such as learning a definition of staircases and learning robot strategies. Both MC-TopLog and Metagol are based on a ⊤-directed framework, which differs from other multi-clause learning systems based on Inverse Entailment, such as CF-Induction, XHAIL and IMPARO. Compared to TAL, another ⊤-directed multi-clause learning system, Metagol allows higher-order assumptions to be encoded explicitly in the form of meta-rules.
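
    Purely as an illustration of the meta-rule idea mentioned above (Metagol itself is a Prolog meta-interpreter, not shown here), the following minimal Python sketch shows how a single "chain" meta-rule, P(x,y) :- Q(x,z), R(z,y), acts as declarative bias: the second-order variables Q and R range over background predicates, and only instantiations that cover the positive examples are kept. The facts, predicate names and target are hypothetical toy data, not taken from the thesis.

```python
# Toy illustration of a "chain" meta-rule, the kind of higher-order template
# used as declarative bias:  P(x,y) :- Q(x,z), R(z,y).
# Hypothetical toy data; Metagol itself is a Prolog meta-interpreter.

background = {
    "mother": {("ann", "bob")},    # mother(X,Y): X is the mother of Y
    "father": {("bob", "carol")},  # father(X,Y): X is the father of Y
}
positive = {("ann", "carol")}      # positive examples of the target predicate
target = "grandparent"

def covers(q, r, example):
    """Does the clause  target(X,Y) :- q(X,Z), r(Z,Y)  cover the example
    given the background facts?"""
    x, y = example
    return any(z == z2 and x2 == x and y2 == y
               for (x2, z) in background[q]
               for (z2, y2) in background[r])

# Enumerate instantiations of the second-order variables Q and R.
for q in background:
    for r in background:
        if all(covers(q, r, e) for e in positive):
            print(f"{target}(X,Y) :- {q}(X,Z), {r}(Z,Y).")
```

    On this toy data the only surviving instantiation is grandparent(X,Y) :- mother(X,Z), father(Z,Y); the real systems additionally handle negative examples, recursion and predicate invention, which this brute-force enumeration omits.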

    Simulating optional infinitive errors in child speech through the omission of sentence-internal elements.

    A new version of the MOSAIC model of syntax acquisition is presented. The modifications to the model aim to address two weaknesses in its earlier simulations of the Optional Infinitive phenomenon: an over-reliance on questions in the input as the source for Optional Infinitive errors, and the use of an utterance-final bias in learning (recency effect) without a corresponding utterance-initial bias (primacy effect). Where the old version only produced utterance-final phrases, the new version of MOSAIC learns from both the left and right edge of the utterance, and associates utterance-initial and utterance-final phrases. The new model produces both utterance-final phrases and concatenations of utterance-final and utterance-initial phrases. MOSAIC now also differentiates between phrases learned from declarative and interrogative input. It will be shown that the new version is capable of simulating the Optional Infinitive phenomenon in English and Dutch without relying on interrogative input. Unlike the previous version of MOSAIC, the new version is also capable of simulating cross-linguistic variation in the occurrence of Optional Infinitive errors in Wh-questions.
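
    As a toy illustration of the edge-based learning described above (this is not the MOSAIC model itself, which is a discrimination network trained on child-directed speech; the window size and function names are assumptions), the sketch below extracts phrases from the left (primacy) and right (recency) edges of an utterance and concatenates utterance-initial with utterance-final chunks.

```python
# Toy sketch of edge-based phrase learning: phrases are taken from the left
# (primacy) and right (recency) edges of an utterance, and utterance-initial
# chunks may be concatenated with utterance-final chunks.  Window size and
# names are illustrative assumptions, not the MOSAIC implementation.

def edge_phrases(utterance, max_len=3):
    words = utterance.split()
    initial = [words[:n] for n in range(1, min(max_len, len(words)) + 1)]
    final = [words[-n:] for n in range(1, min(max_len, len(words)) + 1)]
    return initial, final

def productions(utterance):
    initial, final = edge_phrases(utterance)
    n_words = len(utterance.split())
    outputs = {" ".join(f) for f in final}                     # utterance-final phrases
    outputs |= {" ".join(i + f) for i in initial for f in final
                if len(i) + len(f) < n_words}                  # initial + final concatenations
    return outputs

for out in sorted(productions("he wants to go home")):
    print(out)
```

    Note that concatenating the initial chunk "he" with the final chunk "go home" yields "he go home", the kind of Optional Infinitive error that compound-finite input of this form can give rise to.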

    Explicit learning in ACT-R

    A popular distinction in the learning literature is the distinction between implicit and explicit learning. Although many studies elaborate on the nature of implicit learning, little attention has been paid to explicit learning. The unintentional aspect of implicit learning corresponds well to the mechanistic view of learning employed in architectures of cognition. But how can deliberate, intentional, explicit learning be accounted for? This chapter argues that explicit learning can be explained by strategies that exploit implicit learning mechanisms. This idea is explored and modelled using the ACT-R theory (Anderson, 1993). An explicit strategy for learning facts in ACT-R's declarative memory is rehearsal, a strategy that uses ACT-R's activation learning mechanisms to gain deliberate control over what is learned. In the same sense, strategies for explicit procedural learning are proposed. Procedural learning in ACT-R involves generalisation of examples. Explicit learning rules can create and manipulate these examples. An example of these explicit rules will be discussed. These rules are general enough to model the learning of three different tasks. Furthermore, the last of these models can explain the difference between adults and children in the discrimination-shift task.
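
    For concreteness, the implicit mechanism that the rehearsal strategy exploits can be illustrated with ACT-R's base-level learning equation, B_i = ln(Σ_j t_j^(-d)). The sketch below is illustrative only: the presentation times are made up, and the conventional decay parameter d = 0.5 is used rather than values from the chapter.

```python
import math

# Minimal sketch of ACT-R's base-level learning equation:
#     B_i = ln( sum_j  t_j ** (-d) )
# where t_j are the times since each presentation (or rehearsal) of chunk i
# and d is the decay parameter (conventionally d = 0.5).  Rehearsal is an
# explicit strategy that simply adds extra presentations, exploiting this
# implicit learning mechanism.  Times and parameters here are illustrative.

def base_level_activation(presentation_times, now, d=0.5):
    lags = [now - t for t in presentation_times if now > t]
    return math.log(sum(lag ** (-d) for lag in lags))

# A chunk presented once at t=0 vs. the same chunk rehearsed at t=10 and t=20.
print(base_level_activation([0.0], now=30.0))               # no rehearsal
print(base_level_activation([0.0, 10.0, 20.0], now=30.0))   # with rehearsal
```

    The rehearsed chunk ends up with a markedly higher activation, which is what gives the explicit rehearsal strategy its deliberate control over what is retained.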

    Lateralised sleep spindles relate to false memory generation

    Sleep is known to enhance false memories: after presenting participants with lists of semantically related words, sleeping before recalling these words results in a greater acceptance of unseen “lure” words related in theme to previously seen words. Furthermore, the right hemisphere (RH) seems to be more prone to false memories than the left hemisphere (LH). In the current study, we investigated the sleep architecture associated with these false memory and lateralisation effects in a nap study. Participants viewed lists of related words, then stayed awake or slept for approximately 90 min, and were then tested for recognition of previously seen (old), unseen (new), or unseen lure words presented either to the LH or RH. Compared to the wake group, sleep increased acceptance of unseen lure words as previously seen, particularly for RH presentations of word lists. Stage 2 sleep spindle density lateralised to the RH relative to the LH correlated with this increase in false memories, suggesting that RH sleep spindles enhanced false memories in the RH.
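
    Purely as a hypothetical sketch of the kind of analysis reported above (a right-versus-left spindle-density index related to false-memory rates), the fragment below uses made-up numbers; the study's actual laterality measure, electrode montage and statistics may differ.

```python
# Hypothetical sketch: a laterality index for stage-2 spindle density
# (right minus left, over their sum) correlated with each participant's
# false-memory rate (proportion of unseen "lure" words endorsed as old).
# All numbers below are invented for illustration.

from statistics import correlation  # Pearson's r (Python 3.10+)

rh_density = [2.1, 1.8, 2.6, 2.0, 2.9]        # spindles/min, right-hemisphere sites
lh_density = [1.9, 1.9, 2.2, 2.1, 2.3]        # spindles/min, left-hemisphere sites
lure_acceptance = [0.45, 0.30, 0.62, 0.38, 0.70]

laterality = [(r - l) / (r + l) for r, l in zip(rh_density, lh_density)]
print(round(correlation(laterality, lure_acceptance), 3))
```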

    Learning from interpreting transitions in explainable deep learning for biometrics

    Máster Universitario en Métodos Formales en Ingeniería Informática.
    With the rapid development of machine learning, its algorithms have been applied to almost every kind of task, such as natural language processing and marketing prediction. The use of machine learning algorithms is also growing in human resources departments, for example in the hiring pipeline. However, typical machine learning algorithms learn from data collected from society, so the learned model may inherently reflect current and historical biases, and several machine learning systems have been shown to make decisions that are largely influenced by gender or ethnicity. How to reason about the bias of decisions made by machine learning algorithms has therefore attracted more and more attention. Neural structures, such as deep learning models (the most successful form of machine learning based on statistical learning), lack the ability to explain their decisions. The domain described here is just one example in which explanations are needed; situations like this are at the origin of explainable AI, which is the domain of interest for this project. The nature of explanations is declarative rather than numerical. The hypothesis of this project is that declarative approaches to machine learning could be crucial in explainable AI.

    A Rational Analysis of Alternating Search and Reflection Strategies in Problem Solving

    In this paper, two approaches to problem solving, search and reflection, are discussed and combined in two models, both based on rational analysis (Anderson, 1990). The first model is a dynamic growth model, which shows that alternating search and reflection is a rational strategy. The second is an ACT-R model that can discover and revise strategies for solving simple problems. Both models exhibit the explore-insight pattern normally attributed to insight problem solving.