14 research outputs found

    Logical Reduction of Metarules

    Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and involves a trade-off between efficiency and expressivity: the hypothesis space grows with more metarules, so we wish to use fewer, but if we use too few metarules then we lose expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called derivation reduction, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and theoretically show whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules. In general, derivation-reduced sets of metarules outperform subsumption- and entailment-reduced sets, both in predictive accuracy and in learning time.
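
    As a hedged illustration (not taken from the paper itself): metarules are second-order Horn clauses in which the predicate symbols are variables. The LaTeX sketch below shows two metarules as conventionally written in the ILP literature, with names (ident, chain) that are standard but assumed here rather than quoted from the paper, plus the intuition behind derivation reduction.

    % Two illustrative metarules: P, Q, R, S are second-order variables
    % ranging over predicate symbols; A, B, C, D are first-order variables.
    \begin{align*}
    \textit{ident}:\ & P(A,B) \leftarrow Q(A,B) \\
    \textit{chain}:\ & P(A,B) \leftarrow Q(A,C),\, R(C,B)
    \end{align*}
    % Derivation-reduction intuition: the longer clause
    %   P(A,B) \leftarrow Q(A,C),\, R(C,D),\, S(D,B)
    % can be derived by resolving chain with itself (SLD-resolution),
    % so it is derivationally redundant and can be dropped from the set.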

    Learning from interpreting transitions in explainable deep learning for biometrics

    With the rapid development of machine learning, its algorithms have been applied to almost every kind of task, such as natural language processing and marketing prediction. The use of machine learning is also growing in human-resources processes such as the hiring pipeline. However, typical machine learning algorithms learn from data collected from society, so a learned model may inherently reflect current and historical biases; indeed, several machine learning systems have been shown to make decisions largely influenced by gender or ethnicity. How to reason about the bias of decisions made by machine learning algorithms has therefore attracted growing attention. Neural architectures such as deep learning, the most successful form of machine learning based on statistical learning, lack the ability to explain their decisions. The domain sketched here is just one example in which explanations are needed; situations like this are at the origin of explainable AI, the domain of interest for this project. The nature of explanations is declarative rather than numerical. The hypothesis of this project is that declarative approaches to machine learning could be crucial for explainable AI.

    Explanatory machine learning for sequential human teaching

    The topic of comprehensibility of machine-learned theories has recently drawn increasing attention. Inductive Logic Programming (ILP) uses logic programming to derive logic theories from small amounts of data based on abduction and induction techniques. Learned theories are represented in the form of rules as declarative descriptions of the obtained knowledge. In earlier work, the authors provided the first evidence of a measurable increase in human comprehension based on machine-learned logic rules for simple classification tasks. In a later study, it was found that presenting machine-learned explanations to humans can produce both beneficial and harmful effects in the context of game learning. We continue our investigation of comprehensibility by examining the effects of the ordering of concept presentations on human comprehension. In this work, we examine the explanatory effects of curriculum order and the presence of machine-learned explanations for sequential problem-solving. We show that 1) there exist tasks A and B such that learning A before B leads to better human comprehension than learning B before A, and 2) there exist tasks A and B such that the presence of explanations when learning A contributes to improved human comprehension when subsequently learning B. We propose a framework for the effects of sequential teaching on comprehension based on an existing definition of comprehensibility, and provide supporting evidence from data collected in human trials. Empirical results show that sequential teaching of concepts with increasing complexity a) has a beneficial effect on human comprehension, b) leads to human re-discovery of divide-and-conquer problem-solving strategies, and c) allows humans who study machine-learned explanations to adapt their problem-solving strategies with better performance.
    Comment: Submitted to the International Joint Conference on Learning & Reasoning (IJCLR) 202