1,019 research outputs found

    Learning a theory of causality

    The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality and a range of alternatives in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned—an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence and find that a collection of simple perceptual input analyzers can help to bootstrap abstract knowledge. Together, these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion.

    Funding: James S. McDonnell Foundation (Causal Learning Collaborative Initiative); United States. Office of Naval Research (Grant N00014-09-0124); United States. Air Force Office of Scientific Research (Grant FA9550-07-1-0075); United States. Army Research Office (Grant W911NF-08-1-0242)
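
    The hierarchical setup described here lends itself to a small simulation. Below is a minimal sketch, not the paper's actual model: two candidate abstract theories ("events are causally linked" vs. "events are independent") are scored against co-occurrence data pooled from several two-variable systems, while each system's specific structure (A->B vs. B->A) is inferred from that system's data alone. All numerical parameters (base rates, noise levels, number of systems) are illustrative assumptions; the point is that the pooled abstract posterior typically sharpens before the per-system posteriors do, echoing the blessing of abstraction.

```python
import random
from math import prod

random.seed(1)

P_CAUSE = 0.8                      # base rate of the cause firing (assumed)
P_EFF = {1: 0.9, 0: 0.3}           # P(effect = 1 | cause state) (assumed)

def lik(a, b, struct):
    """Likelihood of one co-occurrence observation (a, b) under a structure."""
    if struct == "A->B":
        pa = P_CAUSE if a else 1 - P_CAUSE
        pb = P_EFF[a] if b else 1 - P_EFF[a]
        return pa * pb
    if struct == "B->A":
        pb = P_CAUSE if b else 1 - P_CAUSE
        pa = P_EFF[b] if a else 1 - P_EFF[b]
        return pa * pb
    return 0.25                    # "independence" theory: A, B ~ Bernoulli(0.5)

def sample(direction):
    """Draw one observation from a system whose true structure is `direction`."""
    c = int(random.random() < P_CAUSE)
    e = int(random.random() < P_EFF[c])
    return (c, e) if direction == "A->B" else (e, c)

def ml_causal(ds):
    """Marginal likelihood under the causal theory, direction summed out."""
    return sum(0.5 * prod(lik(a, b, s) for a, b in ds)
               for s in ("A->B", "B->A"))

systems = ["A->B", "B->A", "A->B", "A->B"]   # hidden specific structures
data = [[] for _ in systems]
for n in (1, 2, 5, 10, 25, 50):
    for i, d in enumerate(systems):
        data[i] += [sample(d) for _ in range(n - len(data[i]))]
    # abstract level: causal theory vs. independence, uniform prior over the two
    p_c = prod(ml_causal(ds) for ds in data)
    p_i = prod(prod(lik(a, b, "indep") for a, b in ds) for ds in data)
    # specific level: direction of system 0 under the causal theory
    l_ab = prod(lik(a, b, "A->B") for a, b in data[0])
    l_ba = prod(lik(a, b, "B->A") for a, b in data[0])
    print(f"n={n:2d}  P(causal)={p_c / (p_c + p_i):.3f}  "
          f"P(sys0: A->B)={l_ab / (l_ab + l_ba):.3f}")
```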

    Logical word learning: The case of kinship

    We examine the conceptual development of kinship through the lens of program induction. We present a computational model for the acquisition of kinship term concepts, resulting in the first computational model of kinship learning that is closely tied to developmental phenomena. We demonstrate that our model can learn several kinship systems of varying complexity using cross-linguistic data from English, Pukapuka, Turkish, and Yanomamö. More importantly, the behavioral patterns observed in children learning kinship terms, under-extension and over-generalization, fall out naturally from our learning model. We then conduct interviews to simulate realistic learning environments, and demonstrate that the characteristic-to-defining shift is a consequence of our learning model in naturalistic contexts containing abstract and concrete features. We use model simulations to understand the influence of logical simplicity and children’s learning environment on the order of acquisition of kinship terms, providing novel predictions for the learning trajectories of these words. We conclude with a discussion of how this model framework generalizes beyond kinship terms, as well as a discussion of its limitations.
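
    A toy version of the program-induction idea makes the over-generalization point concrete. The sketch below is illustrative, not the paper's model: a handful of hypothetical logical definitions for "grandma" compete under a simplicity prior and a noisy likelihood, so the over-general definition ("any female") wins with one example and the correct composition of parent relations takes over once negative evidence arrives. The family facts, noise rate, and hypothesis space are all assumed for illustration.

```python
# Tiny fact base (assumed for illustration).
PARENT = {("sue", "ann"), ("ann", "bob"), ("ann", "eve"), ("tom", "ann")}
FEMALE = {"sue", "ann", "eve"}
PEOPLE = {"sue", "tom", "ann", "bob", "eve"}

def parent(x, y):
    return (x, y) in PARENT

hyps = [  # (definition, description length, predicate over (referent, relatum))
    ("female(x)", 1, lambda x, y: x in FEMALE),
    ("parent(x,y)", 1, lambda x, y: parent(x, y)),
    ("female(x) & parent(x,y)", 2,
     lambda x, y: x in FEMALE and parent(x, y)),
    ("female(x) & parent(x,z) & parent(z,y)", 3,
     lambda x, y: x in FEMALE and any(parent(x, z) and parent(z, y)
                                      for z in PEOPLE)),
]

EPS = 0.05   # assumed noise rate: an example fits the hypothesis w.p. 1 - EPS

def posterior(examples):
    """Normalised posterior: simplicity prior (2^-length) * noisy likelihood."""
    scores = []
    for name, size, pred in hyps:
        score = 2.0 ** (-size)
        for (x, y), label in examples:
            score *= (1 - EPS) if pred(x, y) == label else EPS
        scores.append((score, name))
    z = sum(s for s, _ in scores)
    return sorted(((s / z, n) for s, n in scores), reverse=True)

# One positive example ("sue is bob's grandma"): the over-general
# hypothesis "female(x)" still dominates on simplicity alone.
print(posterior([(("sue", "bob"), True)])[0])
# Negative evidence pushes the correct composition of relations to the top.
print(posterior([(("sue", "bob"), True),
                 (("eve", "bob"), False),
                 (("ann", "bob"), False)])[0])
```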

    Approximations in Learning & Program Analysis

    In this work we compare and contrast the approximations made in the problems of Data Compression, Program Analysis and Supervised Machine Learning. Gödel's Incompleteness Theorem mandates that any formal system rich enough to include the integers will have unprovable truths. Thus non-computable problems abound, including, but not limited to, Program Analysis, Data Compression and Machine Learning. Indeed, it can be shown that there are more non-computable functions than computable ones. Due to non-computability, precise solutions for these problems are not feasible, and only approximate solutions may be computed. Presently, each of the problems of Data Compression, Machine Learning and Program Analysis is studied independently. Each problem has its own multitude of abstractions, algorithms and notions of tradeoffs among the various parameters. It would be interesting to have a unified framework, across disciplines, that makes explicit the abstraction specifications and the ensuing tradeoffs. Such a framework would promote inter-disciplinary research and develop a unified body of knowledge to tackle non-computable problems. As a small step toward that larger goal, we propose an Information Oriented Model of Computation that allows comparing the approximations used in Data Compression, Program Analysis and Machine Learning. To the best of our knowledge, this is the first work to propose a method for systematic comparison of approximations across disciplines. The model describes computation as set reconstruction; non-computability is then presented as the inability to perfectly reconstruct sets. To compare and contrast the approximations, select algorithms for Data Compression, Machine Learning and Program Analysis are analyzed using our model. We were able to relate the problems of Data Compression, Machine Learning and Program Analysis as specific instances of the general problem of approximate set reconstruction. We demonstrate the use of abstract interpreters in compression schemes. We then compare and contrast the approximations in Program Analysis and Supervised Machine Learning. We demonstrate the use of ordered structures, fixpoint equations and least-fixpoint approximation computations, all characteristic of Abstract Interpretation (Program Analysis), in Machine Learning algorithms. We also present the idea that widening, like regression, is an inductive learner: regression generalizes known states to a hypothesis, while widening generalizes abstract states on an iteration chain to a fixpoint. While regression usually aims to minimize the total error (the sum of false positives and false negatives), widening aims for soundness and hence errs on the side of false positives so as to have zero false negatives. We use this duality to derive a generic widening operator from regression on the set of abstract states. The results of this dissertation are first steps towards a unified approach to approximate computation. Our preliminary results raise many more interesting questions, some of which we discuss in the concluding chapter.
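
    The widening-as-inductive-learner claim is easiest to see on the classic interval domain. The sketch below is a standard abstract-interpretation example, not code from the dissertation: Kleene iteration on the loop `while x < 100: x += 1` produces an ever-growing chain of intervals, and widening generalizes that chain to a post-fixpoint in finitely many steps, over-approximating (false positives) rather than missing behaviors (no false negatives).

```python
import math

INF = math.inf

def join(a, b):
    """Least upper bound of two intervals (lo, hi)."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """Classic interval widening: any bound that grew jumps to infinity."""
    lo = a[0] if b[0] >= a[0] else -INF
    hi = a[1] if b[1] <= a[1] else INF
    return (lo, hi)

def step(x):
    """Abstract transformer for one iteration of: while x < 100: x = x + 1."""
    lo, hi = x
    lo, hi = lo, min(hi, 99)      # filter by the guard x < 100 (emptiness check elided)
    return (lo + 1, hi + 1)       # x += 1

# Kleene iteration with widening, starting from the entry state x = 0.
x = (0, 0)
while True:
    nxt = widen(x, join((0, 0), step(x)))   # entry state joined with back edge
    if nxt == x:
        break
    x = nxt
print("with widening: ", x)                  # (0, inf): sound but imprecise

# One narrowing pass (re-running the transformer) recovers precision.
print("after narrowing:", join((0, 0), step(x)))   # (0, 100)
```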

    Symbolic Logic meets Machine Learning: A Brief Survey in Infinite Domains

    The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence (AI). The deduction camp concerns itself with questions about the expressiveness of formal languages for capturing knowledge about the world, together with proof systems for reasoning from such knowledge bases. The learning camp attempts to generalize from examples giving partial descriptions of the world. In AI, historically, these camps have loosely divided the development of the field, but advances in cross-over areas such as statistical relational learning, neuro-symbolic systems, and high-level control have illustrated that the dichotomy is not very constructive, and perhaps even ill-formed. In this article, we survey work that provides further evidence for the connections between logic and learning. Our narrative is structured in terms of three strands: logic versus learning, machine learning for logic, and logic for machine learning, though naturally there is considerable overlap. We place an emphasis on the following "sore" point: there is a common misconception that logic is for discrete properties, whereas probability theory and machine learning, more generally, are for continuous properties. We report on results that challenge this view of the limitations of logic, and expose the role that logic can play for learning in infinite domains.
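
    One standard bridge between the two camps, in the spirit of the connections the survey reports, is inference by weighted model counting: probabilities are computed by summing weights over the models of a logical theory. The sketch below is a generic finite propositional example (the rain/sprinkler numbers are assumed), not drawn from the survey; infinite-domain variants require the heavier machinery the article discusses.

```python
from itertools import product

VARS = ["rain", "sprinkler", "wet"]
W = {"rain": 0.3, "sprinkler": 0.5}   # weights for the root causes (assumed)

def theory(m):
    """Background knowledge: the grass is wet iff it rains or the sprinkler runs."""
    return m["wet"] == (m["rain"] or m["sprinkler"])

def weight(m):
    """Product of per-variable weights ('wet' is determined by the theory)."""
    w = 1.0
    for v, p in W.items():
        w *= p if m[v] else 1 - p
    return w

def wmc(*constraints):
    """Weighted count of truth assignments satisfying all constraints."""
    total = 0.0
    for bits in product([False, True], repeat=len(VARS)):
        m = dict(zip(VARS, bits))
        if all(c(m) for c in constraints):
            total += weight(m)
    return total

# Conditional query: P(rain | wet) = WMC(theory & rain & wet) / WMC(theory & wet)
p = (wmc(theory, lambda m: m["rain"], lambda m: m["wet"])
     / wmc(theory, lambda m: m["wet"]))
print(f"P(rain | wet) = {p:.3f}")   # 0.3 / 0.65 ~= 0.462
```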

    Abstracting Probabilistic Models: Relations, Constraints and Beyond


    From simple to complex categories: how structure and label information guides the acquisition of category knowledge

    Categorization is a fundamental ability of human cognition, translating complex streams of information from all of the different senses into simpler, discrete categories. How do people acquire all of this category knowledge, particularly the kinds of rich, structured categories we interact with every day in the real world? In this thesis, I explore how information from category structure and category labels influences how people learn categories, particularly for the kinds of computational problems that are relevant to real-world category learning. The three learning problems this thesis covers are semi-supervised learning, structure learning, and category learning with many features. Each of these presents a different kind of learning challenge, and through a combination of behavioural experiments and computational modeling, this thesis illustrates how the interplay between structure and label information can explain how humans acquire richer kinds of category knowledge.

    Thesis (Ph.D.) (Research by Publication) -- University of Adelaide, School of Psychology, 201
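
    For the semi-supervised strand, a minimal sketch (not the thesis's model) shows the basic computational leverage: a two-category, one-dimensional Gaussian mixture fit with EM, where a few labeled examples anchor the categories and a pool of unlabeled examples pulls the category boundary toward the low-density region between the clusters. All data are synthetic and all parameters assumed for illustration.

```python
import math
import random

random.seed(0)
labeled   = [(-2.2, 0), (-1.8, 0), (1.9, 1), (2.3, 1)]          # (x, category)
unlabeled = ([random.gauss(-2, 0.7) for _ in range(100)]
             + [random.gauss(2, 0.7) for _ in range(100)])

def pdf(x, mu, sd=0.7):           # fixed, equal variances for simplicity
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

mu = [-0.5, 0.5]                  # deliberately poor initial category means
for _ in range(25):               # EM iterations
    # E-step: responsibility for category 1 (labels clamped for labeled points)
    resp = [(x, 1.0 if c == 1 else 0.0) for x, c in labeled]
    resp += [(x, pdf(x, mu[1]) / (pdf(x, mu[0]) + pdf(x, mu[1])))
             for x in unlabeled]
    # M-step: re-estimate the two category means from soft assignments
    mu[1] = sum(x * r for x, r in resp) / sum(r for _, r in resp)
    mu[0] = sum(x * (1 - r) for x, r in resp) / sum(1 - r for _, r in resp)

print(f"means: {mu[0]:.2f}, {mu[1]:.2f}; boundary at {sum(mu) / 2:.2f}")
```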

    Efficient instance and hypothesis space revision in Meta-Interpretive Learning

    Inductive Logic Programming (ILP) is a form of Machine Learning. The goal of ILP is to induce hypotheses, as logic programs, that generalise training examples. ILP is characterised by high expressivity, generalisation ability and interpretability. Meta-Interpretive Learning (MIL) is a state-of-the-art sub-field of ILP. However, current MIL approaches have limited efficiency: the sample and learning complexity are, respectively, polynomial and exponential in the number of clauses. My thesis is that improvements over the sample and learning complexity can be achieved in MIL through instance and hypothesis space revision. Specifically, we investigate 1) methods that revise the instance space, 2) methods that revise the hypothesis space and 3) methods that revise both the instance and the hypothesis spaces for achieving more efficient MIL. First, we introduce a method for building training sets with active learning in Bayesian MIL, where instances are selected to maximise entropy. We demonstrate that this method can reduce the sample complexity and supports efficient learning of agent strategies. Second, we introduce a new method for revising the MIL hypothesis space with predicate invention. Our method generates predicates bottom-up from the background knowledge related to the training examples. We demonstrate that this method is complete and can reduce the learning and sample complexity. Finally, we introduce a new MIL system called MIGO for learning optimal two-player game strategies. MIGO learns from playing: its training sets are built from the sequence of actions it chooses. Moreover, MIGO revises its hypothesis space with Dependent Learning: it first solves simpler tasks and can reuse any learned solution for solving more complex tasks. We demonstrate that MIGO significantly outperforms both classical and deep reinforcement learning. The methods presented in this thesis open exciting perspectives for efficiently learning theories with MIL in a wide range of applications, including robotics, modelling of agent strategies and game playing.

    Open Access
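
    The entropy-driven instance selection behind the first contribution can be illustrated on a far simpler hypothesis space than MIL's. The sketch below is an assumed stand-in, not MIGO or Bayesian MIL: hypotheses are threshold concepts h_t(x) = [x >= t], the learner maintains a posterior over them, and it always queries the instance whose predicted label has maximum entropy, roughly halving the version space with each query.

```python
import math

XS = range(11)                              # instance space (assumed)
hyps = list(range(12))                      # h_t labels x positive iff x >= t
post = {t: 1 / len(hyps) for t in hyps}     # uniform prior over hypotheses
true_t = 7                                  # hidden target concept (assumed)

def p_pos(x):
    """Posterior predictive probability that instance x is labeled positive."""
    return sum(p for t, p in post.items() if x >= t)

def entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

for step in range(4):
    x = max(XS, key=lambda x: entropy(p_pos(x)))   # most informative query
    label = x >= true_t                            # oracle answers the query
    for t in hyps:                                 # exact, noise-free update
        if (x >= t) != label:
            post[t] = 0.0
    z = sum(post.values())
    post = {t: p / z for t, p in post.items()}
    alive = [t for t, p in post.items() if p > 0]
    print(f"queried x={x}, label={label}, remaining thresholds={alive}")
```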