
    The Emergence of Organizing Structure in Conceptual Representation.

    Both scientists and children make important structural discoveries, yet their computational underpinnings are not well understood. Structure discovery has previously been formalized as probabilistic inference about the right structural form, where a form could be a tree, ring, chain, grid, etc. (Kemp & Tenenbaum, 2008). Although this approach can learn intuitive organizations, including a tree for animals and a ring for the color circle, it assumes a strong inductive bias that considers only these particular forms, and each form is explicitly provided as initial knowledge. Here we introduce a new computational model of how organizing structure can be discovered, utilizing a broad hypothesis space with a preference for sparse connectivity. Because the inductive bias is more general, the model's initial knowledge shows little qualitative resemblance to some of the discoveries it supports. As a consequence, the model can also learn complex structures for domains that lack an intuitive description, and it can predict human property induction judgments without explicit structural forms. By allowing form to emerge from sparsity, our approach clarifies how both the richness and flexibility of human conceptual organization can coexist.
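
    As a rough illustration of the sparsity preference described above, the sketch below scores candidate graphs with a hypothetical log prior that charges a fixed penalty per edge, so sparser structures (e.g. a chain) score higher than dense ones. The penalty `beta` and the example graphs are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def log_sparsity_prior(adjacency, beta=2.0):
    """Log prior preferring sparse connectivity: each undirected edge
    costs a fixed penalty beta (beta is a hypothetical choice)."""
    n_edges = int(np.triu(adjacency, k=1).sum())  # count edges once
    return -beta * n_edges

# A 4-node chain (3 edges) versus a fully connected graph (6 edges).
chain = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]])
full = np.ones((4, 4)) - np.eye(4)

print(log_sparsity_prior(chain))  # -6.0
print(log_sparsity_prior(full))   # -12.0
```

    In a full model this prior would be combined with a data likelihood, so the preferred structure trades off fit against sparsity.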

    Probabilistic Meta-Representations Of Neural Networks

    Existing Bayesian treatments of neural networks are typically characterized by weak prior and approximate posterior distributions according to which all the weights are drawn independently. Here, we consider a richer prior distribution in which units in the network are represented by latent variables, and the weights between units are drawn conditionally on the values of the collection of those variables. This allows rich correlations between related weights, and can be seen as realizing a function prior with a Bayesian complexity regularizer ensuring simple solutions. We illustrate the resulting meta-representations and representations, elucidating the power of this prior. Comment: presented at the UAI 2018 Uncertainty in Deep Learning workshop (UDL, Aug. 2018).
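
    The conditional weight construction can be sketched, under assumptions, by drawing a latent code per unit and setting each weight's mean from the pair of codes at its endpoints. The inner-product mean and the dimensions here are illustrative choices, not the paper's actual parameterization (which uses learned functions of the latents).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_correlated_weights(n_in, n_out, latent_dim=3, sigma=0.1):
    """Hypothetical instance of the idea: per-unit latent codes z,
    with each weight drawn as w_ij ~ N(z_i . z_j, sigma^2), so weights
    sharing a unit are correlated through that unit's code."""
    z_in = rng.normal(size=(n_in, latent_dim))    # codes for input units
    z_out = rng.normal(size=(n_out, latent_dim))  # codes for output units
    mean = z_in @ z_out.T                         # correlated weight means
    return mean + sigma * rng.normal(size=mean.shape)

W = sample_correlated_weights(5, 4)
print(W.shape)  # (5, 4)
```

    Because all weights touching a unit depend on that unit's code, resampling one code moves a whole row or column of the matrix together, unlike an independent Gaussian prior over weights.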

    What does the mind learn? A comparison of human and machine learning representations

    We present a brief review of modern machine learning techniques and their use in models of human mental representations, detailing three notable branches: spatial methods, logical methods and artificial neural networks. Each of these branches contains an extensive set of systems, and each demonstrates accurate emulations of human learning of categories, concepts and language, despite substantial differences in operation. We suggest that continued applications will allow cognitive researchers to model the complex real-world problems where machine learning has recently been successful, providing more complete behavioural descriptions. This will, however, also require careful consideration of appropriate algorithmic constraints alongside these methods in order to find a combination which captures both the strengths and weaknesses of human cognition.

    Revisiting Reweighted Wake-Sleep for Models with Stochastic Control Flow

    Stochastic control-flow models (SCFMs) are a class of generative models that involve branching on choices from discrete random variables. Amortized gradient-based learning of SCFMs is challenging, as most approaches targeting discrete variables rely on their continuous relaxations, which can be intractable in SCFMs because branching on relaxations requires evaluating all (exponentially many) branching paths. Tractable alternatives mainly combine REINFORCE with complex control-variate schemes to reduce the variance of naive estimators. Here, we revisit the reweighted wake-sleep (RWS) algorithm (Bornschein and Bengio, 2015) and, through extensive evaluations, show that it outperforms current state-of-the-art methods in learning SCFMs. Further, in contrast to the importance weighted autoencoder, we observe that RWS learns better models and inference networks with increasing numbers of particles. Our results suggest that RWS is a competitive, often preferable, alternative for learning SCFMs. Comment: Tuan Anh Le and Adam R. Kosiorek contributed equally; accepted to Uncertainty in Artificial Intelligence 201
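
    The particle computation at the heart of RWS can be sketched as follows (a minimal numerical illustration, not the authors' implementation): each of K particles z_k drawn from the inference network q receives a self-normalized importance weight w_k proportional to p(x, z_k) / q(z_k | x), and these weights are then used to weight the wake-phase gradient terms. The log-density values below are made-up numbers for illustration.

```python
import numpy as np

def rws_weights(log_p_joint, log_q):
    """Self-normalized importance weights over K particles:
    w_k proportional to p(x, z_k) / q(z_k | x), computed stably in log space."""
    log_w = log_p_joint - log_q
    log_w = log_w - log_w.max()   # subtract max for numerical stability
    w = np.exp(log_w)
    return w / w.sum()            # normalize so the weights sum to 1

# Toy example with K = 4 particles (illustrative numbers only).
w = rws_weights(np.array([-1.0, -2.0, -0.5, -3.0]),
                np.array([-1.5, -1.5, -1.5, -1.5]))
print(w)
```

    The particle with the highest joint density under the model receives the largest weight; as K grows, averaging gradients under these weights is what lets RWS improve with more particles.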