
    Resolving the Raven Paradox: Simple Random Sampling, Stratified Random Sampling, and Inference to the Best Explanation

    Simple random sampling resolutions of the raven paradox relevantly diverge from scientific practice. We develop a stratified random sampling model, yielding a better fit and apparently rehabilitating simple random sampling as a legitimate idealization. However, neither model accommodates a second concern, the objection from potential bias. We develop a third model that crucially invokes causal considerations, yielding a novel resolution that handles both concerns. This approach resembles Inference to the Best Explanation (IBE) and relates the generalization’s confirmation to confirmation of an associated law. We give it an objective Bayesian formalization and discuss the compatibility of Bayesianism and IBE.
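
    As a toy illustration of the sampling distinction at stake (a numeric sketch with invented counts, not the authors' model), the following compares Bayesian support for H0 = "all ravens are black" against a rival hypothesis H1 under three sampling protocols:

        # Toy Bayesian confirmation sketch; all counts are invented for illustration.
        N = 10_000   # objects in the toy universe
        R = 100      # ravens
        D = 10       # non-black ravens, IF the rival hypothesis H1 is true
        M = 9_000    # non-black non-ravens (the same count under H0 and H1)

        def likelihood_ratio(p_e_h0, p_e_h1):
            """Support for H0 over H1; values above 1 confirm H0."""
            return p_e_h0 / p_e_h1

        # Stratified sampling: draw a raven, observe that it is black.
        lr_raven = likelihood_ratio(1.0, (R - D) / R)
        # Stratified sampling: draw a non-black object, observe a non-raven.
        lr_shoe = likelihood_ratio(1.0, M / (M + D))
        # Simple random sampling: draw any object, observe a black raven.
        lr_srs = likelihood_ratio(R / N, (R - D) / N)

        print(f"black raven, raven stratum:   LR = {lr_raven:.4f}")   # 1.1111
        print(f"non-raven, non-black stratum: LR = {lr_shoe:.4f}")    # 1.0011
        print(f"black raven, simple random:   LR = {lr_srs:.4f}")     # 1.1111

    The non-black stratum yields only marginal confirmation, which is the standard Bayesian gloss on why a white shoe "confirms" the generalization so weakly.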

    Reinventing grounded theory: some questions about theory, ground and discovery

    Grounded theory’s popularity persists after three decades of broad-ranging critique. This article discusses three problematic notions that linger in the continuing use and development of grounded theory procedures: ‘theory’, ‘ground’ and ‘discovery’. It is argued that, far from providing the epistemic security grounded theory promises, these notions, embodied in its continuing reinventions, constrain and distort qualitative inquiry: what is contrived is not in fact theory in any meaningful sense; ‘ground’ is a misnomer when talking about interpretation; and what ultimately materializes from grounded theory procedures is less like discovery and more akin to invention. The procedures admittedly provide signposts for qualitative inquirers, but educational researchers should be wary, for those procedures can undermine the significance of interpretation, narrative and reflection.

    Raising argument strength using negative evidence: A constraint on models of induction

    Both intuitively and according to similarity-based theories of induction, relevant evidence raises argument strength when it is positive and lowers it when it is negative. In three experiments, we tested the hypothesis that argument strength can actually increase when negative evidence is introduced. Two kinds of argument were compared through forced choice or sequential evaluation: single positive arguments (e.g., “Shostakovich’s music causes alpha waves in the brain; therefore, Bach’s music causes alpha waves in the brain”) and double mixed arguments (e.g., “Shostakovich’s music causes alpha waves in the brain, X’s music DOES NOT; therefore, Bach’s music causes alpha waves in the brain”). Negative evidence in the second premise lowered credence in the conclusion when it applied to an item X from the same subcategory (e.g., Haydn) and raised it when it applied to a different subcategory (e.g., AC/DC). The results constitute a new constraint on models of induction.
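
    One quick way to see the constraint (with invented similarity values; the paper reports human judgments, not this rule) is that any model scoring an argument as positive-premise similarity minus negative-premise similarity must predict a drop whenever a negative premise is added, whereas participants' credence rose for negatives from a distant subcategory:

        # Hypothetical similarity values, chosen only to illustrate the constraint.
        sim = {
            ("shostakovich", "bach"): 0.7,  # positive premise, same broad category
            ("haydn", "bach"): 0.9,         # near negative: same subcategory
            ("acdc", "bach"): 0.1,          # far negative: different subcategory
        }

        def strength(pos, neg=None, concl="bach"):
            """Toy subtractive similarity rule for argument strength."""
            s = sim[(pos, concl)]
            if neg is not None:
                s -= sim[(neg, concl)]
            return s

        print(strength("shostakovich"))            #  0.7  single positive argument
        print(strength("shostakovich", "haydn"))   # -0.2  lower, as observed
        print(strength("shostakovich", "acdc"))    #  0.6  still lower than 0.7,
                                                   #  yet judged HIGHER by people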

    Extending and testing the bayesian theory of generalization

    We introduce a tractable family of Bayesian generalization functions. The family extends the basic model proposed by Tenenbaum and Griffiths (2001), allowing richer variation in sampling assumptions and prior beliefs. We derive analytic expressions for these generalization functions and provide an explicit model for experimental data. We then present an experiment that tests the basic model’s predictions within the core domain of the theory, namely tasks that require people to make inductive judgments about whether some property holds for novel items. Analysis of the results illustrates the importance of describing variations in people’s prior beliefs and assumptions about how items are sampled, and of having an explicit model for the entire task.
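
    For orientation, here is a minimal sketch of the basic Tenenbaum and Griffiths (2001) model in one dimension, taking the hypotheses to be intervals on a grid with a uniform prior (an illustrative special case, not the richer family developed in the paper). Under strong sampling, examples are drawn uniformly from the true extension, so each consistent hypothesis h has likelihood 1/|h| per example (the size principle):

        from itertools import combinations

        points = range(1, 11)                  # a 1-D stimulus grid
        hypotheses = [set(range(a, b + 1))     # all intervals [a, b] on the grid
                      for a, b in combinations(points, 2)]

        def p_generalize(y, examples, hyps=hypotheses):
            """P(y in concept | examples), averaging over consistent hypotheses."""
            num = den = 0.0
            for h in hyps:
                if not set(examples) <= h:
                    continue                                  # h inconsistent
                likelihood = (1.0 / len(h)) ** len(examples)  # size principle
                den += likelihood                             # uniform prior
                if y in h:
                    num += likelihood
            return num / den

        print(p_generalize(5, [4]))        # generalization falls off with distance...
        print(p_generalize(9, [4]))
        print(p_generalize(5, [4, 4, 4]))  # ...and sharpens with repeated sampling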

    Adaptive Density Estimation for Generative Models

    Unsupervised learning of generative models has seen tremendous progress over recent years, in particular due to generative adversarial networks (GANs), variational autoencoders, and flow-based models. GANs have dramatically improved sample quality, but suffer from two drawbacks: (i) they mode-drop, i.e., they do not cover the full support of the training data, and (ii) they do not allow for likelihood evaluations on held-out data. In contrast, likelihood-based training encourages models to cover the full support of the training data, but yields poorer samples. These mutual shortcomings can in principle be addressed by training generative latent variable models in a hybrid adversarial-likelihood manner. However, we show that commonly made parametric assumptions create a conflict between the two objectives, making successful hybrid models non-trivial. As a solution, we propose to use deep invertible transformations in the latent variable decoder. This approach allows for likelihood computations in image space, is more efficient than fully invertible models, and can take full advantage of adversarial training. We show that our model significantly improves over existing hybrid models, offering GAN-like samples, Inception Score (IS) and Fréchet Inception Distance (FID) scores competitive with fully adversarial models, and improved likelihood scores.
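
    Schematically, the hybrid objective combines an exact likelihood term, computed through the invertible part of the decoder by the change of variables, with a standard adversarial term. The sketch below (PyTorch, illustrative only: a single invertible affine map stands in for the paper's deep invertible decoder, and all module names are placeholders) shows the shape of such a loss:

        import torch
        import torch.nn as nn

        D = 4  # toy data dimensionality

        class InvertibleAffine(nn.Module):
            """x = z * exp(log_scale) + shift; invertible, with a cheap log-det."""
            def __init__(self, dim):
                super().__init__()
                self.log_scale = nn.Parameter(torch.zeros(dim))
                self.shift = nn.Parameter(torch.zeros(dim))

            def inverse(self, x):
                z = (x - self.shift) * torch.exp(-self.log_scale)
                return z, -self.log_scale.sum()  # z and log|det J| of the inverse

        decoder = InvertibleAffine(D)
        critic = nn.Sequential(nn.Linear(D, 32), nn.ReLU(), nn.Linear(32, 1))
        prior = torch.distributions.Normal(0.0, 1.0)

        def hybrid_loss(x_real, x_fake, lam=1.0):
            # Likelihood term: exact log p(x) via the change of variables,
            # log p(x) = log p_z(f_inv(x)) + log |det J_f_inv(x)|.
            z, log_det = decoder.inverse(x_real)
            nll = -(prior.log_prob(z).sum(dim=1) + log_det).mean()
            # Adversarial term (non-saturating) on decoder samples.
            adv = -torch.log(torch.sigmoid(critic(x_fake)) + 1e-8).mean()
            return nll + lam * adv

        z = torch.randn(8, D)
        x_fake = z * torch.exp(decoder.log_scale) + decoder.shift  # decoder forward
        print(hybrid_loss(torch.randn(8, D), x_fake))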

    Acquiring Word-Meaning Mappings for Natural Language Interfaces

    This paper focuses on a system, WOLFIE (WOrd Learning From Interpreted Examples), that acquires a semantic lexicon from a corpus of sentences paired with semantic representations. The lexicon learned consists of phrases paired with meaning representations. WOLFIE is part of an integrated system that learns to transform sentences into representations such as logical database queries. Experimental results are presented demonstrating WOLFIE's ability to learn useful lexicons for a database interface in four different natural languages. The usefulness of the lexicons learned by WOLFIE is compared to that of lexicons acquired by a similar system, with results favorable to WOLFIE. A second set of experiments demonstrates WOLFIE's ability to scale to larger and more difficult, albeit artificially generated, corpora. In natural language acquisition, it is difficult to gather the annotated data needed for supervised learning; unannotated data, however, is fairly plentiful. Active learning methods attempt to select for annotation and training only the most informative examples, and are therefore potentially very useful in natural language applications. However, most results to date for active learning have considered only standard classification tasks. To reduce annotation effort while maintaining accuracy, we apply active learning to semantic lexicons. We show that active learning can significantly reduce the number of annotated examples required to achieve a given level of performance.
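
    The active learning component can be pictured with a generic pool-based loop (a sketch using uncertainty sampling and scikit-learn on synthetic data; WOLFIE's actual selection criterion for lexicon examples may differ):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=500, random_state=0)
        labeled = list(range(10))       # a small seed set of "annotated" examples
        pool = list(range(10, len(X)))  # the unannotated pool

        model = LogisticRegression(max_iter=1000)
        for _ in range(20):             # 20 rounds of simulated annotation
            model.fit(X[labeled], y[labeled])
            proba = model.predict_proba(X[pool])
            # Pick the least-confident example: lowest top-class probability.
            idx = int(np.argmin(proba.max(axis=1)))
            labeled.append(pool.pop(idx))

        print(f"annotated {len(labeled)} of {len(X)} examples")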