38 research outputs found

    Exploring the conceptual universe.

    No full text
    <p>Humans can learn to organize many kinds of domains into categories, including real-world domains such as kinsfolk and synthetic domains such as sets of geometric figures that vary along several dimensions. Psychologists have studied many individual domains in detail, but there have been few attempts to characterize or explore the full space of possibilities. This article provides a formal characterization that takes objects, features, and relations as primitives and specifies conceptual domains by combining these primitives in different ways. Explaining how humans are able to learn concepts within all of these domains is a challenge for computational models, but I argue that this challenge can be met by models that rely on a compositional representation language such as predicate logic. The article presents such a model and demonstrates that it accounts well for human concept learning across 11 different domains.</p>

    Quantification and the language of thought

    No full text
    <p>Many researchers have suggested that the psychological complexity of a concept is related to the length of its representation in a language of thought. As yet, however, there are few concrete proposals about the nature of this language. This paper makes one such proposal: the language of thought allows first-order quantification (quantification over objects) more readily than second-order quantification (quantification over features). To support this proposal we present behavioral results from a concept learning study inspired by the work of Shepard, Hovland and Jenkins.</p>

    Inductive reasoning about chimeric creatures

    No full text
    <p>Given one feature of a novel animal, humans readily make inferences about other features of the animal. For example, winged creatures often fly, and creatures that eat fish often live in the water. We explore the knowledge that supports these inferences and compare two approaches. The first approach proposes that humans rely on abstract representations of dependency relationships between features, and is formalized here as a graphical model. The second approach proposes that humans rely on specific knowledge of previously encountered animals, and is formalized here as a family of exemplar models. We evaluate these models using a task where participants reason about chimeras, or animals with pairs of features that have not previously been observed to co-occur. The results support the hypothesis that humans rely on explicit representations of relationships between features.</p>
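    The exemplar approach described above can be illustrated with a minimal sketch: stored animals are weighted by how well they match the observed feature of the novel animal, and the weighted exemplars vote on the queried feature. The animals, features, and mismatch parameter below are illustrative assumptions, not the paper's actual stimuli or formulation.

    ```python
    # Minimal similarity-weighted exemplar model (illustrative sketch).
    # Stored animals are binary feature vectors; a mismatching observed
    # feature multiplies similarity by a small constant.

    ANIMALS = {
        "eagle":   {"wings": 1, "flies": 1, "eats_fish": 0, "aquatic": 0},
        "penguin": {"wings": 1, "flies": 0, "eats_fish": 1, "aquatic": 1},
        "seal":    {"wings": 0, "flies": 0, "eats_fish": 1, "aquatic": 1},
        "sparrow": {"wings": 1, "flies": 1, "eats_fish": 0, "aquatic": 0},
    }

    def exemplar_predict(observed, query, mismatch=0.1):
        """Estimate P(query feature = 1) for a novel animal.

        observed: dict of feature -> value seen so far.
        mismatch: similarity contributed by a mismatching feature (1.0 for a match).
        """
        num = den = 0.0
        for feats in ANIMALS.values():
            sim = 1.0
            for f, v in observed.items():
                sim *= 1.0 if feats[f] == v else mismatch
            num += sim * feats[query]   # exemplars vote, weighted by similarity
            den += sim
        return num / den

    print(exemplar_predict({"wings": 1}, "flies"))        # winged -> probably flies
    print(exemplar_predict({"eats_fish": 1}, "aquatic"))  # fish-eater -> probably aquatic
    ```

    A chimera such as a winged fish-eater is handled the same way: both observed features enter the similarity computation, even though no stored animal has that combination.
    
    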

    Inference and communication in the game of Password

    No full text
    <p>Communication between a speaker and hearer will be most efficient when both parties make accurate inferences about the other. We study inference and communication in a television game called Password, where speakers must convey secret words to hearers by providing one-word clues. Our working hypothesis is that human communication is relatively efficient, and we use game show data to examine three predictions. First, we predict that speakers and hearers are both considerate, and that both take the other's perspective into account. Second, we predict that speakers and hearers are calibrated, and that both make accurate assumptions about the strategy used by the other. Finally, we predict that speakers and hearers are collaborative, and that they tend to share the cognitive burden of communication equally. We find evidence in support of all three predictions, and demonstrate in addition that efficient communication tends to break down when speakers and hearers are placed under time pressure.</p>

    A probabilistic account of exemplar and category generation.

    No full text
    <p>People are capable of imagining and generating new category exemplars and categories. This ability has not been addressed by previous models of categorization, most of which focus on classifying category exemplars rather than generating them. We develop a formal account of exemplar and category generation which proposes that category knowledge is represented by probability distributions over exemplars and categories, and that new exemplars and categories are generated by sampling from these distributions. This sampling account of generation is evaluated in two pairs of behavioral experiments. In the first pair of experiments, participants were asked to generate novel exemplars of a category. In the second pair of experiments, participants were asked to generate a novel category after observing exemplars from several related categories. The results suggest that generation is influenced by both structural and distributional properties of the observed categories, and we argue that our data are better explained by the sampling account than by several alternative approaches.</p>
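    The sampling account can be sketched in a few lines: category knowledge is summarized as a probability distribution fit to observed exemplars, and a novel exemplar is produced by drawing a sample from it. The diagonal-Gaussian representation and the two-dimensional stimuli below are simplifying assumptions for illustration, not the experiments' actual materials.

    ```python
    # Illustrative sketch of the sampling account of exemplar generation:
    # fit a per-dimension Gaussian to observed exemplars, then sample from it.
    import random
    import statistics

    observed = [(2.0, 5.1), (2.4, 4.8), (1.9, 5.3), (2.2, 4.9)]  # known exemplars

    def fit_category(exemplars):
        """Estimate (mean, stdev) for each stimulus dimension."""
        dims = list(zip(*exemplars))
        return [(statistics.mean(d), statistics.stdev(d)) for d in dims]

    def generate_exemplar(params, rng=random):
        """Generate a novel exemplar by sampling each dimension independently."""
        return tuple(rng.gauss(mu, sigma) for mu, sigma in params)

    params = fit_category(observed)
    new_exemplar = generate_exemplar(params)
    print(new_exemplar)  # a novel exemplar in the vicinity of the observed ones
    ```

    Generated exemplars inherit the distributional properties of the category (its central tendency and spread), which is the sense in which generation is "sampling" rather than rule application.
    
    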

    A taxonomy of inductive problems.

    No full text
    <p>Inductive inferences about objects, features, categories, and relations have been studied for many years, but there are few attempts to chart the range of inductive problems that humans are able to solve. We present a taxonomy of inductive problems that helps to clarify the relationships between familiar inductive problems such as generalization, categorization, and identification, and that introduces new inductive problems for psychological investigation. Our taxonomy is founded on the idea that semantic knowledge is organized into systems of objects, features, categories, and relations, and we attempt to characterize all of the inductive problems that can arise when these systems are partially observed. Recent studies have begun to address some of the new problems in our taxonomy, and future work should aim to develop unified theories of inductive reasoning that explain how people solve all of the problems in the taxonomy.</p>

    An ideal observer model of infant object perception

    No full text
    <p>Before the age of 4 months, infants make inductive inferences about the motions of physical objects. Developmental psychologists have provided verbal accounts of the knowledge that supports these inferences, but often these accounts focus on categorical rather than probabilistic principles. We propose that infant object perception is guided in part by probabilistic principles like persistence: things tend to remain the same, and when they change they do so gradually. To illustrate this idea we develop an ideal observer model that incorporates probabilistic principles of rigidity and inertia. Like previous researchers, we suggest that rigid motions are expected from an early age, but we challenge the previous claim that the inertia principle is relatively slow to develop [1]. We support these arguments by modeling several experiments from the developmental literature.</p>

    Learning Deterministic Causal Networks from Observational Data

    No full text
    <p>Previous work suggests that humans find it difficult to learn the structure of causal systems given observational data alone. We show that structure learning is successful when the causal systems in question are consistent with people's expectations that causal relationships are deterministic and that each pattern of observations has a single underlying cause. Our data are well explained by a Bayesian model that incorporates a preference for symmetric structures and a preference for structures that make the observed data not only possible but likely.</p>
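    The preference for structures that make the data "not only possible but likely" can be illustrated with a size-principle sketch: a deterministic structure licenses a set of observation patterns, and under uniform sampling each observation has likelihood one over the size of that set, so tighter structures score higher. The candidate structures and data below are hypothetical, chosen for illustration rather than taken from the paper.

    ```python
    # Illustrative size-principle likelihood for deterministic causal structures:
    # each structure is summarized by the set of observation patterns its
    # mechanism can produce (over three binary variables).
    from fractions import Fraction

    structures = {
        "common_cause":  {(0, 0, 0), (1, 1, 1)},             # e.g. A causes B and C
        "looser":        {(0, 0, 0), (1, 1, 1), (1, 1, 0)},  # licenses one extra pattern
        "unconstrained": {(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)},
    }

    data = [(1, 1, 1), (0, 0, 0), (1, 1, 1)]  # observed patterns

    def likelihood(patterns, data):
        """P(data | structure): uniform sampling from the licensed patterns."""
        if any(d not in patterns for d in data):
            return Fraction(0)  # structure makes the data impossible
        return Fraction(1, len(patterns)) ** len(data)

    for name, patterns in structures.items():
        print(name, likelihood(patterns, data))
    ```

    All three structures make the data possible, but the tightest one makes it most likely, so a Bayesian learner with roughly uniform priors would favor it.
    
    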

    Category Generation

    No full text
    <p>People exhibit the ability to imagine new category instances and new categories, with examples ranging from everyday activities like cooking to scientific discovery. This ability, which we call category generation, is not addressed by standard models of category learning, which focus on classifying instances rather than generating them. We develop a probabilistic account of category generation and evaluate it using two behavioral experiments. Our results confirm that people find it natural to generate new category instances and suggest that our model accounts well for this ability.</p>

    Decision factors that support preference learning

    No full text
    <p>People routinely draw inferences about others' preferences by observing their decisions. We study these inferences by characterizing a space of simple observed decisions. Previous work on attribution theory has identified several factors that predict whether a given decision provides strong evidence for an underlying preference. We identify one additional factor and show that a simple probabilistic model captures all of these factors. The model goes beyond verbal formulations of attribution theory by generating quantitative predictions about the full set of decisions that we consider. We test some of these predictions in two experiments: one with decisions involving positive effects and one with decisions involving negative effects. The second experiment confirms that inferences vary in systematic ways when positive effects are replaced by negative effects.</p>