43 research outputs found

    Reasoning about geography

    To understand the nature and etiology of biases in geographical judgments, the authors asked people to estimate latitudes (Experiments 1 and 2) and longitudes (Experiments 3 and 4) of cities throughout the Old and New Worlds. They also examined how people's biased geographical judgments change after they receive accurate information ("seeds") about actual locations. Location profiles constructed from the pre- and post-seeding location estimates conveyed detailed information about the representations underlying geographical knowledge, including the subjective positioning and subregionalization of regions within continents; differential seeding effects revealed between-region dependencies. The findings implicate an important role for conceptual knowledge and plausible-reasoning processes in tasks that use subjective geographical information. Geographical units like cities, provinces, countries, and continents are almost always irregular in shape and area, in their orientation relative to the cardinal points of the compass, and in their alignment relative to adjacent geographical units. Yet they also fit into a simple hierarchical...
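
    The "location profiles" described above can be illustrated with a minimal Python sketch that treats a profile as the signed latitude error (estimate minus actual) for each city, computed before and after seeding. The city names and numbers below are made-up placeholders, not the study's data.

        # Illustrative sketch only: a "location profile" here is just the signed
        # latitude error per city, before and after a seed fact is given.
        actual_latitude = {"Toronto": 43.7, "Rome": 41.9, "Cairo": 30.0}

        pre_seed_estimates = {"Toronto": 50.0, "Rome": 35.0, "Cairo": 25.0}
        post_seed_estimates = {"Toronto": 46.0, "Rome": 40.0, "Cairo": 28.0}

        def bias_profile(estimates, actual):
            """Signed error in degrees of latitude for each estimated city."""
            return {city: round(estimates[city] - actual[city], 1) for city in estimates}

        print("pre-seeding bias: ", bias_profile(pre_seed_estimates, actual_latitude))
        print("post-seeding bias:", bias_profile(post_seed_estimates, actual_latitude))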

    Penetrating the geometric module: Catalyzing children's use of landmarks.


    The contribution of nonrigid motion and shape information to object perception in pigeons and humans

    The ability to perceive and recognize objects is essential to many animals, including humans. Until recently, models of object recognition have primarily focused on static cues, such as shape, but more recent research is beginning to show that motion plays an important role in object perception. Most studies have focused on rigid motion, a type of motion most often associated with inanimate objects. In contrast, nonrigid motion is often associated with biological motion and is therefore ecologically important to visually dependent animals. In this study, we examined the relative contribution of nonrigid motion and shape to object perception in humans and pigeons, two species that rely extensively on vision. Using a parametric morphing technique to systematically vary nonrigid motion and three-dimensional shape information, we found that both humans and pigeons were able to rely solely on either shape or nonrigid motion information to identify complex objects when one of the two cues was degraded. Humans and pigeons also showed similar 80% accuracy thresholds when the information from both shape and motion cues was degraded. We argue that the use of nonrigid motion for object perception is evolutionarily important and should be considered in general theories of vision, at least with respect to visually sophisticated animals.

    Comparative cognition of object recognition

    Object recognition is fundamental in the lives of most animals. The authors review research comparing object recognition in pigeons and humans. One series of studies investigated recognition of previously learned objects seen in novel depth rotations, including the influence of a single distinctive object part and whether the novel view was close to two or only one of the training views. Another series of studies investigated whether recognition of directly viewed objects differs from recognition of objects viewed in pictures. The final series of studies investigated the role of motion in object recognition. The authors review similarities and differences in object recognition between humans and pigeons. They also discuss future directions for comparative investigations of object recognition.

    Updating geographical knowledge: Principles of coherence and inertia

    In 2 experiments, the authors investigated how representations of global geography are updated when people learn new location information about individual cities. Participants estimated the latitude of cities in North America (Experiment 1) and in the Old and New Worlds (Experiment 2). After making their first estimates, participants were given information about the latitudes of 2 cities and asked to make a second set of estimates. Both the first and second estimates revealed evidence for psychologically distinct geographical subregions that were coordinated, in an ordinal sense, across the Atlantic Ocean. Further, the second estimates were affected by the nature of the physical adjacency between regions (e.g., the southern U.S. and Mexico) and by accurate location information about distant, but coordinated, subregions (e.g., the southern U.S. and Mediterranean Europe). The data provide support for a framework for making geographical estimates in which people strike a balance between 2 principles: the need to keep their knowledge base coherent, and the inertial tendency to resist changing the knowledge base unless it is necessary to maintain coherence. People acquire knowledge about the world across the lifespan. This simple fact implies that new knowledge is acquired in the context of prior knowledge and that the content, and perhaps the...
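
    The coherence/inertia trade-off described in this abstract can be caricatured with a toy Python sketch in which a seed fact pulls estimates of cities in the same subregion strongly and distant cities only weakly. The weights, cities, and numbers are illustrative assumptions, not the authors' model or data.

        # Toy sketch of a coherence/inertia trade-off in updating latitude estimates.
        def update_estimate(old_estimate, seed_shift, same_subregion, w_same=0.6, w_other=0.1):
            """Move an estimate toward coherence with the seed; inertia limits the shift."""
            weight = w_same if same_subregion else w_other
            return old_estimate + weight * seed_shift

        # Suppose a participant estimated Philadelphia at 45 N and then learns it is at 40 N:
        seed_shift = 40.0 - 45.0  # implied correction for the surrounding subregion

        first_estimates = {"New York": 46.0, "Boston": 47.0, "Mexico City": 25.0}
        same_subregion_as_seed = {"New York": True, "Boston": True, "Mexico City": False}

        second_estimates = {
            city: update_estimate(lat, seed_shift, same_subregion_as_seed[city])
            for city, lat in first_estimates.items()
        }
        print(second_estimates)  # nearby cities shift by 3 degrees, Mexico City by only 0.5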

    Using Naming Time to Evaluate Quality Predictors for Model Simplification

    Model simplification researchers require quality heuristics to guide simplification, and quality predictors to allow comparison of different simplification algorithms. However, there has been little evaluation of these heuristics or predictors. We present an evaluation of quality predictors. Our standard of comparison is naming time, a well-established measure of recognition from cognitive psychology. Thirty participants named models of familiar objects at three levels of simplification. Results confirm that naming time is sensitive to model simplification. Correlations indicate that view-dependent image quality predictors are most effective for drastic simplifications, while view-independent three-dimensional predictors are better for more moderate simplifications. Keywords: model simplification, simplification metrics, image quality, naming time, human vision. Introduction: As the number of methods available for constructing or capturing three-dimensional (3D) polygonal models proliferates...
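
    The evaluation logic described above can be sketched in a few lines of Python: correlate an automatic quality-predictor score with mean naming time across simplified models. All numbers below are placeholders, not the study's measurements.

        # Minimal sketch: Pearson correlation between a predictor and naming time.
        from scipy.stats import pearsonr

        predictor_score = [0.92, 0.75, 0.40, 0.88, 0.55, 0.30]  # higher = predicted higher quality
        mean_naming_ms = [810, 905, 1150, 840, 1010, 1230]      # slower naming for poorer models

        r, p = pearsonr(predictor_score, mean_naming_ms)
        print(f"r = {r:.2f}, p = {p:.3f}")  # a strong negative r would favor this predictor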

    Measuring and Predicting Visual Fidelity

    This paper is a study of techniques for measuring and predicting visual fidelity. As visual stimuli we use polygonal models, and vary their fidelity with two different model simplification algorithms. We also group the stimuli into two object types: animals and man-made artifacts. We examine three different experimental techniques for measuring these fidelity changes: naming times, ratings, and preferences. All the measures were sensitive to the type of simplification and level of simplification. However, the measures differed from one another in their response to object type. We also examine several automatic techniques for predicting these experimental measures, including techniques based on images and on the models themselves. Automatic measures of fidelity were successful at predicting experimental ratings, less successful at predicting preferences, and largely failures at predicting naming times. We conclude with suggestions for use and improvement of the experimental and automatic measures of visual fidelity.
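
    One family of image-based fidelity predictors of the kind mentioned above can be sketched in Python as a pixel-wise comparison between renders of the original and simplified models. This is a generic root-mean-squared-error measure offered purely as an illustration; the paper's actual predictors may differ, and the file names are hypothetical.

        # Illustrative image-based fidelity predictor: RMS pixel difference between
        # renders of the original and simplified models (same-sized grayscale images).
        import numpy as np
        from PIL import Image

        def rms_difference(path_original, path_simplified):
            """Lower values suggest the simplified model looks closer to the original render."""
            a = np.asarray(Image.open(path_original).convert("L"), dtype=float)
            b = np.asarray(Image.open(path_simplified).convert("L"), dtype=float)
            return float(np.sqrt(np.mean((a - b) ** 2)))

        # Hypothetical usage:
        # print(rms_difference("bunny_original.png", "bunny_simplified.png"))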