    Cross-Modal Object Recognition Is Viewpoint-Independent

    BACKGROUND: Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid, as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch. METHODOLOGY/PRINCIPAL FINDINGS: Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180 degrees about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores. CONCLUSIONS/SIGNIFICANCE: The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly due to surface occlusion being important in vision but not touch.

    Representing 3D Space in Working Memory: Spatial Images from Vision, Hearing, Touch, and Language

    The chapter deals with a form of transient spatial representation referred to as a spatial image. Like a percept, it is externalized, scaled to the environment, and can appear in any direction about the observer. It transcends the concept of modality, as it can be based on inputs from the three spatial senses, from language, and from long-term memory. Evidence is presented that supports each of the claimed properties of the spatial image, showing that it is quite different from a visual image. Much of the evidence presented is based on spatial updating. A major concern is whether spatial images from different input modalities are functionally equivalent: that is, whether, once instantiated in working memory, the spatial images from different modalities have the same functional characteristics with respect to subsequent processing, such as that involved in spatial updating. Going further, the research provides some evidence that spatial images are amodal (i.e., do not retain modality-specific features).

    Object imagery and object identification: Object imagers are better at identifying spatially-filtered visual objects

    Object imagery refers to the ability to construct pictorial images of objects. Individuals with high object imagery (high-OI) produce more vivid mental images than individuals with low object imagery (low-OI), and they encode and process both mental images and visual stimuli in a more global and holistic way. In the present study, we investigated whether and how level of object imagery may affect the way in which individuals identify visual objects. High-OI and low-OI participants were asked to perform a visual identification task with spatially-filtered pictures of real objects. Each picture was presented at nine levels of filtering, starting from the most blurred (level 1: only low spatial frequencies, i.e., global configuration) and gradually adding high spatial frequencies up to the complete version (level 9: global configuration plus local and internal details). Our data showed that high-OI participants identified stimuli at a lower level of filtering than low-OI participants, indicating that they were better able to identify visual objects at lower spatial frequencies. Implications of the results and future developments are discussed.

    Individual differences in object versus spatial imagery: from neural correlates to real-world applications

    This chapter focuses on individual differences in object and spatial–visual imagery from both theoretical and applied perspectives. While object imagery refers to representations of the literal appearances of individual objects and scenes in terms of their shape, color, and texture, spatial imagery refers to representations of the spatial relations among objects, locations of objects in space, movements of objects and their parts, and other complex spatial transformations. First, we review cognitive neuroscience and psychology research regarding the dissociation between object and spatial–visual imagery. Next, we discuss evidence on how this dissociation extends to individual differences in object and spatial imagery, followed by a discussion showing that individual differences in object and spatial imagery follow different developmental courses. After that, we focus on cognitive and educational research that provides ecological validation of the object–spatial distinction in individual differences: in particular, on the relationship of object and spatial–visual abilities to mathematics and science problem solving, and on object–spatial imagery differences between members of different professions. Finally, we discuss applications of the object–spatial dissociation in imagery for applied fields, such as personnel selection, training, and education.