    The Influence of Spatial Reference Frames on Imagined Object- and Viewer Rotations

    The human visual system can represent an object's spatial structure with respect to multiple frames of reference. It can also utilize multiple reference frames to mentally transform such representations. Recent studies have shown that performance on some mental transformations is not equivalent: imagined object rotations tend to be more difficult than imagined viewer rotations. We reviewed several related research domains to understand this discrepancy in terms of the different reference frames associated with each imagined movement. An examination of the mental rotation literature revealed that observers' difficulties in predicting an object's rotational outcome may stem from a general deficit in imagining the cohesive rotation of the object's intrinsic frame. Such judgments are thus more reliant on supplementary information provided by other frames, such as the environmental frame. In contrast, as assessed in motor imagery and other studies, imagined rotations of the viewer's relative frame are performed cohesively and are thus mostly immune to effects of other frames. © 1999 Elsevier Science B.V. All rights reserved.
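
    The frame-of-reference argument above can be made concrete with a little coordinate geometry. The minimal Python sketch below is illustrative only (the setup and names are assumptions, not taken from the paper): an imagined viewer rotation by an angle theta and an imagined array rotation by -theta about the viewer yield identical egocentric coordinates, so the robust difficulty difference between the two tasks cannot be geometric; it must lie in how each reference frame is mentally transformed.

        # Minimal sketch (assumed setup, not from the paper): with the viewer
        # at the origin, rotating the viewer's frame by theta is geometrically
        # equivalent to rotating the array by -theta about the viewer.
        import math

        def rotate(point, theta):
            """Rotate a 2-D point about the origin by theta radians."""
            x, y = point
            return (x * math.cos(theta) - y * math.sin(theta),
                    x * math.sin(theta) + y * math.cos(theta))

        array_points = [(1.0, 2.0), (-0.5, 1.5)]  # object locations; viewer at origin
        theta = math.radians(90)                  # imagined 90-degree rotation

        # Viewer task: the egocentric frame turns by theta, so each object's
        # new egocentric position is its old position rotated by -theta.
        after_viewer_rotation = [rotate(p, -theta) for p in array_points]

        # Array task: the objects themselves turn by -theta about the viewer.
        after_array_rotation = [rotate(p, -theta) for p in array_points]

        # Identical outcomes: the behavioral asymmetry is not in the geometry.
        assert after_viewer_rotation == after_array_rotation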

    Imagining Physically Impossible Self-Rotations: Geometry is More Important than Gravity

    Previous studies found that it is easier for observers to spatially update displays during imagined self-rotation versus array rotation. The present study examined whether either the physics of gravity or the geometric relationship between the viewer and array guided this self-rotation advantage. Experiments 1-3 preserved a real or imagined orthogonal relationship between the viewer and the array, requiring a rotation in the observer's transverse plane. Despite imagined self-rotations that defied gravity, a viewer advantage remained. Without this orthogonal relationship (Experiment 4), the viewer advantage was lost. We suggest that efficient transformation of the egocentric reference frame relies on the representation of body-environment relations that allow rotation around the observer's principal axis. This efficiency persists across different and conflicting physical and imagined postures. Copyright © 2001 Elsevier Science B.V.

    Updating Displays after Imagined Object and Viewer Rotations

    Six experiments compared spatial updating of an array after imagined rotations of the array versus the viewer. Participants responded faster and made fewer errors in viewer tasks than in array tasks while positioned outside (Experiment 1) or inside (Experiment 2) the array. An apparent array advantage for updating objects rather than locations was attributable to participants imagining translations of single objects rather than rotations of the array (Experiment 3). Superior viewer performance persisted when the array was reduced to 1 object (Experiment 4); however, an object with a familiar configuration improved object performance somewhat (Experiment 5). Object performance reached near-viewer levels when rotations included haptic information for the turning object (Experiment 6). The researchers discuss these findings in terms of the relative ease with which the human cognitive system transforms the spatial reference frames corresponding to each imagined rotation.

    Perspective taking: building a neurocognitive framework for integrating the "social" and the "spatial"

    From carrying a table to pointing at the moon, interacting with other people involves spatial awareness of one’s own body and the other’s body and viewpoint. In the past, social cognition has often focused on tasks like belief reasoning, which is abstracted away from spatial and bodily representations. There is also a strong tradition of work on spatial and object representation which does not consider social interactions. The 24 papers in this research topic represent the growing body of work which links the spatial and the social. The diversity of methods and approaches used here reveals that this is a vibrant and growing research area which can tell us more than the study of either topic in isolation. Online mental transformations of spatial representations are often believed to rely on action simulation and other “embodied” processing, and three papers in the current research topic provide new evidence for this process. Surtees and colleagues reveal …

    Spatial Updating of Virtual Displays During Self- and Display Rotation

    In four experiments, we examined observers' ability to locate objects in virtual displays while rotating to new perspectives. In Experiment 1, participants updated the locations of previously seen landmarks in a display while rotating themselves to new views (viewer task) or while rotating the display itself (display task). Updating was faster and more accurate in the viewer task than in the display task. In Experiment 2, we compared updating performance during active and passive self-rotation. Participants rotated themselves in a swivel chair (active task) or were rotated in the chair by the experimenter (passive task). A minimal advantage was found for the active task. In the final experiments, we tested similar manipulations with an asymmetrical display. In Experiment 3, updating during the viewer task was again superior to updating during the display task. In Experiment 4, we found no difference in updating between active and passive self-movement. These results are discussed in terms of differences in sources of extraretinal information available in each movement condition.

    Relating spatial perspective taking to the perception of other's affordances: providing a foundation for predicting the future behavior of others

    Understanding what another agent can see relates functionally to the understanding of what they can do. We propose that spatial perspective taking and perceiving other's affordances, while two separate spatial processes, together share the common social function of predicting the behavior of others. Perceiving the action capabilities of others allows for a common understanding of how agents may act together. The ability to take another's perspective focuses an understanding of action goals so that more precise understanding of intentions may result. This review presents an analysis of these complementary abilities, both in terms of the frames of reference and the proposed sensorimotor mechanisms involved. Together, we argue for the importance of reconsidering the role of basic spatial processes to explain more complex behaviors.

    An fMRI Study of Imagined Self-Rotation

    In the present study, functional magnetic resonance imaging was used to examine the neural mechanisms involved in the imagined spatial transformation of one's body. The task required subjects to update the position of one of four external objects from memory after they had performed an imagined self-rotation to a new position. Activation in the rotation condition was compared with that in a control condition in which subjects located the positions of objects without imagining a change in self-position. The results indicated networks of activation similar to those found in other egocentric transformation tasks involving decisions about body parts. The most significant area of activation was in the left posterior parietal cortex. Other regions of activation common among several of the subjects were secondary visual, premotor, and frontal lobe regions. These results are discussed relative to motor and visual imagery processes as well as to the distinctions between the present task and other imagined egocentric transformation tasks.

    Perception of Space in Virtual and Augmented Reality (Invited Talk)

    Virtual and Augmented Reality (VR and AR) methods provide both opportunities and challenges for research and applications involving spatial cognition. The opportunities result from the ability to immerse a user in a realistic environment in which they can interact, while at the same time having the ability to control and manipulate environmental and body-based cues in ways that are difficult or impossible to do in the real world. The challenge comes from the notion that virtual environments will be most useful if they achieve high perceptual fidelity: that observers will perceive and act in the mediated environment as they would in the real world.

    Consider two approaches to the use of VR/AR in cognitive science. The first is to serve applications. For this, I argue that in many cases we need to achieve and measure perceptual fidelity. Specifically, perceiving sizes and distances similarly to the real world may be critical for applications in design or training where accuracy in scale matters. The second approach is to use VR/AR to manipulate environment-body interactions in ways that test perception-action mechanisms. Our lab and collaborators take both of these approaches, as they often mutually inform each other. I will present two examples of this dual approach to the use of VR that take advantage of the body-based feedback available in immersive virtual environments, in adults and children.

    The study of children's spatial cognition is an important new direction in VR research, now feasible with the emergence of head-mounted display technologies that fit those with smaller heads. Immersive VR has great potential for education, specifically in advancing complex spatial thinking, but a foundational understanding of children's perception and action must first be established. This is particularly important because children's rapidly changing bodies likely lead to differences compared to adults in how they represent and use their bodies for perception, action, and spatial learning.

    Even with rapidly advancing VR technologies, one continuing challenge is how to accurately update one's spatial position in a large virtual environment when real walking is constrained by limited physical space or tracking capabilities. In my first example, I will present research that compares different modes of locomotion that vary the extent of visual or body-based information for self-motion, and tests the ability of users to keep track of their positions during self-movement. Differences between adults and children suggest reliance on different cues for spatial updating.

    Research on space perception in VR suggests that viewers underestimate egocentric distances in VR as compared to the real world, although the new commodity-level head-mounted displays have somewhat reduced this effect. In a second example, I will present research that examines the role of bodies in scaling the affordances of environmental spaces. We use judgments of action capabilities both to evaluate the perceptual fidelity of virtual environments and to test the role of visual body representations on these judgments. Finally, I will present extensions of the use of affordances to evaluate perceptual fidelity in VR to new possibilities with AR, in which virtual objects are embedded in the real world. This work demonstrates that augmented reality environments can be acted upon as the real world, but some differences exist that may be due to current technology limitations.
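
    The spatial-updating challenge described in the talk can be illustrated with a toy path-integration model. The Python sketch below is a hedged illustration (the update rule and gain parameters are assumptions for exposition, not the lab's model): a walker dead-reckons position and heading from per-step rotation and translation cues, and down-weighting either cue, as can happen when body-based self-motion information is removed in VR, produces systematic updating errors.

        # Toy path-integration sketch (assumed model, not the lab's method):
        # dead-reckon (x, y, heading) from per-step turn and distance cues.
        import math

        def integrate_path(steps, rot_gain=1.0, trans_gain=1.0):
            """Update position and heading from (turn_radians, distance) steps.

            Gains below 1.0 stand in for under-perceived rotation or
            translation, e.g. when body-based self-motion cues are absent.
            """
            x = y = heading = 0.0
            for turn, distance in steps:
                heading += rot_gain * turn
                x += trans_gain * distance * math.cos(heading)
                y += trans_gain * distance * math.sin(heading)
            return x, y, heading

        # Walking a unit square with perfect cues returns the walker home...
        square = [(math.radians(90), 1.0)] * 4
        print(integrate_path(square))             # ~ (0.0, 0.0, 2*pi)
        # ...but under-perceived translation leaves a residual homing error.
        print(integrate_path(square, trans_gain=0.8))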