67 research outputs found

    Perspective taking: building a neurocognitive framework for integrating the "social" and the "spatial"

    Get PDF
    From carrying a table to pointing at the moon, interacting with other people involves spatial awareness of one’s own body and the other’s body and viewpoint. In the past, social cognition has often focused on tasks like belief reasoning, which is abstracted away from spatial and bodily representations. There is also a strong tradition of work on spatial and object representation which does not consider social interactions. The 24 papers in this research topic represent the growing body of work which links the spatial and the social. The diversity of methods and approaches used here reveals that this is a vibrant and growing research area which can tell us more than the study of either topic in isolation. Online mental transformations of spatial representations are often believed to rely on action simulation and other “embodied” processing, and three papers in the current research topic provide new evidence for this process. Surtees and colleagues revea

    Spatial Updating of Virtual Displays During Self- and Display Rotation

    Get PDF
    In four experiments, we examined observers' ability to locate objects in virtual displays while rotating to new perspectives. In Experiment 1, participants updated the locations of previously seen landmarks in a display while rotating themselves to new views (viewer task) or while rotating the display itself (display task). Updating was faster and more accurate in the viewer task than in the display task. In Experiment 2, we compared updating performance during active and passive self-rotation. Participants rotated themselves in a swivel chair (active task) or were rotated in the chair by the experimenter (passive task). A minimal advantage was found for the active task. In the final experiments, we tested similar manipulations with an asymmetrical display. In Experiment 3, updating during the viewer task was again superior to updating during the display task. In Experiment 4, we found no difference in updating between active and passive self-movement. These results are discussed in terms of differences in sources of extraretinal information available in each movement condition.

    Relating spatial perspective taking to the perception of other's affordances: providing a foundation for predicting the future behavior of others

    Get PDF
    Understanding what another agent can see relates functionally to the understanding of what they can do. We propose that spatial perspective taking and perceiving other's affordances, while two separate spatial processes, together share the common social function of predicting the behavior of others. Perceiving the action capabilities of others allows for a common understanding of how agents may act together. The ability to take another's perspective focuses the understanding of action goals so that a more precise understanding of intentions may result. This review presents an analysis of these complementary abilities, both in terms of the frames of reference and the proposed sensorimotor mechanisms involved. Together, we argue for the importance of reconsidering the role of basic spatial processes to explain more complex behaviors.

    Perception of Space in Virtual and Augmented Reality (Invited Talk)

    No full text
    Virtual and Augmented Reality (VR and AR) methods provide both opportunities and challenges for research and applications involving spatial cognition. The opportunities result from the ability to immerse a user in a realistic environment in which they can interact, while at the same time having the ability to control and manipulate environmental and body-based cues in ways that are difficult or impossible to do in the real world. The challenge comes from the notion that virtual environments will be most useful if they achieve high perceptual fidelity - that observers will perceive and act in the mediated environment as they would in the real world. Consider two approaches to the use of VR/AR in cognitive science. The first is to serve applications. For this, I argue that in many cases we need to achieve and measure perceptual fidelity. Specifically, perceiving sizes and distances similarly to the real world may be critical for applications in design or training where the accuracy in scale matters. The second approach is to use VR/AR to manipulate environment-body interactions in ways that test perception-action mechanisms. Our lab and collaborators take both of these approaches, as they often mutually inform each other. I will present two examples of this dual approach to the use of VR that take advantage of the body-based feedback available in immersive virtual environments, in adults and children. The study of children's spatial cognition is an important new direction in VR research, now feasible with the emergence of head-mounted-display technologies that fit those with smaller heads. Immersive VR has great potential for education, specifically in advancing complex spatial thinking, but a foundational understanding of children's perception and action must first be established. This is particularly important because children's rapidly changing bodies likely lead to differences compared to adults in how they represent and use their bodies for perception, action, and spatial learning. Even with rapidly advancing VR technologies, one continuing challenge is how to accurately update one's spatial position in a large virtual environment when real walking is constrained by limited physical space or tracking capabilities. In my first example, I will present research that compares different modes of locomotion that vary the extent of visual or body-based information for self-motion, and tests the ability of users to keep track of their positions during self-movement. Differences in adults and children suggest reliance on different cues for spatial updating. Research in space perception in VR suggests that viewers underestimate egocentric distances in VR as compared to the real world, although the new commodity-level head-mounted displays have somewhat reduced this effect. In a second example, I will present research that examines the role of bodies in scaling the affordances of environmental spaces. We use judgments of action capabilities both to evaluate the perceptual fidelity of virtual environments and to test the role of visual body representations on these judgments. Finally, I will present extensions of the use of affordances to evaluate perceptual fidelity in VR to new possibilities with AR, in which virtual objects are embedded in the real world. This work demonstrates that augmented reality environments can be acted upon as the real world, but some differences exist that may be due to current technology limitations.

    Perspective Taking: Building a neurocognitive framework for integrating the "social" and the "spatial"

    No full text
    Background: Interacting with other people involves spatial awareness of one’s own body and the other’s body and viewpoint. In the past, social cognition has focused largely on belief reasoning, which is abstracted away from spatial and bodily representations, while there is a strong tradition of work on spatial and object representation which does not consider social interactions. These two domains have flourished independently. A small but growing body of research examines how awareness of space and body relates to the ability to interpret and interact with others. This also builds on the growing awareness that many cognitive processes are embodied, which could be of relevance for the integration of the social and spatial domains: Online mental transformations of spatial representations have been shown to rely on simulated body movements, and various aspects of social interaction have been related to the simulation of a conspecific’s behaviour within the observer’s bodily repertoire. Both dimensions of embodied transformations or mappings seem to serve the purpose of establishing alignment between the observer and a target. In spatial cognition research the target is spatially defined as a particular viewpoint or frame of reference (FOR), yet in social interaction research another viewpoint is occupied by another’s mind, which crucially requires perspective taking in the sense of considering what another person experiences from a different viewpoint. Perspective taking has been studied in different ways within developmental psychology, cognitive psychology, psycholinguistics, neuropsychology and cognitive neuroscience over the last few decades, yet integrative approaches for channelling all information into a unified account of perspective taking and viewpoint transformations have not been presented so far.

    Aims: This Research Topic aims to bring together the social and the spatial, and to highlight findings and methods which can unify research across areas. In particular, the topic aims to advance our current theories and set the stage for future developments of the field by clarifying and linking theoretical concepts across disciplines.

    Scope: The focus of this Research Topic is on the SPATIAL and the SOCIAL, and we anticipate that all submissions will touch on both aspects and will explicitly attempt to bridge conceptual gaps. Social questions could include questions of how people judge another person’s viewpoint or spatial capacities, or how they imagine themselves from different points of view. Spatial questions could include consideration of different physical configurations of the body and the arrangement of different viewpoints, including mental rotation of objects or viewpoints that have social relevance. Questions could also relate to how individual differences (in personality, sex, development, culture, species etc.) influence or determine social and spatial perspective judgements. Many different methods can be used to explore perspective taking, including mental chronometry, behavioural tasks, EEG/MEG and fMRI, child development, neuropsychological patients, virtual reality and more. Bringing together results and approaches from these different domains is a key aim of this Research Topic. We welcome submissions of experimental papers, reviews and theory papers which cover these topics.

    Effects of ensemble and summary displays on interpretations of geospatial uncertainty data

    No full text
    Ensemble and summary displays are two widely used methods to represent visual-spatial uncertainty; however, there is disagreement about which is the most effective technique to communicate uncertainty to the general public. Visualization scientists create ensemble displays by plotting multiple data points on the same Cartesian coordinate plane. Despite their use in scientific practice, it is more common in public presentations to use visualizations of summary displays, which scientists create by plotting statistical parameters of the ensemble members. While prior work has demonstrated that viewers make different decisions when viewing summary and ensemble displays, it is unclear what components of the displays lead to diverging judgments. This study aims to compare the salience of visual features – or visual elements that attract bottom-up attention – as one possible source of diverging judgments made with ensemble and summary displays in the context of hurricane track forecasts. We report that salient visual features of both ensemble and summary displays influence participant judgment. Specifically, we find that salient features of summary displays of geospatial uncertainty can be misunderstood as displaying size information. Further, salient features of ensemble displays evoke judgments that are indicative of accurate interpretations of the underlying probability distribution of the ensemble data. However, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. We propose that ensemble displays are a promising alternative to summary displays in a geospatial context but that decisions about visualization methods should be informed by the viewer’s task.
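    As a concrete illustration of the contrast described in this abstract, the short Python sketch below draws both display types from the same synthetic forecast ensemble using numpy and matplotlib. It is a minimal sketch under assumed conditions, not the study's materials or code: the simulated hurricane-like tracks, the variable names, and the choice of a mean ± 2 SD cone for the summary display are all illustrative assumptions.

    # Illustrative sketch (not from the paper): an ensemble display plots every
    # member track; a summary display plots statistical parameters of the same
    # ensemble (here, the mean track plus a +/- 2 SD "cone"). The track model
    # below is synthetic and assumed for demonstration only.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    n_members, n_steps = 40, 50

    # Synthetic ensemble: tracks share a mean heading but accumulate random
    # cross-track drift, so spread grows with forecast lead time.
    t = np.linspace(0, 1, n_steps)
    lon = np.tile(t * 10.0, (n_members, 1))                       # eastward progress
    drift = np.cumsum(rng.normal(0, 0.15, (n_members, n_steps)), axis=1)
    lat = 25.0 + t * 5.0 + drift                                  # northward + noise

    fig, (ax_ens, ax_sum) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)

    # Ensemble display: every member plotted on the same axes.
    for i in range(n_members):
        ax_ens.plot(lon[i], lat[i], color="steelblue", alpha=0.3, lw=1)
    ax_ens.set_title("Ensemble display (all members)")

    # Summary display: mean track plus an uncertainty cone built from
    # statistical parameters (+/- 2 SD of latitude at each forecast step).
    mean_lat, sd_lat = lat.mean(axis=0), lat.std(axis=0)
    ax_sum.fill_between(lon[0], mean_lat - 2 * sd_lat, mean_lat + 2 * sd_lat,
                        color="lightgray", label="mean ± 2 SD cone")
    ax_sum.plot(lon[0], mean_lat, color="black", lw=2, label="mean track")
    ax_sum.set_title("Summary display (cone of uncertainty)")
    ax_sum.legend(loc="upper left")

    for ax in (ax_ens, ax_sum):
        ax.set_xlabel("longitude offset (deg)")
    ax_ens.set_ylabel("latitude (deg)")
    plt.tight_layout()
    plt.show()

    In this toy setup, the ensemble panel makes the member-to-member variability visible directly, while the summary panel compresses it into a boundary whose width can invite the size misreading the abstract describes.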