Spatial memory for vertical locations
Most studies on spatial memory refer to the horizontal plane, leaving an open question as to whether findings generalize to vertical spaces, where gravity and the visual upright of our surrounding space are salient orientation cues. In three experiments, we examined which reference frame is used to organize memory for vertical locations: the one based on the body vertical, the visual-room vertical, or the direction of gravity. Participants judged interobject spatial relationships learned from a vertical layout in a virtual room. During learning and testing, we varied the orientation of the participant's body (upright vs. lying sideways) and of the visually presented room relative to gravity (e.g., rotated by 90° along the frontal plane). Across all experiments, participants made quicker or more accurate judgments when the room was oriented in the same way as during learning with respect to their body, irrespective of their orientation relative to gravity. This suggests that participants employed an egocentric, body-based reference frame for representing vertical object locations. Our study also revealed an effect of body–gravity alignment during testing: participants recalled spatial relations more accurately when upright, regardless of the body and visual-room orientation during learning. This finding is consistent with a hypothesis of selection conflict between different reference frames. Overall, our results suggest that a body-based reference frame is preferred over salient allocentric reference frames in memory for vertical locations perceived from a single view. Further, memory of vertical space seems to be tuned to work best in the default upright body orientation.
No advantage for remembering horizontal over vertical spatial locations learned from a single viewpoint
Previous behavioral and neurophysiological research has shown better memory for horizontal than for vertical locations. In these studies, participants navigated toward these locations. In the present study we investigated whether the orientation of the spatial plane per se was responsible for this difference. We thus had participants learn locations visually from a single perspective and retrieve them from multiple viewpoints. In three experiments, participants studied colored tags on a horizontally or vertically oriented board within a virtual room and recalled these locations with different layout orientations (Exp. 1) or from different room-based perspectives (Exps. 2 and 3). All experiments revealed evidence for equal recall performance in horizontal and vertical memory. In addition, the patterns for recall from different test orientations were rather similar. Consequently, our results suggest that memory is qualitatively similar for both vertical and horizontal two-dimensional locations, given that these locations are learned from a single viewpoint. Thus, prior differences in spatial memory may have originated from the structure of the space or the fact that participants navigated through it. Additionally, the strong performance advantages for perspective shifts (Exps. 2 and 3) relative to layout rotations (Exp. 1) suggest that configurational judgments are not only based on memory of the relations between target objects, but also encompass the relations between target objects and the surrounding room, for example in the form of a memorized view.
The provenance of modal inference
People reason about possibilities routinely, and reasoners can infer "modal" conclusions, i.e., conclusions that concern what is possible or necessary, from premises that make no mention of modality. For instance, given that Cullen was born in New York or Kentucky, it is intuitive to infer that it's possible that Cullen was born in New York, and a recent set of studies on modal reasoning bears out these intuitions (Hinterecker, Knauff, & Johnson-Laird, 2016). What explains the tendency to make modal inferences? Conventional logic does not apply to modal reasoning, and so logicians invented many alternative systems of modal logic to capture valid modal inferences. But none of those systems can explain the inference above. We posit a novel theory based on the idea that reasoners build mental models, i.e., iconic simulations of possibilities, when they reason about sentential connectives such as and, if, and or (Johnson-Laird, 2006). The theory posits that reasoners represent a set of conjunctive possibilities to capture the meanings of compound assertions. It is implemented in a new computational process model of sentential reasoning that can draw modal conclusions from non-modal premises. We describe the theory and computational model, and show how its performance matches reasoners' inferences in two studies by Hinterecker et al. (2016). We conclude by discussing the model-based theory in light of alternative accounts of reasoning.
The influence of structural salience and verbalisation on finding the return path
Are some landmark positions at intersections better for finding a return path than others? This study investigated whether a landmark's influence on performance and decision times when finding a return path varies with its position at an intersection, and whether this influence depends on the type of verbalisation of spatial directions used. First, participants learned a path with either direction-specific material ("turn left at" or "turn right at") or direction-unspecific material ("turn into direction of" or "turn in the opposite direction of"). Along this path the positions of the landmarks were varied systematically. Second, participants had to find the return path of the learned route, and their third task was to write down verbal route descriptions. The landmark position showed a suggestive, though marginally non-significant, effect on finding the return path for both direction-specific and direction-unspecific material. In contrast, the position of a landmark and the specificity of the spatial directions significantly influenced the accuracy of the information in the route descriptions. The results are discussed in the context of current wayfinding and landmark research.
Body-relative horizontal-vertical anisotropy in human representations of traveled distances
A growing number of studies have investigated anisotropies in representations of horizontal and vertical spaces. In humans, compelling evidence for such anisotropies exists for representations of multi-floor buildings. In contrast, evidence regarding open spaces is indecisive. Our study aimed at further enhancing the understanding of horizontal and vertical spatial representations in open spaces by utilizing a simple traveled-distance estimation paradigm. Blindfolded participants were moved along various directions in the sagittal plane. Subsequently, participants passively reproduced the traveled distance from memory. Participants performed this task in an upright and in a 30° backward-pitch orientation. The accuracy of distance estimates in the upright orientation showed a horizontal–vertical anisotropy, with higher accuracy along the horizontal axis compared with the vertical axis. The backward-pitch orientation enabled us to investigate whether this anisotropy was body- or earth-centered. The accuracy patterns of the upright condition were positively correlated with the body-relative (not the earth-relative) coordinate mapping of the backward-pitch condition, suggesting a body-centered anisotropy. Overall, this is consistent with findings on motion perception and suggests that the distance-estimation sub-process of path integration is subject to horizontal–vertical anisotropy. Based on previous studies that showed isotropy in open spaces, we speculate that real physical self-movements or categorical versus isometric encoding are crucial factors for (an)isotropies in spatial representations.
Reference Systems in Spatial Memory for Vertical Locations
Three experiments investigated the frame of reference used in memory to represent vertical spatial layouts perceivable from a single viewpoint. We tested for the selection of three different reference systems: the body orientation, the visual vertical of the surrounding room, and the direction of gravity. Participants learned and retrieved differently colored objects on a vertical board with body and room orientations varying systematically in relation to gravity and each other. Across all three experiments participants were quicker or more accurate in memory recall when they saw the vertical spatial layout in the same orientation in relation to their body vertical as during learning, irrespective of the direction of gravity or visual room upright. These results indicate that spatial long-term memories for small-scale vertical relations are mainly defined in an egocentric reference system with respect to the body vertical, despite the availability of alternative highly salient allocentric reference directions.