162 research outputs found

    Moving through language: a behavioural and linguistic analysis of spatial mental model construction

    Over the past few decades, our understanding of the cognitive processes underpinning our navigational abilities has expanded considerably. Models have been constructed that attempt to explain various key aspects of our wayfinding abilities, from the selection of salient features in environments to the processes involved in updating our position with respect to those features during movement. However, several key questions remain open. Much of the research in spatial cognition has investigated visuospatial performance on the basis of sensory input (predominantly vision, but also sound, hapsis, and kinaesthesia), and while language production has been the subject of extensive research in psycholinguistics and cognitive linguistics, many aspects of language encoding remain unexplored. The research presented in this thesis aimed to explore outstanding issues in spatial language processing, tying together conceptual threads from fields that have the potential to greatly inform each other, focusing specifically on how landmark information and spatial relations are encoded in mental representations characterised by different spatial reference frames. The first five experiments introduce a paradigm in which subjects encode skeletal route descriptions containing egocentric (“left/right”) or allocentric (cardinal) relational terms while intentionally maintaining an imagined egocentric or allocentric viewpoint. By testing participants’ spatial knowledge either in an allocentric task (Experiments 1-3) or in an egocentric task (Experiments 4 and 5), this research exploits the facilitation produced by encoding-test congruence to clarify the contribution of mental imagery during spatial language processing and spatial tasks.
Additionally, Experiments 1-3 adopted an eye-tracking methodology to study the allocation of attention to landmarks in descriptions and sketch maps as a function of linguistic reference frame and imagined perspective, while also recording subjective self-reports of participants’ phenomenal experiences. Key findings include evidence that egocentric and allocentric relational terms may not map directly onto egocentric and allocentric imagined perspectives, calling into question a common assumption of psycholinguistic studies of spatial language. A novel way to establish experimental control over mental representations is presented, together with evidence that specific eye-gaze patterns on landmark words or landmark regions of maps can be diagnostic of different imagined spatial perspectives. Experiments 4 and 5 applied the same key manipulations to the study of spatial updating and bearing estimation following encoding of short, aurally presented route descriptions. By employing two different response modes in this triangle completion task, Experiments 4 and 5 attempted to address key issues of experimental control that may have caused the conflicting results found in the literature on spatial updating during mental navigation and visuospatial imagery. The impact of encoding manipulations and of differences in response modality on embodiment and task performance was explored. Experiments 6-8 subsequently attempted to determine the developmental trajectory of the ability to discriminate between navigationally salient and non-salient landmarks, and to translate spatial relations between different reference frames. In these developmental studies, children and young adolescents were presented with videos portraying journeys through virtual environments from an egocentric perspective, and their ability to translate the resulting representations in order to perform allocentric spatial tasks was tested.
No clear facilitation effect of decision-point landmarks was observed, nor any strong indication that salient navigational features are more strongly represented in memory within the age range we tested (four to 11 years of age). Possible reasons for this are discussed in light of the relevant literature and methodological differences. Globally, the results presented indicate a functional role of imagery during language processing, pointing to the importance of introspection and accurate task analyses when interpreting behavioural results. Additionally, the study of implicit measures of attention, such as eye-tracking measures, has the potential to improve our understanding of mental representations, and of how they mediate between perception, action, and language. Lastly, these results also suggest that synergy between seemingly distinct research areas may be key to better characterising the nature of mental imagery in its different forms, and that the phenomenology of imagery content will be an essential part of this and future research.
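The triangle-completion task mentioned above reduces, geometrically, to dead reckoning over the described route: integrate the legs into a final position, then report the distance and bearing back to the start. A minimal sketch of that computation follows; the route encoding and response format here are illustrative assumptions, not the thesis's actual stimuli or scoring.

```python
import math

def homing_response(legs):
    """Dead-reckon a described route and return the (distance, bearing)
    response for triangle completion: how far away the start point is, and
    the clockwise turn (degrees) from the final heading needed to face it.
    legs: sequence of (turn_deg, distance) pairs -- turn right by turn_deg,
    then walk distance. Hypothetical encoding, for illustration only."""
    x, y, heading = 0.0, 0.0, 90.0           # start at origin facing "up" (+y)
    for turn, dist in legs:
        heading -= turn                       # a clockwise turn decreases the math angle
        x += dist * math.cos(math.radians(heading))
        y += dist * math.sin(math.radians(heading))
    to_start = math.degrees(math.atan2(-y, -x))   # world angle back to the origin
    return math.hypot(x, y), (heading - to_start) % 360.0

# "Walk 3, turn right 90 degrees, walk 4": the start lies 5 units away,
# about 143 degrees clockwise from the traveller's final heading.
dist, bearing = homing_response([(0, 3), (90, 4)])
```

The two response modes contrasted in Experiments 4 and 5 would differ only in how this (distance, bearing) pair is expressed by the participant, not in the underlying geometry.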


    Max-Planck-Institute for Psycholinguistics: Annual Report 2003


    How language adapts to the environment: an evolutionary, experimental approach

    The aim of this thesis is to investigate experimentally whether cross-linguistic variation in the structure of languages can be motivated by their external environment. It has been suggested that variation results not only from cultural drift and language-internal mechanisms but also from social or even physical factors. However, from observational data and correlations between variables alone, it remains difficult to infer the exact underlying mechanisms. Here, I present a novel experimental approach for studying the relationship between language and environment under controlled laboratory conditions. I argue that to arrive at a causal understanding of linguistic adaptation, we can use a cultural evolutionary approach and simulate the emergence of linguistic structure with humans in the lab. In this way, one can test which pressures shape linguistic features as they are used for communication and transmitted to new speakers. I focus primarily on cases where linguistic conventions emerge in referential communication games in direct face-to-face interaction. In these settings, I test whether specific conventions are more adaptive for solving the same problem under different conditions or affordances imposed by the environment. A series of silent-gesture experiments shows that systematicity (the design feature giving language its compositional power) is sensitive to the communicative environment: dyads creating novel gestural communication systems to communicate pictorial referents are more likely to systematize traits and create categories that are functionally relevant in the given environment. Additionally, environmental features, such as the size of the meaning space and the visibility of referents, affect the degree to which participants rely on systematic rather than simple holistic gestures. This ‘experimental semiotics’ approach thus models how environmental factors could motivate basic linguistic structure.
However, for complex real-world phenomena, such as the hotly debated relationship between spatial language and environment, it is difficult to design simple experiments that isolate variables of interest but retain the necessary level of realism. It has been proposed that topography (e.g., landmarks like rivers, slopes) and sociocultural factors (e.g., bilingualism, subsistence style, population density) can affect whether speakers rely on an egocentric or geocentric Frame of Reference (FoR) to encode spatial relations, but it remains hard to disentangle the exact contribution of these variables to the cross-linguistic variation we observe. I tackle this issue with a novel paradigm: interactive Virtual Reality (VR) experiments that allow for an unprecedented combination of ecological validity and experimental control. In networked VR settings, participants are immersed in realistic settings such as a forest or a mountain slope. By having dyads solve spatial coordination games, I show that speakers of English, which is usually associated with an egocentric FoR, are less likely to use egocentric language (e.g., “the orb is to your left”) if there are strong environmental affordances that make geocentric language more viable (e.g., “the orb is uphill from you”). Further experiments address whether the cultural ‘success’ of egocentric left/right could be motivated by its applicability across environments. For this, I combine VR with the ‘experimental semiotics’ approach, where the game is solved via a novel visual communication channel. I show how the movement data in the 3D world can be correlated with invented signals to measure which FoR participants rely on. In contrast to the English data, I did not find an advantage for geocentric systems in the slope environment, and overwhelmingly egocentric systems emerged. I discuss how this could relate to task-specificity and native language background. 
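The egocentric/geocentric contrast at stake can be made concrete as a coordinate translation: an egocentric term like "left" is interpretable only relative to the addressee's heading, whereas a geocentric term like "north" is heading-independent. The toy sketch below, assuming a simple four-way mapping (real frame-of-reference use is far richer), illustrates why the two framings pick out the same locations only once a heading is fixed.

```python
CARDINALS = ["north", "east", "south", "west"]
EGO = ["front", "right", "back", "left"]      # clockwise offsets 0/90/180/270

def ego_to_cardinal(term, heading_deg):
    """Resolve an egocentric term to the nearest cardinal direction, given
    the addressee's compass heading (0 = north, clockwise)."""
    compass = (heading_deg + 90 * EGO.index(term)) % 360
    return CARDINALS[round(compass / 90) % 4]

def cardinal_to_ego(direction, heading_deg):
    """Inverse mapping: which egocentric term picks out this cardinal
    direction for a person at the given heading?"""
    offset = (90 * CARDINALS.index(direction) - heading_deg) % 360
    return EGO[round(offset / 90) % 4]

# Facing east (90 degrees), "to your left" and "north of you" name the same place:
same_place = ego_to_cardinal("left", 90) == "north" and cardinal_to_ego("north", 90) == "left"
```

The asymmetry the VR experiments exploit is visible here: the geocentric side of the mapping never consults `heading_deg` when naming a fixed direction, while the egocentric side always must.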
Beyond these findings, I show how this new way of studying spatial language with interactive VR games can be used to test hypotheses about linguistic transmission and material culture that could help explain the origins of the egocentric FoR system, which is regarded as a fairly recent cultural innovation. Taken together, the thesis comprises several studies testing the relationship between linguistic and environmental variables. Additionally, VR is presented as a novel tool for studying spatial language in controlled large-scale settings, complementing more traditional fieldwork. More generally, I suggest that VR can be used to study the evolution of language in complex, multimodal settings without sacrificing experimental control.

    Making a stronger case for comparative research to investigate the behavioral and neurological bases of three-dimensional navigation

    The rich diversity of avian natural history provides exciting possibilities for comparative research aimed at understanding three-dimensional navigation. We propose some hypotheses relating differences in natural history to potential behavioral and neurological adaptations possessed by contrasting bird species. This comparative approach may offer unique insights into some of the important questions raised by Jeffery et al.


    The Aha! Experience of Spatial Reorientation

    The experience of spatial re-orientation is investigated as an instance of the well-known phenomenon of the Aha! moment. The research question is: What are the visuospatial conditions that are most likely to trigger the spatial Aha! experience? The literature suggests that spatial re-orientation relies mainly on the geometry of the environment, and a visibility graph analysis is used to quantify the visuospatial information. Theories from environmental psychology point towards two hypotheses: the Aha! experience may be triggered by a change in the amount of visual information, described by the isovist properties of area and revelation, or by a change in the complexity of the visual information, associated with the isovist properties of clustering coefficient and visual control. Data from participants’ exploratory behaviour and EEG recordings are collected during wayfinding in virtual-reality urban environments. Two types of events are of interest here: (a) sudden changes in the visuospatial information preceding subjects’ responses, to investigate changes in EEG power; and (b) participants’ brain dynamics (the Aha! effect) just before the response, to examine differences in isovist values at this location. Research on insight, time-frequency analysis of the P3 component, and findings from navigation and orientation studies suggest that the spatial Aha! experience may be reflected by a parietal alpha power decrease associated with the switch of representation and a frontocentral theta increase indexing spatial processing during decision-making. Single-trial time-frequency analysis is used to classify trials into two conditions based on the alpha/theta power differences between a 3 s time period before participants’ response and a time period of equal duration before that. Behavioural results show that participants are more likely to respond at locations with low values of clustering coefficient and high values of visual control.
The EEG analysis suggests that the alpha decrease/theta increase condition occurs at locations with significantly lower values of clustering coefficient and higher values of visual control. Small and large decreases in clustering coefficient just before the response are associated with significant differences in delta/theta power. The values of area and revelation do not show significant differences. Both behavioural and EEG results suggest that the Aha! experience of re-orientation is more likely to be triggered by a change in the complexity of the visuospatial environment rather than a change in its amount, as measured by the relevant isovist properties.
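The clustering-coefficient measure used above comes from visibility-graph analysis: for each location, it asks how many of the locations visible from there can also see each other. Low values mark junction-like positions where the visible environment changes abruptly. A stdlib-only sketch follows; real analyses run dedicated isovist/VGA software over dense spatial grids, not a toy graph like this one.

```python
from itertools import combinations

def clustering_coefficient(adj, v):
    """Local clustering coefficient of node v in an undirected visibility
    graph {node: set(visible neighbours)}: the fraction of pairs of
    locations visible from v that are also mutually visible."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0                       # fewer than two neighbours: no pairs to check
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return links / (k * (k - 1) / 2)

# An open convex space: everything sees everything, so the coefficient is 1.0.
room = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
# A junction: the corridors it joins cannot see one another, coefficient 0.0.
junction = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
```

On this reading, the behavioural finding that responses cluster at low-coefficient, high-visual-control locations amounts to saying that re-orientation tends to happen at junction-like positions rather than inside open convex spaces.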

    Sonic interactions in virtual environments

    This book tackles the design of 3D spatial interactions from an audio-centered, audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences. The key elements that enhance the sensation of place in a virtual environment (VE) are: immersive audio, the computational aspects of the acoustical-space properties of Virtual Reality (VR) technologies; sonic interaction, the human-computer interplay through auditory feedback in VEs; and VR systems, which naturally support multimodal integration, impacting different application domains. Sonic Interactions in Virtual Environments features state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of experience in multimodal scenarios, and applications. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, the humanities, and beyond. Their mission is to shape an emerging new field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities, and to raise awareness among VR communities, researchers, and practitioners of the importance of sonic elements when designing immersive environments.