65 research outputs found

    A Vision-Based Perceptual Learning System for Autonomous Mobile Robot

    Applications of Virtual Reality

    Information technology is growing rapidly. With the advent of high-resolution graphics, high-speed computing, and user interaction devices, Virtual Reality emerged as a major new technology in the mid-1990s. Virtual Reality technology is currently used in a broad range of applications; the best known are games, movies, simulations, and therapy. From a manufacturing standpoint, there are attractive applications including training, education, collaborative work, and learning. This book provides an up-to-date discussion of current research in Virtual Reality and its applications. It describes the current state of the art in Virtual Reality and points out many areas where there is still work to be done. We have chosen to cover areas which we believe will have a significant impact on Virtual Reality and its applications. The book provides a definitive resource for a wide variety of people, including academics, designers, developers, educators, engineers, practitioners, researchers, and graduate students.

    Moving through language: a behavioural and linguistic analysis of spatial mental model construction

    Over the past few decades, our understanding of the cognitive processes underpinning our navigational abilities has expanded considerably. Models have been constructed that attempt to explain various key aspects of our wayfinding abilities, from the selection of salient features in environments to the processes involved in updating our position with respect to those features during movement. However, there remain several key open questions. Much of the research in spatial cognition has investigated visuospatial performance on the basis of sensory input (predominantly vision, but also sound, hapsis, and kinaesthesia), and while language production has been the subject of extensive research in psycholinguistics and cognitive linguistics, many aspects of language encoding remain unexplored. The research presented in this thesis aimed to explore outstanding issues in spatial language processing, tying together conceptual ends from different fields that have the potential to greatly inform each other, but focused specifically on how landmark information and spatial reference frames are encoded in mental representations characterised by different spatial reference frames. The first five experiments introduce a paradigm in which subjects encode skeletal route descriptions containing egocentric (“left/right”) or allocentric (cardinal) relational terms, while they also intentionally maintain an imagined egocentric or allocentric viewpoint. By testing participants’ spatial knowledge either in an allocentric task (Experiments 1-3) or in an egocentric task (Experiments 4 and 5), this research exploits the facilitation produced by encoding-test congruence to clarify the contribution of mental imagery during spatial language processing and spatial tasks. Additionally, Experiments 1-3 adopted an eye-tracking methodology to study the allocation of attention to landmarks in descriptions and sketch maps as a function of linguistic reference frame and imagined perspective, while also recording subjective self-reports of participants’ phenomenal experiences. Key findings include evidence that egocentric and allocentric relational terms may not map directly onto egocentric and allocentric imagined perspectives, calling into question a common assumption of psycholinguistic studies of spatial language. A novel way to establish experimental control over mental representations is presented, together with evidence that specific eye gaze patterns on landmark words or landmark regions of maps can be diagnostic of different imagined spatial perspectives. Experiments 4 and 5 applied the same key manipulations to the study of spatial updating and bearing estimation following encoding of short, aurally-presented route descriptions. By employing two different response modes in this triangle completion task, Experiments 4 and 5 attempted to address key issues of experimental control that may have caused the conflicting results found in the literature on spatial updating during mental navigation and visuospatial imagery. The impact of encoding manipulations and of differences in response modality on embodiment and task performance was explored. Experiments 6-8 subsequently attempted to determine the developmental trajectory for the ability to discriminate between navigationally salient and non-salient landmarks, and to translate spatial relations between different reference frames.
In these developmental studies, children and young adolescents were presented with videos portraying journeys through virtual environments from an egocentric perspective, and were tested on their ability to translate the resulting representations in order to perform allocentric spatial tasks. No clear facilitation effect of decision-point landmarks was observed, nor was there any strong indication that salient navigational features are more strongly represented in memory within the age range tested (four to 11 years of age). Possible reasons for this are discussed in light of the relevant literature and methodological differences. Globally, the results presented indicate a functional role of imagery during language processing, pointing to the importance of introspection and accurate task analyses when interpreting behavioural results. Additionally, the study of implicit measures of attention, such as eye-tracking measures, has the potential to improve our understanding of mental representations, and of how they mediate between perception, action, and language. Lastly, these results also suggest that synergy between seemingly distinct research areas may be key in better characterising the nature of mental imagery in its different forms, and that the phenomenology of imagery content will be an essential part of this and future research.
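
    To make the geometry of the triangle completion task mentioned above concrete: the correct homing bearing after the described outbound legs can be recovered by summing the leg displacement vectors and reversing the resultant. The short Python sketch below is purely illustrative and is not the thesis's procedure; the function name homing_bearing and the (heading, distance) encoding of the legs are assumptions introduced here.

    import math

    def homing_bearing(legs):
        """Return the correct homeward bearing for a triangle-completion trial.

        legs: list of (heading_deg, distance) pairs for the outbound path,
        with headings measured clockwise from north (illustrative format only).
        """
        x = y = 0.0
        for heading_deg, dist in legs:
            rad = math.radians(heading_deg)
            x += dist * math.sin(rad)  # eastward component of this leg
            y += dist * math.cos(rad)  # northward component of this leg
        # The homeward direction is the reverse of the summed displacement.
        return math.degrees(math.atan2(-x, -y)) % 360

    # Example: 10 m north then 10 m east; the correct homing bearing is 225 degrees (south-west).
    print(homing_bearing([(0, 10), (90, 10)]))  # -> 225.0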

    A virtual object-location task for children: Gender and videogame experience influence navigation; age impacts memory and completion time

    The use of virtual reality-based tasks for studying memory has increased considerably. Most of the studies that have examined which factors influence children's performance on such tasks have focused on cognitive variables; little attention has been paid to the impact of non-cognitive skills. In the present paper, we tested 52 typically-developing children aged 5-12 years in a virtual object-location task. The task assessed their spatial short-term memory for the location of three objects in a virtual city. The virtual task environment was presented using a 3D application with a 120-inch stereoscopic screen and a gamepad interface. Measures of learning and displacement indicators in the virtual environment, 3D perception, satisfaction, and usability were obtained. We also assessed the children's videogame experience, their visuospatial span, their ability to build blocks, and emotional and behavioral outcomes. The results indicate that learning improved with age. Significant effects on the speed of navigation were found favoring boys and those more experienced with videogames. Visuospatial skills correlated mainly with the ability to recall object positions, but the correlation was weak. Longer paths were related to higher withdrawal behavior scores, more attention problems, and a lower visuospatial span. Aggressiveness and experience with the device used for interaction were related to faster navigation. However, the correlations indicated only weak associations among these variables.
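
    The "displacement indicators" referred to above (e.g. path length and navigation speed) can be derived from a logged position trace. The following sketch is only an assumed illustration of that kind of measure; the helper name displacement_indicators and the (time, x, z) logging format are hypothetical and are not taken from the study.

    import math

    def displacement_indicators(samples):
        """Compute total path length and mean speed from (time_s, x, z) samples."""
        path_length = 0.0
        for (t0, x0, z0), (t1, x1, z1) in zip(samples, samples[1:]):
            path_length += math.hypot(x1 - x0, z1 - z0)  # distance covered in this step
        elapsed = samples[-1][0] - samples[0][0]
        mean_speed = path_length / elapsed if elapsed > 0 else 0.0
        return path_length, mean_speed

    # Example trace: 7 m covered in 4 s -> path length 7.0, mean speed 1.75 m/s.
    print(displacement_indicators([(0, 0, 0), (2, 3, 0), (4, 3, 4)]))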

    Gender differences in spatial ability within virtual reality

    EThOS - Electronic Theses Online Service, United Kingdom

    Multimodal information processing and associative learning in the insect brain

    The study of sensory systems in insects has a history spanning almost an entire century. Olfaction, vision, and gustation are thoroughly researched in several robust insect models, and new discoveries are made every day on the more elusive thermo- and mechano-sensory systems. A few specialized senses, such as hygro- and magneto-reception, have also been identified in some insects. In light of recent advancements in the scientific investigation of insect behavior, it is important to study sensory modalities not only individually but also as combinations of multimodal inputs. This is of particular significance, as a combinatorial approach to studying sensory behaviors mimics the real-time environment of an insect, with a wide spectrum of information available to it. As a fascinating field that has recently been gaining new insights, multimodal integration in insects serves as a fundamental basis for understanding complex insect behaviors including, but not limited to, navigation, foraging, learning, and memory. In this review, we summarize various studies that investigated sensory integration across modalities, with emphasis on three insect models (honeybees, ants, and flies), their behaviors, and the corresponding neuronal underpinnings.

    Spatial reasoning in early childhood

    This document is about how children develop spatial reasoning in early childhood (birth to 7 years) and how practitioners working with young children can support this. Spatial reasoning is a vital and often overlooked aspect of mathematics, so this toolkit, which is informed by an extensive review of research in the area, will support practitioners in enhancing children's early mathematical learning. For the full Spatial Reasoning toolkit: https://earlymaths.org/spatial-reasoning

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of the book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section of the book (Chapters 17 to 22) presents applications related to affective computing.

    Design and validation of a computational program for analysing mental maps: Aram mental map analyzer

    Considering citizens’ perceptions of their living environment is very helpful to city planners who intend to build a sustainable society and must make the right decisions. Mental map analyses are widely used to understand individuals’ level of perception of their surrounding environment. The present study introduces Aram Mental Map Analyzer (AMMA), an open-source program that allows researchers to use special features and new analytical methods to obtain outputs as numerical data and analytical maps with greater accuracy and speed. AMMA is built around two principles, accuracy and complexity. The accuracy of the program is measured by Accuracy Placed Landmarks (APL) and General Orientation (GO), which respectively analyse landmark placement accuracy and main route mapping accuracy. The complexity side is examined through two analyses, Cell Percentage (CP) and General Structure (GS), which calculate the complexity of citizens’ perception of space based on criteria derived from previous studies. AMMA examines all the dimensions and features of the graphic maps, and its outputs provide a wide range of valid and differentiated information, tailored to the research questions and information required.
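
    To make the accuracy side of such an analysis concrete, the sketch below scores landmark placement in the spirit of (but not identical to) AMMA's APL measure: a landmark counts as accurately placed when it lies within a tolerance of its reference position, and the score is the fraction of landmarks placed accurately. The function name, the coordinate format, and the threshold rule are assumptions; the actual AMMA formula is not reproduced here.

    import math

    def placement_accuracy(drawn, reference, tolerance=1.0):
        """Fraction of reference landmarks placed within `tolerance` on the sketch map.

        drawn, reference: dicts mapping landmark names to (x, y) coordinates
        in the same (normalised) units -- an illustrative format only.
        """
        if not reference:
            return 0.0
        hits = 0
        for name, (rx, ry) in reference.items():
            pos = drawn.get(name)
            if pos is not None and math.hypot(pos[0] - rx, pos[1] - ry) <= tolerance:
                hits += 1
        return hits / len(reference)

    # Example: two of three landmarks fall within tolerance -> score ~0.67.
    drawn = {"park": (1.1, 2.0), "station": (5.0, 5.0), "school": (9.0, 1.0)}
    reference = {"park": (1.0, 2.2), "station": (4.5, 4.8), "school": (2.0, 1.0)}
    print(round(placement_accuracy(drawn, reference), 2))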