
    Evaluation of Multi-Level Cognitive Maps for Supporting Between-Floor Spatial Behavior in Complex Indoor Environments

    People often become disoriented when navigating complex, multi-level buildings. To efficiently find destinations located on different floors, navigators must refer to a globally coherent mental representation of the multi-level environment, termed a multi-level cognitive map. However, there is a surprising dearth of research into underlying theories of why integrating multi-level spatial knowledge into a multi-level cognitive map is so challenging and error-prone for humans. This overarching problem is the core motivation of this dissertation. We address it with a two-pronged approach combining basic and applied research questions. Of theoretical interest, we investigate how multi-level built environments are learned and structured in memory. The concept of multi-level cognitive maps and a framework for multi-level cognitive map development are provided. We then conducted a set of empirical experiments to evaluate the effects of several environmental factors on users’ development of multi-level cognitive maps. The findings of these studies provide important design guidelines for architects and help to answer the research question of why people get lost in buildings. Related to application, we investigate how to design user-friendly visualization interfaces that augment users’ capability to form multi-level cognitive maps. An important finding of this dissertation is that increasing visual access with an X-ray-like visualization interface is effective for overcoming the limited visual access of built environments and assists the development of multi-level cognitive maps. These findings provide important human-computer interaction (HCI) guidelines for visualization techniques to be used in future indoor navigation systems. In sum, this dissertation adopts an interdisciplinary approach, combining theories from spatial cognition, information visualization, and HCI, to address a long-standing and ubiquitous problem faced by anyone who navigates indoors: why do people get lost inside multi-level buildings? The results contribute knowledge and explanation at both theoretical and applied levels, and advance the growing field of real-time indoor navigation systems.

    Locomotion in virtual reality in full space environments

    Virtual reality is a technology that allows the user to explore and interact with a virtual environment in real time as if they were present there. It is used in various fields such as entertainment, education, and medicine due to its immersion and ability to represent reality. Still, problems such as virtual simulation sickness and lack of realism make the technology less appealing. Locomotion in virtual environments is one of the main factors responsible for an immersive and enjoyable virtual reality experience. Several locomotion methods have been proposed; however, these have flaws that end up negatively influencing the experience. This study compares natural locomotion in full spaces with joystick locomotion and natural locomotion in impossible spaces, through three tests, in order to identify the best locomotion method in terms of immersion, realism, usability, spatial knowledge acquisition, and level of virtual simulation sickness. The results show that natural locomotion is the method that most positively influences the experience when compared to the other locomotion methods.

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since the latest high-resolution, wide field-of-view VR headsets were released on the market. While the great potential of such VR systems is common and accepted knowledge, issues remain concerning how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that give system designers directions on how to optimize the display-camera setup to enhance performance, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that is believed to be very beneficial for better VR headset designs and improved remote observation systems. To achieve this goal, the thesis presents a thorough, systematic investigation of the existing literature and previous research to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions. More specifically, the roles of familiarity with the observed place, of the environment characteristics shown to the viewer, and of the display used for the remote observation of the virtual environment are investigated. To gain more insights, two usability studies are proposed with the aim of defining guidelines and best practices. The main outcomes from the two studies demonstrate that test users experience a more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, sense of presence increases when observed environments induce strong emotions, and depth perception improves in VR when several monocular cues, such as lights and shadows, are combined with binocular depth cues. Based on these results, the investigation then presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes to be a substantial improvement for remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed that compares static HDR and eye-adapted HDR observation in VR, assessing whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, showing that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
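    The eye-adapted HDR idea lends itself to a small illustration: let the tracked gaze point drive the exposure used for tone mapping, so the rendered image adapts to the luminance of whatever the viewer is looking at, much as the eye does. The sketch below is a minimal Python/NumPy version under that reading; the function names, the Reinhard-style operator, and all parameters are assumptions for illustration, not the thesis implementation:

```python
import numpy as np

def eye_adapted_exposure(hdr_image, gaze_xy, patch_radius=32,
                         key_value=0.18, smoothing=0.1, prev_exposure=None):
    """Estimate an exposure factor for an HDR frame from the viewer's gaze.

    hdr_image    : float array (H, W, 3) of linear radiance values.
    gaze_xy      : (x, y) gaze point in pixel coordinates from the eye tracker.
    patch_radius : half-size of the foveal window used for adaptation.
    key_value    : target mid-grey, as in Reinhard-style tone mapping.
    smoothing    : temporal blending factor, so adaptation is gradual.
    """
    h, w, _ = hdr_image.shape
    x, y = int(gaze_xy[0]), int(gaze_xy[1])
    # Clamp the foveal window to the image bounds.
    x0, x1 = max(0, x - patch_radius), min(w, x + patch_radius)
    y0, y1 = max(0, y - patch_radius), min(h, y + patch_radius)
    patch = hdr_image[y0:y1, x0:x1]

    # Log-average luminance of the gazed region (Rec. 709 weights).
    lum = (0.2126 * patch[..., 0] + 0.7152 * patch[..., 1]
           + 0.0722 * patch[..., 2])
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))

    # Exposure that maps the gazed luminance onto the target key value.
    exposure = key_value / log_avg
    if prev_exposure is not None:
        # Blend with the previous frame to mimic the eye's slow adaptation.
        exposure = prev_exposure + smoothing * (exposure - prev_exposure)
    return exposure

def tone_map(hdr_image, exposure):
    """Simple global Reinhard operator applied after exposure scaling."""
    scaled = hdr_image * exposure
    return scaled / (1.0 + scaled)
```

    Under this reading, a static HDR condition would fix `exposure` per scene, while the eye-adapted condition recomputes it every frame from the gaze sample.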

    III: Small: Information Integration and Human Interaction for Indoor and Outdoor Spaces

    The goal of this research project is to provide a framework model that integrates existing models of indoor and outdoor space, and to use this model to develop an interactive platform for navigation in mixed indoor and outdoor spaces. The user should feel the transition between inside and outside to be seamless in terms of the navigational support provided. The approach consists of integrating indoors and outdoors on several levels: conceptual models (ontologies), formal system designs, data models, and human interaction. At the conceptual level, the project draws on existing ontologies as well as examining the affordances that the space provides; for example, an outside pedestrian walkway affords the same function as an inside corridor. Formal models of place and connection are also used to precisely specify the design of the navigational support system. Behavioral experiments with human participants assess the validity of our framework for supporting human spatial learning and navigation in integrated indoor and outdoor environments. These experiments also enable the identification and extraction of the salient features of indoor and outdoor spaces for incorporation into the framework. Findings from the human studies will help validate the efficacy of our formal framework for supporting human spatial learning and navigation in such integrated environments. Results will be distributed via the project Web site (www.spatial.maine.edu/IOspace) and will be incorporated into graduate-level courses on human interaction with mobile devices, shared with public school teachers participating in the University of Maine's NSF-funded RET (Research Experiences for Teachers) program. The research teams are working with two companies and one research center on technology transfer for building indoor-outdoor navigation tools with a wide range of applications, including tools for persons with disabilities.
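    The observation that an outdoor walkway and an indoor corridor afford the same navigational function suggests a single graph of places and connections spanning both spaces. The sketch below is a minimal Python illustration of that idea; the types and the Dijkstra routing are assumptions made for this sketch, not the project's formal models:

```python
from dataclasses import dataclass, field
import heapq

@dataclass(frozen=True)
class Place:
    """A navigable unit: a room, corridor, walkway, or plaza."""
    name: str
    kind: str      # e.g. "corridor" or "walkway": same affordance, either side
    indoor: bool

@dataclass
class NavigationGraph:
    """Places joined by traversable links; indoor and outdoor share one graph."""
    edges: dict = field(default_factory=dict)  # Place -> list[(Place, cost)]

    def connect(self, a: Place, b: Place, cost: float = 1.0):
        self.edges.setdefault(a, []).append((b, cost))
        self.edges.setdefault(b, []).append((a, cost))

    def route(self, start: Place, goal: Place):
        """Dijkstra shortest path; indoor/outdoor transitions need no special case."""
        frontier = [(0.0, id(start), start, [start])]
        seen = set()
        while frontier:
            cost, _, place, path = heapq.heappop(frontier)
            if place == goal:
                return cost, path
            if place in seen:
                continue
            seen.add(place)
            for nxt, step in self.edges.get(place, []):
                if nxt not in seen:
                    heapq.heappush(frontier,
                                   (cost + step, id(nxt), nxt, path + [nxt]))
        return None

# Example: a corridor and a pedestrian walkway play the same role in the route.
lobby = Place("lobby", "hall", indoor=True)
corridor = Place("east corridor", "corridor", indoor=True)
exit_door = Place("east exit", "portal", indoor=True)
walkway = Place("campus walkway", "walkway", indoor=False)

g = NavigationGraph()
g.connect(lobby, corridor)
g.connect(corridor, exit_door)
g.connect(exit_door, walkway, cost=0.5)  # seamless indoor-to-outdoor transition
print(g.route(lobby, walkway))
```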

    Exploring Virtual Reality and Doppelganger Avatars for the Treatment of Chronic Back Pain

    Cognitive-behavioral models of chronic pain assume that fear of pain and subsequent avoidance behavior contribute to pain chronicity and the maintenance of chronic pain. In chronic back pain (CBP), avoidance of movements often plays a major role in pain perseverance and interference with daily life activities. In treatment, avoidance is often addressed by teaching patients to reduce pain behaviors and increase healthy behaviors. The current project explored the use of personalized virtual characters (doppelganger avatars) in virtual reality (VR) to influence motor imitation and avoidance, fear of pain, and experienced pain in CBP. We developed a method to create virtual doppelgangers, to animate them with movements captured from real-world models, and to present them to participants in an immersive cave automatic virtual environment (CAVE) as autonomous movement models for imitation. Study 1 investigated interactions between model and observer characteristics in the imitation behavior of healthy participants. We tested the hypothesis that perceived affiliative characteristics of a virtual model, such as similarity to the observer and likeability, would facilitate observers’ engagement in voluntary motor imitation. In a within-subject design (N=33), participants were exposed to four virtual characters of different degrees of realism and observer similarity, ranging from an abstract stickperson to a personalized doppelganger avatar designed from 3D scans of the observer. The characters performed different trunk movements, which participants were asked to imitate. We defined functional ranges of motion (ROM) for spinal extension (bending backward, BB), lateral flexion (bending sideward, BS), and rotation in the horizontal plane (RH), based on shoulder marker trajectories, as behavioral indicators of imitation. Participants’ ratings of perceived avatar appearance were recorded with an Autonomous Avatar Questionnaire (AAQ), based on an exploratory factor analysis. Linear mixed effects models revealed that for lateral flexion (BS), a facilitating influence of avatar type on ROM was mediated by perceived identification with the avatar, including avatar likeability, avatar-observer similarity, and other affiliative characteristics. These findings suggest that maximizing model-observer similarity may indeed be useful to stimulate observational modeling. Study 2 employed the techniques developed in Study 1 with participants who suffered from CBP and extended the setup with real-world elements, creating an immersive mixed reality. The research question was whether virtual doppelgangers could modify motor behaviors, pain expectancy, and pain. In a randomized controlled between-subject design, participants observed and imitated an avatar (AVA, N=17) or a videotaped model (VID, N=16) over three sessions, during which the movements BS and RH, as well as a new movement (moving a beverage crate), were shown. Again, self-reports and ROMs were used as measures. The AVA group reported reduced avoidance, with no significant group differences in ROM. Pain expectancy increased in AVA but not in VID over the sessions. Pain and limitations did not differ significantly. We observed a moderation effect of group: prior pain expectancy predicted pain and avoidance in the VID but not in the AVA group. This can be interpreted as personalized movement models decoupling pain behavior from movement-related fear and pain expectancy by increasing pain tolerance and task persistence.
Our findings suggest that personalized virtual movement models can stimulate observational modeling in general, and that they can increase pain tolerance and persistence in chronic pain conditions. Thus, they may provide a tool for exposure and exercise treatments in cognitive-behavioral treatment approaches to CBP.
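    As a rough illustration of the behavioral measure, the sketch below estimates a lateral-flexion range of motion from shoulder-marker trajectories in Python/NumPy; the angle definition and all names are assumptions made for this sketch, not the study's actual analysis pipeline:

```python
import numpy as np

def lateral_flexion_rom(left_shoulder, right_shoulder):
    """Estimate lateral-flexion (bending sideward) ROM from marker trajectories.

    left_shoulder, right_shoulder : arrays of shape (T, 3) holding x (lateral),
    y (anterior-posterior), z (vertical) marker positions over T frames.
    Returns the span of the shoulder-line tilt angle in degrees.
    """
    # Vector from the right to the left shoulder at every frame.
    axis = left_shoulder - right_shoulder
    # Tilt of the shoulder line out of the horizontal plane.
    tilt = np.degrees(np.arctan2(axis[:, 2], axis[:, 0]))
    # Functional ROM: range of tilt angles reached during the trial.
    return tilt.max() - tilt.min()

# Toy trial: shoulders start level, then tilt roughly 11 degrees sideward.
t = np.linspace(0, 1, 100)
left = np.stack([np.full_like(t, 0.2), np.zeros_like(t), 1.5 + 0.04 * t], axis=1)
right = np.stack([np.full_like(t, -0.2), np.zeros_like(t), 1.5 - 0.04 * t], axis=1)
print(f"lateral flexion ROM: {lateral_flexion_rom(left, right):.1f} degrees")
```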

    Analyzing the Impact of Spatio-Temporal Sensor Resolution on Player Experience in Augmented Reality Games

    Along with automating everyday tasks of human life, smartphones have become one of the most popular devices for playing video games due to their interactivity. Smartphones are embedded with various sensors, such as motion and location sensors, that enable new interaction techniques for video games and enhance usability. However, despite their mobility and embedded sensor capacity, smartphones are limited in processing power and display area compared to desktop computers and consoles. When it comes to evaluating Player Experience (PX), players might not have as compelling an experience, because the rich graphics environments that a desktop computer can provide are absent on a smartphone. A plausible alternative in this regard is substituting the virtual game world with a real-world game board, perceived through the device camera by rendering digital artifacts over the camera view. This technology is widely known as Augmented Reality (AR). Smartphone sensors (e.g., GPS, accelerometer, gyroscope, compass) have enhanced the capability for deploying AR technology, and AR has been applied to a large number of smartphone games, including shooters, casual games, and puzzles. Because AR play environments are viewed through the camera, rendering the digital artifacts consistently and accurately is crucial: the digital characters must move with respect to the sensed orientation, so the accelerometer and gyroscope need to provide sufficiently accurate and precise readings to make the game playable. In particular, determining the pose of the camera in space is vital, as the appropriate angle for viewing the rendered digital characters is determined by the camera pose; this defines how well players will be able to interact with the digital game characters. Depending on the Quality of Service (QoS) of these sensors, the PX may vary, as the rendering of digital characters is affected by noisy sensors causing a loss of registration. Confronting this problem while developing AR games is difficult in general, as it requires creating a wide variety of game types, narratives, and input modalities, as well as user testing. Moreover, current AR game developers do not have any specific guidelines for developing AR games, and concrete guidelines outlining the tradeoffs between QoS and PX for different genres and interaction techniques are required. My dissertation provides a complete view (a taxonomy) of the spatio-temporal sensor-resolution dependency of existing AR games. Four user experiments have been conducted and one experiment is proposed to validate the taxonomy and demonstrate the differential impact of sensor noise on the gameplay of different genres of AR games across different aspects of PX. This analysis is performed in the context of a novel instrumentation technology that allows the controlled manipulation of QoS on position and orientation sensors. The experimental outcomes demonstrate how the QoS of noisy input sensors impacts PX differently across AR games of different genres; the key elements creating this differential impact are input modality, narrative, and game mechanics. Finally, concrete guidelines for regulating sensor QoS are derived as a complete set of instructions for developing different genres of AR games.
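    The instrumentation idea, controlled degradation of position and orientation readings before they reach the game, can be sketched compactly. The snippet below is a minimal Python illustration; the class, its parameters, and the noise models are assumptions made for this sketch, not the dissertation's actual instrumentation:

```python
import numpy as np

class SensorQoSFilter:
    """Degrade ideal sensor readings to a target quality-of-service level.

    Sits between the raw sensors and the game loop so that position and
    orientation noise can be manipulated independently per experiment.
    """

    def __init__(self, pos_noise_m=0.0, ori_noise_deg=0.0,
                 dropout_prob=0.0, seed=None):
        self.pos_noise_m = pos_noise_m      # std dev of position error (meters)
        self.ori_noise_deg = ori_noise_deg  # std dev of orientation error (deg)
        self.dropout_prob = dropout_prob    # chance a reading is lost entirely
        self.rng = np.random.default_rng(seed)
        self._last = None                   # last reading delivered to the game

    def filter(self, position, orientation_deg):
        """Return a degraded (position, orientation) reading."""
        # Temporal degradation: a dropped frame repeats the previous reading.
        if self._last is not None and self.rng.random() < self.dropout_prob:
            return self._last
        noisy_pos = np.asarray(position) + self.rng.normal(
            0.0, self.pos_noise_m, size=3)
        noisy_ori = (np.asarray(orientation_deg) + self.rng.normal(
            0.0, self.ori_noise_deg, size=3)) % 360.0
        self._last = (noisy_pos, noisy_ori)
        return self._last

# Example: a "low QoS" condition with 0.5 m position jitter and 5 deg of drift.
low_qos = SensorQoSFilter(pos_noise_m=0.5, ori_noise_deg=5.0,
                          dropout_prob=0.1, seed=42)
pos, ori = low_qos.filter([1.0, 0.0, 2.0], [0.0, 90.0, 0.0])
print(pos, ori)
```

    In an experiment of this kind, each QoS condition would simply be a different parameterization of such a filter placed between the sensor stream and the game loop.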