
    User-centered virtual environment design for virtual rehabilitation

    Abstract
    Background: As physical and cognitive rehabilitation protocols utilizing virtual environments transition from single applications to comprehensive rehabilitation programs, there is a need for a new design-cycle methodology. Current human-computer interaction designs focus on usability without benchmarking technology within a user-in-the-loop design cycle. The field of virtual rehabilitation is unique in that determining the efficacy of this genre of computer-aided therapy requires prior knowledge of technology issues that may confound patient outcome measures. Benchmarking the technology (e.g., displays or data gloves) using healthy controls may provide a means of characterizing the "normal" performance range of the virtual rehabilitation system. This standard not only allows therapists to select appropriate technology for use with their patient populations, but also allows them to account for technology limitations when assessing treatment efficacy.
    Methods: An overview of the proposed user-centered design cycle is given. Comparisons of two optical see-through head-worn displays provide an example of benchmarking techniques. Benchmarks were obtained using a novel vision test capable of measuring a user's stereoacuity while wearing different types of head-worn displays. Results from healthy participants who performed both virtual and real-world versions of the stereoacuity test are discussed with respect to virtual rehabilitation design.
    Results: The user-centered design cycle argues for benchmarking to precede virtual environment construction, especially for therapeutic applications. Results from real-world testing illustrate the general limitations in stereoacuity attained when viewing content using a head-worn display. Further, the stereoacuity vision benchmark test highlights differences in user performance when utilizing a similar style of head-worn display. These results support the need for including benchmarks as a means of better understanding user outcomes, especially for patient populations.
    Conclusions: The stereoacuity testing confirms that, without benchmarking in the design cycle, poor user performance could be misconstrued as resulting from the participant's injury state. Thus, a user-centered design cycle that includes benchmarking for the different sensory modalities is recommended for accurate interpretation of the efficacy of virtual environment-based rehabilitation programs.
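    The stereoacuity benchmarks above rest on the geometry of binocular disparity. As a rough illustration (not the authors' actual test), the disparity angle produced by a small depth offset can be approximated with the small-angle formula η ≈ IPD·Δd/d², which gives a feel for the arcsecond-scale thresholds such a benchmark must resolve. The parameter values below are illustrative assumptions:

```python
import math

def disparity_arcsec(ipd_m, dist_m, depth_offset_m):
    """Small-angle approximation of the binocular disparity (in
    arcseconds) produced by a depth offset at a viewing distance."""
    eta_rad = ipd_m * depth_offset_m / dist_m ** 2
    return math.degrees(eta_rad) * 3600.0

# Illustrative values: 64 mm interpupillary distance, target at 1 m,
# depth offset of 1 mm
print(round(disparity_arcsec(0.064, 1.0, 0.001), 1))  # → 13.2
```

A head-worn display whose effective resolution cannot render differences of this magnitude would cap a user's measurable stereoacuity regardless of their vision, which is exactly the confound the benchmarking step is meant to expose.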

    Photorealistic True-Dimensional Visualization of Remote Panoramic Views for VR Headsets

    © 2023 IEEE. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
    Virtual Reality headsets have evolved to include unprecedented display quality. Meanwhile, they have become lightweight, wireless, and low-cost, opening them to new applications and a much wider audience. Photo-based omnidirectional imaging has also developed, becoming directly exploitable for VR, and the combination has proven suitable for remote visits and realistic scene reconstruction, operator training and control panels, surveillance, and e-tourism. However, there is a limited amount of scientific work assessing VR experience and user performance in photo-based environment representations. This paper focuses on assessing the effect of photographic realism in VR when observing real places through a VR headset, for two different pixel densities of the display, environment types, and familiarity levels. Our comparison relies on the observation of static three-dimensional and omnidirectional photorealistic views of environments. The aim is to gain insight into how photographic texture can affect perceived realness, sense of presence, and provoked emotions, as well as perception of image lighting and actual space dimension (true dimension). Two user studies were conducted based on subjective ratings and measurements given by users on a number of display and human factors. The display pixel density affected the perceived image lighting and prevailed over better lighting specifications. The environment illumination and distance to objects generally played a stronger role than the display. The environment affected the perceived image lighting, spatial presence, depth impression, and specific emotions. Distances to a set of objects were generally estimated accurately. Place familiarity enhanced perceived realism and presence. The results confirm some previous studies but also introduce new elements.

    Agent Based Modeling in Computer Graphics and Games

    As graphics technology has improved in recent years, more and more importance has been placed on the behavior of virtual characters in applications set in virtual worlds, such as games, movies, and simulations. The behavior of virtual characters should be believable in order to create the illusion that these virtual worlds are populated with living characters. This has led to the application of agent-based modeling to the control of these virtual characters. Agent-based modeling techniques offer a number of advantages: they remove the requirement to hand-control every agent in a virtual environment, and they allow agents in games to respond to unexpected actions by players.
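    The agent-based idea can be sketched minimally: each virtual character runs its own perceive-decide-act loop instead of being hand-scripted. The rules below are hypothetical and purely illustrative, not drawn from the paper:

```python
class NPCAgent:
    """Minimal autonomous game character: perceives its surroundings
    each tick and picks a behavior from simple rules, with no
    hand-scripted control (the rules here are hypothetical)."""

    def __init__(self, name):
        self.name = name
        self.state = "wander"

    def perceive_and_act(self, player_distance):
        # React to the player's (possibly unexpected) position
        if player_distance < 2.0:
            self.state = "flee"
        elif player_distance < 10.0:
            self.state = "greet"
        else:
            self.state = "wander"
        return self.state

agent = NPCAgent("villager")
print(agent.perceive_and_act(1.5))   # flee
print(agent.perceive_and_act(25.0))  # wander
```

Because each agent decides locally from what it perceives, a crowd of such agents reacts plausibly to player actions the designer never anticipated, which is the advantage the abstract describes.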

    Illuminating palaeolithic art using virtual reality: A new method for integrating dynamic firelight into interpretations of art production and use

    Approaches to Palaeolithic art have increasingly shifted beyond the traditional focus on engraved or depicted forms in isolation, to appreciating the sensorial experience of art making as integral to shaping the form of depictions and the meaning imbued within them. This kind of research appreciates an array of factors pertinent to how the art may have been understood or experienced by people during the Palaeolithic, including placement, lighting, accessibility, sound, and tactility. This paper contributes to this "sensory turn" in Palaeolithic art research, arguing that the roving light cast by the naked flame of fires, torches, or lamps is an important dimension in understanding artistic experiences. However, capturing these effects, whether during analysis, as part of interpretation, or in presentation, can be challenging. A new method in virtual reality (VR) modelling, applied to Palaeolithic art contexts for the first time, is presented as a safe and non-destructive means of simulating dynamic light sources to facilitate analysis, interpretation, and presentation of Palaeolithic art under actualistic lighting conditions. VR was applied to two Magdalenian case studies: parietal art from Las Monedas (Spain) and portable stone plaquettes from Montastruc (France). VR models were produced using Unity software and digital models of the art captured via white-light (Montastruc) and photogrammetric (Las Monedas) scans. The results demonstrate that this novel application of VR facilitates the testing of hypotheses related to the sensorial and experiential dimensions of Palaeolithic art, allowing discussions of these elements to move beyond theoretical ideas.
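    The dynamic firelight simulated in Unity can be approximated, in spirit, by driving a light's intensity with a flicker function. The sketch below (illustrative only, not the study's implementation) sums a few incommensurate sine waves to mimic a flame's irregular brightness over time:

```python
import math

def firelight_intensity(t, base=1.0, depth=0.3):
    """Hypothetical flame-flicker model: summing incommensurate sine
    waves gives a quasi-random, never-repeating brightness pattern
    around a base intensity (bounded within base +/- depth)."""
    flicker = (math.sin(7.0 * t) + math.sin(11.3 * t) + math.sin(17.7 * t)) / 3.0
    return base + depth * flicker

# At t = 0 all terms vanish, so intensity equals the base level
print(round(firelight_intensity(0.0), 2))  # → 1.0
```

Driving a virtual light source with such a function, rather than a constant intensity, is what makes the roving shadows across engraved surfaces visible in the VR reconstruction.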

    A Novel Haptic Simulator for Evaluating and Training Salient Force-Based Skills for Laparoscopic Surgery

    Laparoscopic surgery has evolved from an 'alternative' surgical technique to one currently considered mainstream. However, learning this complex technique holds unique challenges for novice surgeons due to their 'distance' from the surgical site. One of the main challenges in acquiring laparoscopic skills is the acquisition of force-based, or haptic, skills. The failure of popular training methods (e.g., the Fundamentals of Laparoscopic Surgery (FLS) curriculum) to address this aspect of skills training has led many medical skills professionals to research new, efficient methods for haptic skills training. The overarching goal of this research was to demonstrate that a set of simple, simulator-based haptic exercises can be developed and used to train users in the skilled application of forces with surgical tools. A set of salient, or core, haptic skills underlying proficient laparoscopic surgery was identified based on published time-motion studies. Low-cost, computer-based haptic training simulators were prototyped to simulate each of the identified salient haptic skills. All simulators were tested for construct validity by comparing surgeons' performance on the simulators with that of novices with no previous laparoscopic experience. An integrated 'core haptic skills' simulator capable of rendering the three validated haptic skills was built. To examine the efficacy of this novel simulator, novice participants were tested for training improvements in a detailed study. Results demonstrated that simulator training enabled users to significantly improve force application for all three haptic tasks. Research outcomes from this project could greatly influence surgical skills simulator design, resulting in more efficient training.

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since the latest high-resolution, wide field-of-view VR headsets were released on the market. While the great potential of such VR systems is common and accepted knowledge, questions remain about how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that direct system designers in optimizing the display-camera setup, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that is believed to be very beneficial for better VR headset designs and improved remote observation systems. To achieve this goal, this thesis presents a thorough, systematic investigation of the existing literature and previous research to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions. More specifically, the roles of familiarity with the observed place, of the characteristics of the environment shown to the viewer, and of the display used for remote observation of the virtual environment are further investigated. To gain more insight, two usability studies are proposed with the aim of defining guidelines and best practices.
    The main outcomes of the two studies demonstrate that test users experience a more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, sense of presence increases when observed environments induce strong emotions, and depth perception improves in VR when several monocular cues, such as lights and shadows, are combined with binocular depth cues. Building on these results, this investigation then presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes greatly improves remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed comparing static HDR and eye-adapted HDR observation in VR, assessing whether the latter improves realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, showing that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
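    A minimal sketch of the eye-adapted HDR idea (assuming a gaze point and a per-pixel luminance map; this is not the thesis's implementation): an exposure key is derived from the mean luminance in a small window around the gaze, then fed to a simple Reinhard-style tone-mapping operator, so the image re-exposes itself to whatever the viewer is looking at:

```python
def eye_adapted_exposure(hdr_lum, gaze_xy, radius=2):
    """Exposure key from the mean luminance in a window around the
    gaze point, mimicking the eye's local light adaptation."""
    gx, gy = gaze_xy
    window = [hdr_lum[y][x]
              for y in range(max(0, gy - radius), min(len(hdr_lum), gy + radius + 1))
              for x in range(max(0, gx - radius), min(len(hdr_lum[0]), gx + radius + 1))]
    adapt_lum = sum(window) / len(window)
    return 0.18 / adapt_lum  # middle-grey key over the local adaptation level

def tone_map(lum, exposure):
    """Simple Reinhard operator applied with the gaze-driven exposure."""
    scaled = lum * exposure
    return scaled / (1.0 + scaled)

# Gazing at a bright spot raises the adaptation level, compressing the
# spot into displayable range instead of clipping it
frame = [[0.1] * 8 for _ in range(8)]
frame[2][2] = 50.0
print(round(tone_map(50.0, eye_adapted_exposure(frame, (2, 2))), 3))
```

Coupling the window position to a headset's eye tracker is what distinguishes this from static HDR, where a single global exposure is chosen once for the whole scene.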

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data on visually guided steering, obstacle avoidance, and route selection.
    Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624)
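    The goal-as-attractor, obstacle-as-repeller interaction can be sketched as a one-dimensional heading dynamic in the style of behavioral steering models. The gains and decay constant below are illustrative assumptions, not the paper's neural parameters:

```python
import math

def heading_step(heading, goal, obstacles, dt=0.05,
                 k_goal=3.0, k_obs=3.0, decay=1.0):
    """One Euler step of attractor/repeller heading dynamics: the goal
    angle pulls the heading toward it, while each obstacle pushes the
    heading away, with repulsion decaying as the obstacle falls
    off-axis (gains here are hypothetical)."""
    d = -k_goal * (heading - goal)
    for obs in obstacles:
        offset = heading - obs
        d += k_obs * offset * math.exp(-decay * abs(offset))
    return heading + dt * d

h = 0.0  # initial heading angle (radians)
for _ in range(100):
    h = heading_step(h, goal=0.5, obstacles=[0.2])
print(round(h, 2))
```

The heading settles beyond the goal direction, on the side away from the obstacle: the attractor and repeller balance to produce a detour route rather than a straight-to-goal path, which is the qualitative behavior the model captures.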

    Integration of multiple data types in 3-D immersive virtual reality (VR) environments

    Intelligent sensors have begun to play a key part in the monitoring and maintenance of complex infrastructures. Sensors can not only provide raw data but also provide information by indicating the reliability of their measurements. The effect of this added information is a voluminous increase in the total data gathered. If an operator is required to perceive the state of a complex system, novel methods must be developed for sifting through enormous data sets. Virtual reality (VR) platforms are proposed as ideal candidates for performing this task: a virtual world allows the user to experience a complex system that is gathering a multitude of sensor data, and such worlds are referred to as Integrated Awareness models. This thesis presents techniques for visualizing such multiple data sets, specifically graphical, measurement, and health data, inside a 3-D VR environment. The focus of this thesis is to develop pathways to generate the required 3-D models without sacrificing visual fidelity. The tasks include creating the visual representation, integrating multi-sensor measurements, creating user-specific visualizations, and a performance evaluation of the completed virtual environment.
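    One simple way to fold a measurement and its reliability into a single visual cue, as a hypothetical example of the kind of encoding an Integrated Awareness model might overlay on a 3-D model (not the thesis's actual pipeline), is to map the value to a hue and the reliability to opacity:

```python
def sensor_to_color(value, reliability, v_min, v_max):
    """Map a sensor reading to an RGBA tuple for a 3-D overlay:
    the normalized value blends blue (low) to red (high), and the
    reliability drives the alpha channel, so uncertain readings
    literally fade out (a hypothetical encoding)."""
    norm = max(0.0, min(1.0, (value - v_min) / (v_max - v_min)))
    r, g, b = norm, 0.2, 1.0 - norm
    return (round(r, 2), round(g, 2), round(b, 2), round(reliability, 2))

# A reading of 75 on a 0-100 scale, reported at 90% reliability
print(sensor_to_color(75.0, 0.9, 0.0, 100.0))  # → (0.75, 0.2, 0.25, 0.9)
```

Encoding reliability visually, rather than as a separate number, lets an operator scan a whole structure at once and discount faded regions without reading any values.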

    User-centered Virtual Environment Assessment And Design For Cognitive Rehabilitation Applications

    Virtual environment (VE) design for cognitive rehabilitation necessitates a new methodology to ensure the validity of the resulting rehabilitation assessment. We propose that benchmarking the VE system technology utilizing a user-centered approach should precede the VE construction. Further, user performance baselines should be measured throughout testing as a control for adaptive effects that may confound the metrics chosen to evaluate the rehabilitation treatment. To support these claims, we present data obtained from two modules of a user-centered head-mounted display (HMD) assessment battery, specifically resolution visual acuity and stereoacuity. Resolution visual acuity and stereoacuity assessments provide information about the image quality achieved by an HMD based upon its unique system parameters. When applying a user-centered approach, we were able to quantify limitations in the VE system components (e.g., low microdisplay resolution) and separately point to user characteristics (e.g., changes in dark focus) that may introduce error into the evaluation of VE-based rehabilitation protocols. Based on these results, we provide guidelines for calibrating and benchmarking HMDs. In addition, we discuss potential extensions of the assessment to address higher-level usability issues. We intend to test the proposed framework within the Human Experience Modeler (HEM), a testbed created at the University of Central Florida to evaluate technologies that may enhance cognitive rehabilitation effectiveness. Preliminary results of a feasibility pilot study conducted with a memory-impaired participant showed that the HEM provides the control and repeatability needed to conduct such technology comparisons. Further, the HEM affords the opportunity to integrate new brain imaging technologies (i.e., functional Near Infrared Imaging) to evaluate brain plasticity associated with VE-based cognitive rehabilitation.