
    Research on integration of visual and motion cues for flight simulation and ride quality investigation

    Vestibular perception and integration of several sensory inputs in simulation were studied. The relationship between tilt sensations induced by moving fields and those produced by actual body tilt is discussed. Linearvection studies were included, and the application of the vestibular model for perception of orientation based on motion cues is presented. Other areas of examination include visual cues in approach to landing, and a comparison of linear and nonlinear washout filters using a model of the human vestibular system is given.
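    As this abstract notes, washout filters return a motion platform toward neutral while preserving the onset cues the vestibular system is most sensitive to; a linear washout is essentially a high-pass filter on the commanded motion. Below is a minimal sketch of a first-order linear washout; the time constant is illustrative and not taken from the report.

```python
import numpy as np

def linear_washout(accel, dt, tau=2.0):
    """First-order high-pass 'washout' of a commanded acceleration trace.

    Rapid onsets pass through (supplying the vestibular cue), while
    sustained acceleration is washed out so the platform can drift back
    to neutral. tau is the washout time constant in seconds; the value
    here is illustrative, not from the study.
    """
    alpha = tau / (tau + dt)
    out = np.zeros_like(accel, dtype=float)
    for i in range(1, len(accel)):
        # Standard discrete high-pass recursion.
        out[i] = alpha * (out[i - 1] + accel[i] - accel[i - 1])
    return out

# A sustained 1 m/s^2 step: the filtered cue spikes at onset, then decays.
cue = linear_washout(np.ones(500), dt=0.01)
```

    A nonlinear washout, by contrast, typically varies the filter parameters with the motion state, which is the kind of comparison the report evaluates against the vestibular model.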

    08231 Abstracts Collection -- Virtual Realities

    From 1st to 6th June 2008, the Dagstuhl Seminar 08231 "Virtual Realities" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. Virtual Reality (VR) is a multidisciplinary area of research aimed at interactive human-computer mediated simulations of artificial environments. Typical applications include simulation, training, scientific visualization, and entertainment. An important aspect of VR-based systems is the stimulation of the human senses -- typically sight, sound, and touch -- such that a user feels a sense of presence (or immersion) in the virtual environment. Different applications require different levels of presence, with corresponding levels of realism, sensory immersion, and spatiotemporal interactive fidelity. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. Links to extended abstracts or full papers are provided, if available.

    Gravity as a Strong Prior: Implications for Perception and Action

    In the future, humans are likely to be exposed to environments with altered gravity conditions, be it only visually (Virtual and Augmented Reality) or visually and bodily (space travel). As visually and bodily perceived gravity, as well as an interiorized representation of earth gravity, are involved in a series of tasks, such as catching, grasping, body orientation estimation, and spatial inferences, humans will need to adapt to these new gravity conditions. Performance under earth-discrepant gravity conditions has been shown to be relatively poor, and the few studies conducted on gravity adaptation are rather discouraging. Especially in VR on earth, conflicts between bodily and visual gravity cues seem to make full adaptation to visually perceived earth-discrepant gravities nearly impossible, and even in space, when visual and bodily cues are congruent, adaptation is extremely slow. We invoke a Bayesian framework for gravity-related perceptual processes, in which earth gravity holds the status of a so-called "strong prior". Like other strong priors, the gravity prior has developed through years and years of experience in an earth gravity environment. For this reason, the reliability of this representation is extremely high and overrules any sensory information to the contrary. While other factors, such as the multisensory nature of gravity perception, also need to be taken into account, we present the strong prior account as a unifying explanation for empirical results in gravity perception and adaptation to earth-discrepant gravities.
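    In Bayesian terms, a "strong prior" is one whose reliability (inverse variance) is so high that it dominates any conflicting likelihood. A minimal sketch of Gaussian cue combination makes the point; the numbers are illustrative, not taken from the paper.

```python
def combine(prior_mean, prior_var, obs_mean, obs_var):
    """Gaussian prior x Gaussian likelihood -> Gaussian posterior.

    Each source is weighted by its reliability (inverse variance);
    a tiny prior variance makes the prior "strong", so the posterior
    barely moves toward a conflicting observation.
    """
    w_p, w_o = 1.0 / prior_var, 1.0 / obs_var
    mean = (w_p * prior_mean + w_o * obs_mean) / (w_p + w_o)
    return mean, 1.0 / (w_p + w_o)

# An earth-gravity prior (9.81 m/s^2) with very low variance versus a
# visual cue suggesting 3.7 m/s^2: the posterior stays near 9.81.
print(combine(9.81, 0.01, 3.7, 4.0))  # mean ~ 9.79
```

    On this account, adapting to a new gravity would require the prior's variance to widen with exposure, which the adaptation findings summarized above suggest happens very slowly, if at all.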

    The Effect of Prior Virtual Reality Experience on Locomotion and Navigation in Virtual Environments

    Virtual Reality (VR) is becoming more accessible and widely utilized in crucial disciplines like training, communication, healthcare, and education. One of the important parts of VR applications is walking through virtual environments, so researchers have broadly studied various kinds of walking in VR, as it can reduce sickness, improve the sense of presence, and enhance the general user experience. Due to the recent availability of consumer Head Mounted Displays (HMDs), people are using HMDs in all sorts of different locations, which underscores the need for locomotion methods that allow users to move through large Immersive Virtual Environments (IVEs) while occupying a small physical space, or even seated. Although many aspects of locomotion in VR have received extensive research, very little work has considered how locomotive behaviors might change over time as users become more experienced in IVEs. As HMDs were rarely encountered outside of a lab before 2016, most locomotion research before then was likely conducted with VR novices who had no prior experience with the technology. However, as this is no longer the case, it is important to consider whether locomotive behaviors may evolve over time with user experience. This proposal specifically studies locomotive behaviors and effects that may adjust over time.

    For the first study, we conducted experiments measuring novice and experienced subjects' gait parameters in VR and real environments. Prior research has established that users' gait in virtual and real environments differs; however, little research has evaluated how users' gait changes as they gain more experience with VR. Results showed that subjects' performance in VR and the real world was more similar in the last trials than in the first trials; the walking dissimilarity seen in the early trials diminished as subjects completed more trials. We found trial number to be a significant variable affecting walking speed, step length, and trunk angle for both groups of users. While no main effect of expertise was observed, there was an interaction effect between expertise and trial number: the trunk angle increased over time for novices but decreased for experts.

    The second study reports the results of an experiment investigating how users' behavior with two locomotion methods, teleportation and joystick-based locomotion, changed over four weeks. Twenty novice VR users (no more than 1 hour of prior experience with any form of walking in VR) were recruited. They were loaned an Oculus Quest for four weeks to use on their own time, along with an activity we provided. Results showed that the time required to complete the navigation task decreased faster for joystick-based locomotion. Spatial memory improved with time, particularly when using teleportation (which starts at a disadvantage relative to joystick-based locomotion). Overall cybersickness also decreased slightly over time, although two of its dimensions (nausea and disorientation) increased notably over time with joystick-based navigation.

    The next study presents the findings of a longitudinal study investigating the effects of VR locomotion methods on participants' spatial awareness during VR experiences and on subsequent real-world gait parameters. The study encompasses two distinct environments: the real world and VR. In the real-world setting, we analyze key gait parameters, including walking speed, distance traveled, and step count, both pre- and post-VR exposure, to gauge the influence of VR locomotion on post-VR gait behavior. Additionally, we assess participants' spatial awareness and the occurrence of simulator sickness under two locomotion methods: joystick and teleportation. Our results reveal significant changes in gait parameters associated with increased VR locomotion experience. Furthermore, we observe a marked reduction in cybersickness symptoms over successive VR sessions, particularly evident among participants using joystick locomotion. This study contributes to the understanding of gait behavior as influenced by VR locomotion technology and the duration of VR immersion.

    Together, these studies inform how locomotion and navigation behavior may change in VR as users become more accustomed to walking in virtual reality settings. Comparative studies on locomotion methods also help VR developers implement the better-suited locomotion method, providing the knowledge to design and develop VR systems that perform better for different applications and groups of users.
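    Several of the gait parameters reported above (walking speed, step length, distance traveled) can be derived from tracked positions; the sketch below shows one plausible way to compute them. The data layout and heel-strike input are assumptions for illustration, not the thesis' actual pipeline.

```python
import numpy as np

def gait_parameters(positions, timestamps, heel_strikes):
    """Walking speed, mean step length, and distance from tracked positions.

    positions: (N, 2) horizontal head or pelvis coordinates (meters);
    timestamps: (N,) sample times (seconds);
    heel_strikes: sample indices of detected heel strikes.
    Hypothetical inputs; the thesis does not specify its pipeline.
    """
    distance = np.linalg.norm(np.diff(positions, axis=0), axis=1).sum()
    speed = distance / (timestamps[-1] - timestamps[0])
    steps = positions[heel_strikes]
    step_length = np.linalg.norm(np.diff(steps, axis=0), axis=1).mean()
    return speed, step_length, distance
```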

    Augmented Reality in Minimally Invasive Surgery

    In the last 15 years, Minimally Invasive Surgery, with techniques such as laparoscopy or endoscopy, has become very important, and research in this field is increasing, since these techniques provide surgeons with less invasive means of reaching the patient's internal anatomy and allow entire procedures to be performed with only minimal trauma to the patient. The advantages of this surgical method are evident for patients: possible trauma is reduced, postoperative recovery is generally faster, and there is less scarring. Despite the improvement in outcomes, indirect access to the operation area causes restricted vision, difficulty in hand-eye coordination, limited mobility when handling instruments, two-dimensional imagery lacking detailed information, and a limited visual field during the whole operation. The emerging Augmented Reality technology shows the way forward by bringing the advantages of direct visualization (as in open surgery) back to minimally invasive surgery and augmenting the physician's view of the operating field with information gathered from patient medical images. Augmented Reality can avoid some drawbacks of Minimally Invasive Surgery and can provide opportunities for new medical treatments. After two decades of research into medical Augmented Reality, this technology is now advanced enough to meet the basic requirements for a large number of medical applications, and it is feasible that medical AR applications will be accepted by physicians for evaluation of their use and integration into the clinical workflow. Before these technologies see systematic use as support for minimally invasive surgery, some improvements are still necessary to fully satisfy the requirements of operating physicians.

    Structuring a virtual environment for sport training: A case study on rowing technique

    The advancements in technology and the possibility of their integration into virtual environments allow access to application domains previously limited to highly expensive setups. This is specifically the case for sport training, which can take advantage of the improved quality of measurement systems and computing techniques. Given this, the challenge that emerges relates to the way training is performed and how to evaluate transfer from the virtual setup to the real case. In this work we discuss system architecture for a VE in sport training, taking a rowing training system as a case study. The paper addresses in particular the challenges of training technique in rowing.

    Compensating for Distance Compression in Audiovisual Virtual Environments Using Incongruence


    Low-Cost Objective Measurement of Prehension Skills

    This thesis aims to explore the feasibility of using low-cost, portable motion capture tools for the quantitative assessment of sequential 'reach-to-grasp' and repetitive 'finger-tapping' movements in neurologically intact and impaired populations, in both clinical and non-clinical settings. The research extends the capabilities of an existing optoelectronic postural sway assessment tool (PSAT) into a more general Boxed Infrared Gross Kinematic Assessment Tool (BIGKAT) to evaluate prehensile control of hand movements outside the laboratory environment. The contributions of this work include the validation of BIGKAT against a high-end motion capture system (Optotrak) for accuracy and precision in tracking kinematic data. BIGKAT was subsequently applied to kinematically resolve prehensile movements, where concurrent recordings with Optotrak demonstrate similar statistically significant results for five kinematic measures: two spatial measures (Maximum Grip Aperture – MGA; Peak Velocity – PV) and three temporal measures (Movement Time – MT; Time to MGA – TMGA; Time to PV – TPV). Regression analysis further establishes a strong relationship between BIGKAT and Optotrak, with nearly unity slope and low y-intercept values, confirming BIGKAT's reliable performance. BIGKAT was also applied to quantitatively assess bradykinesia in Parkinson's patients during finger-tapping movements; the system demonstrated significant differences between PD patients and healthy controls in key kinematic measures, paving the way for potential clinical applications. The study then characterized kinematic differences in prehensile control in different sensory environments, using a Virtual Reality head-mounted display and a finger tracking system (the Leap Motion), emphasizing the importance of sensory information during hand movements; this highlighted the role of hand vision and haptic feedback during the initial and final phases of the prehensile movement trajectory. The research also explored marker-less pose estimation using deep learning tools, specifically DeepLabCut (DLC), for reach-to-grasp tracking. Despite challenges posed by COVID-19 limitations on data collection, the study showed promise in scaling reaching and grasping components, but highlighted the need for diverse datasets to resolve kinematic differences accurately. To facilitate the assessment of prehension activities, an Event Detection Tool (EDT) was developed, providing temporal measures for reaction time, reaching time, transport time, and movement time during object grasping and manipulation. Though initial pilot data was limited, the EDT holds potential for insights into disease progression and movement disorder severity. Overall, this work contributes to the advancement of low-cost, portable solutions for quantitatively assessing upper-limb movements, demonstrating the potential for wider clinical use and guiding future research in the field of human movement analysis.
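    The five kinematic measures named above can be read directly off marker trajectories. The following is a minimal sketch under assumed inputs (thumb, index, and wrist positions at a fixed sample rate, trimmed to a single reach); it is not BIGKAT's actual code.

```python
import numpy as np

def prehension_measures(thumb, index, wrist, fs):
    """Spatial and temporal reach-to-grasp measures from marker data.

    thumb, index, wrist: (N, 3) position arrays for one trimmed reach;
    fs: sample rate in Hz. Assumed inputs for illustration only.
    """
    aperture = np.linalg.norm(thumb - index, axis=1)        # grip aperture
    speed = np.linalg.norm(np.diff(wrist, axis=0), axis=1) * fs

    mga = aperture.max()              # Maximum Grip Aperture (MGA)
    pv = speed.max()                  # Peak Velocity (PV)
    mt = len(wrist) / fs              # Movement Time (MT), whole trial
    tmga = aperture.argmax() / fs     # Time to MGA (TMGA)
    tpv = speed.argmax() / fs         # Time to PV (TPV)
    return mga, pv, mt, tmga, tpv
```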

    Human Visual Navigation: Effects of Visual Context, Navigation Mode, and Gender

    This thesis extends research on human visual path integration using optic flow cues. In three experiments, a large-scale path-completion task was contextualised within highly-textured authentic virtual environments. Real-world navigational experience was further simulated, through the inclusion of a large roundabout on the route. Three semi-surrounding screens provided a wide field of view. Participants were able to perform the task, but directional estimates showed characteristic errors, which can be explained with a model of distance misperception on the outbound roads of the route. Display and route layout parameters had very strong effects on performance. Gender and navigation mode were also influential. Participants consistently underestimated the final turn angle when simulated self-motion was viewed passively, on large projection screens in a driving simulator. Error increased with increasing size of the internal angle, on route layouts based on equilateral or isosceles triangles. A compressed range of responses was found. Higher overall accuracy was observed when a display with smaller desktop computer monitors was used; especially when simulated self-motion was actively controlled with a steering wheel and foot pedals, rather than viewed passively. Patterns and levels of error depended on route layout, which included triangles with non-equivalent lengths of the two outbound roads. A powerful effect on performance was exerted by the length of the "approach segment" on the route: that is, the distance travelled on the first outbound road, combined with the distance travelled between the two outbound roads on the roundabout curve. The final turn angle was generally overestimated on routes with a long approach segment (those with a long first road and a 60° or 90° internal angle), and underestimated on routes with a short approach segment (those with a short first road or the 120° internal angle). Accuracy was higher for active participants on routes with longer approach segments and on 90° angle trials, and for passive participants on routes with shorter approach segments and on 120° angle trials. Active participants treated all internal angles as 90° angles. Participants performed with lower overall accuracy when optic flow information was disrupted, through the intermittent presentation of self-motion on the small-screen display, in a sequence of static snapshots of the route. Performance was particularly impaired on routes with a long approach segment, but quite accurate on those with a short approach segment. Consistent overestimation of the final angle was observed, and error decreased with increasing size of the internal angle. Participants treated all internal angles as 120° angles. The level of available visual information did not greatly affect estimates, in general. The degree of curvature on the roundabout mainly influenced estimates by female participants in the Passive condition. Compared with males, females performed less accurately in the driving simulator, and with reduced optic flow cues; but more accurately with the small-screen display on layouts with a short approach segment, and when they had active control of the self-motion. The virtual environments evoked a sense of presence, but this had no effect on task performance, in general. The environments could be used for training navigational skills where high precision is not required.
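    The routes described here are triangle-completion layouts, so the ideal final turn follows from the lengths of the two outbound roads and the internal angle between them, treating the route as an idealized triangle and ignoring the roundabout curve. A geometry-only sketch (not the thesis' analysis code) for computing that ideal response:

```python
import numpy as np

def final_turn_angle(leg1, leg2, internal_deg):
    """Ideal homing turn for a triangle-completion route.

    Walk leg1 along +x, turn so the triangle's internal angle at the
    junction is internal_deg, walk leg2, then return the egocentric
    turn (degrees) needed to face the start point.
    """
    theta = np.radians(internal_deg)
    junction = np.array([leg1, 0.0])
    heading = np.array([-np.cos(theta), np.sin(theta)])  # leg-2 direction
    end = junction + leg2 * heading
    home = -end / np.linalg.norm(end)                    # unit vector to start
    # Signed angle from the current heading to the homing direction.
    cross = heading[0] * home[1] - heading[1] * home[0]
    return np.degrees(np.arctan2(cross, heading @ home))

# Equilateral layout: 60 deg internal angle, equal legs -> 120 deg turn.
print(final_turn_angle(10.0, 10.0, 60.0))
```

    The over- and underestimation patterns reported above are deviations of participants' responses from this ideal turn for each layout.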

    User-centered Virtual Environment Assessment And Design For Cognitive Rehabilitation Applications

    Virtual environment (VE) design for cognitive rehabilitation necessitates a new methodology to ensure the validity of the resulting rehabilitation assessment. We propose that benchmarking the VE system technology utilizing a user-centered approach should precede the VE construction. Further, user performance baselines should be measured throughout testing as a control for adaptive effects that may confound the metrics chosen to evaluate the rehabilitation treatment. To support these claims, we present data obtained from two modules of a user-centered head-mounted display (HMD) assessment battery, specifically resolution visual acuity and stereoacuity. Resolution visual acuity and stereoacuity assessments provide information about the image quality achieved by an HMD based upon its unique system parameters. When applying a user-centered approach, we were able to quantify limitations in the VE system components (e.g., low microdisplay resolution) and separately point to user characteristics (e.g., changes in dark focus) that may introduce error in the evaluation of VE-based rehabilitation protocols. Based on these results, we provide guidelines for calibrating and benchmarking HMDs. In addition, we discuss potential extensions of the assessment to address higher-level usability issues. We intend to test the proposed framework within the Human Experience Modeler (HEM), a testbed created at the University of Central Florida to evaluate technologies that may enhance cognitive rehabilitation effectiveness. Preliminary results of a feasibility pilot study conducted with a memory-impaired participant showed that the HEM provides the control and repeatability needed to conduct such technology comparisons. Further, the HEM affords the opportunity to integrate new brain imaging technologies (i.e., functional Near Infrared Imaging) to evaluate brain plasticity associated with VE-based cognitive rehabilitation.
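    To first order, the limit a microdisplay imposes on measured visual acuity is set by angular pixel density, which is one way low microdisplay resolution surfaces in benchmarking like the above. A back-of-envelope sketch, not the assessment battery's actual procedure:

```python
def hmd_acuity_limit(h_pixels, h_fov_deg):
    """Approximate finest resolvable detail on an HMD, in arcminutes.

    Angular pixel density (ppd) = pixels / degree of field of view;
    one pixel then subtends 60 / ppd arcminutes. A 20/20 observer
    resolves ~1 arcmin detail, so larger values mean the display,
    not the eye, limits measured acuity. First-order estimate only.
    """
    ppd = h_pixels / h_fov_deg
    return 60.0 / ppd

# E.g., a 1280-pixel-wide microdisplay spanning a 60 deg horizontal
# FOV (hypothetical numbers) subtends ~2.8 arcmin per pixel.
print(hmd_acuity_limit(1280, 60))
```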