
    User-centered virtual environment design for virtual rehabilitation

    Background: As physical and cognitive rehabilitation protocols utilizing virtual environments transition from single applications to comprehensive rehabilitation programs, there is a need for a new design-cycle methodology. Current human-computer interaction designs focus on usability without benchmarking technology within a user-in-the-loop design cycle. The field of virtual rehabilitation is unique in that determining the efficacy of this genre of computer-aided therapies requires prior knowledge of technology issues that may confound patient outcome measures. Benchmarking the technology (e.g., displays or data gloves) using healthy controls may provide a means of characterizing the "normal" performance range of the virtual rehabilitation system. This standard not only allows therapists to select appropriate technology for use with their patient populations, but also allows them to account for technology limitations when assessing treatment efficacy.

    Methods: An overview of the proposed user-centered design cycle is given. Comparisons of two optical see-through head-worn displays provide an example of benchmarking techniques. Benchmarks were obtained using a novel vision test capable of measuring a user's stereoacuity while wearing different types of head-worn displays. Results from healthy participants who performed both virtual and real-world versions of the stereoacuity test are discussed with respect to virtual rehabilitation design.

    Results: The user-centered design cycle argues for benchmarking to precede virtual environment construction, especially for therapeutic applications. Results from real-world testing illustrate the general limitations in stereoacuity attained when viewing content using a head-worn display. Further, the stereoacuity vision benchmark test highlights differences in user performance when utilizing a similar style of head-worn display. These results support the need for including benchmarks as a means of better understanding user outcomes, especially for patient populations.

    Conclusions: The stereoacuity testing confirms that, without benchmarking in the design cycle, poor user performance could be misconstrued as resulting from the participant's injury state. Thus, a user-centered design cycle that includes benchmarking for the different sensory modalities is recommended for accurate interpretation of the efficacy of virtual environment based rehabilitation programs.
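    The benchmarking logic described above can be illustrated with a minimal sketch (hypothetical scores and cutoff, not the authors' actual protocol): estimate a "normal" stereoacuity range from healthy controls wearing a given head-worn display, then check patient scores against it so that hardware limits are not mistaken for impairment.

```python
import statistics

def normative_range(control_scores, k=2.0):
    """Normal performance range (mean +/- k*SD) for one display,
    estimated from healthy-control stereoacuity thresholds (arcsec)."""
    mean = statistics.mean(control_scores)
    sd = statistics.stdev(control_scores)
    return mean - k * sd, mean + k * sd

# Hypothetical thresholds from healthy controls on one see-through display.
controls = [95, 110, 120, 100, 130, 105, 115, 125]
low, high = normative_range(controls)
print(f"benchmarked range on this display: {low:.0f}-{high:.0f} arcsec")

def interpret(patient_score):
    """Separate device-limited performance from a plausible deficit."""
    if patient_score <= high:
        return "within the device's benchmarked range; no deficit indicated"
    return "worse than the benchmarked range; deficit plausible"

print(interpret(180))
```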

    High-dynamic-range Foveated Near-eye Display System

    Wearable near-eye displays have found widespread applications in education, gaming, entertainment, engineering, military training, and healthcare, just to name a few. However, the visual experience provided by current near-eye displays still falls short of what we can perceive in the real world. Three major challenges remain to be overcome: 1) limited dynamic range in display brightness and contrast, 2) inadequate angular resolution, and 3) the vergence-accommodation conflict (VAC) issue. This dissertation is devoted to addressing these three critical issues from both display panel development and optical system design viewpoints.

    A high-dynamic-range (HDR) display requires both high peak brightness and an excellent dark state. In the second and third chapters, two mainstream display technologies, namely liquid crystal display (LCD) and organic light-emitting diode (OLED), are investigated to extend their dynamic range. On one hand, an LCD can easily boost its peak brightness to over 1000 nits, but it is challenging to lower the dark state to < 0.01 nits. To achieve HDR, we propose to use a mini-LED local dimming backlight. Based on our simulations and subjective experiments, we establish practical guidelines correlating the device contrast ratio, viewing distance, and required number of local dimming zones. On the other hand, a self-emissive OLED display exhibits a true dark state, but boosting its peak brightness would unavoidably compromise lifetime. We propose a systematic approach to enhance the OLED's optical efficiency while keeping the angular color shift indistinguishable. These findings will shed new light on future HDR display designs.

    In Chapter four, in order to improve angular resolution, we demonstrate a multi-resolution foveated display system with two display panels and an optical combiner. The first display panel provides a wide field of view for peripheral vision, while the second panel offers ultra-high resolution for the central fovea. Using an optical minifying system, both 4x and 5x resolution enhancements are demonstrated. In addition, a Pancharatnam-Berry phase deflector is applied to actively shift the high-resolution region, in order to enable an eye-tracking function. The proposed design effectively reduces the pixelation and screen-door effect in near-eye displays.

    The VAC issue in stereoscopic displays is believed to be the main cause of visual discomfort and fatigue when wearing VR headsets. In Chapter five, we propose a novel polarization-multiplexing approach to achieve a multiplane display. A polarization-sensitive Pancharatnam-Berry phase lens and a spatial polarization modulator are employed to simultaneously create two independent focal planes. This method generates two image planes without the need for temporal multiplexing and can therefore effectively reduce the required frame rate by one-half. In Chapter six, we briefly summarize our major accomplishments.
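    The angular-resolution benefit of the foveated architecture in Chapter four can be sketched with a back-of-envelope calculation. The panel resolution, field of view, and minification factor below are illustrative assumptions, not the dissertation's actual hardware:

```python
# Back-of-envelope angular resolution for a two-panel foveated design.

def pixels_per_degree(h_pixels, h_fov_deg):
    return h_pixels / h_fov_deg

peripheral = pixels_per_degree(h_pixels=1920, h_fov_deg=90)   # wide-FOV panel
# An optical minifying system shrinks the second panel's image by 5x,
# concentrating the same pixel count into 1/5 of the field of view.
minification = 5
foveal = pixels_per_degree(h_pixels=1920, h_fov_deg=90 / minification)

print(f"peripheral: {peripheral:.1f} ppd, foveal: {foveal:.1f} ppd "
      f"({foveal / peripheral:.0f}x enhancement)")
# Normal foveal acuity corresponds to roughly 60 ppd, so the foveal
# region approaches retinal resolution while the periphery stays wide.
```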

    Requirement analysis and sensor specifications – First version

    In this first version of the deliverable, we make the following contributions. To design the WEKIT capturing platform and the associated experience capturing API, we use a methodology for system engineering that is relevant for different domains (such as aviation, space, and medicine) and different professions (such as technicians, astronauts, and medical staff). Furthermore, in the methodology we explore the system engineering process and how it can be used in the project to support the different work packages and, more importantly, the different deliverables that will follow the current one. Next, we provide a mapping of high-level functions or tasks (associated with experience transfer from expert to trainee) to low-level functions such as gaze, voice, video, body posture, hand gestures, bio-signals, fatigue levels, and the location of the user in the environment. In addition, we link the low-level functions to their associated sensors. Moreover, we provide a brief overview of state-of-the-art sensors in terms of their technical specifications, possible limitations, standards, and platforms. We outline a set of recommendations pertaining to the sensors that are most relevant for the WEKIT project, taking into consideration the environmental, technical, and human factors described in other deliverables. We recommend the Microsoft HoloLens (for augmented reality glasses), the MyndBand with NeuroSky chipset (for EEG), Microsoft Kinect and Lumo Lift (for body posture tracking), and Leap Motion, Intel RealSense, and the Myo armband (for hand gesture tracking). For eye tracking, an existing eye-tracking system can be customised to complement the augmented reality glasses, and the built-in microphone of the augmented reality glasses can capture the expert's voice. We propose a modular approach for the design of the WEKIT experience capturing system, and recommend that the capturing system should have sufficient storage or transmission capabilities. Finally, we highlight common issues associated with the use of different sensors. We consider that this set of recommendations can be useful for the design and integration of the WEKIT capturing platform and the WEKIT experience capturing API, expediting the selection of the combination of sensors to be used in the first prototype.
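    The deliverable's function-to-sensor recommendations can be captured as a simple lookup table. The sketch below encodes them as plain data; the structure and function names are ours, while the sensor choices come from the text above:

```python
# Low-level capture functions mapped to the sensors recommended in the
# deliverable (illustrative data structure, not WEKIT's actual API).
SENSOR_MAP = {
    "augmented reality glasses": ["Microsoft HoloLens"],
    "EEG": ["MyndBand (NeuroSky chipset)"],
    "body posture": ["Microsoft Kinect", "Lumo Lift"],
    "hand gestures": ["Leap Motion", "Intel RealSense", "Myo armband"],
    "eye tracking": ["customised eye tracker complementing the AR glasses"],
    "voice": ["built-in microphone of the AR glasses"],
}

def sensors_for(function):
    """Return the recommended sensors for one low-level function."""
    return SENSOR_MAP.get(function, [])

print(sensors_for("hand gestures"))
```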

    Maintaining fixation by children in a virtual reality version of pupil perimetry

    The assessment of visual field sensitivities in young children continues to be a challenge. Children often do not sit still, fail to fixate stimuli for longer durations, and have limited verbal capacity to report visibility. We investigated the use of a head-mounted VR display, gaze-contingent flicker pupil perimetry (gcFPP), and three fixation stimulus conditions to determine best practices for optimal fixation and pupil response quality. A total of twenty children (3-11 years) passively fixated a dot, counted the repeated appearances of an animated character, and watched an animated movie in separate trials of 80 s each. We presented large flickering patches at different eccentricities and angles in the periphery to evoke pupillary oscillations (20 locations, 4 s per location). The results showed that gaze precision and accuracy did not differ significantly across the fixation conditions, but pupil amplitudes were strongest for the dot and counting tasks. We recommend the use of the fixation counting task for pupil perimetry because children enjoyed it the most and it achieved the strongest pupil responses. The VR set-up appears to be an ideal apparatus for children, allowing free range of movement, an engaging visual task, and reliable eye measurements.
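    In flicker pupil perimetry, sensitivity at each tested location is reflected in how strongly the pupil oscillates at the stimulus flicker rate. A minimal analysis sketch follows, assuming a hypothetical 1 Hz flicker and a 120 Hz eye-tracker sampling rate; the study's actual parameters may differ:

```python
import numpy as np

# One stimulus location: 4 s of pupil data, oscillation amplitude taken
# at the flicker frequency via FFT. The trace here is simulated.
fs, flicker_hz, dur = 120, 1.0, 4.0
t = np.arange(0, dur, 1 / fs)

# Simulated pupil trace: response at the flicker frequency plus noise.
pupil = 0.05 * np.sin(2 * np.pi * flicker_hz * t) + 0.01 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(pupil - pupil.mean())) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = spectrum[np.argmin(np.abs(freqs - flicker_hz))]
print(f"pupil amplitude at {flicker_hz} Hz: {amp:.3f} (arbitrary units)")
# A patch the child cannot see should yield a much weaker amplitude here.
```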

    Object-based attentional expectancies in virtual reality

    Modern virtual reality (VR) technology promises to enable neuroscientists and psychologists to conduct ecologically valid experiments while maintaining precise experimental control. However, in recent studies, game engines like Unreal Engine or Unity are used for stimulus creation and data collection, yet game engines do not provide the underlying architecture to measure the timing of stimulus events and behavioral input with the accuracy or precision required by many experiments. Furthermore, it is currently not well understood whether VR and its underlying technology engage the same cognitive processes as a comparable real-world situation. Similarly, little is known about whether experimental findings obtained in a standard monitor-based experiment are comparable to those obtained in VR using a head-mounted display (HMD), or whether the different stimulus devices engage different cognitive processes. The aim of my thesis was to investigate whether modern HMDs affect the early processing of basic visual features differently than a standard computer monitor.

    In the first project (chapter 1), I developed a new behavioral paradigm to investigate how prediction errors of basic object features are processed. In a series of four experiments, the results consistently indicated that simultaneous prediction errors for unexpected colors and orientations are processed independently at an early level of processing, before object binding comes into play.

    My second project (chapter 2) examined the accuracy and precision of stimulus timing and reaction time measurements when using Unreal Engine 4 (UE4) in combination with a modern HMD system. My results demonstrate that stimulus durations can be defined and controlled with high precision and accuracy. However, reaction time measurements turned out to be highly imprecise and inaccurate when using UE4's standard application programming interface (API). Instead, I proposed a new software-based approach to circumvent these limitations. Timing benchmarks confirmed that the method can measure reaction times with a precision and accuracy in the millisecond range.

    In the third project (chapter 3), I directly compared task performance in the paradigm developed in chapter 1 between the original experimental setup and a virtual reality simulation of that experiment. To establish two identical experimental setups, I recreated the entire physical environment in which the experiments took place within VR and blended the virtual replica over the physical lab. As a result, the virtual environment (VE) corresponded not only visually with the physical laboratory but also provided accurate sensory properties of other modalities, such as haptic and acoustic feedback. The results showed comparable task performance in both the non-VR and the VR experiments, suggesting that modern HMDs do not affect early processing of basic visual features differently than a typical computer monitor.
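    How such timing benchmarks quantify "accuracy" and "precision" can be sketched as follows (illustrative numbers, not the dissertation's measurements): present known hardware-generated latencies, record what the software pipeline reports, and summarize the error distribution.

```python
import statistics

# Hypothetical benchmark: a hardware device injects inputs exactly 200 ms
# after stimulus onset; the software under test reports these values.
true_ms     = [200, 200, 200, 200, 200, 200]
reported_ms = [214, 211, 216, 212, 215, 213]   # hypothetical API readings

errors = [r - t for r, t in zip(reported_ms, true_ms)]
accuracy = statistics.mean(errors)      # systematic offset (bias)
precision = statistics.stdev(errors)    # trial-to-trial spread

print(f"accuracy (mean error): {accuracy:.1f} ms")
print(f"precision (SD of error): {precision:.2f} ms")
# A constant bias can be subtracted out after the fact; a large SD cannot,
# which is why imprecise engine-API timestamps are the deeper problem.
```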

    User-centered Virtual Environment Assessment And Design For Cognitive Rehabilitation Applications

    Virtual environment (VE) design for cognitive rehabilitation necessitates a new methodology to ensure the validity of the resulting rehabilitation assessment. We propose that benchmarking the VE system technology utilizing a user-centered approach should precede the VE construction. Further, user performance baselines should be measured throughout testing as a control for adaptive effects that may confound the metrics chosen to evaluate the rehabilitation treatment. To support these claims, we present data obtained from two modules of a user-centered head-mounted display (HMD) assessment battery, specifically resolution visual acuity and stereoacuity. Resolution visual acuity and stereoacuity assessments provide information about the image quality achieved by an HMD based upon its unique system parameters. When applying a user-centered approach, we were able to quantify limitations in the VE system components (e.g., low microdisplay resolution) and separately point to user characteristics (e.g., changes in dark focus) that may introduce error in the evaluation of VE-based rehabilitation protocols. Based on these results, we provide guidelines for calibrating and benchmarking HMDs. In addition, we discuss potential extensions of the assessment to address higher-level usability issues. We intend to test the proposed framework within the Human Experience Modeler (HEM), a testbed created at the University of Central Florida to evaluate technologies that may enhance cognitive rehabilitation effectiveness. Preliminary results of a feasibility pilot study conducted with a memory-impaired participant showed that the HEM provides the control and repeatability needed to conduct such technology comparisons. Further, the HEM affords the opportunity to integrate new brain imaging technologies (i.e., functional Near Infrared Imaging) to evaluate brain plasticity associated with VE-based cognitive rehabilitation.
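    The link between a microdisplay's resolution and the best visual acuity an HMD can support, the idea behind the resolution visual acuity module above, can be approximated as follows; the panel and field-of-view numbers are hypothetical:

```python
# Rough link between an HMD's pixel pitch and the best acuity it can
# support. Panel resolution and field of view are assumed values.
h_pixels, h_fov_deg = 1280, 40.0
arcmin_per_pixel = (h_fov_deg * 60) / h_pixels

# Resolving a letter's critical detail takes about 2 pixels per cycle
# (Nyquist), and 20/20 vision corresponds to 1 arcmin of detail.
resolvable_arcmin = 2 * arcmin_per_pixel
snellen_denominator = 20 * resolvable_arcmin
print(f"{arcmin_per_pixel:.2f} arcmin/pixel -> roughly 20/{snellen_denominator:.0f}")
# No user of this HMD can test better than this ceiling, so poorer patient
# scores must be interpreted relative to the device, not to 20/20.
```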

    A Wearable Head-mounted Projection Display

    Conventional head-mounted projection displays (HMPDs) consist of a pair of miniature projection lenses, beamsplitters, and miniature displays mounted on the helmet, as well as a retro-reflective screen placed strategically in the environment. We have extended the HMPD technology by integrating the screen into a fully mobile embodiment. Initial efforts to demonstrate this technology are described, followed by an investigation of the diffraction effects and the image degradation caused by integrating the retro-reflective screen within the HMPD. The key contribution of this research is the conception and development of a mobile HMPD (M-HMPD). We include an extensive analysis of the macro- and microscopic properties of the retro-reflective screen. Furthermore, the overall performance of the optics is evaluated both in object space, for the optical designer, and in visual space, for the prospective users of this technology. This research effort also focuses on conceiving an M-HMPD aimed at dual indoor/outdoor applications. The M-HMPD shares known advantages such as ultra-lightweight optics (i.e., 8 g per eye), imperceptible distortion (i.e., ≤ 2.5%), and a lightweight headset (i.e., ≤ 2.5 lbs) compared with eyepiece-type head-mounted displays (HMDs) of equal eye relief and field of view. In addition, the M-HMPD presents an advantage over the preexisting HMPD in that it does not require a retro-reflective screen placed strategically in the environment. The newly developed M-HMPD can project clear images at three different locations within near- or far-field observation depths without loss of image quality. This particular M-HMPD embodiment targets mixed reality, augmented reality, and wearable display applications.
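    Why the screen's microscopic structure matters for image quality can be sketched with a first-order diffraction estimate. The element pitch below is an assumed value for illustration, not the dissertation's measured screen:

```python
import math

# Each corner-cube element of a retro-reflective screen acts as a small
# aperture, so diffraction spreads the returned beam. A first-order
# estimate uses the Airy half-angle theta = 1.22 * lambda / d.
wavelength_m = 550e-9   # green light
element_d_m = 90e-6     # assumed retro-reflector element pitch

theta_rad = 1.22 * wavelength_m / element_d_m
theta_arcmin = math.degrees(theta_rad) * 60
print(f"diffraction half-angle: {theta_arcmin:.1f} arcmin")
# If this spread approaches the display's arcmin-per-pixel figure, the
# screen, not the projection optics, limits perceived sharpness.
```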

    Apple Vision Pro for Healthcare: "The Ultimate Display"? -- Entering the Wonderland of Precision Medicine

    At the Worldwide Developers Conference (WWDC) in June 2023, Apple introduced the Vision Pro. The Vision Pro is a Mixed Reality (MR) headset; more specifically, it is a Virtual Reality (VR) device with an additional Video See-Through (VST) capability. The VST capability also turns the Vision Pro into an Augmented Reality (AR) device. The AR feature is enabled by streaming the real world via cameras to the (VR) screens in front of the user's eyes. This is of course not unique and is similar to other devices, like the Varjo XR-3. Nevertheless, the Vision Pro has some interesting features, like an inside-out screen that can show the headset wearer's eyes to "outsiders", or a button on the top, called the "Digital Crown", that allows you to seamlessly blend digital content with your physical space by turning it. In addition, it is untethered, except for the cable to the battery, which makes the headset more agile compared to the Varjo XR-3. This could actually come closer to the "Ultimate Display" that Ivan Sutherland had already sketched in 1965. Since it is not yet available to the public, much like the Ultimate Display, we take a look into the crystal ball in this perspective to see whether it can overcome some clinical challenges that AR in particular still faces in the medical domain, but also go beyond and discuss whether the Vision Pro could support clinicians in essential tasks so that they can spend more time with their patients.

    Comment: This is a preprint under CC BY. This work was supported by NIH/NIAID R01AI172875, NIH/NCATS UL1 TR001427, the REACT-EU project KITE, and enFaced 2.0 (FWF KLI 1044). B. Puladi was funded by the Medical Faculty of the RWTH Aachen University as part of the Clinician Scientist Program. C. Gsaxner was funded by the Advanced Research Opportunities Program from the RWTH Aachen University.