
    Wide-angle, off-axis, see-through head-mounted display

    A 60-deg-field-of-view optical see-through head-mounted display (HMD) using off-axis optics has been designed for 3-D medical imaging visualization. Two basic on-axis optical design concepts for see-through HMDs are reviewed first, to motivate the design of an off-axis optical form. An off-axis design is then presented. Because HMDs are typically designed from the pupil of the eye to the miniature display, it is common to assess final performance according to the display characteristics. Such analysis, however, does not provide information that is easily translated into task-based performance metrics. Therefore, we present an analysis of the performance of the design from a usability viewpoint. For this analysis, the optical system is ray-traced from the display to the eye. Three key measures of performance (accommodation, astigmatism, and chromatic blur) are presented over the field of regard using customized graphical output.
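    Since the usability metrics here are dioptric quantities, a short worked example helps make "accommodation and astigmatism over the field of regard" concrete. The following is a minimal sketch assuming that per-field tangential and sagittal focus distances are available from ray-trace output; all numeric values are invented placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical ray-trace output: for each field angle, the distance (meters)
# from the eye to the tangential and sagittal foci of the virtual image.
# Real values would come from an optical design tool.
field_deg = np.array([0.0, 10.0, 20.0, 30.0])
d_tan_m   = np.array([2.00, 1.90, 1.70, 1.40])
d_sag_m   = np.array([2.00, 1.95, 1.85, 1.70])

# Vergence (diopters) is the reciprocal of distance in meters.
v_tan = 1.0 / d_tan_m
v_sag = 1.0 / d_sag_m

# Accommodation demand: where the eye must focus (midpoint of the two foci).
accommodation_D = 0.5 * (v_tan + v_sag)

# Astigmatism: dioptric separation of the tangential and sagittal foci.
astigmatism_D = np.abs(v_tan - v_sag)

for theta, acc, ast in zip(field_deg, accommodation_D, astigmatism_D):
    print(f"{theta:5.1f} deg  accommodation {acc:.2f} D  astigmatism {ast:.2f} D")
```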

    Calibration Methods of Characterization Lens for Head Mounted Displays

    This thesis concerns the calibration, characterization and utilization of the HMD Eye, OptoFidelity’s eye-mimicking optical camera system designed for the HMD IQ, a complete test station for the near-eye displays implemented in virtual and augmented reality systems. Its optical architecture provides a 120-degree field of view with high imaging performance and linear radial distortion, ideal for analysis of all possible object fields. The HMD Eye has an external, mechanical entrance pupil of the same size as the human entrance pupil. Spatial frequency response (the modulation transfer function) has been used to develop sensor focus calibration methods and automation system plans. Geometrical distortion and its relation to the angular mapping function and imaging quality of the system are also considered. The nature of the user interface for human eyes, called the eyebox, and the optical properties of head-mounted displays are reviewed. Head-mounted displays usually consist of two near-eye displays, among other components such as position-tracking units. The HMD Eye enables looking inside the device from the eyebox and collecting optical signals (i.e. the virtual image) from the complete field of view of the device under test with a single image. The HMD Eye inspected in this thesis is one of the ’zero’ batch, i.e. a test unit. The outcome of the calibration was that this HMD Eye unit is focused to 1.6 m with an approximate error margin of ±10 cm. Contrast drops to 50% at an angular frequency of approximately 11 cycles/degree, which is about 40% of the simulated value, prompting improvements in the mechanical design. Geometrical distortion results show that radial distortion is very linear (maximum error of 1%) and that tangential distortion has a negligible effect (at most 0.04 degrees of azimuth deviation) within the measurement region.
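    As an illustration of the contrast-drop figure quoted above, here is a minimal sketch of reading an MTF50 value off a measured contrast-versus-frequency curve by linear interpolation. The curve values are invented placeholders, not measurements from the thesis.

```python
import numpy as np

# Hypothetical measured contrast (modulation) versus angular frequency in
# cycles/degree: placeholder numbers, not data from the thesis.
freq_cpd = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
contrast = np.array([1.00, 0.95, 0.86, 0.74, 0.62, 0.53, 0.45, 0.38])

def mtf50(freq, mtf):
    """Frequency at which the MTF first falls to 0.5, by linear interpolation."""
    below = np.where(mtf < 0.5)[0]
    if below.size == 0:
        return None          # contrast never drops below 50% in range
    i = below[0]
    if i == 0:
        return freq[0]
    # Interpolate between the last point above 0.5 and the first below it.
    f0, f1 = freq[i - 1], freq[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)

print(f"MTF50 ≈ {mtf50(freq_cpd, contrast):.1f} cycles/degree")
```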

    Low-cost eye-tracking for human computer interaction

    Knowing the user's point of gaze has long held the promise of being a useful methodology for human computer interaction. However, a number of barriers have stood in the way of the integration of eye tracking into everyday applications, including the intrusiveness, robustness, availability, and price of eye-tracking systems. The goal of this thesis is to lower these barriers so that eye tracking can be used to enhance current human computer interfaces. An eye-tracking system was developed. The system consists of an open-hardware design for a digital eye tracker that can be built from low-cost off-the-shelf components, and a set of open-source software tools for digital image capture, manipulation, and analysis in eye-tracking applications. Both infrared and visible-spectrum eye-tracking algorithms were developed and used to calculate the user's point of gaze in two types of eye-tracking systems, head-mounted and remote eye trackers. The accuracy of eye tracking was found to be approximately one degree of visual angle. It is expected that the availability of this system will facilitate the development of eye-tracking applications and the eventual integration of eye tracking into the next generation of everyday human computer interfaces.
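    A common way such systems map eye-camera measurements to a point of gaze is a low-order polynomial regression fitted on calibration targets. The sketch below illustrates that generic approach under assumed inputs; it is not necessarily the algorithm implemented in this thesis.

```python
import numpy as np

# Generic video eye-tracker calibration: fit a second-order polynomial
# mapping from eye-feature coordinates (e.g. the pupil-glint vector) to
# screen coordinates, using a small grid of calibration targets.

def design_matrix(xy):
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_gaze_mapping(eye_xy, screen_xy):
    """Least-squares fit of screen = poly(eye) @ coeffs, for both axes."""
    A = design_matrix(eye_xy)
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return coeffs                      # shape (6, 2)

def predict_gaze(coeffs, eye_xy):
    return design_matrix(eye_xy) @ coeffs

# Toy example: 9 calibration points with a made-up eye-to-screen relation.
rng = np.random.default_rng(0)
eye = rng.uniform(-1, 1, size=(9, 2))
screen = 400 + 300 * eye + 20 * eye**2        # synthetic ground truth
coeffs = fit_gaze_mapping(eye, screen)
print(predict_gaze(coeffs, eye[:2]))          # close to screen[:2]
```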

    A device for the objective assessment of ADHD using eye movements

    Attention deficit hyperactivity disorder (ADHD) is a commonly diagnosed psychiatric disorder characterized by lack of focus, self-control, and hyperactivity. ADHD is difficult to diagnose without extensive observation by an expert, and even then is often misdiagnosed. Current methods of pediatric diagnosis rely on subjective measures of activity and behavior relative to other children [3]. Proper diagnosis is critical in preventing unnecessary prescription of the powerful, habit-forming drugs used to manage ADHD, such as Adderall and Ritalin [1][5]. Research has shown that patients with ADHD show abnormalities in reading tests and antisaccade tests, as these tests gauge the ability to focus and to suppress impulsive behavior [2][6][4]. This project proposes to create a dedicated device that will use eye-movement analysis to accurately and objectively screen children for ADHD. The device will be inexpensive and easy to use for school nurses, optometrists, and primary care physicians. First, research was conducted to decide the type of eye tracker to build, the tests that would be run, the layout of the device, and the type of headgear to use. After the preliminary research was completed, it was decided that a limbus eye tracker would best fit the needed functionality of the device: limbus tracking is both more accurate in horizontal tracking and less costly than other systems. A basic circuit diagram has been created and circuit parts have been ordered. The IR LED and phototransistors have been tested and appear to be working properly, but further testing will be conducted and mounting for the components will be constructed. One problem encountered was the selection of a computational module that incorporates our needs for digital I/O, A/D conversion, significant processing power and speed, a DOS-based operating system, and VGA output. No single-board computer found so far incorporates all these features in one module without being too costly. The team is awaiting a decision concerning Sternheimer funding before exploring the use of more cost-effective strategies. Another point of discussion among the team was how to affix the device to a child's head, or to keep a child's head still enough, for the eye tracker to be accurate. The result was a preliminary design utilizing safety glasses. The next steps in this project include deciding upon a single-board computer and ordering it, along with more circuit parts and safety glasses. While these parts are on order, the circuit design can be refined and an approach for the programming portion can be developed.
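    As background on the choice above: limbus tracking infers horizontal gaze from the difference in infrared reflectance picked up by phototransistors on either side of the iris-sclera boundary, since the sclera reflects more IR than the iris. The sketch below is a minimal, assumed signal chain for illustration; the actual circuit, ADC values, and calibration in this project may differ.

```python
def normalized_difference(left_adc: float, right_adc: float) -> float:
    """(L - R) / (L + R): insensitive to overall illumination level."""
    return (left_adc - right_adc) / (left_adc + right_adc)

def calibrate(sample_a, angle_a, sample_b, angle_b):
    """Linear gain/offset from two fixations at known horizontal angles."""
    gain = (angle_b - angle_a) / (sample_b - sample_a)
    offset = angle_a - gain * sample_a
    return gain, offset

# Example: subject fixates targets at -10 and +10 degrees during calibration.
s_a = normalized_difference(512, 700)   # looking left  (placeholder ADC counts)
s_b = normalized_difference(700, 512)   # looking right (placeholder ADC counts)
gain, offset = calibrate(s_a, -10.0, s_b, +10.0)

reading = normalized_difference(650, 560)
print(f"estimated gaze: {gain * reading + offset:+.1f} degrees")
```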

    Development and Calibration of an Eye-Tracking Fixation Identification Algorithm for Immersive Virtual Reality

    [EN] Fixation identification is an essential task in the extraction of relevant information from gaze patterns, and various algorithms are used in the identification process. However, the thresholds used in these algorithms greatly affect their sensitivity. Moreover, the application of these algorithms to eye-tracking technologies integrated into head-mounted displays, where the subject's head position is unrestricted, is still an open issue. Therefore, the adaptation of eye-tracking algorithms and their thresholds to immersive virtual reality frameworks needs to be validated. This study presents the development of a dispersion-threshold identification algorithm applied to data obtained from an eye-tracking system integrated into a head-mounted display. Rule-based criteria are proposed to calibrate the thresholds of the algorithm through different features, such as the number of fixations and the percentage of points which belong to a fixation. The results show that dispersion thresholds between 1 and 1.6 degrees and time windows between 0.25 and 0.4 s are the acceptable parameter ranges, with 1 degree and 0.25 s being the optimum. The work presents a calibrated algorithm to be applied in future experiments with eye tracking integrated into head-mounted displays, and guidelines for calibrating fixation identification algorithms.

    We thank Pepe Roda Belles for the development of the virtual reality environment and the integration of the HMD with the Unity platform. We also thank Masoud Moghaddasi for useful discussions and recommendations.

    Llanes-Jurado, J.; Marín-Morales, J.; Guixeres Provinciale, J.; Alcañiz Raya, M. L. (2020). Development and Calibration of an Eye-Tracking Fixation Identification Algorithm for Immersive Virtual Reality. Sensors, 20(17), 1-15. https://doi.org/10.3390/s20174956
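    The dispersion-threshold identification algorithm calibrated in this paper is commonly known as I-DT (Salvucci & Goldberg, 2000). Below is a generic I-DT sketch using the optimal thresholds reported above (1 degree, 0.25 s) as defaults; it is an illustration of the technique, not the authors' implementation, and the gaze trace is synthetic.

```python
import numpy as np

def idt_fixations(t, x, y, disp_thresh_deg=1.0, min_dur_s=0.25):
    """Return (start, end) sample-index pairs for detected fixations.
    Dispersion is (max x - min x) + (max y - min y) over the window."""
    fixations = []
    i, n = 0, len(t)
    while i < n:
        # Grow a window until it spans the minimum duration.
        j = i
        while j < n and t[j] - t[i] < min_dur_s:
            j += 1
        if j >= n:
            break
        w = slice(i, j + 1)
        dispersion = (x[w].max() - x[w].min()) + (y[w].max() - y[w].min())
        if dispersion <= disp_thresh_deg:
            # Extend the window while dispersion stays under threshold.
            while j + 1 < n:
                w = slice(i, j + 2)
                if (x[w].max() - x[w].min()) + (y[w].max() - y[w].min()) > disp_thresh_deg:
                    break
                j += 1
            fixations.append((i, j))
            i = j + 1
        else:
            i += 1          # drop the first sample and try again
    return fixations

# Toy data: 120 Hz samples, a 0.5 s fixation, a saccade, then a second fixation.
t = np.arange(0, 1.0, 1 / 120)
x = np.where(t < 0.5, 0.0, 8.0) + np.random.default_rng(1).normal(0, 0.1, t.size)
y = np.zeros_like(x)
print(idt_fixations(t, x, y))
```

    With the reported optimum (1 degree, 0.25 s), the two synthetic fixations are recovered and the saccade samples between them are discarded.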

    Vitreo-retinal eye surgery robot : sustainable precision

    Vitreo-retinal eye surgery encompasses the surgical procedures performed on the vitreous humor and the retina. A procedure typically consists of the removal of the vitreous humor, the peeling of a membrane, and/or the repair of a retinal detachment. Vitreo-retinal surgery is performed minimally invasively: small, needle-shaped instruments are inserted into the eye and manipulated by hand in four degrees of freedom about the insertion point. Two rotations move the instrument tip laterally, in addition to a translation in the axial instrument direction and a rotation about the longitudinal axis. The actuation of the instrument tip, e.g. a gripping motion, can be considered a fifth degree of freedom.

    While performing vitreo-retinal surgery manually, the surgeon faces various challenges. Delicate tissue, typically micrometers thick, is operated on, requiring steady hand movements and highly accurate instrument manipulation. Lateral instrument movements are inverted by the pivoting insertion point and scaled depending on the instrument insertion depth. A maximum of two instruments can be used simultaneously. There is nearly no perception of surgical forces, since most forces are below the human detection limit; the surgeon therefore relies only on visual feedback, obtained via a microscope or endoscope. Both vision systems force the surgeon to work in a static and non-ergonomic body posture. Although the surgeon's proficiency improves throughout his career, hand tremor becomes a problem at higher age.

    Robotically assisted surgery with a master-slave system can assist the surgeon with these challenges. The slave system performs the actual surgery by means of instrument manipulators which handle the instruments. The surgeon remains in control of the instruments by operating haptic interfaces via a master; master and slave are connected through electronic hardware and control software. Advantages such as tremor filtering, up-scaled force feedback, down-scaled motions, and stabilized instrument positioning will enhance dexterity in surgical tasks. Furthermore, providing the surgeon an ergonomic body posture will prolong the surgeon's career.

    This thesis focuses on the design and realization of a high-precision slave system for eye surgery. The master-slave system uses a table-mounted design, where the system is compact, lightweight, easy to set up, and equipped to perform a complete intervention. The slave system consists of two main parts: the instrument manipulators and their passive support system. Requirements are derived from manual eye surgery, conversations with medical specialists, and analysis of the human anatomy and of vitreo-retinal interventions. The passive support system provides a stiff connection between the instrument manipulator, the patient, and the surgical table. Given human anatomical diversity, presurgical adjustments can be made to position the instrument manipulators over each eye. Most of the support system is integrated within the patient's headrest. On either the left or right side, two exchangeable manipulator-support arms can be installed onto the support system, depending on the eye being operated upon. The compact, lightweight, and easy-to-install design allows for a short setup time and quick removal in case of a complication. The slave system's surgical reach is optimized to emulate manually performed surgery. For bimanual instrument operation, two instrument manipulators are used; additional instrument manipulators can be used for non-active tools, e.g. an illumination probe or an endoscope.

    An instrument manipulator allows the same degrees of freedom and a similar reach as manually performed surgery. Instrument forces are measured to supply force feedback to the surgeon via the haptic interfaces. The instrument manipulator is designed for high stiffness, is play-free, and has low friction, allowing tissue manipulation with high accuracy. Each instrument manipulator is equipped with an on-board instrument change system, by which instruments can be changed in a fast and secure way. A compact design near the instrument allows easy access to the surgical area, leaving room for the microscope and peripheral equipment.

    The acceptance of a surgical robot for eye surgery relies mostly on equipment safety and reliability. The design of the slave system features various safety measures, e.g. a quick-release mechanism for the instrument manipulator and additional locks on the pre-surgical adjustment fixation clamp. Additional safety measures are proposed, such as a hard cover over the instrument manipulator and redundant control loops in the controlling FPGA. A method to fixate the patient's head to the headrest by use of a custom-shaped polymer mask is also proposed. Two instrument manipulators and their passive support system have been realized so far, and the first experimental results confirm the designed low actuation torque and high-precision performance.
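    Two of the challenges described above lend themselves to a compact numeric illustration: the inversion and depth-dependent scaling of lateral tip motion about the pivoting insertion point, and the tremor filtering and motion down-scaling a master-slave controller can provide. The sketch below is a simplified model with assumed parameter values; it is not taken from the thesis design.

```python
import math

def tip_displacement(rotation_rad: float, depth_mm: float) -> float:
    """Lateral tip motion for a rotation about the insertion point.
    The sign flip reflects the inversion by the pivot; the magnitude
    grows with insertion depth."""
    return -depth_mm * math.tan(rotation_rad)

class TremorFilteredScaler:
    """First-order low-pass filter plus motion down-scaling (master -> slave).
    Scale and cutoff are illustrative assumptions."""
    def __init__(self, scale=0.2, cutoff_hz=2.0, sample_hz=1000.0):
        self.scale = scale
        a = 2 * math.pi * cutoff_hz / sample_hz
        self.alpha = a / (1 + a)       # discrete first-order smoothing factor
        self.state = 0.0

    def step(self, master_mm: float) -> float:
        self.state += self.alpha * (master_mm - self.state)
        return self.scale * self.state

# 1 mm of hand motion commands 0.2 mm at the tip, with >2 Hz tremor attenuated.
ctl = TremorFilteredScaler()
for hand in [1.0, 1.0, 1.02, 0.98, 1.0]:        # tremor riding on a 1 mm move
    print(f"slave command: {ctl.step(hand):.4f} mm")

# A 1-degree rotation with 20 mm insertion depth moves the tip ~0.35 mm,
# in the direction opposite to the hand motion.
print(f"tip moves {tip_displacement(math.radians(1.0), 20.0):+.2f} mm")
```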

    Animated virtual agents to cue user attention: comparison of static and dynamic deictic cues on gaze and touch responses

    This paper describes an experiment developed to study the performance of animated virtual-agent cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues in human-computer interfaces, measuring the efficiency of agent cues by analyzing participant responses by gaze and by touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface: when user attention was directed using a fully animated agent cue, users responded 35% faster than with a stepped two-image agent cue, and 42% faster than with a static one-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again yielded the shortest response times, with slightly smaller differences between conditions. Responses to the fully animated agent were 17% and 20% faster than to the two-image and one-image cues, respectively. These results inform techniques aimed at engaging users' attention in complex scenes, such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.
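    For clarity on the percentage comparisons above, the snippet below shows the usual convention of expressing "X% faster" as the relative reduction in mean response time. The millisecond values are invented placeholders chosen so the output matches the reported 35% and 42% figures; the paper may define its percentages differently.

```python
# Hypothetical mean response times (ms); placeholder values only.
rt = {"animated": 650.0, "two_image": 1000.0, "static": 1120.0}

def percent_faster(fast_ms: float, slow_ms: float) -> float:
    """Relative reduction in mean response time, in percent."""
    return 100.0 * (slow_ms - fast_ms) / slow_ms

print(f"vs two-image: {percent_faster(rt['animated'], rt['two_image']):.0f}% faster")
print(f"vs static:    {percent_faster(rt['animated'], rt['static']):.0f}% faster")
```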