
    Gaze Estimation Technique for Directing Assistive Robotics

    Assistive robotics may extend capabilities for individuals with reduced mobility or dexterity. However, effective use of robotic agents typically requires the user to issue control commands in the form of speech, gesture, or text, so for unskilled or impaired users there is a pressing need for an intuitive paradigm of Human-Robot Interaction (HRI). The most productive interactions are those in which the assistive agent can ascertain the intention of the user. In addition, to perform a task, the agent must know the user's area of attention in three-dimensional space. Eye gaze tracking can be used to determine a specific Volume of Interest (VOI); however, gaze tracking has heretofore been under-utilized as a means of interaction and control in 3D space. This research aims to determine a practical volume of interest in which an individual's eyes are focused by combining past methods to achieve greater effectiveness. The proposed method uses eye vergence as a depth discriminant to produce a tool for improved robot path planning. The research investigates the accuracy of the Vector Intersection (VI) model when applied to a usably large workspace volume. A neural network is also used in tandem with the VI model to create a combined model. The output of the combined model is a VOI that can serve as an aid in a number of applications, including robot path planning, entertainment, and ubiquitous computing.
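    The abstract above describes combining binocular eye vergence with a Vector Intersection (VI) model to localise a volume of interest in 3D. As a rough illustration of the geometric idea only (not the author's implementation), the sketch below intersects the left- and right-eye gaze rays by taking the midpoint of the shortest segment between them and pads it into a spherical VOI; the eye positions, target, and padding value are placeholder assumptions.

    # Minimal sketch of vergence-based 3D point-of-regard estimation via the
    # intersection of two gaze rays. Measured gaze rays rarely intersect
    # exactly, so the midpoint of the closest segment between them is used as
    # the estimate, and the miss distance feeds the VOI radius. All numeric
    # values are illustrative placeholders.
    import numpy as np

    def estimate_voi(left_origin, left_dir, right_origin, right_dir, padding=0.02):
        """Return (center, radius) of a spherical VOI from two gaze rays (metres)."""
        o1, d1 = np.asarray(left_origin, float), np.asarray(left_dir, float)
        o2, d2 = np.asarray(right_origin, float), np.asarray(right_dir, float)
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)

        # Closest points on the two (nearly intersecting) gaze rays.
        w0 = o1 - o2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b
        if abs(denom) < 1e-9:          # rays (near-)parallel: no usable vergence
            return None
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
        p1, p2 = o1 + t1 * d1, o2 + t2 * d2

        center = (p1 + p2) / 2.0       # estimated point of regard
        radius = np.linalg.norm(p1 - p2) / 2.0 + padding
        return center, radius

    # Example: eyes ~6.3 cm apart, both fixating a target ~0.5 m ahead.
    left, right = np.array([-0.0315, 0.0, 0.0]), np.array([0.0315, 0.0, 0.0])
    target = np.array([0.0, 0.0, 0.5])
    print(estimate_voi(left, target - left, right, target - right))

    Because the miss distance between the two rays grows with gaze noise and viewing depth, using it as the VOI radius gives a simple proxy for depth uncertainty in this sketch.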

    A technique for estimating three-dimensional volume-of-interest using eye gaze

    Assistive robotics promises to be of use to those who have limited mobility or dexterity; in particular, those with limited movement of their limbs can benefit greatly from such assistive devices. However, to use such devices, one must give commands to an assistive agent, often in the form of speech, gesture, or text. A more convenient method of Human-Robot Interaction (HRI) is needed, especially for impaired users with severe mobility constraints. For a socially responsive assistive device to be an effective aid, it generally must understand the intention of the user. Also, to perform a task based on gesture, the assistive device requires the user's area of attention in three-dimensional (3D) space. Gaze tracking can be used as a method to determine a specific volume of interest (VOI); however, gaze tracking has heretofore been under-utilized as a means of interaction and control in 3D space. The main objective of this research is to determine a practical VOI in which an individual's eyes are focused by combining existing methods. Achieving this objective sets a foundation for further use of vergence data as a useful discriminant in a directive technique for assistive robotics. The research investigates the accuracy of the Vector Intersection (VI) model when applied to a usable workspace. A neural network is also applied to gaze data for use in tandem with the VI model to create a Combined Model. The output of the Combined Model is a VOI that can aid in a number of applications, including robot path planning, entertainment, and ubiquitous computing. An alternative Search Region method is investigated as well.

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Among the most detrimental factors are suboptimal communication among staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that enhances the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented; surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced; experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
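    The summary above mentions instrument delivery "following gaze selection by the surgeon" but does not spell out the selection mechanism. The sketch below shows dwell-time gaze selection, a common generic technique in gaze-contingent interfaces, purely for illustration: an instrument is treated as selected once gaze has rested inside its screen region for a minimum dwell time. The region names, geometry, and 800 ms threshold are hypothetical.

    # Illustrative dwell-time gaze selection (not the thesis's actual logic).
    from dataclasses import dataclass

    @dataclass
    class Region:
        name: str
        x: float
        y: float
        w: float
        h: float

        def contains(self, gx: float, gy: float) -> bool:
            return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h

    class DwellSelector:
        def __init__(self, regions, dwell_ms=800):
            self.regions = regions
            self.dwell_ms = dwell_ms
            self._current = None      # region currently being fixated
            self._since = None        # timestamp when that fixation began

        def update(self, t_ms: float, gx: float, gy: float):
            """Feed one gaze sample; return a region name when dwell completes."""
            hit = next((r for r in self.regions if r.contains(gx, gy)), None)
            if hit is not self._current:            # gaze moved to a new region (or none)
                self._current, self._since = hit, t_ms
                return None
            if hit is not None and t_ms - self._since >= self.dwell_ms:
                self._since = t_ms + 1e9            # avoid repeated triggers
                return hit.name
            return None

    # Example: two instrument tiles; gaze parks on "scalpel" for ~1 s.
    selector = DwellSelector([Region("scalpel", 0, 0, 100, 100),
                              Region("forceps", 120, 0, 100, 100)])
    for t in range(0, 1001, 50):
        choice = selector.update(t, 40, 40)
        if choice:
            print(f"selected {choice} at {t} ms")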

    Gaze computer interaction on stereo display
