    Eye-Tracking Assistive Technologies for Individuals with Amyotrophic Lateral Sclerosis

    Amyotrophic lateral sclerosis (ALS) is a progressive nervous system disorder that affects nerve cells in the brain and spinal cord, resulting in the loss of muscle control. For individuals with ALS whose mobility is limited to movement of the eyes, eye-tracking-based applications can be used to accomplish basic tasks through certain digital interfaces. This paper reviews existing eye-tracking software and hardware and sketches their application as assistive technology for coping with ALS. Eye tracking also provides a suitable alternative for controlling game elements. Furthermore, artificial intelligence has been used to improve eye-tracking technology, with significant improvements in calibration and accuracy. Gaps in the literature are highlighted to offer a direction for future research.
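    As an illustration of the kind of interface such applications rely on, the sketch below implements dwell-time selection, a common eye-tracking interaction pattern in which a target is triggered once gaze rests on it long enough. This is a minimal sketch, not taken from any of the reviewed systems; gaze_stream, targets, and the threshold values are hypothetical.

```python
import time

DWELL_SECONDS = 1.0   # gaze must rest this long to trigger a selection
RADIUS_PX = 40        # tolerance for fixation jitter around a target centre

def dwell_select(gaze_stream, targets):
    """Yield a target name whenever gaze dwells on it for DWELL_SECONDS.

    gaze_stream yields (x, y) screen coordinates from an eye tracker;
    targets maps a name to its (x, y) centre. Both are hypothetical inputs.
    """
    current, since = None, 0.0
    for gx, gy in gaze_stream:
        hit = next((name for name, (tx, ty) in targets.items()
                    if (gx - tx) ** 2 + (gy - ty) ** 2 <= RADIUS_PX ** 2), None)
        if hit != current:                      # gaze moved to a new target
            current, since = hit, time.monotonic()
        elif hit is not None and time.monotonic() - since >= DWELL_SECONDS:
            yield hit                           # selection event fires
            current = None                      # re-arm so it fires only once
```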

    Adaptive Real-Time Image Processing for Human-Computer Interaction

    Regression Based Gaze Estimation with Natural Head Movement

    This thesis presents a non-contact, video-based gaze tracking system using novel eye detection and gaze estimation techniques. The objective of the work is to develop a real-time gaze tracking system capable of estimating gaze accurately under natural head movement. The system contains both hardware and software components. The hardware is responsible for illuminating the scene and capturing facial images for further computer analysis, while the software implements the core gaze tracking technique, which consists of two main modules: an eye detection subsystem and a gaze estimation subsystem. The proposed gaze tracking technique uses image plane features, namely the inter-pupil vector (IPV) and the image-center-to-inter-pupil-center vector (IC-IPCV), to improve gaze estimation precision under natural head movement. A support vector regression (SVR) based estimation method using the image plane features along with the traditional pupil-center-cornea-reflection (PC-CR) vector is also proposed to estimate the gaze. The designed gaze tracking system works in real time and achieves an overall estimation accuracy of 0.84° with a still head and 2.26° under natural head movement. By using the SVR method for off-line processing, the estimation accuracy with head movement can be improved to 1.12° while tolerating 10 cm × 8 cm × 5 cm of head movement.
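    The following sketch shows how an SVR-based gaze estimator along these lines could be assembled, assuming scikit-learn and pre-extracted calibration data. The feature layout (PC-CR, IPV, and IC-IPCV stacked into a 6-D vector), the file names, and the hyperparameters are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical calibration data: each row stacks the image-plane features
# named in the abstract -- PC-CR vector (2), inter-pupil vector IPV (2),
# image-centre-to-inter-pupil-centre vector IC-IPCV (2) -> 6-D features.
X_cal = np.load("calibration_features.npy")   # shape (n_samples, 6)
Y_cal = np.load("calibration_gaze.npy")       # shape (n_samples, 2), screen coords

# One RBF-kernel SVR per screen axis, with feature standardisation.
model = MultiOutputRegressor(
    make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
)
model.fit(X_cal, Y_cal)

def estimate_gaze(pc_cr, ipv, ic_ipcv):
    """Map one frame's feature vectors to an estimated screen gaze point."""
    features = np.hstack([pc_cr, ipv, ic_ipcv]).reshape(1, -1)
    return model.predict(features)[0]
```

    Regressing both screen coordinates through MultiOutputRegressor keeps calibration to a single fit call; two independently tuned per-axis SVRs would work equally well.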

    An Intelligent and Low-cost Eye-tracking System for Motorized Wheelchair Control

    Across 34 developed and 156 developing countries, about 132 million people with disabilities need a wheelchair, constituting 1.86% of the world population. Moreover, millions of people suffer from diseases related to motor disabilities that cause an inability to produce controlled movement in any of the limbs or even the head. This paper proposes a system to aid people with motor disabilities by restoring their ability to move effectively and effortlessly, without having to rely on others, using an eye-controlled electric wheelchair. The system's input is images of the user's eye, which are processed to estimate the gaze direction, and the wheelchair is moved accordingly. To accomplish this, four user-specific methods were developed, implemented, and tested, all based on a benchmark database created by the authors. The first three techniques are automatic, employ correlation, and are variants of template matching, while the last uses convolutional neural networks (CNNs). Metrics quantitatively evaluating each algorithm's accuracy and latency were computed, and an overall comparison is presented. The CNN exhibited the best performance (99.3% classification accuracy) and was therefore chosen as the gaze estimator that commands the wheelchair motion. The system was evaluated carefully on 8 subjects, achieving 99% accuracy under changing illumination conditions, both outdoors and indoors. This required modifying a motorized wheelchair to adapt it to the predictions output by the gaze estimation algorithm. The wheelchair controller can bypass any decision made by the gaze estimator and immediately halt motion, with the help of an array of proximity sensors, if the measured distance drops below a well-defined safety margin.
    Comment: Accepted for publication in Sensors; 19 figures, 3 tables.
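    The safety-override behaviour described above is easy to make concrete. The sketch below is a hypothetical control step, not the authors' code: the gaze-class labels, the command set, and the 50 cm margin are all assumptions (the paper says only that the margin is "well-defined").

```python
from enum import Enum, auto

class Command(Enum):
    FORWARD = auto()
    LEFT = auto()
    RIGHT = auto()
    STOP = auto()

# Hypothetical mapping from the CNN's gaze-direction classes to motion
# commands; the labels here are illustrative, not the paper's API.
GAZE_TO_COMMAND = {
    "up": Command.FORWARD,
    "left": Command.LEFT,
    "right": Command.RIGHT,
    "closed": Command.STOP,
}

SAFETY_MARGIN_CM = 50  # assumed threshold for the proximity-sensor override

def control_step(gaze_class, proximity_cm):
    """One control-loop iteration: proximity sensors veto the estimator."""
    if min(proximity_cm) < SAFETY_MARGIN_CM:
        return Command.STOP        # hard override, bypassing the estimator
    return GAZE_TO_COMMAND.get(gaze_class, Command.STOP)

print(control_step("up", [120, 95, 200]))  # Command.FORWARD
print(control_step("up", [30, 95, 200]))   # Command.STOP (obstacle too close)
```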

    Hand eye coordination in surgery

    The coordination of the hand in response to visual target selection has always been regarded as an essential quality in a range of professional activities. This quality has thus far been elusive to objective scientific measurement, and is usually engulfed in the overall performance of the individual. Parallels can be drawn to surgery, especially Minimally Invasive Surgery (MIS), where the physical constraints imposed by the arrangement of the instruments and visualisation methods require coordination skills that are unprecedented. With the current paradigm shift towards early specialisation in surgical training and shortened, focused training time, the selection process should identify trainees with the highest potential in specific skills. Although significant effort has been made in the objective assessment of surgical skills, it is currently only possible to measure surgeons' abilities at the time of assessment. It has been particularly difficult to quantify specific details of hand-eye coordination and to assess innate ability for future skills development. The purpose of this thesis is to examine hand-eye coordination in laboratory-based simulations, with a particular emphasis on details that are important to MIS.
    In order to understand the challenges of visuomotor coordination, movement trajectory errors have been used to provide an insight into the innate coordinate mapping of the brain. In MIS, novel spatial transformations, due to a combination of distorted endoscopic image projections and the "fulcrum" effect of the instruments, accentuate movement generation errors. Obvious differences in the quality of movement trajectories have been observed between novices and experts in MIS; however, these are difficult to measure quantitatively. A Hidden Markov Model (HMM) is used in this thesis to reveal the underlying characteristic movement details of a particular MIS manoeuvre and how such features are exaggerated by the introduction of rotation in the endoscopic camera. The proposed method demonstrates the feasibility of measuring movement trajectory quality by machine learning techniques without prior arbitrary classification of expertise. Experimental results highlight these changes in novice laparoscopic surgeons, even after a short period of training.
    How the intricate relationship between the hands and the eyes changes when learning a skilled visuomotor task has been studied previously. Reactive eye movement, when visual input is used primarily as a feedback mechanism for error correction, implies difficulties in hand-eye coordination. As the brain learns to adapt to the new coordinate map, eye movements become predictive of the action generated. The concept of measuring this spatiotemporal relationship is introduced as a measure of hand-eye coordination in MIS, by comparing the Target Distance Function (TDF) between the eye fixation and the instrument tip position on the laparoscopic screen. Further validation of this concept using high-fidelity experimental tasks is presented, where higher cognitive influence and multiple target selection increase the complexity of the data analysis. To this end, Granger-causality is presented as a measure of the predictability of the instrument movement from the eye fixation pattern. Partial Directed Coherence (PDC), a frequency-domain variation of Granger-causality, is used for the first time to measure hand-eye coordination. Experimental results are used to establish the strengths and potential pitfalls of the technique.
    To further enhance the accuracy of this measurement, a modified Jensen-Shannon Divergence (JSD) measure has been developed to improve the signal matching algorithm and trajectory segmentation. The proposed framework incorporates filtering of high-frequency noise, which represents non-purposeful hand and eye movements. The accuracy of the technique has been demonstrated by quantitative measurement of multiple laparoscopic tasks performed by expert and novice surgeons. Experimental results supporting visual search behavioural theory are presented, as this underpins the target selection process immediately prior to visuomotor action generation. The effects of specialisation and experience on visual search patterns are also examined. Finally, pilot results from functional brain imaging are presented, in which activation of the Posterior Parietal Cortex (PPC) is measured using optical spectroscopy techniques. The PPC has been shown to be involved in computing the coordinate transformations between the visual and motor systems, which opens up possibilities for exciting future studies in hand-eye coordination.
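    As a concrete reference point for the divergence measure mentioned above, the snippet below computes the textbook Jensen-Shannon Divergence between histograms of two signals; the thesis uses a modified JSD inside a larger matching framework, so this shows only the underlying quantity. The synthetic signals are stand-ins for the eye-fixation and instrument-tip Target Distance Functions.

```python
import numpy as np

def jensen_shannon_divergence(p, q, eps=1e-12):
    """JSD (base 2, so bounded in [0, 1]) between two histograms."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)

    def _kl(a, b):
        return np.sum(a * np.log2((a + eps) / (b + eps)))

    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)

# Synthetic stand-ins for the TDF of the eye fixation and of the
# instrument tip over one task, normalised to [0, 1].
rng = np.random.default_rng(0)
eye_tdf = rng.random(500)
tip_tdf = rng.random(500)

bins = np.linspace(0.0, 1.0, 33)
eye_hist, _ = np.histogram(eye_tdf, bins=bins)
tip_hist, _ = np.histogram(tip_tdf, bins=bins)
print(jensen_shannon_divergence(eye_hist, tip_hist))  # 0 = identical
```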

    Intelligent Agent Architectures: Reactive Planning Testbed

    An Integrated Agent Architecture (IAA) is a framework or paradigm for constructing intelligent agents. Intelligent agents are collections of sensors, computers, and effectors that interact with their environments in real time in goal-directed ways. Because of the complexity involved in designing intelligent agents, it has proven useful to approach their construction with an organizing principle, theory, or paradigm that gives shape to the agent's components and structures their relationships. Given the wide variety of approaches being taken in the field, the question naturally arises: is there a way to compare and evaluate these approaches? The purpose of the present work is to develop common benchmark tasks and evaluation metrics to which intelligent agents, including complex robotic agents, built using various architectural approaches can be subjected.
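    A minimal sketch of what a shared testbed interface might look like is given below; the Agent protocol and the env object with reset()/step() are illustrative assumptions, not the architectures or metrics defined in this work.

```python
from typing import Protocol

class Agent(Protocol):
    """Minimal interface an architecture would expose to the testbed."""
    def perceive(self, observation: object) -> None: ...
    def act(self) -> object: ...

def run_benchmark(agent: Agent, env, max_steps: int = 1000) -> dict:
    """Run one benchmark task; env.reset()/env.step() are assumed to exist,
    with step() returning (observation, goal_achieved)."""
    obs, achieved, steps = env.reset(), False, 0
    while steps < max_steps and not achieved:
        agent.perceive(obs)
        obs, achieved = env.step(agent.act())
        steps += 1
    return {"goal_achieved": achieved, "steps": steps}
```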