
    Gaze-based teleprosthetic enables intuitive continuous control of complex robot arm use: Writing & drawing

    Eye tracking is a powerful means of providing assistive technology for people with movement disorders, paralysis and amputations. We present a highly intuitive eye tracking-controlled robot arm operating in 3-dimensional space, driven by the user's gaze target point, that enables tele-writing and drawing. Usability and intuitiveness were assessed in a “tele” writing experiment with 8 subjects who learned to operate the system within minutes of first-time use. These subjects were naive to the system and the task and had to write three letters on a whiteboard with a whiteboard pen attached to the robot arm's endpoint. They were instructed to imagine they were writing text with the pen, to look where the pen should go, and to write the letters as fast and as accurately as possible, given a letter-size template. Subjects were able to perform the task with facility and accuracy, and movements of the arm did not interfere with subjects' ability to control their visual attention so as to enable smooth writing. Over five consecutive trials there was a significant decrease in the total time used and in the total number of commands sent to move the robot arm from the first to the second trial, but no further improvement thereafter, suggesting that within writing 6 letters subjects had mastered control of the system. Our work demonstrates that eye tracking is a powerful means to control robot arms in closed loop and real time, outperforming other invasive and non-invasive approaches to Brain-Machine Interfaces in terms of calibration time (<2 minutes), training time (<10 minutes) and interface technology costs. We suggest that gaze-based decoding of action intention may well become one of the most efficient ways to interface with robotic actuators - i.e. Brain-Robot Interfaces - and become useful beyond paralysed and amputee users for the general teleoperation of robots and exoskeletons in human augmentation.
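    A minimal sketch of the closed-loop idea described above, assuming the system maps the user's gaze target on the writing plane to an incremental pen-tip command. The gain, dead zone and the `tracker`/`robot` interfaces are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

GAIN = 0.5          # proportional gain from gaze error to pen velocity (assumed)
DEAD_ZONE = 0.005   # metres; ignore fixation jitter below this radius

def gaze_to_command(gaze_xy, pen_xy):
    """Map the gaze target on the writing plane to a pen-tip velocity command."""
    error = np.asarray(gaze_xy) - np.asarray(pen_xy)
    if np.linalg.norm(error) < DEAD_ZONE:
        return np.zeros(2)   # user is fixating the pen: hold position
    return GAIN * error      # move the pen toward where the user is looking

# One control cycle (hypothetical device APIs):
# velocity = gaze_to_command(tracker.gaze_on_plane(), robot.pen_position())
# robot.move_endpoint(velocity)
```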

    Development of a head-mounted, eye-tracking system for dogs

    Growing interest in canine cognition and visual perception has promoted research into the allocation of visual attention during free-viewing tasks in the dog. The techniques currently available to study this (i.e. preferential looking) have, however, lacked spatial accuracy, permitting only gross judgements of the location of the dog's point of gaze, and are limited to a laboratory setting. Here we describe a mobile, head-mounted, video-based eye-tracking system and a procedure for achieving standardised calibration that yields an accuracy of 2-3°. The setup allows free movement of dogs; in addition, the procedure does not require extensive training and is completely non-invasive. This apparatus has the potential to allow the study of gaze patterns in a variety of research applications and could enhance the study of areas such as canine vision, cognition and social interactions.
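    Video-based trackers of this kind are typically calibrated by having the subject fixate known targets and fitting a low-order polynomial from pupil position to gaze angle. A minimal sketch of such a fit follows; the quadratic form and variable names are assumptions, not the authors' published procedure.

```python
import numpy as np

def _design(pupil_xy):
    px, py = pupil_xy[:, 0], pupil_xy[:, 1]
    # linear, cross and quadratic terms of pupil position (camera pixels)
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

def fit_calibration(pupil_xy, target_deg):
    """Least-squares map from pupil positions (n, 2) to known calibration
    target angles in degrees (n, 2), one coefficient column per gaze axis."""
    coef, *_ = np.linalg.lstsq(_design(pupil_xy), target_deg, rcond=None)
    return coef

def estimate_gaze(coef, pupil_xy):
    """Apply the fitted map to new pupil positions."""
    return _design(pupil_xy) @ coef
```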

    Comparison of eye tracking, electrooculography and an auditory brain-computer interface for binary communication: a case study with a participant in the locked-in state

    Background: In this study, we evaluated electrooculography (EOG), an eye tracker and an auditory brain-computer interface (BCI) as access methods for augmentative and alternative communication (AAC). The participant of the study had been in the locked-in state (LIS) for 6 years due to amyotrophic lateral sclerosis. He was able to communicate with slow residual eye movements, but had no means of partner-independent communication. We discuss the usability of all tested access methods and the prospects of using BCIs as an assistive technology.
    Methods: Within four days, we tested whether EOG, eye tracking and a BCI would allow the participant in LIS to make simple selections. We optimized the parameters for all systems in an iterative procedure.
    Results: The participant was able to gain control over all three systems. Nonetheless, given the level of proficiency he had previously achieved with his low-tech AAC method, he did not consider using any of the tested systems as an additional communication channel. However, he would consider using the BCI once control over his eye muscles is no longer possible. He rated the ease of use of the BCI as the highest among the tested systems, because no precise eye movements were required, but also as the most tiring, due to the high level of attention needed to operate it.
    Conclusions: In this case study, partner-based communication was possible due to the good care provided and the proficiency achieved by the interlocutors. To ease the transition from a low-tech AAC method to a BCI once control over all muscles is lost, the BCI must be simple to operate. For persons who rely on AAC and are affected by a progressive neuromuscular disease, we argue that a complementary approach, combining BCIs and standard assistive technology, can prove valuable for achieving partner-independent communication and easing the transition to a purely BCI-based approach. Finally, we provide further evidence for the importance of a user-centered approach in the design of new assistive devices.
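    For the EOG access method, a binary selection can be read out with something as simple as a threshold on the horizontal EOG deflection. The sketch below illustrates that idea only; the threshold value and the yes/no coding are assumptions, not the parameters used in the study.

```python
import numpy as np

THRESHOLD_UV = 60.0  # deflection threshold in microvolts (would be tuned per user)

def classify_selection(eog_window):
    """Binary yes/no from one horizontal-EOG trial window: a deflection past
    the threshold counts as a deliberate eye movement, its sign as the answer."""
    peak = eog_window[np.argmax(np.abs(eog_window))]
    if abs(peak) < THRESHOLD_UV:
        return None                       # no clear movement: no selection
    return "yes" if peak > 0 else "no"    # direction encodes the answer
```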

    Graphene textiles towards soft wearable interfaces for electroocular remote control of objects

    The study of eye movements (EMs) and measurement of the resulting biopotentials, referred to as electrooculography (EOG), may find increasing use in activity recognition, context awareness, mobile human-computer interaction (HCI) and personalized medicine, provided that the limitations of conventional “wet” electrodes are addressed. To overcome these limitations, this work reports, for the first time, the use and characterization of graphene-based electroconductive textile electrodes for EOG acquisition using a custom-designed embedded eye tracker. This self-contained wearable device consists of a headband with integrated textile electrodes and small, pocket-worn, battery-powered hardware with real-time signal processing that can stream data to a remote device over Bluetooth. The feasibility of the developed gel-free, flexible, dry textile electrodes was experimentally validated through side-by-side comparison with pre-gelled, wet silver/silver chloride (Ag/AgCl) electrodes: the simultaneously and asynchronously recorded signals displayed correlations of up to ~87% and ~91%, respectively, over durations reaching one hundred seconds, repeated across several participants. Additionally, an automatic EM detection algorithm is developed, and the performance of the graphene-embedded “all-textile” EM sensor and its application as a control element for HCI is experimentally demonstrated. The excellent success rate, ranging from 85% up to 100% for eleven different EM patterns, demonstrates the applicability of the proposed algorithm in wearable EOG-based sensing and HCI applications with graphene textiles. The system-level integration and holistic design approach presented herein, which extends from the fundamental materials level up to the architecture and algorithm stage, will be instrumental in advancing the state of the art in wearable electronic devices based on sensing and processing of electrooculograms.
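    The reported ~87-91% agreement between the textile and Ag/AgCl recordings can be quantified with a Pearson correlation over matched signal windows; a minimal sketch follows (the authors' exact metric and windowing may differ).

```python
import numpy as np

def electrode_agreement(textile, agcl):
    """Pearson correlation between simultaneously recorded textile and
    pre-gelled Ag/AgCl EOG signals of equal length."""
    textile = textile - textile.mean()
    agcl = agcl - agcl.mean()
    return float(np.dot(textile, agcl)
                 / (np.linalg.norm(textile) * np.linalg.norm(agcl)))
```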

    Validating and improving the correction of ocular artifacts in electro-encephalography

    For modern applications of electro-encephalography, including brain-computer interfaces and single-trial Event Related Potential detection, it is becoming increasingly important that artifacts are accurately removed from a recorded electro-encephalogram (EEG) without affecting the part of the EEG that reflects cerebral activity. Ocular artifacts are caused by movement of the eyes and the eyelids. They occur frequently in the raw EEG and are often the most prominent artifacts in EEG recordings. Their accurate removal is therefore an important procedure in nearly all electro-encephalographic research, and a considerable number of ocular artifact correction methods have been introduced over the past decades. A selection of these methods, containing some of the most frequently used correction methods, is given in Section 1.5. When two different correction methods are applied to the same raw EEG, this usually results in two different corrected EEGs. A measure for the accuracy of correction should indicate how well each of these corrected EEGs recovers the part of the raw EEG that truly reflects cerebral activity. The fact that this accuracy cannot be determined directly from a raw EEG is intrinsic to the need for artifact removal: if it were possible, based on a raw EEG, to derive an exact reference for what the corrected EEG should be, there would be no need for artifact correction methods. Estimating the accuracy of correction methods is therefore mostly done either by using models to simulate EEGs and artifacts, or by manipulating experimental data in such a way that the effects of artifacts on the raw EEG can be isolated. In this thesis, modeling of EEG and artifact is used to validate correction methods on simulated data. A new correction method is introduced which, unlike all existing methods, uses a camera to monitor eye(lid) movements as the basis for ocular artifact correction. The simulated data is used to estimate the accuracy of this new correction method and to compare it against the estimated accuracy of existing correction methods. The results of this comparison suggest that the new method significantly increases correction accuracy compared to the other methods. Next, an experiment is performed with which the accuracy of correction can be estimated on raw EEGs. Results on this experimental data comply very well with the results on the simulated data. It is therefore concluded that using a camera during EEG recordings provides valuable extra information that can be used in the process of ocular artifact correction.
    In Chapter 2, a model is introduced that assists in estimating the accuracy of eye movement artifact correction for simulated EEG recordings. This model simulates EEG and eye movement artifacts simultaneously. It uses a realistic representation of the head, multiple dipoles to model cerebral and ocular electrical activity, and the boundary element method to calculate changes in electrical potential at different positions on the scalp. With the model, it is possible to simulate data sets as if they were recorded using different electrode configurations. Signal-to-noise ratios are used to assess correction accuracy for various electrode configurations before and after applying six different correction methods.
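    Because the simulation provides the artifact-free EEG as ground truth, correction accuracy can be scored as the ratio of signal power to residual power. A minimal sketch of such an SNR measure, assuming this standard definition rather than the thesis' exact formulation:

```python
import numpy as np

def correction_snr_db(true_eeg, corrected_eeg):
    """SNR of a corrected channel against the simulated artifact-free EEG:
    power of the true signal over power of the residual error, in dB."""
    residual = corrected_eeg - true_eeg
    return 10 * np.log10(np.sum(true_eeg**2) / np.sum(residual**2))
```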
    Results show that out of the six methods, second-order blind identification (SOBI) and multiple linear regression (MLR) correct most accurately overall, as they achieve the highest rise in signal-to-noise ratio. The occurrence of ocular artifacts is linked to changes in eyeball orientation, and in Chapter 2 an eye tracker is used to record pupil position, which is closely linked to eyeball orientation; this pupil position information is used in the model to simulate eye movements. Recognizing the potential benefit of using an eye tracker not only for simulation but also for correction, Chapter 3 introduces an eye movement artifact correction method that exploits the pupil position information provided by an eye tracker. Other correction methods use the electrooculogram (EOG) and/or the EEG to estimate ocular artifacts. Because both the EEG and the EOG recordings are susceptible to cerebral activity as well as to ocular activity, these other methods risk overcorrecting the raw EEG. Pupil position provides a reference that is linked to the ocular artifact in the EEG but cannot be affected by cerebral activity, and as a result the new correction method avoids traditionally problematic issues such as forward/backward propagation and evaluating the accuracy of component extraction. Using both simulated and experimental data, it is determined how pupil position influences the raw EEG, and this relation is found to be linear or quadratic. A Kalman filter is used to tune the parameters that specify the relation. On simulated data, the new method performs very well, resulting in an SNR after correction of over 10 dB for various patterns of eye movements. When compared to the three methods that performed best in the evaluation of Chapter 2, only SOBI, the best-performing method in that evaluation, shows similar results for some of the eye movement patterns. A serious limitation of the correction method, however, is its inability to correct blink artifacts. To increase the variety of applications for which the new method can be used, it should be improved so that it can also correct the raw EEG for blinking artifacts. Chapter 4 implements such improvements, based on the idea that a more advanced eye tracker should be able to detect both the pupil position and the eyelid position. The improved eye tracker-based ocular artifact correction method is named EYE. Driven by practical limitations of the eye-tracking device currently available to us, an alternative way to estimate eyelid position is suggested, based on an EOG recorded above one eye; the EYE method can be used with either the eye tracker information or this EOG substitute. On simulated data, the accuracy of the EYE method is estimated using the EOG-based eyelid reference and again compared against the six other correction methods. Two different SNR-based measures of accuracy are proposed: one quantifies the correction of the entire simulated data set, and the other focuses on those segments containing simulated blinking artifacts. After applying EYE, an average SNR of at least 9 dB is achieved for both these measures, implying that the power of the corrected signal is at least eight times the power of the remaining noise. The simulated data sets contain a wide range of eye movements and blink frequencies.
    For almost all of these data sets (16 out of 20), the correction results for EYE are better than for any of the other evaluated correction methods. On experimental data, the EYE method appears to correct adequately for ocular artifacts as well. As detecting eyelid position from the EOG is in principle inferior to detecting it with an eye tracker, these results should also be considered an indicator of the even higher accuracies that could be obtained with a more advanced eye tracker. Considering the simplicity of the MLR method, this method also performs remarkably well, which may explain why EOG-based regression is still often used for correction. In Chapter 5, the simulation model of Chapter 2 is set aside and, alternatively, experimentally recorded data is manipulated in a way that highlights correction inaccuracies. Correction accuracies of eight correction methods, including EYE, are estimated based on data recorded during stop-signal tasks. In the analysis of these tasks it is essential that ocular artifacts are adequately removed, because the task-related ERPs are located mostly at frontal electrode positions and have low amplitude. These data are corrected and subsequently evaluated. For the eight methods, the overall ranking of estimated accuracy in Figure 5.3 corresponds very well with the correction accuracy of these methods on simulated data as found in Chapter 4. In a single-trial correction comparison, results suggest that the EYE-corrected EEG is not susceptible to overcorrection, whereas the other corrected EEGs are.
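    At its core, the MLR method mentioned above regresses each EEG channel on the EOG channels and subtracts the estimated ocular contribution; the pupil-position-based correction of Chapter 3 replaces the EOG regressors with (linear and quadratic) pupil coordinates and tunes the parameters with a Kalman filter. A minimal static least-squares sketch of the regress-and-subtract step, with shapes and names assumed:

```python
import numpy as np

def regress_out(eeg, regressors):
    """Estimate how much of each regressor (EOG channels, or pupil-position
    terms) propagates into each EEG channel, then subtract that estimate.
    eeg: (n_samples, n_eeg); regressors: (n_samples, n_reg)."""
    X = np.column_stack([np.ones(len(regressors)), regressors])
    coef, *_ = np.linalg.lstsq(X, eeg, rcond=None)  # propagation factors
    return eeg - X @ coef                           # corrected EEG

# EOG-based MLR:    corrected = regress_out(eeg, eog)
# pupil-based (static stand-in for Chapter 3's Kalman-filtered estimator):
#   corrected = regress_out(eeg, np.column_stack([px, py, px**2, py**2]))
```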

    Comparing eye tracking with electrooculography for measuring individual sentence comprehension duration

    The aim of this study was to validate a procedure for performing the audio-visual paradigm introduced by Wendt et al. (2015) with reduced practical challenges. The original paradigm records eye fixations with an eye tracker and calculates the duration of sentence comprehension using a bootstrap procedure. To reduce the practical challenges, we first shortened the measurement time by evaluating a smaller measurement set with fewer trials. The results of 16 listeners showed effects comparable to those obtained when testing the original full measurement set on a different group of listeners. Secondly, we introduced electrooculography as an alternative technique for recording eye movements. The correlation between the results of the two recording techniques (eye tracker and electrooculography) was r = 0.97, indicating that both methods are suitable for estimating the processing duration of individual participants. Similar changes in processing duration arising from sentence complexity were found with the eye tracker and with the electrooculography procedure. Thirdly, the time course of eye fixations was estimated with an alternative procedure, growth curve analysis, which is more commonly used in recent studies analyzing eye-tracking data. The results of the growth curve analysis were compared with those of the bootstrap procedure; both analysis methods show similar processing durations.
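    The bootstrap idea behind the individual processing-duration estimate can be sketched as resampling trials and recomputing the duration statistic. The version below is a simplification over per-trial durations; the original procedure of Wendt et al. (2015) operates on full fixation curves.

```python
import numpy as np

rng = np.random.default_rng(0)

def duration_estimate(trial_durations, n_boot=1000):
    """Bootstrap mean and 95% CI of per-trial comprehension durations."""
    trial_durations = np.asarray(trial_durations)
    boots = [rng.choice(trial_durations, size=len(trial_durations)).mean()
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return trial_durations.mean(), (lo, hi)
```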

    Emoji Essence: Detecting User Emotional Response on Visual Centre Field with Emoticons

    User experience is commonly assessed through one-on-one interaction (subjective views), online surveys and questionnaires, all of which capture only the user's explicit response. This paper instead infers the user's implicit emotional response to an interface, specifically the visual content of a webpage, based on familiarisation, and conveys that emotion on the interface using emoji. We integrated physiological readings with eye movement behaviour to convey user emotion in the visual centre field of a web interface. The physiological readings are synchronised with the eye tracker to obtain correlated interaction data, and emoticons are used as the form of emotion conveyance on the interface. Eye movement prediction is obtained through a control-system loop and is represented by differently coloured gaze points (GT) that detect a particular user's emotion on the webpage interface; these are interpreted by the emoticons. Results show synchronised readings that correlate areas of interest (AOI) on the webpage with user emotion. These are prototypical instances of capturing authentic user responses to a computer interface, allowing user responses to be identified without subjective self-report, for better and easier design decisions.
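    The pipeline implied above has two simple mappings at its core: gaze point to area of interest, and synchronised physiological state to an emoticon. The sketch below is purely illustrative; the AOI layout, the arousal/valence features and the thresholds are all assumptions, not the paper's parameters.

```python
def aoi_for_gaze(gaze_xy, aois):
    """Return the name of the first AOI rectangle containing the gaze point."""
    x, y = gaze_xy
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None  # gaze fell outside every AOI

def emoticon_for(arousal, valence):
    """Map a synchronised physiological reading to an emoticon label."""
    if arousal < 0.3:
        return "neutral"
    return "happy" if valence > 0 else "frustrated"

# e.g. aoi_for_gaze((420, 310), {"banner": (0, 0, 800, 100),
#                                "article": (0, 100, 800, 600)})
```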

    Evaluation of optimisation techniques for multiscopic rendering

    A thesis submitted to the University of Bedfordshire in fulfilment of the requirements for the degree of Master of Science by Research.
    This project evaluates different performance optimisation techniques applied to stereoscopic and multiscopic rendering for interactive applications. The artefact is a robust plug-in package for the Unity game engine. The thesis provides background information on the performance optimisations, outlines all the findings, evaluates the optimisations and provides suggestions for future work. Scrum development methodology is used to develop the artefact, and quantitative research methodology is used to evaluate the findings by measuring performance. The project concludes that each performance optimisation has specific use-case scenarios in which it benefits performance. Foveated rendering provides the greatest performance increase for both stereoscopic and multiscopic rendering, but is also more computationally demanding as it requires an eye-tracking solution. Dynamic resolution is very beneficial when overall frame-rate smoothness is needed and frame drops are present. Depth optimisation is beneficial for vast open environments but can lead to decreased performance if used inappropriately.
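    Of the techniques evaluated, dynamic resolution is the easiest to sketch: the render scale is lowered when the frame time exceeds the budget and recovered when there is headroom. The constants and the update rule below are illustrative assumptions, not taken from the thesis' Unity plug-in.

```python
TARGET_MS = 16.7                 # frame budget for 60 fps
SCALE_MIN, SCALE_MAX = 0.5, 1.0  # clamp range for the render scale

def update_resolution_scale(scale, frame_ms):
    """Frame-time-driven dynamic resolution: back off quickly when over
    budget, recover gently when the GPU has clear headroom."""
    if frame_ms > TARGET_MS * 1.1:
        scale *= 0.9
    elif frame_ms < TARGET_MS * 0.85:
        scale *= 1.02
    return max(SCALE_MIN, min(SCALE_MAX, scale))
```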