A Hardware-Based Configurable Algorithm for Eye Blink Signal Detection Using a Single-Channel BCI Headset
Eye blink artifacts in electroencephalographic (EEG) signals have been used in multiple applications as an effective method for human-computer interaction. Hence, an effective and low-cost blink detection method would be an invaluable aid for the development of this technology. A configurable hardware algorithm for eye blink detection, described in a hardware description language and based on EEG signals from a one-channel brain-computer interface (BCI) headset, was developed and implemented, showing better performance in terms of effectiveness and detection time than the manufacturer-provided software.
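The core idea of such a detector (a blink produces a large, short-lived deflection in the single EEG channel) can be sketched in software. The sketch below is illustrative only: the threshold, refractory period, and signal values are assumptions for demonstration, not the parameters of the hardware algorithm described above.

```python
import numpy as np

def detect_blinks(eeg, fs, threshold=75.0, refractory_s=0.3):
    """Flag samples whose absolute amplitude exceeds `threshold` (in uV),
    merging detections separated by less than one refractory period so a
    single blink is reported once."""
    above = np.flatnonzero(np.abs(eeg) > threshold)
    blinks = []
    last = -np.inf
    for idx in above:
        if (idx - last) / fs >= refractory_s:
            blinks.append(int(idx))
        last = idx
    return blinks

# Synthetic 1-second trace at 256 Hz with one large deflection near 0.47 s.
np.random.seed(0)
fs = 256
eeg = 10.0 * np.random.randn(fs)   # background activity, ~10 uV std
eeg[120:140] += 150.0              # simulated blink artifact
print(detect_blinks(eeg, fs))      # → [120]
```

A hardware version would implement the same comparison and hold-off logic with a comparator and a counter, which is what makes a purely threshold-based scheme attractive for an HDL description.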
On validating a generic camera-based blink detection system for cognitive load assessment
Detecting the human operator's cognitive state is paramount in settings wherein maintaining optimal workload is necessary for task performance. Blink rate is an established metric of cognitive load, with a higher blink frequency being observed under conditions of greater workload. Measuring blink rate typically requires eye-trackers, which limits the adoption of this metric in the real world. The authors aim to investigate the effectiveness of a generic camera-based system as a way to assess the user's cognitive load during a computer task. Participants completed a mental task while sitting in front of a computer. Blink rate was recorded via both the generic camera-based system and a scientific-grade eye-tracker for validation purposes. Cognitive load was also assessed through performance in a single-stimulus detection task. The blink rate recorded via the generic camera-based approach did not differ from the one obtained through the eye-tracker. However, no meaningful changes in blink rate were observed with increasing cognitive load. Results show the generic camera-based system may represent a more affordable, ubiquitous means of assessing cognitive workload during computer tasks. Future work should further investigate ways to increase its accuracy during the completion of more realistic tasks.
Deep into the Eyes: Applying Machine Learning to improve Eye-Tracking
Eye-tracking has been an active research area with applications in personal and behavioral studies, medical diagnosis, virtual reality, and mixed reality applications. Improving the robustness, generalizability, accuracy, and precision of eye-trackers while maintaining privacy is crucial. Unfortunately, many existing low-cost portable commercial eye trackers suffer from signal artifacts and a low signal-to-noise ratio. These trackers are highly dependent on low-level features such as pupil edges or diffused bright spots in order to precisely localize the pupil and corneal reflection. As a result, they are not reliable for studying eye movements that require high precision, such as microsaccades, smooth pursuit, and vergence. Additionally, these methods suffer from reflective artifacts, occlusion of the pupil boundary by the eyelid, and often require a manual update of person-dependent parameters to identify the pupil region. In this dissertation, I demonstrate (I) a new method to improve precision while maintaining the accuracy of head-fixed eye trackers by combining velocity information from iris textures across frames with position information, (II) a generalized semantic segmentation framework for identifying eye regions, with a further extension to identify ellipse fits on the pupil and iris, (III) a data-driven rendering pipeline to generate a temporally contiguous synthetic dataset for use in many eye-tracking applications, and (IV) a novel strategy to preserve privacy in eye videos captured as part of the eye-tracking process. My work also provides the foundation for future research by addressing critical questions such as the suitability of synthetic datasets for improving eye-tracking performance in real-world applications, and ways to improve the precision of future commercial eye trackers with improved camera specifications.
Talent Identification and Development in Sports Performance
The identification and development of talent have always been a relevant topic in sports performance. In fact, a significant body of research is available worldwide discussing this longitudinal process, the qualities that underpin elite sports performance, and how coaches can facilitate the developmental process of talented athletes. Despite the continued interest given to issues of talent identification and development, recent literature highlights the low predictive value of applied and theoretical talent identification models.
Talent is the expression of a complex and multidimensional phenomenon, where, despite existing practical recommendations, many coaches and stakeholders continue to fail to adequately value the distinction between growth, maturation, and training age. Technological resources have enabled important advances; however, these have been limited essentially to defining or validating motor-skill variables or genetic markers that characterize the most talented athletes. Emerging technological resources and recent methodological advances are enabling integrated assessment and monitoring that include maturational, physiological, biomechanical, and perceptual skills, while also creating optimal environments for performance and dealing with injury prevention and recovery.
Multimodal Human Eye Blink Recognition Using Z-score Based Thresholding and Weighted Features
A novel real-time multimodal eye blink detection method is proposed, using an amalgam of five unique weighted features extracted from the circle boundary formed by the eye landmarks. The five features, namely Vertical Head Positioning, Orientation Factor, Proportional Ratio, Area of Intersection, and Upper Eyelid Radius, feed a z-score-based threshold that accurately predicts the eye status and thus the blinking status. An accurate and precise algorithm employing the five weighted features is proposed to predict eye status (open/closed). One state-of-the-art eye-blink dataset, ZJU, is used to measure the performance of the method. Precision, recall, F1-score, and the ROC curve measure the proposed method's performance qualitatively and quantitatively. Increased accuracy (around 97.2%) and precision (97.4%) are obtained compared to other existing unimodal approaches. The efficiency of the proposed method is shown to outperform state-of-the-art methods.
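The z-score thresholding step described above can be illustrated with a minimal sketch: a weighted combination of the five per-frame features is z-normalised against open-eye calibration statistics, and the eye is declared closed when the score drops far below the open baseline. The weights, threshold, and feature values below are assumptions for demonstration, not the published ones.

```python
import numpy as np

# Hypothetical weights for the five features (Vertical Head Positioning,
# Orientation Factor, Proportional Ratio, Area of Intersection, Upper
# Eyelid Radius); illustrative values, not the paper's.
WEIGHTS = np.array([0.3, 0.15, 0.25, 0.2, 0.1])

def eye_state(features, mean, std, z_thresh=-1.5):
    """Combine the five features into one weighted score, z-normalise it
    against open-eye statistics, and call the eye closed when the z-score
    falls below the threshold."""
    score = float(WEIGHTS @ np.asarray(features))
    z = (score - mean) / std
    return ("closed" if z < z_thresh else "open"), z

# Calibration: weighted scores collected while the eye is known to be open.
open_scores = np.array([0.82, 0.80, 0.85, 0.78, 0.83])
mu, sigma = open_scores.mean(), open_scores.std()

print(eye_state([0.8, 0.8, 0.85, 0.8, 0.8], mu, sigma))   # near the open baseline
print(eye_state([0.3, 0.4, 0.2, 0.3, 0.35], mu, sigma))   # far below the baseline
```

Normalising against per-user open-eye statistics is what lets a fixed z-threshold adapt to different faces and camera geometries without retuning the raw feature scales.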
Signal Processing Using Non-invasive Physiological Sensors
This book focuses on non-invasive biomedical sensors for monitoring physiological parameters from the human body for potential future therapies and healthcare solutions. Today, a critical factor in providing a cost-effective healthcare system is improving patients' quality of life and mobility, which can be achieved by developing non-invasive sensor systems that can be deployed at the point of care, used at home, or integrated into wearable devices for long-term data collection. Another factor that plays an integral part in a cost-effective healthcare system is the signal processing of the data recorded with non-invasive biomedical sensors. In this book, we aimed to attract researchers interested in the application of signal processing methods to different biomedical signals, such as the electroencephalogram (EEG), electromyogram (EMG), functional near-infrared spectroscopy (fNIRS), electrocardiogram (ECG), galvanic skin response, pulse oximetry, and photoplethysmogram (PPG). We encouraged new signal processing methods, or the novel application of existing signal processing methods to physiological signals, to help healthcare providers make better decisions.
Assessment of Human Behavior in Virtual Reality by Eye Tracking
Virtual reality (VR) is not a new technology but has been in development for decades, driven by advances in computer technology such as computer graphics, simulation, visualization, hardware and software, and human-computer interaction. Currently, VR technology is increasingly being used in applications to enable immersive, yet controlled research settings. Education and entertainment are two important application areas, where VR has been considered a key enabler of immersive experiences and their further advancement. At the same time, the study of human behavior in such innovative environments is expected to contribute to a better design of VR applications. Therefore, modern VR devices are consistently equipped with eye-tracking technology, thus enabling further studies of human behavior through the collection of process data. In particular, eye-tracking technology in combination with machine learning techniques and explainable models can provide new insights for a deeper understanding of human behavior during immersion in virtual environments.
In this work, a systematic computational framework based on eye-tracking and behavioral user data and state-of-the-art machine learning approaches is proposed to understand human behavior and individual differences in VR contexts. This computational framework is then employed in three user studies across two different domains, namely education and entertainment. In the educational domain, the exploration of human behavior during educational activities is a timely and challenging question that can only be addressed in an interdisciplinary setting, to which educational VR platforms such as immersive VR classrooms can contribute. To this end, two different immersive VR classrooms were created, where students can learn computational thinking skills and teachers can train in classroom management. Students' and teachers' visual perception and cognitive processing behaviors are investigated using eye-tracking data and machine learning techniques in combination with explainable models. Results show that eye movements reveal different human behaviors as well as individual differences during immersion in VR, providing important insights for immersive and effective VR classroom design. In terms of VR entertainment, eye movements open a new avenue to evaluate VR locomotion techniques from the perspective of user cognitive load and user experience using machine learning methods. Research in these two domains demonstrates the effectiveness of eye movements as a proxy for evaluating human behavior in educational and entertainment VR contexts. In summary, this work paves the way for assessing human behavior in VR scenarios and provides profound insights into designing, evaluating, and improving interactive VR systems. In particular, more effective and customizable virtual environments can be created to provide users with tailored experiences.
Advanced bioelectrical signal processing methods: Past, present and future approach - Part III: Other biosignals
Analysis of biomedical signals is a very challenging task involving the implementation of various advanced signal processing methods, and the area is developing rapidly. This Part III paper presents the most popular and efficient digital signal processing methods for the following bioelectrical signals: electromyography (EMG), electroneurography (ENG), electrogastrography (EGG), electrooculography (EOG), electroretinography (ERG), and electrohysterography (EHG).
Saccade Planning and Execution by the Lateral and Medial Cerebellum
This dissertation describes how the cerebellum plans and executes saccadic eye movements. The results of recordings from neurons in the cerebellum of rhesus macaques provide insight into which processes underlie this type of eye movement.