
    GraFIX: a semiautomatic approach for parsing low- and high-quality eye-tracking data

    Fixation durations (FD) have been used widely as a measurement of information processing and attention. However, issues like data quality can seriously influence the accuracy of fixation detection methods and, thus, affect the validity of our results (Holmqvist, Nyström, & Mulvey, 2012). This is crucial when studying special populations such as infants, where common issues with testing (e.g., a high degree of movement, unreliable eye detection, low spatial precision) result in highly variable data quality and render existing FD detection approaches highly time consuming (hand-coding) or imprecise (automatic detection). To address this problem, we present GraFIX, a novel semiautomatic method consisting of a two-step process in which eye-tracking data are initially parsed using velocity-based algorithms whose input parameters are adapted by the user, and then manipulated through the graphical interface, allowing accurate and rapid adjustment of the algorithms' outcome. The present algorithms (1) smooth the raw data, (2) interpolate missing data points, and (3) apply a number of criteria to automatically evaluate and remove artifactual fixations. The input parameters (e.g., velocity threshold, interpolation latency) can be easily adapted by hand to fit each participant. Furthermore, the present application includes visualization tools that facilitate the manual coding of fixations. We assessed this method by performing an intercoder reliability analysis in two groups of infants presenting low- and high-quality data and compared it with previous methods. Results revealed that our two-step approach with adaptable FD detection criteria gives rise to more reliable and stable measures in low- and high-quality data.
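    The automatic first pass described above lends itself to a compact illustration. Below is a minimal sketch of velocity-based fixation parsing (smoothing, gap interpolation, velocity thresholding, and duration filtering) in Python; the function name, thresholds, and units are illustrative assumptions, not GraFIX's actual interface, and real use would expose these parameters for per-participant adjustment as the paper describes.

```python
import numpy as np

def detect_fixations(x, y, t_ms,
                     velocity_threshold=35.0,  # deg/s; assumed default, user-adjustable
                     min_fixation_ms=100.0,    # drop artifactually short fixations
                     smooth_window=5):         # samples in the moving-average smoother
    """Velocity-threshold (I-VT-style) fixation parsing: a first-pass sketch."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    t_ms = np.asarray(t_ms, float)

    # 1. Interpolate missing samples (NaNs). GraFIX limits interpolation to
    #    gaps shorter than a user-set latency; that check is omitted here.
    for arr in (x, y):
        bad = np.isnan(arr)
        if bad.any():
            arr[bad] = np.interp(t_ms[bad], t_ms[~bad], arr[~bad])

    # 2. Smooth the raw traces with a simple moving average.
    kernel = np.ones(smooth_window) / smooth_window
    xs = np.convolve(x, kernel, mode="same")
    ys = np.convolve(y, kernel, mode="same")

    # 3. Sample-to-sample velocity; samples below threshold are fixation candidates.
    v = np.hypot(np.diff(xs), np.diff(ys)) / (np.diff(t_ms) / 1000.0)
    is_fix = np.concatenate([[False], v < velocity_threshold])

    # 4. Group consecutive candidates; keep only runs long enough to be fixations.
    fixations, start = [], None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if t_ms[i - 1] - t_ms[start] >= min_fixation_ms:
                fixations.append((t_ms[start], t_ms[i - 1]))
            start = None
    if start is not None and t_ms[-1] - t_ms[start] >= min_fixation_ms:
        fixations.append((t_ms[start], t_ms[-1]))
    return fixations
```

    In the paper's workflow this automatic output is only the first step; the second step is manual refinement of the detected fixations in the graphical interface.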

    Incorporating Feedback from Multiple Sensory Modalities Enhances Brain–Machine Interface Control

    In healthy individuals, the brain typically uses a rich supply of feedback from multiple sensory modalities to control movement. In many individuals, these afferent pathways, as well as their efferent counterparts, are compromised by disease or injury, resulting in significant impairments and reduced quality of life. Brain–machine interfaces (BMIs) offer the promise of recovered functionality to these individuals by allowing them to control a device using their thoughts. Most current BMI implementations use visual feedback for closed-loop control; however, it has been suggested that the inclusion of additional feedback modalities may lead to improvements in control. We demonstrate for the first time that kinesthetic feedback can be used together with vision to significantly improve control of a cursor driven by neural activity of the primary motor cortex (MI). Using an exoskeletal robot, the monkey's arm was moved to passively follow a cortically controlled visual cursor, thereby providing the monkey with kinesthetic information about the motion of the cursor. When visual and proprioceptive feedback were congruent, the time to successfully reach a target decreased and the cursor paths became straighter, compared with incongruent feedback conditions. This enhanced performance was accompanied by a significant increase in the amount of movement-related information contained in the spiking activity of neurons in MI. These findings suggest that BMI control can be significantly improved in paralyzed patients with residual kinesthetic sense and provide the groundwork for augmenting cortically controlled BMIs with multiple forms of natural or surrogate sensory feedback.
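    The abstract does not specify the decoding algorithm; a common baseline for MI-driven cursor control is a linear mapping from binned firing rates to cursor velocity, fit by least squares on calibration data. The sketch below uses synthetic data, an assumed bin size, and assumed array shapes purely to show the closed-loop structure: decode a velocity each bin and integrate it into a position that would drive both the on-screen cursor and, in this paradigm, the exoskeleton providing kinesthetic feedback.

```python
import numpy as np

# Synthetic stand-ins (real use would record these during a calibration block):
# binned MI spike counts and the corresponding measured cursor velocities.
rng = np.random.default_rng(0)
firing_rates = rng.poisson(5.0, size=(2000, 96)).astype(float)  # (bins, neurons)
cursor_vel = rng.normal(size=(2000, 2))                         # (bins, [vx, vy])

# Fit a linear decoder by least squares: velocity ~ [rates, 1] @ W.
X = np.hstack([firing_rates, np.ones((len(firing_rates), 1))])  # bias column
W, *_ = np.linalg.lstsq(X, cursor_vel, rcond=None)

def decode_velocity(rates):
    """Map one bin of firing rates to a 2-D cursor velocity command."""
    return np.append(rates, 1.0) @ W

# Closed loop: each bin, decode a velocity and integrate it into the cursor
# position; that position drives the visual cursor and, here, the robot arm.
pos, dt = np.zeros(2), 0.05  # 50 ms bins (assumed)
for rates in firing_rates[:20]:
    pos += decode_velocity(rates) * dt
```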

    Discovering Gender Differences in Facial Emotion Recognition via Implicit Behavioral Cues

    We examine the utility of implicit behavioral cues, in the form of EEG brain signals and eye movements, for gender recognition (GR) and emotion recognition (ER). Specifically, the examined cues are acquired via low-cost, off-the-shelf sensors. We asked 28 viewers (14 female) to recognize emotions from unoccluded (no mask) as well as partially occluded (eye- and mouth-masked) emotive faces. The experimental results reveal that (a) reliable GR and ER is achievable with EEG and eye features, (b) differential cognitive processing, especially for negative emotions, is observed for males and females, and (c) some of these cognitive differences manifest under partial face occlusion, as typified by the eye- and mouth-mask conditions.
    Comment: To be published in the Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction.
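    The abstract does not detail the classification pipeline; a typical setup for this kind of GR/ER study fuses per-trial EEG and eye-movement features and trains a standard classifier. The sketch below shows that pattern with scikit-learn; the feature shapes, label coding, and synthetic data are all assumptions for illustration, not the paper's method.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins: per-trial EEG features (e.g., band power per channel)
# and eye-movement summary statistics (e.g., fixation counts, saccade sizes).
rng = np.random.default_rng(0)
n_trials = 280
eeg_features = rng.normal(size=(n_trials, 64))  # assumed: 64 EEG features/trial
eye_features = rng.normal(size=(n_trials, 8))   # assumed: 8 gaze features/trial
X = np.hstack([eeg_features, eye_features])     # simple feature-level fusion
y = rng.integers(0, 2, size=n_trials)           # GR labels: 0 = male, 1 = female

# Standardize the fused features, classify, and report cross-validated accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```

    The same pipeline would apply to the ER task by swapping the gender labels for per-trial emotion labels.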