2,201 research outputs found

    Effects of search intent on eye-movement patterns in a change detection task

    The goal of the present study was to examine whether intention type affects eye-movement patterns in a change detection task. In addition, we assessed whether an eye-movement index could be used to identify human implicit intent. We attempted to generate three types of intent among the study participants, assigning each participant to one of three conditions; each condition received different information regarding an impending change to the visual stimuli. In the “navigational intent” condition, participants were asked to look for any interesting objects and were given no further information about the impending change. In the “low-specific intent” condition, participants were informed that a change would occur. In the “high-specific intent” condition, participants were told that a change would occur and that an object would disappear. In addition to this main change detection task, participants also performed a primary task in which they named aloud the colors of objects in the pre-change scene; this allowed us to control for the visual search process during the pre-change scene. The main results were as follows. First, the primary task successfully controlled for the visual search process during the pre-change scene: there were no differences in eye-movement patterns across the three conditions despite their differing intents. Second, we observed significantly different patterns of eye movement between the conditions in the post-change scene, suggesting that generating a specific intent for change detection yields a distinctive pattern of eye movements. Finally, discriminant function analysis showed a reasonable classification rate for identifying a specific intent. Taken together, these findings show that both participant intent and the specificity of the information provided to participants affect eye movements in a change detection task.
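
    A minimal sketch of the kind of discriminant function analysis mentioned above, rendered here as linear discriminant analysis in scikit-learn; the eye-movement features, data, and cross-validation setup are illustrative assumptions, not the study's actual pipeline.

```python
# Hedged sketch: classify intent condition from eye-movement features with
# linear discriminant analysis. Feature names and data are hypothetical.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical per-trial features: fixation count, mean fixation duration (ms),
# mean saccade amplitude (deg). Labels: 0 = navigational, 1 = low-specific,
# 2 = high-specific intent.
X = rng.normal(size=(90, 3))
y = np.repeat([0, 1, 2], 30)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)  # chance level is ~0.33 for 3 classes
print(f"mean CV accuracy: {scores.mean():.2f}")
```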

    EEG-Based Quantification of Cortical Current Density and Dynamic Causal Connectivity Generalized across Subjects Performing BCI-Monitored Cognitive Tasks.

    Quantification of dynamic causal interactions among brain regions constitutes an important component of conducting research and developing applications in experimental and translational neuroscience. Furthermore, cortical networks with dynamic causal connectivity in brain-computer interface (BCI) applications offer a more comprehensive view of brain states implicated in behavior than do individual brain regions. However, models of cortical network dynamics are difficult to generalize across subjects because current electroencephalography (EEG) signal analysis techniques are limited in their ability to reliably localize sources across subjects. We propose an algorithmic and computational framework for identifying cortical networks across subjects in which dynamic causal connectivity is modeled among user-selected cortical regions of interest (ROIs). We demonstrate the strength of the proposed framework using a "reach/saccade to spatial target" cognitive task performed by 10 right-handed individuals. Modeling of causal cortical interactions was accomplished through measurement of cortical activity using EEG, application of independent component clustering to identify cortical ROIs as network nodes, estimation of cortical current density using cortically constrained low-resolution electromagnetic brain tomography (cLORETA), multivariate autoregressive (MVAR) modeling of representative cortical activity signals from each ROI, and quantification of the dynamic causal interactions among the identified ROIs using the short-time direct directed transfer function (SdDTF). The resulting cortical network and the computed causal dynamics among its nodes exhibited physiologically plausible behavior, consistent with past results reported in the literature. This physiological plausibility strengthens the framework's applicability in reliably capturing complex brain functionality, which is required by applications such as diagnostics and BCI.
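
    As a rough illustration of the MVAR-to-connectivity step described above, the sketch below fits an MVAR model with statsmodels and computes the ordinary directed transfer function (DTF) from its coefficients. The ROI signals, model order, and sampling rate are placeholders, and the short-time, "direct" refinements of the SdDTF (windowing and partial-coherence normalization) are omitted for brevity.

```python
# Sketch: MVAR fit, then DTF from the frequency-domain transfer matrix.
import numpy as np
from statsmodels.tsa.api import VAR

fs, order = 256, 5
rois = np.random.randn(1000, 4)      # (samples, ROIs): placeholder activity
A = VAR(rois).fit(order).coefs       # (order, n, n) MVAR coefficient matrices
n = A.shape[1]
freqs = np.linspace(1, 40, 40)       # Hz

dtf = np.empty((len(freqs), n, n))
for fi, f in enumerate(freqs):
    # A(f) = I - sum_k A_k exp(-2*pi*i*f*k/fs);  H(f) = A(f)^-1
    Af = np.eye(n, dtype=complex)
    for k in range(order):
        Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
    H = np.linalg.inv(Af)
    # DTF_ij: influence of ROI j on ROI i, normalized over all inflows to i
    dtf[fi] = np.abs(H) / np.sqrt((np.abs(H) ** 2).sum(axis=1, keepdims=True))
```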

    Automated task load detection with electroencephalography: towards passive brain–computer interfacing in robotic surgery

    Automatic detection of a surgeon's current task load in the theatre, in real time, could provide helpful information for use in supportive systems. For example, such information may enable the system to support the surgeon automatically when critical or stressful periods are detected, or to signal to others that the surgeon is engaged in a complex maneuver and should not be disturbed. Passive brain–computer interfaces (BCIs) infer changes in cognitive and affective state by monitoring and interpreting ongoing brain activity recorded via an electroencephalogram. The resulting information can then be used to adapt a technological system to the human user automatically. So far, passive BCIs have mostly been investigated in laboratory settings, even though they are intended to be applied in real-world settings. In this study, a passive BCI was used to assess changes in the task load of skilled surgeons performing both simple and complex surgical training tasks. Results indicate that the introduced methodology can reliably and continuously detect changes in task load in this realistic environment.
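
    The abstract does not spell out the decoding pipeline, but a typical passive-BCI task-load feature is EEG spectral band power. The sketch below computes theta- and alpha-band power with SciPy's Welch estimator; the sampling rate, channel count, and band choices are assumptions rather than the paper's exact method.

```python
# Hedged sketch: band-power features for task-load detection from one epoch.
import numpy as np
from scipy.signal import welch

fs = 250                                   # sampling rate (Hz), placeholder
epoch = np.random.randn(32, fs * 2)        # (channels, samples): one 2-s epoch

def band_power(epoch, fs, lo, hi):
    f, psd = welch(epoch, fs=fs, nperseg=fs)
    mask = (f >= lo) & (f <= hi)
    return psd[:, mask].mean(axis=1)       # mean power per channel

# Frontal theta typically rises and parietal alpha drops with task load.
theta = band_power(epoch, fs, 4, 8)
alpha = band_power(epoch, fs, 8, 13)
features = np.concatenate([theta, alpha])  # input to any downstream classifier
```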

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) that Gestalt grouping is not used as a strategy in these tasks; and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
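
    For concreteness, the radial ("spoke") displacement described above amounts to a few lines of geometry: each rectangle centre is moved ±1 degree of visual angle along the line from fixation through that centre. The coordinates below are illustrative, not the study's actual stimulus layout.

```python
# Sketch: displace stimulus centres along spokes from central fixation.
import numpy as np

centres = np.array([[4.0, 0.0], [0.0, -4.0], [2.8, 2.8]])  # (x, y) in deg
shift = np.random.choice([-1.0, 1.0], size=len(centres))   # +/-1 deg per item

radii = np.linalg.norm(centres, axis=1)                    # distance from fixation
shifted = centres * ((radii + shift) / radii)[:, None]     # move along each spoke
```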

    Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges

    In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles from human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology, including better EEG devices.

    Attention is allocated closely ahead of the target during smooth pursuit eye movements: Evidence from EEG frequency tagging

    It is under debate whether attention during smooth pursuit is centered right on the pursuit target or allocated preferentially ahead of it. Attentional deployment has previously been probed using a secondary task, which might itself have altered attention allocation and led to inconsistent findings. We used frequency-tagged steady-state visual evoked potentials (SSVEPs) to measure attention allocation in the absence of any secondary probing task. Observers pursued a moving dot while stimuli flickering at different frequencies were presented at various locations ahead of or behind the pursuit target. We observed a significant increase in EEG power at the flicker frequency of the stimulus in front of the pursuit target compared to that of the stimulus behind it. When testing many different locations, we found that the enhancement was detectable up to about 1.5° ahead of the target during pursuit but vanished by 3.5°. In a control condition using attentional cueing during fixation, we did observe an enhanced EEG response to stimuli at this eccentricity, indicating that the focus of attention during pursuit is narrower than the resolution of the attentional system allows. In a third experiment, we ruled out the possibility that the SSVEP enhancement was a byproduct of the catch-up saccades that occur during pursuit. Overall, we showed that attention is, on average, allocated ahead of the pursuit target during smooth pursuit. EEG frequency tagging appears to be a powerful technique for investigating attention and perception implicitly when an overt task would be confounding.
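
    As a sketch of how frequency tagging is quantified, the snippet below estimates EEG power at each stimulus's flicker frequency from the Fourier spectrum of a single channel. The sampling rate, trial duration, and tag frequencies are placeholders, not the study's parameters.

```python
# Hedged sketch: SSVEP power at the tagged flicker frequencies.
import numpy as np

fs, dur = 500, 4                         # Hz, seconds (placeholders)
eeg = np.random.randn(fs * dur)          # one occipital channel, one trial
tags = {"ahead": 12.0, "behind": 15.0}   # flicker frequencies (Hz)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)

for name, f in tags.items():
    power = spectrum[np.argmin(np.abs(freqs - f))]  # nearest FFT bin
    print(f"{name}: power at {f:.1f} Hz = {power:.3f}")
```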

    Humanoid-based protocols to study social cognition

    Social cognition is broadly defined as the way humans understand and process their interactions with other humans. In recent years, humans have become increasingly accustomed to interacting with non-human agents, such as technological artifacts. Although these interactions have so far been restricted to human-controlled artifacts, they will soon include interactions with embodied and autonomous mechanical agents, i.e., robots. This challenge has motivated an area of research concerned with human reactions towards robots, widely referred to as Human-Robot Interaction (HRI). Classical HRI protocols often rely on explicit measures, e.g., subjective reports, and therefore cannot quantify the crucial implicit social cognitive processes evoked during an interaction. This thesis aims to develop a link between cognitive neuroscience and HRI to study social cognition. This approach overcomes methodological constraints of both fields, making it possible to trigger and capture the mechanisms of real-life social interactions while ensuring high experimental control. The present PhD work demonstrates this through the systematic study of the effect of online eye contact on gaze-mediated orienting of attention. The study presented in Publication I adapts the gaze-cueing paradigm from cognitive science into an objective neuroscientific HRI protocol and investigates whether gaze-mediated orienting of attention is sensitive to the establishment of eye contact. The study replicates classic screen-based findings of gaze-mediated attentional orienting at both the behavioral and neural levels, highlighting the feasibility and scientific value of adding neuroscientific methods to HRI protocols. The study presented in Publication II examines whether and how real-time eye contact affects the dual-component model of joint attention orienting; to this end, cue validity and stimulus onset asynchrony are also manipulated. The results show an interactive effect of strategic (cue validity) and social (eye contact) top-down components on the bottom-up reflexive component of gaze-mediated orienting of attention. The study presented in Publication III examines subjective engagement and the attribution of human likeness to the robot depending on whether eye contact is established during a joint attention task. Subjective reports show that eye contact increases human-likeness attribution and feelings of engagement with the robot compared to a no-eye-contact condition. The study presented in Publication IV investigates whether eye contact established by a humanoid robot affects objective measures of engagement (i.e., joint attention and fixation durations) and subjective feelings of engagement with the robot during a joint attention task. Results show that eye contact modulates attentional engagement, with longer fixations on the robot’s face and a cueing effect when the robot establishes eye contact. In contrast, subjective reports show that the feeling of being engaged with the robot in an HRI protocol is not modulated by real-time eye contact. This study further supports the need to add objective methods to HRI. Overall, this PhD work shows that embodied artificial agents can advance theoretical knowledge of social cognitive mechanisms by serving as sophisticated interactive stimuli with high ecological validity and excellent experimental control. Moreover, humanoid-based protocols grounded in cognitive science can advance the HRI community by informing it about the exact cognitive mechanisms at play during HRI.
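
    A brief sketch of the standard gaze-cueing analysis implied by Publications I and II: the cueing effect is the mean reaction time on invalid trials minus valid trials, computed per eye-contact condition and SOA. The column names and numbers below are hypothetical, not the thesis's data.

```python
# Hedged sketch: cueing effect (invalid - valid RT) per condition and SOA.
import pandas as pd

trials = pd.DataFrame({
    "eye_contact": ["yes", "yes", "no", "no"] * 2,
    "soa_ms":      [250, 250, 250, 250, 750, 750, 750, 750],
    "validity":    ["valid", "invalid"] * 4,
    "rt_ms":       [312, 348, 325, 341, 301, 322, 318, 329],
})

mean_rt = trials.pivot_table(index=["eye_contact", "soa_ms"],
                             columns="validity", values="rt_ms")
mean_rt["cueing_effect_ms"] = mean_rt["invalid"] - mean_rt["valid"]
print(mean_rt)  # positive values indicate reflexive orienting to the cued side
```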

    An analysis of EEG signals present during target search

    Recent proof-of-concept research has highlighted the applicability of Brain-Computer Interface (BCI) technology that utilises a subject's visual system to classify images. This technique involves classifying a user's EEG (electroencephalography) signals as they view images presented on a screen. The premise is that images (targets) that arouse a subject's attention generate distinct brain responses, and that these brain responses can then be used to label the images. Research thus far in this domain has focused on examining the tasks and paradigms that can be used to elicit these neurologically informative signals from images, and the correlates of human perception that modulate them. While success has been shown in detecting these responses in high-speed presentation paradigms, it remains an open question which search tasks can ultimately benefit from an EEG-based BCI system. In this thesis we explore: (1) the neural signals present during visual search tasks that require eye movements, and how they inform us of the possibilities for BCI applications that combine eye tracking and EEG; (2) how the temporal characteristics of eye movements can indicate the suitability of a search task for augmentation by an EEG-based BCI system; (3) the characteristics of a number of paradigms that can be used to elicit informative neural responses to drive image-search BCI applications. We demonstrate that EEG signals can be used in a discriminative manner to label images. In addition, we find that, in certain instances, signals derived from sources such as eye movements can yield significantly more discriminative information.
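
    As a sketch of the image-labeling idea described above: epoch the EEG around each image onset, use the downsampled post-stimulus window as features, and train a linear classifier to separate target from non-target images. The shapes, rates, and shrinkage-LDA choice are assumptions rather than the thesis's actual pipeline.

```python
# Hedged sketch: single-trial target/non-target classification from EEG epochs.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 200, 32, 128   # 0.5 s at 256 Hz (placeholder)
epochs = rng.normal(size=(n_trials, n_channels, n_samples))
is_target = rng.integers(0, 2, size=n_trials)    # 1 = target image

X = epochs[:, :, ::4].reshape(n_trials, -1)      # downsample in time, flatten
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(clf, X, is_target, cv=5).mean())  # chance is ~0.5 here
```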

    Co-adaptive control strategies in assistive Brain-Machine Interfaces

    A large number of people with severe motor disabilities cannot access any of the available control inputs of current assistive products, which typically rely on residual motor functions. These patients are therefore unable to fully benefit from existing assistive technologies, including communication interfaces and assistive robotics. In this context, electroencephalography-based Brain-Machine Interfaces (BMIs) offer a potential non-invasive solution that exploits a non-muscular channel for communication and for the control of assistive robotic devices, such as a wheelchair, a telepresence robot, or a neuroprosthesis. Still, non-invasive BMIs currently suffer from limitations, such as a lack of precision, robustness, and comfort, which prevent their practical implementation in assistive technologies. The goal of this PhD research is to produce scientific and technical developments that advance the state of the art of assistive interfaces and service robotics based on BMI paradigms. Two main research paths towards the design of effective control strategies were considered in this project. The first is the design of hybrid systems based on the combination of the BMI with gaze control, a long-lasting motor function in many paralyzed patients; this approach increases the degrees of freedom available for control. The second is the inclusion of adaptive techniques in the BMI design, which makes it possible to transform robotic tools and devices into active assistants able to co-evolve with the user and to learn new rules of behavior for solving tasks, rather than passively executing external commands. Following these strategies, the contributions of this work can be categorized by the type of mental signal exploited for control. These include: 1) the use of active signals for the development and implementation of hybrid eye-tracking and BMI control policies, for both communication and the control of robotic systems; 2) the exploitation of passive mental processes to increase the adaptability of an autonomous controller to the user’s intention and psychophysiological state, in a reinforcement learning framework; 3) the integration of active and passive brain control signals, to achieve adaptation within the BMI architecture at the level of feature extraction and classification.
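
    A toy sketch of one possible hybrid gaze-plus-BMI control policy of the kind described: gaze proposes a target, and the decoded BMI signal confirms it once its confidence crosses a threshold. The decoder output, thresholds, and data structures are hypothetical, not the thesis's implementation.

```python
# Hedged sketch: gaze selects "where", the BMI confirms "whether".
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeSample:
    target_id: int     # object currently fixated (hypothetical id)
    dwell_ms: float    # continuous fixation time on that object

def hybrid_select(gaze: GazeSample, bmi_confidence: float,
                  dwell_threshold_ms: float = 500.0,
                  confidence_threshold: float = 0.7) -> Optional[int]:
    """Return the selected target id, or None if no selection has been made.

    Splitting the degrees of freedom this way lets a noisy BMI act as a
    binary confirm/veto channel while gaze carries the spatial information.
    """
    if gaze.dwell_ms >= dwell_threshold_ms and bmi_confidence >= confidence_threshold:
        return gaze.target_id
    return None

# Example: a sustained fixation plus a confident BMI decode selects item 3.
print(hybrid_select(GazeSample(target_id=3, dwell_ms=620.0), bmi_confidence=0.82))
```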