
    Eye fixation related potentials in a target search task

    Brain-computer interfaces (BCIs) are typically found in rehabilitative or restorative applications, often giving users a medium of communication that is otherwise unavailable through conventional means. Recently, however, there has been growing interest in using BCIs to assist users in searching for images. A class of neural signals often leveraged in common BCI paradigms is the event-related potential (ERP), which appears in a user's EEG (electroencephalography) signal in response to various sensory events. One such ERP is the P300, which is typically elicited in an oddball experiment where a subject's attention is oriented towards a deviant stimulus among a stream of presented images. It has been shown that these neural responses can drive an image search or labeling task, in which images are ranked by the presence of such ERP signals in response to their display. To date, such systems have been demonstrated with image sequences presented at up to 10 Hz; however, the target images in these tasks are salient enough that their detection does not require eye movements. In this paper we analyse the presence of discriminating signals when they are time-locked to eye fixations in a visual search task where detection of target images does require eye fixations.
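    The ranking idea above can be made concrete: epoch the EEG around each image onset (or fixation), score each epoch for P300-like activity, and sort images by that score. A minimal sketch in Python follows; the sampling rate, window bounds, and the crude mean-amplitude score are illustrative assumptions, not the paper's actual pipeline (which would use a trained classifier).

```python
import numpy as np

fs = 250                                         # sampling rate in Hz (assumed)
eeg = np.random.randn(64, 60 * fs)               # channels x samples (placeholder)
fixation_samples = np.array([1250, 3100, 5400])  # fixation/image onsets, in samples
image_ids = np.array([7, 2, 9])                  # image shown at each onset

def epoch(eeg, onsets, fs, tmin=-0.2, tmax=0.8):
    """Cut onset-locked epochs: (n_epochs, n_channels, n_times)."""
    pre, post = int(tmin * fs), int(tmax * fs)
    return np.stack([eeg[:, o + pre:o + post] for o in onsets])

def p300_score(epochs, fs, tmin=-0.2):
    """Crude target score: baseline-corrected mean amplitude in a
    300-600 ms window; a trained classifier would replace this."""
    baseline = epochs[:, :, :int(-tmin * fs)].mean(axis=2, keepdims=True)
    corrected = epochs - baseline
    w0, w1 = int((0.3 - tmin) * fs), int((0.6 - tmin) * fs)
    return corrected[:, :, w0:w1].mean(axis=(1, 2))

scores = p300_score(epoch(eeg, fixation_samples, fs), fs)
print(image_ids[np.argsort(scores)[::-1]])       # images ranked by ERP evidence
```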

    Looking for a face in the crowd: Fixation-related potentials in an eye-movement visual search task

    Despite the compelling contribution of the study of event-related potentials (ERPs) and eye movements to cognitive neuroscience, these two approaches have largely evolved independently. We designed an eye-movement visual search paradigm that allowed us to concurrently record EEG and eye movements while subjects were asked to find a hidden target face in a crowded scene with distractor faces. Fixation event-related potentials (fERPs) to target and distractor stimuli showed the emergence of robust sensory components associated with the perception of stimuli and cognitive components associated with the detection of target faces. We compared those components with the ones obtained in a control task at fixation: qualitative similarities as well as differences in terms of scalp topography and latency emerged between the two. Using single-trial analyses, fixations to targets and distractors could be decoded from the EEG signals above chance level in 11 out of 12 subjects. Our results show that EEG signatures related to cognitive behavior develop across spatially unconstrained exploration of natural scenes and provide a first step towards understanding the mechanisms of target detection during natural search.

    Affiliations: Kaunitz, Lisandro N. (University of Leicester, United Kingdom); Kamienkowski, Juan Esteban (Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Física, Laboratorio de Neurociencia Integrativa, Argentina; Universidad Diego Portales, Chile); Varatharajah, Alexander (University of Leicester, United Kingdom); Sigman, Mariano (Consejo Nacional de Investigaciones Científicas y Técnicas, Instituto de Física del Sur, Argentina; Universidad de Buenos Aires, Laboratorio de Neurociencia Integrativa, Argentina); Quian Quiroga, Rodrigo (University of Leicester, United Kingdom); Ison, Matias Julian (University of Leicester, United Kingdom)
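    The single-trial decoding result suggests a simple pipeline: cut fixation-locked epochs, flatten them into feature vectors, and cross-validate a linear classifier. The sketch below, using scikit-learn's regularized LDA on placeholder data, illustrates the idea; the feature extraction and classifier choice are assumptions, not necessarily the authors' exact method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 150
X = rng.standard_normal((n_trials, n_channels, n_times))  # fERP epochs (placeholder)
y = rng.integers(0, 2, n_trials)                          # 1 = target fixation

X_flat = X.reshape(n_trials, -1)    # channels x times flattened into one vector
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # shrinkage LDA
acc = cross_val_score(clf, X_flat, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```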

    Atypical disengagement from faces and its modulation by the control of eye fixation in children with Autism Spectrum Disorder

    Using the gap overlap task, we investigated disengagement from faces and objects in children (9–17 years old) with and without autism spectrum disorder (ASD) and its neurophysiological correlates. In typically developing (TD) children, faces elicited a larger gap effect, an index of attentional engagement, and larger saccade-related event-related potentials (ERPs), compared to objects. In children with ASD, by contrast, neither the gap effect nor the ERPs differed between faces and objects. Follow-up experiments demonstrated that instructed fixation on the eyes induced a larger gap effect for faces in children with ASD, whereas instructed fixation on the mouth disrupted the larger gap effect in TD children. These results suggest a critical role of eye fixation in attentional engagement with faces in both groups.
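    The gap effect mentioned above is a simple quantity: the difference in mean saccadic reaction time (SRT) between overlap trials (the fixated stimulus stays on when the peripheral target appears) and gap trials (it is extinguished shortly before target onset). A minimal sketch with made-up latencies:

```python
import numpy as np

# Per-trial saccadic reaction times in ms (placeholder values)
srt_gap = np.array([152, 160, 148, 155])      # gap condition: fixation removed early
srt_overlap = np.array([230, 241, 225, 236])  # overlap condition: fixation stays on

# Larger gap effect = stronger attentional engagement by the fixated stimulus
gap_effect = srt_overlap.mean() - srt_gap.mean()
print(f"gap effect: {gap_effect:.1f} ms")
```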

    The cost of space independence in P300-BCI spellers.

    Background: Though non-invasive EEG-based brain-computer interfaces (BCIs) have been researched extensively over the last two decades, most designs require control of spatial attention and/or gaze on the part of the user. Methods: In healthy adults, we compared the offline performance of a space-independent P300-based BCI for spelling words using Rapid Serial Visual Presentation (RSVP) with that of the well-known space-dependent Matrix P300 speller. Results: EEG classifiability with the RSVP speller was as good as with the Matrix speller. While the Matrix speller's performance relied significantly on early, gaze-dependent visual evoked potentials (VEPs), the RSVP speller depended only on the space-independent P300b. However, there was a cost to true spatial independence: the RSVP speller was less efficient in terms of spelling speed. Conclusions: The advantage of space independence in the RSVP speller came with a marked reduction in spelling efficiency. Nevertheless, with key improvements to the RSVP design, truly space-independent BCIs could approach efficiencies on par with the Matrix speller. With sufficiently high letter spelling rates fused with predictive language modelling, they would be viable for applications with patients unable to direct overt visual gaze or covert attentional focus.
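    Spelling efficiency of this kind is commonly summarized as information transfer rate (ITR) using Wolpaw's formula, which combines the number of selectable symbols, selection accuracy, and selection rate. The abstract does not report the studies' actual numbers, so the values below are purely illustrative:

```python
from math import log2

def itr_bits_per_min(n_classes, accuracy, selections_per_min):
    """Wolpaw ITR: bits per selection times selection rate.
    Valid for 1/n_classes < accuracy < 1."""
    p, n = accuracy, n_classes
    bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * selections_per_min

# e.g. a 26-letter speller at 90% accuracy: a faster, Matrix-like selection
# rate versus a slower, RSVP-like one (rates assumed for illustration)
print(itr_bits_per_min(26, 0.90, 4))  # ~15.6 bits/min
print(itr_bits_per_min(26, 0.90, 2))  # ~7.8 bits/min
```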

    Electrophysiological Correlates of Visual Object Category Formation in a Prototype-Distortion Task

    In perceptual learning studies, participants engage in extensive training in the discrimination of visual stimuli in order to modulate perceptual performance. Much of the perceptual learning literature has examined training-induced reorganization of low-level representations in V1. However, much remains to be understood about how the adult brain (an expert in visual object categorization) extracts high-level visual objects from the environment and represents them categorically in the cortical visual hierarchy. Here, I used event-related potentials (ERPs) to investigate the neural mechanisms involved in object representation formation during a hybrid visual search and prototype-distortion category learning task. EEG was recorded continuously while participants performed the hybrid task, in which a peripheral array of four dot patterns was briefly flashed on a computer screen. In half of the trials, one of the four dot patterns contained the target, a distorted prototype pattern; the remaining trials contained only randomly generated patterns. Over hundreds of trials, participants learned to discriminate the target pattern through corrective feedback. A multilevel modeling approach was used to examine the predictive relationship between behavioral performance over time and two ERP components, the N1 and the N250. The N1 is an early sensory component related to changes in visual attention and discrimination (Hopf et al., 2002; Vogel & Luck, 2000). The N250 is a component related to category learning and expertise (Krigolson et al., 2009; Scott et al., 2008; Tanaka et al., 2006). Results indicated that while N1 amplitudes did not change with improved performance, N250 amplitudes became increasingly negative over time and were predictive of improvements in pattern detection accuracy.
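    The multilevel-modeling idea, observations nested within participants with ERP amplitude predicting accuracy, can be sketched with statsmodels' mixed linear model. The column names and simulated data below are hypothetical stand-ins for the thesis's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(20), 50),   # 20 participants, 50 blocks each
    "block": np.tile(np.arange(50), 20),       # time (training block index)
    "n250": rng.normal(-2.0, 1.0, 1000),       # microvolts; more negative = learned
    "accuracy": rng.uniform(0.4, 1.0, 1000),   # proportion correct per block
})

# Random intercept per subject; fixed effects of N250 amplitude and time
model = smf.mixedlm("accuracy ~ n250 + block", df, groups=df["subject"])
print(model.fit().summary())
```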

    Eye-movements in implicit artificial grammar learning

    Artificial grammar learning (AGL) has typically been probed with forced-choice behavioral tests (active tests). Recent attempts to probe the outcomes of learning (implicitly acquired knowledge) with eye-movement responses (passive tests) have shown null results. However, these latter studies did not test for sensitivity effects, for example, increased eye movements on a printed violation. In this study, we tested for sensitivity effects in AGL tests with (Experiment 1) and without (Experiment 2) concurrent active tests (preference and grammaticality classification) in an eye-tracking experiment. Eye movements discriminated between sequence types in passive tests, and more so in active tests. The eye-movement profile did not differ between preference and grammaticality classification, and it resembled the sensitivity effects commonly observed in natural syntax processing. Our findings show that the outcomes of implicit structured sequence learning can be characterized with eye tracking. More specifically, whole-trial measures (dwell time, number of fixations) showed robust AGL effects, whereas first-pass measures (first-fixation duration) did not. Furthermore, our findings strengthen the link between artificial and natural syntax processing, and they shed light on the factors that determine performance differences between preference and grammaticality classification tests.

    Funding: Max Planck Institute for Psycholinguistics; Donders Institute for Brain, Cognition and Behavior; Vetenskapsrådet; Swedish Dyslexia Foundation.
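    The whole-trial versus first-pass distinction maps onto standard eye-movement measures that are easy to compute from a fixation list. A sketch for a single trial and one interest region follows; the field names are assumptions:

```python
import pandas as pd

# Fixations on one trial (placeholder values); in_region flags fixations
# that landed inside the interest area (e.g. the printed violation)
fix = pd.DataFrame({
    "onset_ms":  [0, 180, 420, 700],
    "dur_ms":    [150, 200, 240, 180],
    "in_region": [False, True, True, True],
})

region = fix[fix["in_region"]]
dwell_time = region["dur_ms"].sum()        # whole-trial measure
n_fixations = len(region)                  # whole-trial measure
first_fix_dur = region.iloc[0]["dur_ms"]   # first-pass measure
print(dwell_time, n_fixations, first_fix_dur)
```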

    Analyzing P300 Distractors for Target Reconstruction

    P300-based brain-computer interfaces (BCIs) are often trained per user and per application. Training such models requires ground-truth knowledge of target and non-target stimulus categories, which imparts bias into the model. Additionally, not all non-targets are created equal: some may contain visual features that resemble targets or may otherwise be visually salient. Current research indicates that non-target distractors may elicit attenuated P300 responses depending on their perceptual similarity to the target category. To minimize this bias and enable a more nuanced analysis, we use a generalized BCI approach that is fit to neither user nor task. We do not seek to improve the overall accuracy of the BCI with our generalized approach; instead, we demonstrate its utility for identifying target-related image features. When combined with other intelligent agents, such as computer vision systems, the performance of the generalized model equals that of user-specific models, without any user-specific data.

    Comment: 4 pages, 3 figures
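    The combination step, fusing per-image evidence from the generalized (user- and task-agnostic) EEG model with a computer-vision score, might look like the following sketch. The normalization and equal weighting are illustrative assumptions, not the paper's reported method:

```python
import numpy as np

eeg_scores = np.array([0.2, 0.9, 0.1, 0.7])  # generalized P300 model output
cv_scores  = np.array([0.3, 0.8, 0.2, 0.9])  # e.g. CNN target-class probability

def fuse(eeg, cv, w=0.5):
    """Convex combination of z-normalized evidence streams."""
    z = lambda x: (x - x.mean()) / x.std()
    return w * z(eeg) + (1 - w) * z(cv)

ranking = np.argsort(fuse(eeg_scores, cv_scores))[::-1]
print(ranking)  # images ordered by fused target evidence
```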

    Testing the limits of contextual constraint: interactions with word frequency and parafoveal preview during fluent reading

    Contextual constraint is a key factor affecting a word's fixation duration and its likelihood of being fixated during reading. Previous research has generally demonstrated additive effects of predictability and frequency on fixation times. Studies examining the role of parafoveal preview have shown that greater preview benefit is obtained from more predictable and higher-frequency words than from less predictable and lower-frequency words. In two experiments, we investigated the effects of target word predictability, frequency, and parafoveal preview. A 3 (Predictability: low, medium, high) × 2 (Frequency: low, high) design was used, with Preview (valid, invalid) manipulated between experiments. With valid previews, we found main effects of Predictability and Frequency in both fixation time and fixation probability measures, including an interaction in early fixation measures. With invalid previews, we again found main effects of Predictability and Frequency in fixation times, but no evidence of an interaction; fixation probability showed a weak Predictability effect and a weak Predictability × Frequency interaction. Predictability interacted with Preview in early fixation time and probability measures. Our findings suggest that high levels of contextual constraint exert an early influence during lexical processing in reading. Results are discussed in terms of models of language processing and eye-movement control.
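    For readers unfamiliar with this kind of design, a repeated-measures ANOVA over the 3 × 2 within-subject cells is one conventional analysis of a fixation-time measure. The sketch below uses statsmodels' AnovaRM on simulated data; the column names and effect sizes are invented:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
# One mean gaze duration per subject per cell of the 3 x 2 design
rows = [
    {"subject": s, "predictability": p, "frequency": f,
     "gaze_ms": rng.normal(250 - 10 * pi - 15 * fi, 20)}
    for s in range(24)
    for pi, p in enumerate(["low", "medium", "high"])
    for fi, f in enumerate(["low", "high"])
]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="gaze_ms", subject="subject",
              within=["predictability", "frequency"]).fit()
print(res)  # F-tests for both main effects and their interaction
```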

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
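    The spoke manipulation is purely geometric: each rectangle is displaced ±1 degree of visual angle along the line from central fixation through its location. A short sketch, with coordinates in degrees relative to fixation and positions assumed for illustration:

```python
import numpy as np

# Rectangle centres in degrees of visual angle relative to central fixation
positions = np.array([[4.0, 0.0], [2.8, 2.8], [0.0, 4.0], [-2.8, 2.8]])
shifts = np.random.choice([-1.0, 1.0], size=len(positions))  # +/- 1 degree each

radii = np.linalg.norm(positions, axis=1)
units = positions / radii[:, None]             # unit vectors along each spoke
shifted = positions + units * shifts[:, None]  # moved along the spoke
print(shifted)
```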