1,888 research outputs found

    Atypical disengagement from faces and its modulation by the control of eye fixation in children with Autism Spectrum Disorder

    Using the gap-overlap task, we investigated disengagement from faces and objects, and its neurophysiological correlates, in children (9–17 years old) with and without autism spectrum disorder (ASD). In typically developing (TD) children, faces elicited a larger gap effect, an index of attentional engagement, and larger saccade-related event-related potentials (ERPs) than objects did. In children with ASD, by contrast, neither the gap effect nor the ERPs differed between faces and objects. Follow-up experiments demonstrated that instructed fixation on the eyes induced a larger gap effect for faces in children with ASD, whereas instructed fixation on the mouth disrupted the larger gap effect in TD children. These results suggest a critical role of eye fixation in attentional engagement with faces in both groups.
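For readers unfamiliar with the measure, the gap effect is conventionally computed as the difference in mean saccadic reaction time between overlap trials (the fixated stimulus stays on when the target appears) and gap trials (the fixated stimulus is extinguished shortly before target onset). A minimal sketch, with all latency values hypothetical:

```python
import numpy as np

def gap_effect(overlap_rt_ms, gap_rt_ms):
    """Gap effect: mean saccade latency in overlap trials minus gap trials.

    Larger values indicate stronger attentional engagement with the
    fixated stimulus (i.e., harder disengagement from it).
    """
    return np.mean(overlap_rt_ms) - np.mean(gap_rt_ms)

# Hypothetical saccade latencies (ms) with a face as the fixation stimulus
faces_overlap = np.array([310.0, 295.0, 330.0, 305.0])
faces_gap = np.array([240.0, 255.0, 230.0, 250.0])
print(gap_effect(faces_overlap, faces_gap))  # 66.25
```

Comparing this quantity between face and object fixation stimuli, per group, is what the study's key contrast amounts to.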

    A retinotopic attentional trace after saccadic eye movements: evidence from event-related potentials

    Saccadic eye movements are a major source of disruption to visual stability, yet we experience little of this disruption: we keep track of the same object across multiple saccades. It is generally assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Recent behavioral and ERP evidence suggests that visual attention is also remapped, but that it may still leave a residual retinotopic trace immediately after a saccade. The current study was designed to further examine electrophysiological evidence for such a retinotopic trace by recording ERPs elicited by stimuli presented immediately after a saccade (80-msec SOA). Participants were required to maintain attention at a specific location (and to memorize this location) while making a saccadic eye movement. Immediately after the saccade, a visual stimulus was briefly presented at the attended location (the same spatiotopic location), at a location that matched the attended location retinotopically (the same retinotopic location), or at one of two control locations. ERP data revealed an enhanced P1 amplitude for the stimulus presented at the retinotopically matched location, but a significant attenuation for probes presented at the original attended location. These results are consistent with the hypothesis that visuospatial attention lingers in retinotopic coordinates immediately following gaze shifts.
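The spatiotopic/retinotopic distinction used in the design reduces to simple vector arithmetic: after a saccade from fixation F0 to F1, the screen location that retinotopically matches an attended point P is the one with the same fixation-relative coordinates that P had before the saccade. A minimal sketch (all coordinates in degrees, values hypothetical):

```python
import numpy as np

def retinotopic_match(attended, fix_before, fix_after):
    """Screen location whose retinal (fixation-relative) coordinates after
    the saccade equal those of the attended location before it."""
    return attended - fix_before + fix_after

fix_before = np.array([0.0, 0.0])
fix_after = np.array([8.0, 0.0])   # rightward 8-degree saccade
attended = np.array([4.0, 3.0])    # attended (and memorized) location

# Spatiotopic match: the attended point itself, (4, 3).
# Retinotopic match: (12, 3), preserving the (4, 3) offset from fixation.
print(retinotopic_match(attended, fix_before, fix_after))
```

The enhanced P1 was found at the retinotopic match, i.e. at (12, 3) in this hypothetical geometry, not at the original attended point.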

    A Method for Detection and Classification of Events in Neural Activity

    We present a method for the real-time prediction of punctate events in neural activity, based on the time-frequency spectrum of the signal, applicable both to continuous processes, such as local field potentials (LFPs), and to spike trains. We test it on recordings of LFP and spiking activity acquired previously from the lateral intraparietal area (LIP) of macaque monkeys performing a memory-saccade task. In contrast to earlier work, in which trials with known start times were classified, our method detects and classifies trials directly from the data. It provides a means to quantitatively compare and contrast the content of LFP signals and spike trains: we find that detector performance based on the LFP matches performance based on spike rates. The method should find application in the development of neural prosthetics based on the LFP signal. Our approach uses a new feature vector, which we call the 2d cepstrum.
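The abstract does not define the 2d cepstrum precisely; one plausible construction, offered here purely as an illustrative assumption, applies a 2-D inverse FFT to the log of the time-frequency spectrum, by analogy with the classical 1-D cepstrum:

```python
import numpy as np
from scipy.signal import spectrogram

def cepstrum_2d(x, fs):
    """Feature-vector sketch: 2-D inverse FFT of the log spectrogram.

    NOTE: this is one plausible reading of a '2d cepstrum'; the abstract
    does not specify the exact construction the authors used.
    """
    f, t, S = spectrogram(x, fs=fs, nperseg=128, noverlap=64)
    log_S = np.log(S + 1e-12)        # small floor avoids log(0)
    C = np.fft.ifft2(log_S).real     # 2-D cepstrum over (frequency, time)
    return C.ravel()                 # flatten into a feature vector

rng = np.random.default_rng(0)
lfp = rng.standard_normal(2000)      # surrogate LFP trace at fs = 1 kHz
features = cepstrum_2d(lfp, fs=1000.0)
print(features.shape)
```

A detector/classifier would then be trained on such feature vectors computed in a sliding window over the ongoing recording.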

    Temporal structure in neuronal activity during working memory in Macaque parietal cortex

    A number of cortical structures are reported to have elevated single-unit firing rates sustained throughout the memory period of a working memory task. How the nervous system forms and maintains these memories is unknown, but reverberating neuronal network activity is thought to be important. We studied the temporal structure of single-unit (SU) activity and simultaneously recorded local field potential (LFP) activity from area LIP in the inferior parietal lobe of two awake macaques during a memory-saccade task. Using multitaper techniques for spectral analysis, which play an important role in obtaining the present results, we find elevations in spectral power in a 50–90 Hz (gamma) frequency band during the memory period in both SU and LFP activity. The activity is tuned to the direction of the saccade, providing evidence for temporal structure that codes for movement plans during working memory. We also find that SU and LFP activity are coherent during the memory period in the 50–90 Hz gamma band, with no consistent relation present during simple fixation. Finally, we find organized LFP activity in a 15–25 Hz frequency band that may be related to movement execution and preparatory aspects of the task. Neuronal activity could be used to control a neural prosthesis, but SU activity can be hard to isolate with cortical implants. As the LFP is easier to acquire than SU activity, our finding of rich temporal structure in LFP activity related to movement planning and execution may accelerate the development of this medical application. Comment: Originally submitted to the neuro-sys archive, which was never publicly announced (was 0005002).
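Multitaper spectral estimation, the technique the authors highlight, averages periodograms computed with orthogonal Slepian (DPSS) tapers, trading a little frequency resolution for a large reduction in estimator variance. A minimal sketch on a surrogate signal (all parameters illustrative, not the authors'):

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, NW=3.0):
    """Multitaper power spectral density estimate (sketch).

    Averages periodograms computed with K = 2*NW - 1 orthogonal
    Slepian (DPSS) tapers of time-bandwidth product NW.
    """
    n = len(x)
    K = int(2 * NW - 1)
    tapers = dpss(n, NW, Kmax=K)                      # shape (K, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs                   # average over tapers
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
# Surrogate memory-period trace: a 70 Hz (gamma-band) oscillation in noise
x = np.sin(2 * np.pi * 70 * t) + 0.5 * rng.standard_normal(t.size)
freqs, psd = multitaper_psd(x, fs)
band = (freqs >= 50) & (freqs <= 90)
print(freqs[band][np.argmax(psd[band])])              # peak near 70 Hz
```

In practice the same estimate would be computed per trial and task epoch (fixation, memory, movement) and compared across the 50–90 Hz band.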

    EEG-Based Quantification of Cortical Current Density and Dynamic Causal Connectivity Generalized across Subjects Performing BCI-Monitored Cognitive Tasks.

    Quantification of dynamic causal interactions among brain regions constitutes an important component of conducting research and developing applications in experimental and translational neuroscience. Furthermore, cortical networks with dynamic causal connectivity in brain-computer interface (BCI) applications offer a more comprehensive view of brain states implicated in behavior than do individual brain regions. However, models of cortical network dynamics are difficult to generalize across subjects because current electroencephalography (EEG) signal analysis techniques are limited in their ability to reliably localize sources across subjects. We propose an algorithmic and computational framework for identifying cortical networks across subjects in which dynamic causal connectivity is modeled among user-selected cortical regions of interest (ROIs). We demonstrate the strength of the proposed framework using a "reach/saccade to spatial target" cognitive task performed by 10 right-handed individuals. Modeling of causal cortical interactions was accomplished through measurement of cortical activity using EEG, application of independent component clustering to identify cortical ROIs as network nodes, estimation of cortical current density using cortically constrained low-resolution electromagnetic brain tomography (cLORETA), multivariate autoregressive (MVAR) modeling of representative cortical activity signals from each ROI, and quantification of the dynamic causal interaction among the identified ROIs using the short-time direct directed transfer function (SdDTF). The resulting cortical network and the computed causal dynamics among its nodes exhibited physiologically plausible behavior, consistent with past results reported in the literature. This physiological plausibility of the results strengthens the framework's applicability in reliably capturing complex brain functionality, which is required by applications such as diagnostics and BCI.
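The paper's causal measure is the SdDTF; as a simpler stand-in that conveys the core idea, the classic directed transfer function (DTF) can be computed directly from fitted MVAR coefficients. A sketch with a hypothetical two-channel MVAR(1) model:

```python
import numpy as np

def dtf(A, f, fs):
    """Classic directed transfer function from MVAR coefficients (sketch).

    A: array (p, n, n) of lag matrices for X[t] = sum_k A[k] X[t-k] + e.
    Returns the (n, n) DTF at frequency f: entry [i, j] is the normalized
    influence of channel j on channel i (rows are normalized to unit power).
    """
    p, n, _ = A.shape
    Af = np.eye(n, dtype=complex)
    for k in range(p):
        Af = Af - A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
    H = np.linalg.inv(Af)                 # spectral transfer matrix
    mag = np.abs(H)
    return mag / np.sqrt((mag ** 2).sum(axis=1, keepdims=True))

# Hypothetical 2-channel MVAR(1) in which channel 0 drives channel 1
A = np.array([[[0.5, 0.0],
               [0.4, 0.5]]])
D = dtf(A, f=10.0, fs=250.0)
print(D[1, 0] > D[0, 1])  # True: influence 0 -> 1 exceeds 1 -> 0
```

The SdDTF extends this idea by computing the quantity in short time windows (to track dynamics) and by using partial-coherence corrections to emphasize direct over mediated influences.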

    Parsing a mental program: fixation-related brain signatures of unitary operations and routines in natural visual search

    Visual search involves a sequence, or routine, of unitary operations (i.e. fixations) embedded in a larger global mental program. The process can indeed be seen as a program based on a while loop (while the target is not found), a conditional construct (whether or not the target is matched, based on specific recognition algorithms), and a decision-making step that determines the position of the next searched location based on existing evidence. Recent developments in our ability to co-register brain scalp potentials (EEG) during free eye movements have allowed researchers to investigate brain responses related to fixations (fixation-related potentials, fERPs), including the identification of sensory and cognitive local EEG components linked to individual fixations. However, the way in which the mental program guiding the search unfolds has not yet been investigated. We performed an EEG and eye-tracking co-registration experiment in which participants searched for a target face in natural images of crowds. Here we show how unitary steps of the program are encoded by specific local target-detection signatures, and how the positioning of each unitary operation within the global search program can be pinpointed by changes in EEG signal amplitude as well as signal power in different frequency bands. By simultaneously studying brain signatures of unitary operations and those occurring during the sequence of fixations, our study sheds light on how local and global properties are combined in implementing visual routines in natural tasks.
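The fERP analysis described rests on cutting the continuous EEG into epochs time-locked to fixation onsets reported by the co-registered eye tracker, then averaging. A minimal sketch on synthetic data (sampling rate, channel count, and onset times all hypothetical):

```python
import numpy as np

def fixation_epochs(eeg, fix_onsets, fs, tmin=-0.2, tmax=0.4):
    """Cut fixation-locked EEG epochs for fERP analysis (sketch).

    eeg: array (n_channels, n_samples); fix_onsets: fixation-onset
    sample indices from the co-registered eye tracker.
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = [eeg[:, s - pre:s + post]
              for s in fix_onsets
              if s - pre >= 0 and s + post <= eeg.shape[1]]
    return np.stack(epochs)               # (n_fixations, n_channels, n_times)

fs = 500.0
rng = np.random.default_rng(2)
eeg = rng.standard_normal((32, 5000))     # 32 channels, 10 s of surrogate EEG
fix_onsets = [400, 900, 1600, 2500, 4700] # hypothetical fixation onsets
ep = fixation_epochs(eeg, fix_onsets, fs)
ferp = ep.mean(axis=0)                    # fixation-related potential
print(ep.shape, ferp.shape)
```

In the actual paradigm, epochs would further be sorted by their position within the search routine (first fixation, intermediate, target fixation) before averaging.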

    (Micro)saccade-related potentials during face recognition: A study combining EEG, eye-tracking, and deconvolution modeling

    Under natural viewing conditions, complex stimuli such as human faces are typically looked at several times in succession, implying that their recognition may unfold across multiple eye fixations. Although electrophysiological (EEG) experiments on face recognition typically prohibit eye movements, participants still execute frequent (micro)saccades on the face, each of which generates its own visuocortical response. This finding raises the question of whether the fixation-related potentials (FRPs) evoked by these tiny gaze shifts also contain psychologically valuable information about face processing. Here we investigated this question by co-recording EEG and eye movements in an experiment with emotional faces (happy, angry, neutral). Deconvolution modeling was used to separate the stimulus-ERPs to face onset from the FRPs generated by subsequent microsaccade-induced refixations on the face. As expected, stimulus-ERPs exhibited typical emotion effects, with a larger early posterior negativity (EPN) for happy/angry compared to neutral faces. Eye-tracking confirmed that participants made small saccades within the face in 98% of the trials. However, while each saccade produced a strong response over visual areas, this response was unaffected by the face's emotional expression, both for the first and for subsequent (micro)saccades. This finding suggests that the face's affective content is rapidly evaluated after stimulus onset, leading to only a short-lived sensory enhancement by arousing stimuli that does not repeat itself during immediate refixations. Methodologically, our work demonstrates how eye-tracking and deconvolution modeling can be used to extract several brain responses from each EEG trial, providing insights into neural processing at different latencies after stimulus onset.
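Linear deconvolution of the kind described models the continuous EEG as a superposition of responses time-locked to different event types (here, stimulus onsets and later refixations) and separates them by least-squares regression. A toy single-channel sketch, with kernels, timings, and noise level all hypothetical:

```python
import numpy as np

def deconvolve_erps(y, onsets_by_type, n_lags):
    """Overlap-corrected response kernels via linear deconvolution (sketch).

    Builds a design matrix with one predictor per (event type, lag) and
    solves least squares, separating temporally overlapping responses
    such as stimulus-onset ERPs and later microsaccade-related FRPs.
    """
    n = len(y)
    cols = []
    for onsets in onsets_by_type:
        for lag in range(n_lags):
            col = np.zeros(n)
            idx = np.asarray(onsets) + lag
            col[idx[idx < n]] = 1.0
            cols.append(col)
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta.reshape(len(onsets_by_type), n_lags)

# Synthetic single-channel trace: two overlapping event types with
# known response kernels (all timings and kernels are hypothetical).
rng = np.random.default_rng(3)
n, n_lags = 2000, 50
k_stim = np.exp(-np.arange(n_lags) / 10.0)          # stimulus kernel
k_sacc = 0.5 * np.sin(np.arange(n_lags) / 5.0)      # saccade kernel
stim = np.arange(100, 1900, 200)                    # stimulus onsets
sacc = stim + rng.integers(20, 80, size=stim.size)  # overlapping refixations
y = np.zeros(n)
for s in stim:
    y[s:s + n_lags] += k_stim
for s in sacc:
    y[s:s + n_lags] += k_sacc
y += 0.02 * rng.standard_normal(n)
est = deconvolve_erps(y, [stim, sacc], n_lags)      # est[0] ~ k_stim, est[1] ~ k_sacc
```

Because the saccade-to-stimulus delays vary across trials, the regression can disentangle the two kernels even though their responses overlap in time, which is exactly what simple averaging cannot do.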