99 research outputs found

    An Internet-Based Real-Time Audiovisual Link for Dual MEG Recordings

    Get PDF
    Hyperscanning. Most neuroimaging studies of human social cognition have focused on the brain activity of single subjects. More recently, "two-person neuroimaging" has been introduced, with simultaneous recording of brain signals from two subjects engaged in social interaction. Such simultaneous "hyperscanning" recordings have already been carried out with a range of neuroimaging modalities, including functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and functional near-infrared spectroscopy (fNIRS).
    Dual MEG Setup. We have recently developed a setup for simultaneous magnetoencephalographic (MEG) recordings of two subjects who communicate in real time over an audio link between two geographically separated MEG laboratories. Here we present an extended version of the setup, in which we have added a video connection and replaced the telephone-landline-based link with an Internet connection. The setup transmits video and audio streams between the sites with a one-way communication latency of about 130 ms. Our software for reproducing the setup is publicly available.
    Validation. We demonstrate that the audiovisual Internet-based link can mediate real-time interaction between two subjects who try to mirror each other's hand movements, which they can see via the video link. All nine pairs were able to synchronize their behavior. In addition to the video, we captured the subjects' movements with accelerometers attached to their index fingers; from these signals we determined that the average synchronization accuracy was 215 ms. In one subject pair we demonstrate inter-subject coherence patterns of the MEG signals that peak over the sensorimotor areas contralateral to the hand used in the task.
    Peer reviewed.
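    A behavioral lag such as the reported 215-ms synchronization accuracy can be estimated from paired accelerometer traces via the peak of their cross-correlation. The sketch below is illustrative only (NumPy; the signals, sampling rate, and the 215-ms offset are invented, not the study's data or analysis code):

```python
import numpy as np

def sync_lag_ms(acc_a, acc_b, fs):
    """Lag of acc_b relative to acc_a in milliseconds (positive = b lags a),
    estimated from the peak of the full cross-correlation."""
    a = acc_a - acc_a.mean()
    b = acc_b - acc_b.mean()
    xcorr = np.correlate(a, b, mode="full")
    lag_samples = (len(acc_b) - 1) - int(np.argmax(xcorr))
    return 1000.0 * lag_samples / fs

fs = 1000  # Hz, hypothetical accelerometer sampling rate
t = np.arange(0, 5, 1.0 / fs)
a = np.exp(-0.5 * ((t - 2.000) / 0.05) ** 2)  # movement burst at 2.000 s
b = np.exp(-0.5 * ((t - 2.215) / 0.05) ** 2)  # mirrored burst 215 ms later
print(sync_lag_ms(a, b, fs))  # ~215.0
```

    In practice one would band-pass the accelerometer signals and average lags over many movement cycles, but the cross-correlation peak is the core of the estimate.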

    Voice-selective prediction alterations in nonclinical voice hearers

    Get PDF
    Auditory verbal hallucinations (AVH) are a cardinal symptom of psychosis but also occur in 6-13% of the general population. Voice perception is thought to engage an internal forward model that generates predictions, preparing the auditory cortex for upcoming sensory feedback. Impaired processing of sensory feedback during vocalization seems to underlie the experience of AVH in psychosis, but whether this is the case in nonclinical voice hearers remains unclear. The current study used electroencephalography (EEG) to investigate whether and how hallucination predisposition (HP) modulates the internal forward model in response to self-initiated tones and self-voices. Participants varying in HP (based on the Launay-Slade Hallucination Scale) listened to self-generated and externally generated tones or self-voices. HP did not affect responses to self- vs. externally generated tones. However, HP altered the processing of the self-generated voice: increased HP was associated with increased pre-stimulus alpha power and an increased N1 response to the self-generated voice. HP did not affect the P2 response to voices. These findings confirm that both prediction and the comparison of predicted and perceived feedback to a self-generated voice are altered in individuals with AVH predisposition. Specific alterations in the processing of self-generated vocalizations may constitute a core feature of the psychosis continuum.
    The authors gratefully acknowledge all the participants who collaborated in the study, and particularly Dr. Franziska Knolle for feedback on stimulus generation, Carla Barros for help with scripts for EEG time-frequency analysis, and Dr. Celia Moreira for her advice on mixed linear models. This work was supported by the Portuguese national science foundation (FCT; grant numbers PTDC/PSI-PCL/116626/2010, IF/00334/2012, PTDC/MHCPCN/0101/2014) awarded to A.P.P.
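    Pre-stimulus alpha power, one of the measures reported to scale with HP, is conventionally computed as band-limited spectral power in a window preceding stimulus onset. A minimal sketch (NumPy; the band limits, sampling rate, and toy signals below are illustrative assumptions, not the study's pipeline):

```python
import numpy as np

def alpha_power(epoch, fs, band=(8.0, 12.0)):
    """Mean spectral power of a 1-D pre-stimulus epoch within the alpha band."""
    freqs = np.fft.rfftfreq(len(epoch), 1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch - epoch.mean())) ** 2 / len(epoch)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[in_band].mean())

fs = 500                                  # Hz (illustrative)
t = np.arange(0, 1, 1.0 / fs)             # 1-s pre-stimulus window
alpha_rich = np.sin(2 * np.pi * 10 * t)   # strong 10-Hz oscillation
alpha_poor = np.sin(2 * np.pi * 40 * t)   # gamma-range oscillation
```

    A full time-frequency analysis (e.g., wavelets, as commonly used for EEG) would resolve how this power evolves up to voice onset; the band-power estimate above is the simplest single-window version.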

    Active inference, sensory attenuation and illusions.

    Get PDF
    Active inference provides a simple and neurobiologically plausible account of how action and perception are coupled in producing (Bayes) optimal behaviour. This can be seen most easily as minimising prediction error: we can either change our predictions to explain sensory input through perception, or we can actively change the sensory input to fulfil our predictions. In active inference, action is mediated by classical reflex arcs that minimise the proprioceptive prediction error created by descending proprioceptive predictions. However, this creates a conflict between action and perception, in that self-generated movements require predictions to override the sensory evidence that one is not actually moving. Ignoring sensory evidence, though, means that externally generated sensations will not be perceived; conversely, attending to (proprioceptive and somatosensory) sensations enables the detection of externally generated events but precludes the generation of actions. This conflict can be resolved by attenuating the precision of sensory evidence during movement or, equivalently, attending away from the consequences of self-made acts. We propose that this Bayes-optimal withdrawal of precise sensory evidence during movement is the cause of psychophysical sensory attenuation. Furthermore, it explains the force-matching illusion and reproduces empirical results almost exactly. Finally, if attenuation is removed, the force-matching illusion disappears and false (delusional) inferences about agency emerge. This is important, given the negative correlation between sensory attenuation and delusional beliefs in normal subjects, and the reduction in the magnitude of the illusion in schizophrenia. Active inference therefore links the neuromodulatory optimisation of precision to sensory attenuation and illusory phenomena during the attribution of agency in normal subjects. It also provides a functional account of deficits in syndromes characterised by false inference and impaired movement, like schizophrenia and Parkinsonism, syndromes that implicate abnormal modulatory neurotransmission.
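    The attenuation mechanism can be illustrated with a one-dimensional precision-weighted (Bayesian) fusion of a prediction and a sensory sample: lowering sensory precision during movement pulls the percept toward the prior, shrinking the perceived magnitude of self-generated input, which is the qualitative signature of the force-matching illusion. This is a deliberate caricature, not the paper's hierarchical generative model; all numbers are invented:

```python
def percept(prior_mu, prior_pi, sense_x, sense_pi):
    """Precision-weighted fusion of a prediction (prior) and a sensory sample:
    the posterior mean is the precision-weighted average of the two."""
    return (prior_pi * prior_mu + sense_pi * sense_x) / (prior_pi + sense_pi)

# the same 2.0-unit self-generated force, perceived against a prior of 0
full = percept(0.0, 1.0, 2.0, 4.0)   # precise sensation: percept tracks sensation
atten = percept(0.0, 1.0, 2.0, 0.5)  # precision attenuated during movement
print(full, atten)                   # the attenuated percept is smaller
```

    In a force-matching setting, a subject reproducing the attenuated percept with an external reference would therefore press harder than the original force, reproducing the illusion; removing the attenuation (raising sense_pi back up) removes that mismatch.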

    The effects of stimulus complexity on the preattentive processing of self-generated and nonself voices: an ERP study

    Get PDF
    The ability to differentiate one's own voice from the voice of somebody else plays a critical role in successful verbal self-monitoring and in communication. However, most existing studies have focused only on the sensory correlates of self-generated voice processing, whereas the effects of attentional demands and stimulus complexity on self-generated voice processing remain largely unknown. In this study, we investigated the effects of stimulus complexity on the preattentive processing of self and nonself voice stimuli. Event-related potentials (ERPs) were recorded from 17 healthy males who watched a silent movie while ignoring prerecorded self-generated (SGV) and nonself (NSV) voice stimuli, consisting of a vocalization (vocalization category condition: VCC) or of a disyllabic word (word category condition: WCC). All voice stimuli were presented as standard and deviant events in four distinct oddball sequences. The mismatch negativity (MMN) ERP component peaked earlier for NSV than for SGV stimuli. Moreover, when compared with SGV stimuli, the P3a amplitude was increased for NSV stimuli in the VCC only, whereas in the WCC no significant differences were found between the two voice types. These findings suggest differences in the time course of the automatic detection of a change in voice identity. In addition, they suggest that stimulus complexity modulates the magnitude of the orienting response to SGV and NSV stimuli, extending previous findings on self-voice processing.
    This work was supported by Grant Numbers IF/00334/2012, PTDC/PSI-PCL/116626/2010, and PTDC/MHN-PCN/3606/2012, funded by the Fundação para a Ciência e a Tecnologia (FCT, Portugal) and the Fundo Europeu de Desenvolvimento Regional through the European programs Quadro de Referência Estratégico Nacional and Programa Operacional Factores de Competitividade, awarded to A.P.P., and by FCT Doctoral Grant Number SFRH/BD/77681/2011, awarded to T.C.
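    The MMN in such oddball designs is conventionally computed as the deviant-minus-standard difference wave, with its peak taken as the most negative deflection. The sketch below uses synthetic waveforms chosen only to show an earlier NSV than SGV peak, schematically mirroring the reported latency effect (NumPy; latencies and shapes are invented, not the study's data):

```python
import numpy as np

def mmn_peak_ms(standard_erp, deviant_erp, times_ms):
    """Deviant-minus-standard difference wave and the latency (ms) of its
    most negative point, taken as the MMN peak."""
    diff = deviant_erp - standard_erp
    return diff, float(times_ms[np.argmin(diff)])

times = np.arange(0, 400.0)   # ms relative to stimulus onset
standard = np.zeros_like(times)
# toy deviant responses: NSV change detected earlier than SGV
nsv = -np.exp(-0.5 * ((times - 150) / 20) ** 2)
sgv = -np.exp(-0.5 * ((times - 180) / 20) ** 2)
_, nsv_peak = mmn_peak_ms(standard, nsv, times)
_, sgv_peak = mmn_peak_ms(standard, sgv, times)
```

    Real analyses average hundreds of trials per condition and typically restrict the peak search to an a-priori latency window, but the difference-wave logic is the same.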

    Distract yourself: prediction of salient distractors by own actions and external cues.

    Get PDF
    Distracting sensory events can capture attention, interfering with performance of the task at hand. We asked: is our attention captured by such events if we cause them ourselves? To examine this, we employed a visual search task with an additional salient singleton distractor, where the distractor was predictable either by the participant's own (motor) action or by an endogenous cue; accordingly, the task was designed to isolate the influence of motor and non-motor predictive processes. We found both types of prediction, cue- and action-based, to attenuate the interference of the distractor, which is at odds with the "attentional white bear" hypothesis, according to which prediction of distracting stimuli mandatorily directs attention towards them. Further, there was no difference between the two types of prediction. We suggest that this pattern of results may be better explained by theories postulating general predictive mechanisms, such as the framework of predictive processing, than by accounts proposing a special role for action-effect prediction, such as theories based on optimal motor control. Rather than permitting a definitive decision between competing theories, however, our study highlights a number of open questions, to be answered by these theories, regarding how exogenous attention is influenced by predictions deriving from the environment versus our own actions.
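    Distractor interference in such singleton designs is typically quantified as the reaction-time cost of distractor-present relative to distractor-absent trials, compared across prediction conditions. A minimal sketch of that comparison (the RT values are invented; the attenuation pattern mirrors the reported result only schematically):

```python
import numpy as np

def interference_ms(rt_distractor, rt_absent):
    """Distractor interference: mean RT cost relative to distractor-absent trials."""
    return float(np.mean(rt_distractor) - np.mean(rt_absent))

# hypothetical per-block mean search RTs (ms)
absent = np.array([620.0, 640.0, 610.0])        # no distractor
unpredicted = np.array([700.0, 690.0, 710.0])   # unpredicted distractor
predicted = np.array([650.0, 660.0, 645.0])     # action- or cue-predicted distractor

cost_unpredicted = interference_ms(unpredicted, absent)
cost_predicted = interference_ms(predicted, absent)
```

    A residual positive cost in the predicted condition, rather than a reversal, is what distinguishes attenuated capture from the stronger "white bear" claim that prediction draws attention to the distractor.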

    Self-generated sounds of locomotion and ventilation and the evolution of human rhythmic abilities

    Get PDF

    The Changing Face of the Epidemiology of Tuberculosis due to Molecular Strain Typing: A Review

    Full text link

    Processing of self-initiated sounds: Evidence from EEG studies

    No full text