
    Simply longer is not better: reversal of theta burst after-effect with prolonged stimulation

    Of all current rTMS protocols, theta burst stimulation (TBS) is considered the most efficient in terms of the number of pulses and the stimulation intensity required. The aim of this study was to investigate the effects of inhibitory and excitatory TBS protocols on motor cortex excitability when the duration of stimulation was doubled. Fourteen healthy volunteers were tested under four conditions: intermittent theta burst stimulation (iTBS), continuous theta burst stimulation (cTBS), prolonged intermittent theta burst stimulation (ProiTBS) and prolonged continuous theta burst stimulation (ProcTBS). The prolonged paradigms were twice as long as the conventional TBS protocols. The conventionally facilitatory iTBS became inhibitory when applied for twice as long, while the normally inhibitory cTBS became facilitatory when the stimulation duration was doubled. Our results show that TBS-induced plasticity cannot be deliberately enhanced simply by prolonging TBS protocols; instead, when stimulation lasts too long, the after-effects reverse. This finding complements observations at the short end of the stimulation duration range, where conventional cTBS was shown to be excitatory during its first half and to switch to inhibition only after the full-length protocol. It is relevant for clinical applications, where there is an ongoing need for further protocol improvement.
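    The four conditions differ only in pulse timing. Below is a minimal Python sketch of that timing, assuming the standard 600-pulse TBS parameters (3-pulse bursts at 50 Hz repeated at 5 Hz; iTBS delivered as 2 s trains every 10 s) and treating the prolonged variants as straightforward doublings; the study's exact stimulator settings are not stated in the abstract.

```python
# Sketch of TBS pulse timing under standard 600-pulse parameters (assumed).

def ctbs_pulses(n_bursts=200):
    """Continuous TBS: 3-pulse bursts (50 Hz) repeated every 200 ms (5 Hz)."""
    return [burst * 0.2 + pulse * 0.02
            for burst in range(n_bursts) for pulse in range(3)]

def itbs_pulses(n_trains=20):
    """Intermittent TBS: 2 s trains (10 bursts each) repeated every 10 s."""
    return [train * 10.0 + burst * 0.2 + pulse * 0.02
            for train in range(n_trains)
            for burst in range(10) for pulse in range(3)]

for name, pulses in [("cTBS", ctbs_pulses()), ("ProcTBS", ctbs_pulses(400)),
                     ("iTBS", itbs_pulses()), ("ProiTBS", itbs_pulses(40))]:
    print(f"{name}: {len(pulses)} pulses over {pulses[-1]:.1f} s")
```

    Doubling the burst or train count doubles both the pulse count and the session duration, which is the only parameter the prolonged protocols change.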

    Neural Basis of Self and Other Representation in Autism: An fMRI Study of Self-Face Recognition

    Autism is a developmental disorder characterized by decreased interest and engagement in social interactions and by an enhanced self-focus. While previous theoretical approaches to understanding autism have emphasized social impairments and altered interpersonal interactions, there has been a recent shift towards understanding the nature of self-representation in individuals with autism spectrum disorders (ASD). Still, the neural mechanisms subserving self-representation in ASD are relatively unexplored. We used event-related fMRI to investigate brain responsiveness to images of the subjects' own face and to faces of others. Children with ASD and typically developing (TD) children viewed randomly presented digital morphs between their own face and a gender-matched other face, and made "self/other" judgments. Both groups of children activated a right premotor/prefrontal system when identifying images containing a greater percentage of the self face. However, while TD children showed activation of this system during both self- and other-processing, children with ASD recruited this system only while viewing images containing mostly their own face. This functional dissociation between the representation of self and others points to a potential neural substrate for the characteristic self-focus and decreased social understanding exhibited by these individuals, and suggests that individuals with ASD lack the shared neural representations for self and others that TD children and adults possess and may use to understand others.
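    As a purely hypothetical illustration of the behavioral side of such a morph task (not the authors' analysis), the proportion of "self" judgments can be summarized with a logistic psychometric function fitted across morph levels; the response values below are invented for illustration.

```python
# Hypothetical psychometric fit for a self/other morph task; dummy data.
import numpy as np
from scipy.optimize import curve_fit

morph_pct = np.array([0, 20, 40, 60, 80, 100])           # % own face in morph
p_self = np.array([0.02, 0.10, 0.35, 0.70, 0.93, 0.99])  # invented responses

def logistic(x, x0, k):
    """P("self") as a function of morph level x."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, morph_pct, p_self, p0=[50, 0.1])
print(f"point of subjective equality: {x0:.1f}% self; slope: {k:.3f}")
```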

    How Bodies and Voices Interact in Early Emotion Perception

    Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously, yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development of their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration, following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli; this was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. For emotional stimuli, the difference in suppression between audiovisual and auditory-only stimuli was larger under high than under low noise; no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests that integration is modulated by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
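    For readers unfamiliar with the measure, below is a minimal sketch of a beta-band suppression computation (band-pass filter plus Hilbert envelope, expressed relative to a pre-stimulus baseline); the sampling rate, filter order, and baseline window are illustrative assumptions rather than the authors' settings, and the signal is random stand-in data.

```python
# Sketch of a beta-band (15-25 Hz) suppression measure; assumed parameters.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                  # sampling rate (Hz), assumed
t = np.arange(-0.5, 1.0, 1 / fs)            # epoch: -500 ms to +1000 ms
eeg = np.random.randn(t.size)               # random stand-in for one epoch

# Band-pass 15-25 Hz and extract the amplitude envelope
b, a = butter(4, [15 / (fs / 2), 25 / (fs / 2)], btype="band")
envelope = np.abs(hilbert(filtfilt(b, a, eeg)))

# Express envelope change relative to the pre-stimulus baseline
baseline = envelope[(t >= -0.4) & (t <= -0.1)].mean()
suppression = (envelope - baseline) / baseline * 100  # percent change

window = (t >= 0.2) & (t <= 0.4)            # 200-400 ms post-vocalization
print(f"mean beta change 200-400 ms: {suppression[window].mean():.1f}%")
```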

    Telerobotic Pointing Gestures Shape Human Spatial Cognition

    This paper explored whether human beings can understand gestures produced by telepresence robots and, if so, derive the meaning conveyed by telerobotic gestures when processing spatial information. We conducted two experiments over Skype. Participants were presented with a robotic interface with arms that were teleoperated by an experimenter. The robot could point to virtual locations representing certain entities. In Experiment 1, the experimenter described the spatial locations of fictitious objects sequentially in two conditions: a speech condition (SO, verbal descriptions clearly indicated the spatial layout) and a speech-and-gesture condition (SR, verbal descriptions were ambiguous but accompanied by robotic pointing gestures). Participants were then asked to recall the objects' spatial locations. We found that the number of spatial locations recalled in the SR condition was on par with that in the SO condition, suggesting that telerobotic pointing gestures compensated for ambiguous speech during the processing of spatial information. In Experiment 2, the experimenter described spatial locations non-sequentially in the SR and SO conditions. Surprisingly, the number of spatial locations recalled in the SR condition was even higher than that in the SO condition, suggesting that telerobotic pointing gestures were more powerful than speech in conveying spatial information when the information was presented in an unpredictable order. The findings provide evidence that human beings are able to comprehend telerobotic gestures and, importantly, to integrate these gestures with co-occurring speech. This work promotes engaging remote collaboration among humans through a robot intermediary.
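    The recall comparison in Experiment 1 is a standard within-subject contrast; a hypothetical sketch of it (with invented recall counts, not the study's data) might look like the following.

```python
# Hypothetical within-subject comparison of SO vs. SR recall; dummy data.
import numpy as np
from scipy.stats import ttest_rel

recall_SO = np.array([6, 5, 7, 4, 6, 5, 7, 6])  # items recalled, speech only
recall_SR = np.array([6, 6, 7, 5, 6, 6, 7, 6])  # items recalled, speech+gesture

t, p = ttest_rel(recall_SR, recall_SO)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```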