
    TMS SMART – scalp mapping of annoyance ratings and twitches caused by Transcranial Magnetic Stimulation

    Background: The magnetic pulse generated during transcranial magnetic stimulation (TMS) also stimulates cutaneous nerves and muscle fibres; the most commonly reported side effects are muscle twitches and sometimes painful sensations. These sensations affect behaviour during experimental tasks, presenting a potential confound for ‘online’ TMS studies.
    New method: Our objective was to systematically map the degree of disturbance (ratings of annoyance, pain, and muscle twitches) caused by TMS at 43 locations across the scalp. Ten participants provided ratings whilst completing a choice reaction time task, and ten different participants provided ratings whilst completing a ‘flanker’ reaction time task.
    Results: TMS over frontal and inferior regions produced the highest ratings of annoyance, pain, and muscle twitches. We then modelled the difference in reaction times (RTs) under TMS as a function of scalp location and subjective ratings. Frontal and inferior scalp locations showed the greatest cost to RTs under TMS (i.e., slowing), with midline sites showing no or minimal slowing. Increases in subjective ratings of disturbance predicted longer RTs under TMS. Critically, ratings were a better predictor of the cost of TMS than scalp location or scalp-to-cortex distance. The more difficult ‘flanker’ task showed a greater effect of subjective disturbance.
    Comparison with existing methods: We provide the data as an online resource (www.tms-smart.info) so that researchers can select control sites that account for the level of general interference in task performance caused by online single-pulse TMS.
    Conclusions: The peripheral sensations and discomfort caused by TMS pulses significantly and systematically influence RTs during single-pulse, online TMS experiments. The raw data are available at www.tms-smart.info and https://osf.io/f49vn.
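    The headline result, that subjective ratings outperform scalp location and scalp-to-cortex distance as predictors of the RT cost of TMS, is a regression claim. The abstract does not include analysis code; the sketch below is a minimal, hypothetical illustration of how such a comparison could be set up as a mixed-effects model on synthetic data. All variable names, values, and the model specification are invented for illustration and are not the authors' pipeline.

        # Hypothetical sketch: comparing a subjective disturbance rating vs.
        # scalp-to-cortex distance as predictors of per-site RT cost under TMS.
        # Synthetic data only; not the TMS SMART analysis code.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n_participants, n_sites = 10, 43
        n = n_participants * n_sites

        df = pd.DataFrame({
            "participant": np.repeat(np.arange(n_participants), n_sites),
            "site": np.tile(np.arange(n_sites), n_participants),
            "annoyance": rng.uniform(0, 10, n),      # 0-10 disturbance rating
            "distance_mm": rng.uniform(10, 25, n),   # scalp-to-cortex distance
        })
        # Simulate an RT cost (TMS minus no-TMS, in ms) driven mainly by the
        # subjective rating, mirroring the pattern the abstract reports.
        df["rt_cost"] = (5 * df["annoyance"]
                         + 0.5 * df["distance_mm"]
                         + rng.normal(0, 15, n))

        # Random intercept per participant; fixed effects for both predictors,
        # so their contributions can be compared within one model.
        model = smf.mixedlm("rt_cost ~ annoyance + distance_mm",
                            data=df, groups=df["participant"]).fit()
        print(model.summary())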

    Motion seen and understood: interactions between language comprehension and visual perception.

    Embodied theories of cognition state that the body plays a central role in cognitive representation. Under this description, semantic representations, which constitute the meaning of words and sentences, are simulations of real experience that directly engage sensory and motor systems. This predicts interactions between comprehension and perception at low levels, since both engage the same systems; however, the majority of evidence comes from picture judgements or visuo-spatial attention, so it is not clear which visual processes are implicated. In addition, most of the work has concentrated on sentences rather than single words, although theories predict that the semantics of both should be grounded in simulation.
    This investigation sought to systematically explore these interactions, using verbs that refer to upwards or downwards motion and sentences derived from the same set of verbs. As well as looking at visuo-spatial attention, we employed tasks routinely used in visual psychophysics that access low levels of motion processing. In this way we were able to separate different levels of visual processing and explore whether interactions between comprehension and perception were present when low-level visual processes were assessed or manipulated.
    The results from this investigation show that: (1) there are bilateral interactions between low-level visual processes and semantic content (lexical and sentential); (2) interactions are automatic, arising whenever linguistic and visual stimuli are presented in close temporal contiguity; (3) interactions are subject to processes within the visual system, such as perceptual learning and suppression; and (4) the precise content of semantic representations dictates which visual processes are implicated in interactions.
    The data are best explained by a close connection between semantic representations and perceptual systems: when information from both is available, it is automatically integrated. However, the data do not support direct, unmediated involvement of the visual system in the semantic representation of motion events. The results suggest a complex relationship between semantic representation and sensory-motor systems that can be explained by combining task-specific processes with either strong or weak embodiment.