14 research outputs found

    An empirically driven guide on using Bayes factors for M/EEG decoding

    Get PDF
    Bayes factors can be used to provide quantifiable evidence for contrasting hypotheses and have thus become increasingly popular in cognitive science. However, Bayes factors are rarely used to statistically assess the results of neuroimaging experiments. Here, we provide an empirically driven guide on implementing Bayes factors for time-series neural decoding results. Using real and simulated magnetoencephalography (MEG) data, we examine how parameters such as the shape of the prior and data size affect Bayes factors. Additionally, we discuss the benefits Bayes factors bring to analysing multivariate pattern analysis data and show how Bayes factors can be used instead of, or in addition to, traditional frequentist approaches
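    As a rough illustration of the statistic discussed above, the snippet below implements the standard one-sample JZS Bayes factor (Rouder et al., 2009), the kind of test that could be applied to group-level decoding accuracies against chance. The function name, the default prior scale r = 0.707, and the example values are illustrative choices, not the authors' actual pipeline.

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=0.707):
    """One-sample JZS Bayes factor for a t statistic from n observations.

    BF10 > 1 favours H1 (e.g. decoding above chance); BF10 < 1 favours H0.
    The prior scale r controls the expected effect size under H1.
    """
    nu = n - 1
    # Likelihood of the data under H0 (effect size = 0), up to a shared constant
    m0 = (1 + t ** 2 / nu) ** (-(nu + 1) / 2)

    # Under H1, average the likelihood over g ~ inverse-gamma(1/2, r^2/2),
    # which induces a Cauchy(0, r) prior on the standardised effect size
    def integrand(g):
        return ((1 + n * g) ** -0.5
                * (1 + t ** 2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * (r / np.sqrt(2 * np.pi)) * g ** -1.5 * np.exp(-r ** 2 / (2 * g)))

    m1, _ = integrate.quad(integrand, 0, np.inf)
    return m1 / m0
```

    Varying r here is a quick way to probe the kind of prior-shape sensitivity the abstract describes.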

    Temporal dissociation of neural activity underlying synesthetic and perceptual colors

    Get PDF
    Grapheme-color synesthetes experience color when seeing achromatic symbols. We examined whether similar neural mechanisms underlie color perception and synesthetic colors using magnetoencephalography. Classification models trained on neural activity from viewing colored stimuli could distinguish synesthetic color evoked by achromatic symbols after a delay of ∼100 ms. Our results provide an objective neural signature for synesthetic experience and temporal evidence consistent with higher-level processing in synesthesia

    The representational dynamics of food in the human brain

    No full text

    Vicarious touch: Overlapping neural patterns between seeing and feeling touch

    No full text
    Simulation theories propose that vicarious touch arises when seeing someone else being touched triggers corresponding representations of being touched. Prior electroencephalography (EEG) findings show that seeing touch modulates both early and late somatosensory responses (measured with or without direct tactile stimulation). Functional Magnetic Resonance Imaging (fMRI) studies have shown that seeing touch increases somatosensory cortical activation. These findings have been taken to suggest that when we see someone being touched, we simulate that touch in our sensory systems. The somatosensory overlap when seeing and feeling touch differs between individuals, potentially underpinning variation in vicarious touch experiences. Increases in amplitude (EEG) or cerebral blood flow response (fMRI), however, are limited in that they cannot test for the information contained in the neural signal: seeing touch may not activate the same information as feeling touch. Here, we use time-resolved multivariate pattern analysis on whole-brain EEG data from people with and without vicarious touch experiences to test whether seen touch evokes overlapping neural representations with the first-hand experience of touch. Participants felt touch to the fingers (tactile trials) or watched carefully matched videos of touch to another person's fingers (visual trials). In both groups, EEG was sufficiently sensitive to allow decoding of touch location (little finger vs. thumb) on tactile trials. However, only in individuals who reported feeling touch when watching videos of touch could a classifier trained on tactile trials distinguish touch location on visual trials. This demonstrates that, for people who experience vicarious touch, there is overlap in the information about touch location held in the neural patterns when seeing and feeling touch. The timecourse of this overlap implies that seeing touch evokes similar representations to later stages of tactile processing. 
Therefore, while simulation may underlie vicarious tactile sensations, our findings suggest this involves an abstracted representation of directly felt touch
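    The cross-decoding logic described above (train a classifier on tactile trials, test it on visual trials at each timepoint) can be sketched as follows. This uses synthetic data in place of real EEG; the channel count, signal timings, and amplitudes are invented for illustration only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 32, 50

# Synthetic "EEG": touch location (0 = thumb, 1 = little finger) is encoded as
# a spatial pattern, appearing at timepoint 20 on tactile trials and later,
# at half the amplitude, on visual trials
y = rng.integers(0, 2, n_trials)
pattern = rng.normal(size=n_channels)
sign = (2 * y - 1)[:, None, None]
tactile = rng.normal(size=(n_trials, n_channels, n_times))
visual = rng.normal(size=(n_trials, n_channels, n_times))
tactile[:, :, 20:] += sign * pattern[None, :, None]
visual[:, :, 30:] += 0.5 * sign * pattern[None, :, None]

# Cross-decoding: train on tactile trials, test on visual trials, per timepoint
acc = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(tactile[:, :, t], y)
    acc[t] = clf.score(visual[:, :, t], y)
```

    Above-chance accuracy on visual trials indicates that the visual response carries the same touch-location information the classifier learned from felt touch.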

    Static versus dynamic medical images: the role of cue utilization in diagnostic performance

    No full text
    Echocardiographers can detect abnormalities accurately and rapidly from dynamic images. This is likely due to the application of cue-based associations resident in memory, a process known as cue utilization. This study investigated whether cue utilization is associated with the ability to apply within-domain capabilities (dynamic) to more degraded images (static). Fifty-eight echocardiographers completed the echocardiography edition of the Expert Intensive Skills Evaluation 2.0 (EXPERTise 2.0) to establish behavioral indicators of within-domain cue utilization. They also completed an abnormality detection and categorization task that comprised briefly presented static and moving images (50% abnormal). Behaviors consistent with higher cue utilization were associated with greater accuracy in detecting abnormalities in both static and dynamic images, but not with greater categorization accuracy. This study provides important information about how experts who have the capacity to utilize cue-based strategies can rapidly and accurately detect abnormalities from domain-specific stimuli and generalize their skills to more challenging stimuli

    Rotation-tolerant representations elucidate the time course of high-level object processing

    No full text
    Design and analysis plans, experiment code and stimuli, analysis scripts, results, & figures for 'Rotation-tolerant representations elucidate the time course of high-level object processing'

    Rotation-tolerant representations elucidate the time-course of high-level object processing

    No full text
    Humans have little difficulty recognising visual objects in many circumstances, despite the very different retinal images that result from different viewpoints. One source of variability is 2-D rotation, where an object seen from different perspectives results in different orientations. Here, we studied how the brain transforms rotated object images into object representations that are tolerant to rotation. We measured time-varying electroencephalography responses to objects in eight orientations, presented at either 5 Hz or 20 Hz. We used multivariate classification to assess at what point in time rotation-tolerant object information emerged, and whether we could disrupt the rotation-tolerant object processing by presenting stimuli rapidly (20 Hz) to limit the depth of processing. We compared this to fixed-rotation measures of object decoding, where the classifier is trained and tested on the same orientation. Our results showed that both fixed-rotation and rotation-tolerant object decoding emerged at an early stage of processing, less than 100 ms after stimulus onset. However, rotation-tolerant information peaked later than fixed-rotation information, suggesting rotation-tolerant object representations are most robust during a late stage of processing, around 200 ms after stimulus onset. Both fixed-rotation and rotation-tolerant object information was lower for the 20 Hz compared to 5 Hz presentation rate, which suggests that object information processing is disrupted, but not eliminated, for fast presentation rates. Our results show that object information arises at similar times in the brain regardless of whether it is investigated with the fixed-rotation or rotation-tolerant object decoding method, but it is the later stage of processing that reconciles different viewpoints into a single rotation-tolerant representation
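    The distinction drawn above between fixed-rotation decoding (train and test on the same orientation) and rotation-tolerant decoding (train on one orientation, test on another) can be sketched in miniature. The snippet below uses synthetic feature vectors rather than EEG; the object/orientation structure and amplitudes are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_per, n_feat, n_orient = 40, 24, 8

# Each object's response = a shared, orientation-invariant component
# plus an orientation-specific component, plus trial noise
obj_shared = rng.normal(size=(2, n_feat))
orient_spec = rng.normal(size=(2, n_orient, n_feat))

def trials(obj, ori):
    return obj_shared[obj] + 0.7 * orient_spec[obj, ori] + rng.normal(size=(n_per, n_feat))

y = np.r_[np.zeros(n_per), np.ones(n_per)]
clf = LogisticRegression(max_iter=1000).fit(np.r_[trials(0, 0), trials(1, 0)], y)

# Fixed-rotation decoding: test on new trials at the training orientation
fixed = clf.score(np.r_[trials(0, 0), trials(1, 0)], y)
# Rotation-tolerant decoding: test on trials at a different orientation
tolerant = clf.score(np.r_[trials(0, 4), trials(1, 4)], y)
```

    Above-chance performance for `tolerant` means the classifier has latched onto the orientation-invariant component, which is the logic behind the cross-orientation analysis.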

    The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes

    No full text
    The aim of this study was to investigate the dynamic effect of attention on visual processing, while 1) controlling for target-related confounds, and 2) directly investigating the influence of temporal expectation. We used multivariate pattern analysis of EEG data

    The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes.

    Get PDF
    Selective attention prioritises relevant information amongst competing sensory input. Time-resolved electrophysiological studies have shown stronger representation of attended compared to unattended stimuli, which has been interpreted as an effect of attention on information coding. However, because attention is often manipulated by making only the attended stimulus a target to be remembered and/or responded to, many reported attention effects have been confounded with target-related processes such as visual short-term memory or decision-making. In addition, attention effects could be influenced by temporal expectation about when something is likely to happen. The aim of this study was to investigate the dynamic effect of attention on visual processing using multivariate pattern analysis of electroencephalography (EEG) data, while (1) controlling for target-related confounds, and (2) directly investigating the influence of temporal expectation. Participants viewed rapid sequences of overlaid oriented grating pairs while detecting a "target" grating of a particular orientation. We manipulated attention, such that one grating was attended and the other ignored (cued by colour), and temporal expectation, such that stimulus onset timing was either predictable or unpredictable. We controlled for target-related processing confounds by only analysing non-target trials. Both attended and ignored gratings were initially coded equally in the pattern of responses across EEG sensors. An effect of attention, with preferential coding of the attended stimulus, emerged approximately 230 ms after stimulus onset. This attention effect occurred even when controlling for target-related processing confounds, and regardless of stimulus onset expectation. These results provide insight into the effect of feature-based attention on the dynamic processing of competing visual information