
    Strategic gaze: an interactive eye-tracking study

    We present an interactive eye-tracking study that explores the strategic use of gaze. We analyze gaze behavior in an experiment with four simple games. Each game is either a competitive (hide & seek) game, in which players want to be unpredictable, or a game of common interest, in which players want to be predictable. Gaze is either transmitted in real time to another subject or not transmitted and therefore non-strategic. We find that subjects are able to interpret non-strategic gaze, obtaining substantially higher payoffs than subjects who do not see gaze. When gaze is transmitted in real time, it becomes more informative in the common interest games, and players predominantly succeed in coordinating on efficient outcomes. In contrast, gaze becomes less informative in the competitive game.
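
    For concreteness, the two game types can be sketched as 2x2 payoff matrices. The abstract does not report the actual stakes, so the values below are illustrative only (Python):

        import numpy as np

        # Hypothetical payoff matrices (rows/columns are the two actions);
        # the experiment's real payoffs are not specified in the abstract.
        # Hide & seek: the seeker scores on a match, the hider on a mismatch,
        # so each player wants to be unpredictable.
        hide_and_seek = {
            "hider":  np.array([[0, 1], [1, 0]]),
            "seeker": np.array([[1, 0], [0, 1]]),
        }
        # Common interest: both players score only when they coordinate,
        # so each player wants to be predictable.
        common_interest = {
            "player1": np.array([[1, 0], [0, 1]]),
            "player2": np.array([[1, 0], [0, 1]]),
        }

        def payoff(game, row_action, col_action):
            """Each player's payoff for one play of a 2x2 game."""
            return {name: m[row_action, col_action] for name, m in game.items()}

        print(payoff(hide_and_seek, 0, 0))    # seeker matches the hider
        print(payoff(common_interest, 1, 1))  # successful coordination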

    Neural correlates of phonetic adaptation as induced by lexical and audiovisual context

    When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two forms of perceptual learning are similar at the neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio–video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported which phoneme they heard. Reports reflected the phoneme bias of the preceding exposure block (e.g., more /p/ responses after /p/-biased exposure). Analysis of the corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insular, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity, despite the absence of visual stimuli at test. Activity levels in several ROIs covaried with the strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.
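
    The covariation analysis described above reduces to a per-ROI correlation, across subjects, between mean ROI activity and the size of the behavioral recalibration shift. A minimal sketch with simulated data (the arrays stand in for the study's measurements and are purely hypothetical):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_subjects, n_rois = 20, 5
        roi_activity = rng.normal(size=(n_subjects, n_rois))  # mean activity per ROI
        recalibration_shift = rng.normal(size=n_subjects)     # behavioral shift size

        for roi in range(n_rois):
            r, p = stats.pearsonr(roi_activity[:, roi], recalibration_shift)
            print(f"ROI {roi}: r = {r:.2f}, p = {p:.3f}")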

    Sensory substitution information informs locomotor adjustments when walking through apertures

    The study assessed the ability of the central nervous system (CNS) to use echoic information from sensory substitution devices (SSDs) to rotate the shoulders and safely pass through apertures of different widths. Ten visually normal participants performed the task either with full vision or blindfolded, using an SSD to obtain information about the width of an aperture created by two parallel panels. Two SSDs were tested. Participants passed through apertures of +0%, +18%, +35%, and +70% of measured body width. Kinematic indices comprised movement time, shoulder rotation, average walking velocity across the trial, and peak walking velocities before crossing, after crossing, and across the whole trial. Analyses showed that participants used SSD information to regulate shoulder rotation, with greater rotation for narrower apertures. Compared with vision, rotations made using an SSD were greater, movement times were longer, average walking velocity was lower, and peak velocities before crossing, after crossing, and across the whole trial were smaller, suggesting greater caution. Collisions sometimes occurred when using an SSD but never with vision, indicating that the substituted information did not always support accurate shoulder rotation judgements. No differences were found between the two SSDs. The data suggest that spatial information provided by sensory substitution allows the relative positions of the aperture panels to be internally represented, enabling the CNS to modify shoulder rotation according to aperture width. The increased buffer space indicated by greater rotations (up to approximately 35% for apertures of +18% of body width) suggests that these spatial representations are not as accurate as those afforded by full vision.
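
    As a rough sketch, the kinematic indices named above can be derived from two shoulder-marker trajectories; the function below is our illustration, not the authors' pipeline, and assumes 2-D marker positions sampled at a fixed rate:

        import numpy as np

        def kinematic_indices(shoulder_l, shoulder_r, dt=0.01):
            """Derive gross kinematic indices from (n_samples x 2) shoulder
            trajectories in metres, sampled every dt seconds (illustrative)."""
            midpoint = (shoulder_l + shoulder_r) / 2.0
            speed = np.linalg.norm(np.diff(midpoint, axis=0), axis=1) / dt
            # Shoulder rotation: angle of the inter-shoulder line relative to
            # the lab x-axis; 0 rad = shoulders square to the aperture plane.
            vec = shoulder_r - shoulder_l
            rotation = np.abs(np.arctan2(vec[:, 1], vec[:, 0]))
            return {
                "movement_time_s": len(midpoint) * dt,
                "avg_velocity_m_s": speed.mean(),
                "peak_velocity_m_s": speed.max(),
                "peak_rotation_rad": rotation.max(),
            }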

    EEG decoding of spoken words in bilingual listeners: from words to language invariant semantic-conceptual representations

    Spoken word recognition and production require fast transformations between acoustic, phonological, and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent but acoustically different words. Here we exploit this capacity of bilinguals to investigate input-invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g., "paard"-"horse"). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and that generalize meaning across the two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to the classification results. MVPA revealed that within-language discrimination was possible in a broad time window (~50-620 ms) after word onset, probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550-600 ms, suggesting the activation of common semantic-conceptual representations by the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low-frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA for decoding individual spoken words from EEG responses and for assessing the spectro-temporal dynamics of their language-invariant semantic-conceptual representations. We discuss how this method and these results could be used to track the neural mechanisms underlying conceptual encoding in comprehension and production.
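
    Operationally, across-language generalization means training a classifier on trials from one language and testing it on the other. A minimal sketch with simulated data (the array names and feature layout are hypothetical placeholders, not the study's code):

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(1)
        # (trials x EEG features); labels code the four animal concepts.
        X_nl, y_nl = rng.normal(size=(200, 64)), rng.integers(0, 4, 200)
        X_en, y_en = rng.normal(size=(200, 64)), rng.integers(0, 4, 200)

        clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
        clf.fit(X_nl, y_nl)          # train on Dutch trials
        acc = clf.score(X_en, y_en)  # test on English trials
        print(f"across-language accuracy: {acc:.2f}")
        # Accuracy reliably above chance (0.25 for four classes) would point
        # to a language-invariant semantic-conceptual representation.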

    A 7T fMRI study investigating the influence of oscillatory phase on syllable representations

    Stimulus categorization is influenced by oscillations in the brain. For example, we have shown that ongoing oscillatory phase biases the identification of an ambiguous syllable that can be perceived as either /da/ or /ga/. This suggests that phase is a cue the brain uses to determine syllable identity, and that this cue could be an element of the representation of these syllables. If so, brain activation patterns for /da/ should be more distinctive when the syllable is presented at the /da/-biasing (i.e., its "preferred") phase. To test this hypothesis, we presented unambiguous /da/ and /ga/ syllables at either their preferred or their non-preferred phase (using sensory entrainment) while measuring brain activity with 7T fMRI. Using multivariate pattern analysis in auditory regions, we show that syllable decoding performance is higher when syllables are presented at their preferred rather than their non-preferred phase. These results suggest that phase information increases the distinctiveness of the /da/ and /ga/ brain activation patterns.
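
    The comparison amounts to running the same decoder separately on preferred-phase and non-preferred-phase trials and comparing cross-validated accuracies. A minimal sketch with simulated voxel patterns (all names and dimensions are illustrative):

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(2)
        # (trials x voxels) per condition; labels: 0 = /da/, 1 = /ga/.
        patterns = {c: rng.normal(size=(80, 300))
                    for c in ("preferred", "non_preferred")}
        labels = {c: rng.integers(0, 2, 80) for c in patterns}

        for cond in patterns:
            acc = cross_val_score(LinearSVC(max_iter=5000), patterns[cond],
                                  labels[cond], cv=5).mean()
            print(f"{cond}: decoding accuracy = {acc:.2f}")
        # The reported finding corresponds to higher accuracy in the
        # preferred-phase condition.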

    Exploring Brain Effective Connectivity in Visual Perception Using a Hierarchical Correlation Network

    Brain-inspired computing is a research hotspot in artificial intelligence (AI). One of the key problems in this field is how to find the bridge between brain connectivity and data correlation in a connection-to-cognition model. Functional magnetic resonance imaging (fMRI) signals provide rich information about brain activity. Existing fMRI modeling approaches focus on strength information but neglect structural information. In previous work, we proposed a monolayer correlation network (CorrNet) to model structural connectivity. In this paper, we extend the monolayer CorrNet to a hierarchical correlation network (HcorrNet) by analysing visual stimuli of natural images and fMRI signals from the entire visual cortex, that is, V1, V2, V3, V4, the fusiform face area (FFA), the lateral occipital complex (LOC), and the parahippocampal place area (PPA). Through the HcorrNet, the effective connectivity of the brain can be inferred layer by layer. The stimulus-sensitive activity mode of voxels can then be extracted, and the forward encoding process of visual perception can be modeled. Both guide the decoding of fMRI signals, including classification and image reconstruction. In the experiments, we improved a dynamic evolving spiking neural network (SNN) as the classifier and used generative adversarial networks (GANs) to reconstruct images.
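
    As a rough illustration of the correlation-network idea, a single layer can be built by thresholding voxel-wise correlations within one visual area; the sizes and threshold below are arbitrary assumptions, not the HcorrNet construction itself:

        import numpy as np

        def corrnet_layer(voxels, thresh=0.5):
            """voxels: (n_voxels x n_timepoints) -> binary correlation graph."""
            corr = np.corrcoef(voxels)
            np.fill_diagonal(corr, 0.0)
            return (np.abs(corr) > thresh).astype(int)

        rng = np.random.default_rng(3)
        areas = ["V1", "V2", "V3", "V4", "FFA", "LOC", "PPA"]
        hcorrnet = {a: corrnet_layer(rng.normal(size=(50, 120))) for a in areas}
        print({a: int(g.sum() // 2) for a, g in hcorrnet.items()})  # edges per layer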