    A eukaryotic specific transmembrane segment is required for tetramerization in AMPA receptors

    Most fast excitatory synaptic transmission in the nervous system is mediated by glutamate acting through ionotropic glutamate receptors (iGluRs). iGluRs (AMPA, kainate, and NMDA receptor subtypes) are tetrameric assemblies, formed as a dimer of dimers. Still, the mechanism underlying tetramerization, the necessary step for the formation of functional receptors that can be inserted into the plasma membrane, is unknown. Eukaryotic iGluR subunits, in contrast to prokaryotic ones, have an additional transmembrane segment, the M4 segment, which positions the physiologically critical C-terminal domain on the cytoplasmic side of the membrane. AMPA receptor (AMPAR) subunits lacking M4 do not express on the plasma membrane. Here, we show that these constructs are retained in the endoplasmic reticulum, the major cellular compartment mediating protein oligomerization. Using approaches to assay the native oligomeric state of AMPAR subunits, we find that subunits lacking M4, or containing single amino acid substitutions along an "interacting" face of the M4 helix that block surface expression, no longer tetramerize in either homomeric or heteromeric assemblies. In contrast, subunit dimerization appears to be largely intact. These experiments define the M4 segment as a unique functional unit in AMPARs that is required for the critical dimer-to-tetramer transition. © 2013 the authors

    Integrating Mechanisms of Visual Guidance in Naturalistic Language Production

    Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual guidance (scene clutter) and conceptual guidance (cue animacy), and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan-pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar; their latency is mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
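
    The "entropy of attentional landscapes" measure mentioned above can be made concrete: if fixation density over the scene is treated as a probability distribution, its Shannon entropy quantifies how dispersed attention is. Below is a minimal illustrative sketch in Python; it is not the authors' analysis pipeline, and the grid size, screen dimensions, and function name are assumptions made for the example.

        import numpy as np

        def attentional_entropy(fixations, width, height, grid=(16, 16)):
            # Hypothetical helper: Shannon entropy of a fixation-density map
            # (an "attentional landscape"). Higher entropy = attention spread
            # over more of the scene; lower entropy = concentrated gaze.
            xs = [x for x, _ in fixations]
            ys = [y for _, y in fixations]
            hist, _, _ = np.histogram2d(xs, ys, bins=grid,
                                        range=[[0, width], [0, height]])
            p = hist / hist.sum()      # normalize counts to probabilities
            p = p[p > 0]               # drop empty cells (0 * log 0 := 0)
            return float(-(p * np.log2(p)).sum())

        # Gaze clustered on one object vs. spread across a cluttered scene
        focused = [(400 + dx, 300 + dy) for dx in range(-5, 6) for dy in range(-5, 6)]
        rng = np.random.default_rng(0)
        spread = [(rng.integers(0, 800), rng.integers(0, 600)) for _ in range(121)]
        print(attentional_entropy(focused, 800, 600))   # low entropy
        print(attentional_entropy(spread, 800, 600))    # high entropy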

    The interaction of visual and linguistic saliency during syntactic ambiguity resolution

    Psycholinguistic research using the visual world paradigm has shown that the processing of sentences is constrained by the visual context in which they occur. Recently, there has been growing interest in the interactions observed when both language and vision provide relevant information during sentence processing. In three visual world experiments on syntactic ambiguity resolution, we investigate how visual and linguistic information influence the interpretation of ambiguous sentences. We hypothesize that (1) visual and linguistic information both constrain which interpretation is pursued by the sentence processor, and (2) the two types of information act upon the interpretation of the sentence at different points during processing. In Experiment 1, we show that visual saliency is utilized to anticipate the upcoming arguments of a verb. In Experiment 2, we operationalize linguistic saliency using intonational breaks and demonstrate that these give prominence to linguistic referents. These results confirm prediction (1). In Experiment 3, we manipulate visual and linguistic saliency together and find that both types of information are used, but at different points in the sentence, to incrementally update its current interpretation. This finding is consistent with prediction (2). Overall, our results suggest an adaptive processing architecture in which different types of information are used when they become available, optimizing different aspects of situated language processing.

    Early use of phonetic information in spoken word recognition: Lexical stress drives eye movements immediately

    For optimal word recognition, listeners should use all relevant acoustic information as soon as it becomes available. Using printed-word eye tracking, we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as “OCtopus” (capitals indicate stress) were more frequent than fixations on segmentally overlapping but differently stressed competitors (“okTOber”) before segmental information could disambiguate the words. Furthermore, prior to segmental disambiguation, initially stressed words were stronger lexical competitors than noninitially stressed words. Listeners recognize words by immediately using all relevant information in the speech signal.

    Tracking recognition of spoken words by tracking looks to printed words

    Eye movements of Dutch participants were tracked as they looked at arrays of four words on a computer screen and followed spoken instructions (e.g., "Klik op het woord buffel": Click on the word buffalo). The arrays included the target (e.g., buffel), a phonological competitor (e.g., buffer), and two unrelated distractors. Targets were monosyllabic or bisyllabic, and competitors mismatched targets only on either their onset or offset phoneme and only by one distinctive feature. Participants looked at competitors more than at distractors, but this effect was much stronger for offset-mismatch than onset-mismatch competitors. Fixations to competitors started to decrease as soon as phonetic evidence disfavouring those competitors could influence behaviour. These results confirm that listeners continuously update their interpretation of words as the evidence in the speech signal unfolds, and hence establish the viability of the methodology of using eye movements to arrays of printed words to track spoken-word recognition.

    Comparison of the BOD POD with the four-compartment model in adult females

    Three eye-tracking experiments investigated the impact of the complexity of the visual environment on the likelihood of word-object mapping taking place at phonological, semantic, and visual levels of representation during language-mediated visual search. Dutch participants heard spoken target words while looking at four objects embedded in displays of different complexity and indicated the presence or absence of the target object. During filler trials the target objects were present, but during experimental trials they were absent and the display contained various competitor objects. For example, given the target word “beaker”, the display contained a phonological competitor (a beaver, bever), a shape competitor (a bobbin, klos), a semantic competitor (a fork, vork), and an unrelated distractor (an umbrella, paraplu). When objects were presented in simple four-object displays (Experiment 2), there were clear attentional biases to all three types of competitors, replicating earlier research (Huettig and McQueen, 2007). When the objects were embedded in complex scenes including four human-like characters or four meaningless visual shapes (Experiments 1 and 3), there were biases in looks to visual and semantic but not to phonological competitors. In both experiments, however, we observed evidence for inhibition in looks to phonological competitors, which suggests that the phonological forms of the objects had nevertheless been retrieved. These findings suggest that phonological word-object mapping is contingent upon the nature of the visual environment, and add to a growing body of evidence that the nature of our visual surroundings induces particular modes of processing during language-mediated visual search.
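
    The competitor-bias analyses described in the preceding abstracts typically reduce to time-binned fixation proportions: at each interval after spoken-word onset, the fraction of gaze samples falling on the target, each competitor type, and the unrelated distractor. A bias shows up as a competitor curve rising above the distractor curve. The Python sketch below illustrates this computation under assumed column names and an assumed 50 ms bin width; it is a schematic, not the published analysis.

        import pandas as pd

        # Hypothetical long-format gaze data: one row per trial x time sample,
        # with 'roi' coding which display object is fixated at that moment.
        samples = pd.DataFrame({
            "trial":   [1, 1, 1, 2, 2, 2],
            "time_ms": [0, 50, 100, 0, 50, 100],  # relative to word onset
            "roi": ["distractor", "phon", "phon", "sem", "sem", "distractor"],
        })

        BIN_MS = 50  # assumed bin width
        samples["bin"] = (samples["time_ms"] // BIN_MS) * BIN_MS

        # Proportion of samples on each ROI within each time bin
        props = (
            samples.groupby(["bin", "roi"]).size()
                   .groupby(level="bin")
                   .transform(lambda s: s / s.sum())
                   .unstack(fill_value=0.0)
        )
        print(props)  # a bias: e.g. props["phon"] exceeding props["distractor"]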