6 research outputs found

    Master of Arts

    There has long been an assumption of a binary divide of languages into nonclassifier languages and classifier languages, i.e., those that do not use numeral classifiers when counting nouns and those that do. While this assumption holds for most languages, it cannot account for languages such as Armenian, which optionally allows numeral classifiers to appear when nouns are counted, nor for languages such as Paiwan, a so-called poor-classifier language, which has numeral classifiers for certain noun classes but not for others. I propose that by adopting Borer's treatment of numeral classifiers and plural morphology as two expressions of the same underlying phenomenon, and by spelling out a specific syntactic mechanism for the process Borer outlines, we arrive at a theory that accounts for the full range of possibilities (i.e., nonclassifier, classifier, classifier-optional, and poor-classifier languages).

    Task description and performance.

    a, The task consisted of a video of a mouth pronouncing one of four phonemes. This video was randomly paired with audio of a male voice pronouncing one of the same four syllables. The video times are shown in the text below the timeline. There was one second of video before the audio began, during which the mouth moved slightly into position to speak the starting phoneme. The audio syllable lasted half a second, and there was one second of video after the audio had finished. After a brief randomized delay, the subject was cued to respond. The patient had five seconds to respond before a new trial was initiated. b, Task performance for three conditions. Patients performed significantly better on Matched A/V trials (73.81%, N = 186) and Unmatched A/V trials (78.77%, N = 299) than on McGurk trials (18.29%, N = 152) (ANOVA with Tukey-Kramer correction for multiple comparisons, p < 0.01 for both comparisons).
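    The pairwise comparison described in panel b can be reproduced with a standard one-way ANOVA followed by Tukey-Kramer correction. A minimal Python sketch, assuming per-trial correctness is coded 0/1; the response data below are simulated placeholders, and only the trial counts and accuracies come from the caption:

```python
# Sketch of the reported analysis: one-way ANOVA across the three conditions,
# then Tukey-Kramer correction for the pairwise comparisons.
# NOTE: the 0/1 trial data are simulated placeholders; only the trial counts
# (N) and accuracy rates are taken from the figure caption.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
conditions = {
    "Matched A/V": rng.binomial(1, 0.7381, size=186),
    "Unmatched A/V": rng.binomial(1, 0.7877, size=299),
    "McGurk": rng.binomial(1, 0.1829, size=152),
}

# One-way ANOVA across the three conditions.
f_stat, p_val = stats.f_oneway(*conditions.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.2g}")

# Tukey-Kramer handles the unequal group sizes in the pairwise tests.
scores = np.concatenate(list(conditions.values()))
labels = np.concatenate([[name] * len(v) for name, v in conditions.items()])
print(pairwise_tukeyhsd(scores, labels, alpha=0.01))
```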

    Seeing Is Believing: Neural Representations of Visual Stimuli in Human Auditory Cortex Correlate with Illusory Auditory Perceptions

    In interpersonal communication, the listener can often see as well as hear the speaker. Visual stimuli can subtly change a listener’s auditory perception, as in the McGurk illusion, in which perception of a phoneme’s auditory identity is changed by a concurrent video of a mouth articulating a different phoneme. Studies have yet to link visual influences on the neural representation of language with subjective language perception. Here we show that vision influences the electrophysiological representation of phonemes in human auditory cortex prior to the presentation of the auditory stimulus. We used the McGurk effect to dissociate the subjective perception of phonemes from the auditory stimuli. With this paradigm we demonstrate that neural representations in auditory cortex are more closely correlated with the visual stimuli of mouth articulation, which drive the illusory subjective auditory perception, than with the actual auditory stimuli. Additionally, information about visual and auditory stimuli transfers in the caudal–rostral direction along the superior temporal gyrus during phoneme perception, as would be expected of visual information flowing from the occipital cortex into the ventral auditory processing stream. These results show that visual stimuli influence the neural representation in auditory cortex early in sensory processing and may override the subjective auditory perceptions normally generated by auditory stimuli. These findings depict a marked influence of vision on the neural processing of audition in tertiary auditory cortex and suggest a mechanistic underpinning for the McGurk effect.

    Information transfer in the superior temporal lobe.

    a, Electrode locations from Patient 1, for reference on electrode location and information-transfer directionality. b, An averaged evoked potential from the middle electrode is shown above the plots for reference. The scale bar for the evoked potential is 400 µV. c, Plot of information transfer between the posterior electrode and the electrode proximal to A1 for 3-second time periods through the duration of the trial. d, Plot of information transfer between the electrode proximal to A1 and the anterior electrode for 3-second time periods through the duration of the trial. For both c and d, positive values indicate transfer of information in the caudal–rostral direction, and negative values indicate transfer in the rostral–caudal direction. Green box plots indicate information about the identity of the audio stimuli; blue plots indicate information about the video stimuli. Box plots show means and quartiles for the 5 hemispheres.
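    The caption does not specify the estimator behind these directionality values. One common way to quantify directed information flow between two electrode signals is a lagged mutual-information asymmetry (a transfer-entropy-style measure); the sketch below uses that as a stand-in, with the sign convention matching the caption (positive = caudal–rostral). The signals, sampling rate, and lag are placeholders:

```python
# Hedged sketch of a directed information-flow measure between two electrodes.
# The paper's exact estimator is not given in this caption; this stand-in
# computes MI(posterior_past; anterior_future) - MI(anterior_past;
# posterior_future), so positive values mean the posterior (caudal) signal
# leads the anterior (rostral) one.
import numpy as np
from sklearn.metrics import mutual_info_score

def binned(x, n_bins=8):
    """Discretize a continuous signal into equal-occupancy bins."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def directed_mi(posterior, anterior, lag):
    """Lagged MI asymmetry: > 0 means the posterior electrode leads."""
    p, a = binned(posterior), binned(anterior)
    forward = mutual_info_score(p[:-lag], a[lag:])   # caudal -> rostral
    backward = mutual_info_score(a[:-lag], p[lag:])  # rostral -> caudal
    return forward - backward

# Placeholder signals: a 3-second window at an assumed 1 kHz, 50 ms lag,
# with the anterior signal built as a delayed, noisy copy of the posterior.
fs, lag = 1000, 50
rng = np.random.default_rng(1)
posterior = rng.standard_normal(3 * fs)
anterior = np.roll(posterior, lag) + 0.5 * rng.standard_normal(3 * fs)
print(f"directionality: {directed_mi(posterior, anterior, lag):+.3f}")
```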

    Visual representations in parabelt auditory cortex.

    a, Example spectrograms for the McGurk condition (“VA” & /BA/). Spectrograms were normalized by frequency band. White dotted lines indicate the start of the video; black dotted lines indicate the start of the audio. b, Example difference spectrograms for all three electrodes from one patient. Matched A/V spectrograms were subtracted from McGurk spectrograms (“VA” & /BA/ − “VA” & /VA/ and “VA” & /BA/ − “BA” & /BA/) between −1 and 1 seconds relative to auditory stimulus onset (between the black dashed lines in the spectrograms). McGurk spectrograms were significantly less different from Matched A/V spectrograms with the same video identity than from Matched A/V spectrograms with the same audio identity, as shown in the bar graph to the left of the difference spectrograms. Electrode locations are color coded and labeled (A, anterior electrode; A1, electrode proximal to A1; P, posterior electrode). c, A statistical classifier accurately classified McGurk trials when tested on the identity of the video (74.33%); however, the classifier consistently chose the wrong auditory identity for McGurk trials (36.17%). The dashed line represents chance-level classification (50.00%).
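    The difference-spectrogram comparison in panels a and b can be sketched as follows: z-score each trial's spectrogram within frequency bands, then compare a McGurk trial against Matched A/V templates sharing either its video identity or its audio identity, with the smaller mean absolute difference indicating which stimulus the neural response tracks. The signals and sampling rate below are placeholders:

```python
# Hedged sketch of the difference-spectrogram comparison. Signals are
# simulated placeholders; the sampling rate and spectrogram parameters are
# assumptions, not taken from the paper.
import numpy as np
from scipy.signal import spectrogram

fs = 1000  # assumed sampling rate (Hz)

def norm_spec(signal):
    """Spectrogram z-scored within each frequency band (per the caption)."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
    mu = sxx.mean(axis=1, keepdims=True)
    sd = sxx.std(axis=1, keepdims=True)
    return (sxx - mu) / sd

rng = np.random.default_rng(2)
mcgurk = norm_spec(rng.standard_normal(2 * fs))         # "VA" & /BA/
matched_video = norm_spec(rng.standard_normal(2 * fs))  # "VA" & /VA/
matched_audio = norm_spec(rng.standard_normal(2 * fs))  # "BA" & /BA/

diff_video = np.abs(mcgurk - matched_video).mean()
diff_audio = np.abs(mcgurk - matched_audio).mean()
print(f"vs same video: {diff_video:.3f}, vs same audio: {diff_audio:.3f}")
# The paper reports the McGurk response is closer to the same-VIDEO template.
```

    For panel c, the caption does not name the classifier; a cross-validated linear discriminant serves as a stand-in, with single-trial features and labels again simulated (chance = 50% for two classes):

```python
# Hedged sketch of the trial classification in panel c, using a linear
# discriminant as a stand-in for the paper's unspecified classifier.
# Features and labels are simulated placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((152, 40))      # 152 McGurk trials x spectral features
y_video = rng.integers(0, 2, size=152)  # video-identity labels
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y_video, cv=5)
print(f"video-identity accuracy: {scores.mean():.1%} (chance 50%)")
```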