
    Acoustic cues for the Korean stop contrast - dialectal variation

    In this study, cross-dialectal variation in the use of the acoustic cues of VOT and F0 to mark the laryngeal contrast in Korean stops is examined in Chonnam Korean and Seoul Korean. Prior experimental results (Han & Weitzman, 1970; Hardcastle, 1973; Jun, 1993 & 1998; Kim, C., 1965) show that pitch values at the onset of the vowel following the target stop consonant play a supplementary role to VOT in designating the three contrastive laryngeal categories. F0 contours are determined in part by the intonational system of a language, which raises the question of how the intonational system interacts with phonological contrasts. Intonational differences might be linked to dissimilar patterns in the use of the complementary acoustic cues of VOT and F0. This hypothesis is tested with six Korean speakers: three speakers of Seoul Korean and three of Chonnam Korean. The results show that Chonnam Korean involves more of a 3-way VOT distinction and a 2-way distinction in F0 distribution, whereas Seoul Korean shows more of a 3-way F0 distribution and a 2-way VOT distinction. The two acoustic cues are complementary in that one cue marks the 3-way contrast rather faithfully, while the other cue marks the contrast less distinctively. These variations also appear not to be completely arbitrary, but linked to the phonological characteristics of the dialects. Chonnam Korean, in which the initial tonal realization of the accentual phrase is expected to be more salient, tends to minimize the F0 perturbation effect from the preceding consonants by allowing more overlap in the F0 distribution. The 3-way distribution of VOT in Chonnam Korean can, as compensation, also be understood as a durational sensitivity. Lacking these characteristics, Seoul Korean shows a relatively more overlapping distribution in VOT and more 3-way separation in the F0 distribution.
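
    The two cues examined above are routinely quantified from annotated recordings. As a minimal sketch (not the authors' procedure), assuming the third-party parselmouth (Praat) library and a hypothetical hand-annotated table of stop-burst and voicing-onset times, VOT and vowel-onset F0 might be measured as follows:

        # Hypothetical sketch: measuring VOT and vowel-onset F0 for annotated stop tokens.
        # Assumes the parselmouth (Praat) library; file names and landmark times are invented.
        import numpy as np
        import parselmouth

        # Hypothetical annotations: (wav file, burst time in s, voicing-onset time in s)
        tokens = [
            ("seoul_speaker1_pa.wav", 0.512, 0.575),   # aspirated stop example
            ("seoul_speaker1_ppa.wav", 0.498, 0.510),  # tense stop example
        ]

        for wav_path, burst_t, voicing_t in tokens:
            vot_ms = (voicing_t - burst_t) * 1000.0      # VOT = voicing onset minus burst

            snd = parselmouth.Sound(wav_path)
            pitch = snd.to_pitch(time_step=0.005)        # F0 track at 5 ms steps
            times = pitch.xs()                           # frame centre times (s)
            f0 = pitch.selected_array['frequency']       # Hz; 0.0 where unvoiced
            onset_frame = int(np.argmin(np.abs(times - (voicing_t + 0.01))))
            f0_onset = f0[onset_frame]                   # F0 just after vowel onset

            print(f"{wav_path}: VOT = {vot_ms:.1f} ms, onset F0 = {f0_onset:.1f} Hz")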

    Production and Perception of English Word-Final Stops by Malay Speakers

    A few influential speech studies carried out using established speech learning models have confirmed that analysis of the first language (L1) and second language (L2) at a phonemic level provides only a partial view of the deeper relationships between languages in contact. Studies focusing on cross-language phonetic differences as a causative factor in L2 learner difficulties have therefore been proposed to understand L2 learners' speech production and how listeners respond perceptually to the phonetic properties of the L2. This paper presents a study of the production and perception of word-final stops by learners of English (L2) whose first language is Malay (L1). A total of 23 students, comprising 16 male and 7 female Malay subjects (L1 Malay, L2 English) with normal hearing and speech development, participated in this study. A short interview was conducted in order to gain background information about each subject, to introduce them to the study, and to inform them about the recording process, the materials to be used in the recording session, and how the materials should be handled during recording. Acoustic measurements of selected segments occurring in word-final position (via spectrographic analysis, syllable rhyme duration and phonation) were taken. Results of the voicing-contrast realisation in Malay-accented English and Malaysian listeners' perceptual identification/discrimination abilities with final voiced/voiceless stops in Malay and English are presented and discussed. The findings reveal that the Malay students' realisation of final stops in the L2 is largely identical to that in their L1. In addition, the results also show that accurate 'perception' may not always lead to accurate 'production'.
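
    Identification/discrimination performance of the kind reported above is commonly summarised with a sensitivity index such as d'. The following sketch is a standard signal-detection calculation, not the paper's own analysis; the response counts are hypothetical and scipy is assumed:

        # Standard d-prime calculation from hypothetical identification counts
        # (not the paper's own analysis; shown only to illustrate the measure).
        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """Sensitivity index for a two-alternative (voiced/voiceless) identification task."""
            # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Hypothetical counts: "voiced" responses to voiced vs voiceless final stops.
        print(d_prime(hits=38, misses=10, false_alarms=14, correct_rejections=34))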

    Language identification with suprasegmental cues: A study based on speech resynthesis

    This paper proposes a new experimental paradigm to explore the discriminability of languages, a question that is crucial for the child born into a bilingual environment. The paradigm employs the speech resynthesis technique, enabling the experimenter to preserve or degrade acoustic cues such as phonotactics, syllabic rhythm or intonation in natural utterances. English and Japanese sentences were resynthesized, preserving broad phonotactics, rhythm and intonation (Condition 1), rhythm and intonation (Condition 2), intonation only (Condition 3), or rhythm only (Condition 4). The findings support the notion that syllabic rhythm is a necessary and sufficient cue for French adult subjects to discriminate English from Japanese sentences. The results are consistent with previous research using low-pass filtered speech, as well as with phonological theories predicting rhythmic differences between languages. Thus, the new methodology proposed here appears to be well suited to the study of language discrimination. Applications to other domains of psycholinguistic research and to automatic language identification are considered.
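
    The low-pass filtering mentioned above as prior work is simple to reproduce: it removes most segmental detail while keeping rhythm and intonation. A minimal sketch with scipy, using a hypothetical mono recording and an illustrative 400 Hz cutoff:

        # Low-pass filtering a speech waveform to strip most segmental detail while
        # keeping prosodic (rhythm and intonation) cues -- the older approach that the
        # resynthesis paradigm above is compared with. File name and cutoff are illustrative.
        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import butter, sosfiltfilt

        fs, x = wavfile.read("english_sentence_01.wav")   # hypothetical mono 16-bit recording
        x = x.astype(np.float64)

        # 4th-order Butterworth low-pass at 400 Hz, applied forward and backward
        # (zero phase) so timing cues are not shifted.
        sos = butter(4, 400.0, btype="low", fs=fs, output="sos")
        y = sosfiltfilt(sos, x)

        wavfile.write("english_sentence_01_lp400.wav", fs, y.astype(np.int16))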

    Lexical Effects in Perception of Tamil Geminates

    Lexical status effects are a phenomenon in which listeners use their prior lexical knowledge of a language to identify ambiguous speech sounds in a word based on its word or nonword status. This phenomenon has been demonstrated for ambiguous word-initial English consonants (one example being the Ganong effect, in which listeners perceive an ambiguous speech sound as the phoneme that would complete a real word rather than a nonsense word), supporting the view that top-down lexical processing affects listeners' subsequent acoustic judgement, but it has not been shown for ambiguous mid-word consonants in non-English languages. In this experiment, we examine ambiguous mid-word consonants in Tamil, a South Asian language, in order to see whether the same top-down lexical effect applies outside of English. These Tamil consonants can occur either as singletons (single speech sounds) or as geminates (doubled speech sounds). We hypothesized that, for ambiguous stimuli created between a geminate word such as kuppam and a singleton nonword such as kubam, participants would be more likely to perceive the ambiguous sound as the phoneme that completes the real word rather than the nonword (in this case, perceiving the ambiguous sound as /p/ for kuppam instead of kubam). Participants listened to the ambiguous stimuli in two separate sets of continua (kuppam/suppam and nakkam/pakkam) and then indicated which word they heard in a four-alternative forced-choice word identification task. Results showed that participants identified the ambiguous sounds as the sound that completed the actual word, but only for one set of continua (kuppam/suppam). These data suggest that there may be strong top-down lexical effects for ambiguous sounds in certain stimuli in Tamil, but not in others. No embargo. Academic Major: Linguistics. Academic Major: Psychology.
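
    Identification responses along such a continuum are often summarised by fitting a logistic psychometric function, with a lexical bias showing up as a shift of the 50% crossover point. The sketch below is a generic analysis of this kind, not the study's own; the response proportions are hypothetical and scipy is assumed:

        # Fitting a logistic identification function to responses along a 7-step
        # continuum -- a common way to quantify a lexical bias as a shift in the
        # 50% crossover point. Data values are hypothetical.
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, midpoint, slope):
            """Proportion of 'geminate word' (e.g. kuppam) responses at continuum step x."""
            return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

        steps = np.arange(1, 8)                                          # continuum steps 1..7
        p_word = np.array([0.05, 0.10, 0.30, 0.60, 0.85, 0.95, 0.98])    # hypothetical proportions

        (midpoint, slope), _ = curve_fit(logistic, steps, p_word, p0=[4.0, 1.0])
        print(f"crossover at step {midpoint:.2f}, slope {slope:.2f}")
        # A lexical effect would appear as different crossover points for the
        # word-biased continuum versus a control continuum.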

    Asymmetric discrimination of non-speech tonal analogues of vowels

    Published in final edited form as: J Exp Psychol Hum Percept Perform. 2019 February; 45(2): 285–300. doi:10.1037/xhp0000603. Directional asymmetries reveal a universal bias in vowel perception favoring extreme vocalic articulations, which lead to acoustic vowel signals with dynamic formant trajectories and well-defined spectral prominences due to the convergence of adjacent formants. The present experiments investigated whether this bias reflects speech-specific processes or general properties of spectral processing in the auditory system. Toward this end, we examined whether analogous asymmetries in perception arise with non-speech tonal analogues that approximate some of the dynamic and static spectral characteristics of naturally produced /u/ vowels executed with more versus less extreme lip gestures. We found a qualitatively similar but weaker directional effect with two-component tones varying in both the dynamic changes and proximity of their spectral energies. In subsequent experiments, we pinned down the phenomenon using tones that varied in one or both of these two acoustic characteristics. We found comparable asymmetries with tones that differed exclusively in their spectral dynamics, and no asymmetries with tones that differed exclusively in their spectral proximity or in both spectral features. We interpret these findings as evidence that dynamic spectral changes are a critical cue for eliciting asymmetries in non-speech tone perception, but that the potential contribution of general auditory processes to asymmetries in vowel perception is limited. Accepted manuscript.
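
    The two-component tones described above can be approximated with simple additive synthesis: one tone whose components glide toward each other (dynamic spectral change) and one whose components stay fixed (static). The sketch below is only illustrative; the frequencies, duration, and sample rate are not the authors' stimulus parameters:

        # Illustrative synthesis of two-component tones: a "dynamic" tone whose
        # components converge over time and a "static" tone with fixed components.
        # All parameter values are invented for illustration.
        import numpy as np

        fs = 22050
        dur = 0.3
        t = np.arange(int(fs * dur)) / fs

        def two_component_tone(f1_traj, f2_traj):
            """Sum of two sinusoids with (possibly time-varying) frequencies in Hz."""
            phase1 = 2 * np.pi * np.cumsum(f1_traj) / fs   # integrate frequency to get phase
            phase2 = 2 * np.pi * np.cumsum(f2_traj) / fs
            return 0.5 * np.sin(phase1) + 0.5 * np.sin(phase2)

        # "Dynamic" tone: lower component glides upward toward the upper one,
        # loosely mimicking converging formants of a more extreme /u/ articulation.
        dynamic = two_component_tone(np.linspace(250, 320, t.size),
                                     np.full(t.size, 350.0))

        # "Static" tone: both components stay at fixed frequencies.
        static = two_component_tone(np.full(t.size, 250.0), np.full(t.size, 350.0))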

    ARTSTREAM: A Neural Network Model of Auditory Scene Analysis and Source Segregation

    Multiple sound sources often contain harmonics that overlap and may be degraded by environmental noise. The auditory system is capable of teasing apart these sources into distinct mental objects, or streams. Such an "auditory scene analysis" enables the brain to solve the cocktail party problem. A neural network model of auditory scene analysis, called the ARTSTREAM model, is presented to propose how the brain accomplishes this feat. The model clarifies how the frequency components that correspond to a given acoustic source may be coherently grouped together into distinct streams based on pitch and spatial cues. The model also clarifies how multiple streams may be distinguished and separated by the brain. Streams are formed as spectral-pitch resonances that emerge through feedback interactions between frequency-specific spectral representations of a sound source and its pitch. First, the model transforms a sound into a spatial pattern of frequency-specific activation across a spectral stream layer. The sound has multiple parallel representations at this layer. A sound's spectral representation activates a bottom-up filter that is sensitive to harmonics of the sound's pitch. The filter activates a pitch category which, in turn, activates a top-down expectation that allows one voice or instrument to be tracked through a noisy multiple-source environment. Spectral components are suppressed if they do not match harmonics of the top-down expectation that is read out by the selected pitch, thereby allowing another stream to capture these components, as in the "old-plus-new heuristic" of Bregman. Multiple simultaneously occurring spectral-pitch resonances can hereby emerge. These resonance and matching mechanisms are specialized versions of Adaptive Resonance Theory, or ART, which clarifies how pitch representations can self-organize during learning of harmonic bottom-up filters and top-down expectations. The model also clarifies how spatial location cues can help to disambiguate two sources with similar spectral cues. Data are simulated from psychophysical grouping experiments, such as how a tone sweeping upwards in frequency creates a bounce percept by grouping with a downward-sweeping tone due to proximity in frequency, even if noise replaces the tones at their intersection point. Illusory auditory percepts are also simulated, such as the auditory continuity illusion of a tone continuing through a noise burst even if the tone is not present during the noise, and the scale illusion of Deutsch, whereby downward and upward scales presented alternately to the two ears are regrouped based on frequency proximity, leading to a bounce percept. Since related sorts of resonances have been used to quantitatively simulate psychophysical data about speech perception, the model strengthens the hypothesis that ART-like mechanisms are used at multiple levels of the auditory system. Proposals for developing the model to explain more complex streaming data are also provided. Air Force Office of Scientific Research (F49620-01-1-0397, F49620-92-J-0225); Office of Naval Research (N00014-01-1-0624); Advanced Research Projects Agency (N00014-92-J-4015); British Petroleum (89A-1204); National Science Foundation (IRI-90-00530); American Society of Engineering Education.
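
    A toy sketch (not the model's actual shunting equations) can illustrate two of the steps described above: a bottom-up harmonic sieve that scores candidate pitches for a spectral frame, and top-down suppression that keeps only components matching harmonics of the winning pitch, releasing the rest for another stream. All numbers are invented for illustration:

        # Toy illustration (not the ARTSTREAM equations) of a bottom-up harmonic
        # sieve plus top-down suppression of non-matching spectral components.
        import numpy as np

        def pick_pitch(freqs, amps, candidates, tol=0.03):
            """Score each candidate F0 by the summed amplitude of near-harmonic components."""
            scores = []
            for f0 in candidates:
                harmonic_number = np.round(freqs / f0)
                near = np.abs(freqs - harmonic_number * f0) < tol * f0
                scores.append(amps[near].sum())
            return candidates[int(np.argmax(scores))]

        def split_stream(freqs, amps, f0, tol=0.03):
            """Keep components matching harmonics of f0; release the rest ('old-plus-new')."""
            harmonic_number = np.round(freqs / f0)
            match = np.abs(freqs - harmonic_number * f0) < tol * f0
            return (freqs[match], amps[match]), (freqs[~match], amps[~match])

        # A frame containing harmonics of 200 Hz mixed with harmonics of 330 Hz.
        freqs = np.array([200, 330, 400, 600, 660, 800, 990], dtype=float)
        amps = np.array([1.0, 0.9, 0.8, 0.7, 0.8, 0.5, 0.6])

        # Candidate range chosen to avoid trivial sub-harmonic matches in this toy.
        f0 = pick_pitch(freqs, amps, candidates=np.arange(150.0, 400.0, 5.0))
        captured, released = split_stream(freqs, amps, f0)
        print(f0, captured[0], released[0])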

    The Resonant Dynamics of Speech Perception: Interword Integration and Duration-Dependent Backward Effects

    How do listeners integrate temporally distributed phonemic information into coherent representations of syllables and words? During fluent speech perception, variations in the durations of speech sounds and silent pauses can produce different perceived groupings. For example, increasing the silence interval between the words "gray chip" may result in the percept "great chip", whereas increasing the duration of fricative noise in "chip" may alter the percept to "great ship" (Repp et al., 1978). The ARTWORD neural model quantitatively simulates such context-sensitive speech data. In ARTWORD, sequential activation and storage of phonemic items in working memory provides bottom-up input to unitized representations, or list chunks, that group together sequences of items of variable length. The list chunks compete with each other as they dynamically integrate this bottom-up information. The winning groupings feed back to provide top-down support to their phonemic items. Feedback establishes a resonance which temporarily boosts the activation levels of selected items and chunks, thereby creating an emergent conscious percept. Because the resonance evolves more slowly than working memory activation, it can be influenced by information presented after relatively long intervening silence intervals. The same phonemic input can hereby yield different groupings depending on its arrival time. Processes of resonant transfer and competitive teaming help determine which groupings win the competition. Habituating levels of neurotransmitter along the pathways that sustain the resonant feedback lead to a resonant collapse that permits the formation of subsequent resonances. Air Force Office of Scientific Research (F49620-92-J-0225); Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657).
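
    The claim that a slowly evolving resonance can still be influenced by input arriving after a silent gap can be illustrated with a two-time-scale leaky-integrator toy (not the ARTWORD equations; all constants are arbitrary illustration values):

        # Toy two-time-scale leaky integrators (not the ARTWORD equations): the fast
        # variable stands in for working-memory item activation, the slow one for an
        # item-chunk resonance that outlasts a silent gap.
        import numpy as np

        dt = 0.001                      # 1 ms steps
        T = int(0.6 / dt)               # simulate 600 ms
        item = np.zeros(T)              # fast working-memory item activation
        resonance = np.zeros(T)         # slow item-chunk resonance

        tau_item, tau_res = 0.020, 0.150   # fast vs slow time constants (s)

        def stimulus(step):
            """1.0 while acoustic input is on: 0-150 ms, then again 300-450 ms
            after a 150 ms silent gap (e.g., a late-arriving fricative)."""
            t = step * dt
            return 1.0 if (t < 0.15 or 0.30 <= t < 0.45) else 0.0

        for k in range(1, T):
            drive = stimulus(k)
            item[k] = item[k-1] + dt / tau_item * (-item[k-1] + drive)
            # The resonance integrates item activity on a slower time scale,
            # so it is still elevated when the post-gap input arrives.
            resonance[k] = resonance[k-1] + dt / tau_res * (-resonance[k-1] + item[k-1])

        print(f"item activation at end of gap:      {item[int(0.30/dt)-1]:.3f}")
        print(f"resonance activation at end of gap: {resonance[int(0.30/dt)-1]:.3f}")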

    Engaging the articulators enhances perception of concordant visible speech movements

    PURPOSE: This study aimed to test whether (and how) somatosensory feedback signals from the vocal tract affect concurrent unimodal visual speech perception. METHOD: Participants discriminated pairs of silent visual utterances of vowels under 3 experimental conditions: (a) normal (baseline) and while holding either (b) a bite block or (c) a lip tube in their mouths. To test the specificity of somatosensory-visual interactions during perception, we assessed discrimination of vowel contrasts optically distinguished based on their mandibular (English /ɛ/-/æ/) or labial (English /u/-French /u/) postures. In addition, we assessed perception of each contrast using dynamically articulating videos and static (single-frame) images of each gesture (at vowel midpoint). RESULTS: Engaging the jaw selectively facilitated perception of the dynamic gestures optically distinct in terms of jaw height, whereas engaging the lips selectively facilitated perception of the dynamic gestures optically distinct in terms of their degree of lip compression and protrusion. Thus, participants perceived visible speech movements in relation to the configuration and shape of their own vocal tract (and possibly their ability to produce covert vowel production-like movements). In contrast, engaging the articulators had no effect when the speaking faces did not move, suggesting that the somatosensory inputs affected perception of time-varying kinematic information rather than changes in target (movement end point) mouth shapes. CONCLUSIONS: These findings suggest that orofacial somatosensory inputs associated with speech production prime premotor and somatosensory brain regions involved in the sensorimotor control of speech, thereby facilitating perception of concordant visible speech movements. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.9911846. Funding: R01 DC002852 - NIDCD NIH HHS. Accepted manuscript.