
    The weight of phonetic substance in the structure of sound inventories

    In the research field initiated by Liljencrants & Lindblom in 1972, we illustrate the possibility of giving substance to phonology, predicting the structure of phonological systems from non-phonological principles, be they listener-oriented (perceptual contrast and stability) or speaker-oriented (articulatory contrast and economy). For vowel systems, we proposed the Dispersion-Focalisation Theory (Schwartz et al., 1997b). With the DFT, we can predict vowel systems using two competing perceptual constraints weighted by two parameters, λ and α. The first aims at increasing auditory distances between vowel spectra (dispersion); the second aims at increasing the perceptual salience of each spectrum through formant proximities (focalisation). We also introduced new variants based on research in physics, namely the phase space (λ, α) and the polymorphism of a given phase, or superstructures in phonological organisations (Vallée et al., 1999), which allow us to generate 85.6% of the 342 UPSID systems with 3 to 7 vowel qualities. No comparable theory for consonants seems to exist yet. We therefore present a detailed typology of consonants and then suggest ways to explain the predominance of plosives over fricatives and of voiceless over voiced consonants by i) comparing these patterns with language acquisition data at the babbling stage and examining the capacity to acquire rather different linguistic systems in relation to the main degrees of freedom of the articulators; and ii) showing that the places “preferred” for each manner are at least partly conditioned by the morphological constraints that facilitate or complicate, make possible or impossible, the needed articulatory gestures, e.g. the complexity of articulatory control for voicing and the aerodynamics of fricatives. A rather strict coordination between the glottis and the oral constriction is needed to produce acceptable voiced fricatives (Mawass et al., 2000). We determine that the region where combinations of Ag (glottal area) and Ac (constriction area) values result in a balance between the voice and noise components is indeed very narrow. We thus demonstrate that some of the main tendencies in the phonological vowel and consonant structures of the world’s languages can be explained, at least in part, by sensorimotor constraints, and argue that phonology can take part in a theory of Perception-for-Action-Control.
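    To make the two competing constraints concrete, here is a minimal sketch of a DFT-style energy computation, assuming vowels are reduced to (F1, F2) points in Bark. The formant values and the exact form of both terms are illustrative stand-ins; the published DFT uses perceptual distances over full spectra and an effective F2', so this is not the authors' actual formula.

```python
import itertools

# Made-up (F1, F2) values in Bark for a toy 3-vowel system.
VOWELS = {"i": (2.8, 13.5), "a": (6.5, 9.6), "u": (3.0, 6.5)}

def dispersion_cost(vowels):
    """Sum of inverse squared inter-vowel distances: small when vowel
    spectra are well separated in the perceptual plane."""
    cost = 0.0
    for a, b in itertools.combinations(vowels.values(), 2):
        d2 = (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        cost += 1.0 / d2
    return cost

def focalisation_benefit(vowels):
    """Grows as F1 and F2 converge: a crude stand-in for the perceptual
    salience of formant proximity."""
    return sum(1.0 / (f2 - f1) ** 2 for f1, f2 in vowels.values())

def energy(vowels, lam=1.0, alpha=0.3):
    """Lower energy = better system under the two competing constraints,
    weighted by lam (dispersion) and alpha (focalisation)."""
    return lam * dispersion_cost(vowels) - alpha * focalisation_benefit(vowels)

print(round(energy(VOWELS), 4))
```

    Minimizing this energy over candidate inventories, for a grid of (λ, α) values, is what yields the predicted phase space of vowel systems.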

    Effects of Palatal Expansion on Speech Production

    Introduction: Rapid palatal expanders (RPEs) are a commonly used orthodontic adjunct for the treatment of posterior crossbites. RPEs are cemented to bilateral posterior teeth across the palate and thus may interfere with proper tongue movement and linguopalatal contact. The purpose of this study was to identify what specific role RPEs play in speech sound production for child and early-adolescent orthodontic patients. Materials and Methods: RPEs were treatment-planned for patients seeking orthodontics at Marquette University. Speech recordings were made using a phonetically balanced reading passage (“The Caterpillar”) at three time points: 1) before RPE placement; 2) immediately after cementation; and 3) 10–14 days after appliance delivery. Measures of vocal tract resonance (formant center frequencies) were obtained for vowels, and measures of noise distribution (spectral moments) were obtained for consonants. A two-way repeated-measures ANOVA with post-hoc tests was used for statistical analysis. Results: For the vowel /i/, the first formant increased and the second formant decreased, indicating a more inferior and posterior tongue position. For /e/, only the second formant decreased, indicating a more posterior tongue position. The formants did not return to baseline within the two-week study period. For the consonants /s/, /ʃ/, /t/, and /k/, a significant shift from high to low frequencies indicated distortion upon appliance placement. Of these, only /t/ fully returned to baseline during the study period. Conclusion: Numerous phonemes were distorted upon RPE placement, indicating altered speech sound production. For most phonemes, speech takes longer than two weeks to return to baseline, if it returns at all. Clinically, the results of this study will help with pre-treatment and interdisciplinary counseling for orthodontic patients receiving palatal expanders.
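    The noise-distribution measures referred to here are the standard four spectral moments. A minimal generic sketch of computing them from a windowed consonant segment follows; the windowing and normalization choices are common defaults, not the study's exact analysis pipeline.

```python
import numpy as np

def spectral_moments(segment, fs):
    """First four spectral moments of a consonant noise segment:
    centroid (Hz), variance, skewness, and excess kurtosis of the
    power spectrum treated as a distribution over frequency."""
    windowed = segment * np.hanning(len(segment))
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    p = power / power.sum()                 # normalize to a distribution
    m1 = np.sum(freqs * p)                  # centroid
    m2 = np.sum((freqs - m1) ** 2 * p)      # variance
    m3 = np.sum((freqs - m1) ** 3 * p) / m2 ** 1.5   # skewness
    m4 = np.sum((freqs - m1) ** 4 * p) / m2 ** 2 - 3.0  # excess kurtosis
    return m1, m2, m3, m4
```

    A downward shift in the centroid (m1) across recording sessions is the kind of high-to-low frequency change the study reports for the distorted consonants.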

    An Experimental Evaluation of Stop-Plosive and Fricative Consonant Intelligibility by Tracheoesophageal Speakers in Quiet and Noise

    Despite functional levels of postlaryngectomy communication, individuals who undergo total laryngectomy and tracheoesophageal (TE) puncture voice restoration continue to experience significant communication difficulties in noisy environments. In an effort to identify and further characterize TE speakers’ intelligibility in noise, the current auditory perceptual study investigated stop-plosive and fricative intelligibility of TE speech in quiet and in the presence of multi-talker noise. Eighteen listeners evaluated monosyllabic consonant-vowel-consonant words produced by 14 TE speakers using an open-response paradigm. Our findings indicate that overall intelligibility was significantly lower in noise. Further examination showed a differential effect of noise on intelligibility according to manner and phoneme position. While overall error patterns remained consistent across conditions, voicing distinction was affected differentially according to manner and position. The present investigation provides valuable insight into the difficulties faced by TE speakers in noisy speaking environments, as well as a basis for optimization of counseling and postlaryngectomy voice rehabilitation.

    Classification of Arabic fricative consonants according to their places of articulation

    Many technology systems have used voice recognition applications to transcribe a speaker’s speech into text that these systems can then use. One of the most complex tasks in speech identification is knowing which acoustic cues to use to classify sounds. This study presents an approach for characterizing Arabic fricative consonants into two groups (sibilant and non-sibilant). From an acoustic point of view, our approach is based on the analysis of the energy distribution, across frequency bands, in a consonant-vowel syllable. From a practical point of view, our technique was implemented in MATLAB and tested on a corpus built in our laboratory. The results obtained show that the percentage energy distribution in a speech signal is a very powerful parameter for the classification of Arabic fricatives. We obtained an accuracy of 92% for the non-sibilant consonants /f, χ, ɣ, ʕ, ħ, h/, 84% for the sibilants /s, sˤ, z, ʒ, ʃ/, and an overall classification rate of 89%. Compared with other algorithms based on neural networks and support vector machines (SVMs), our classification system provided a higher classification rate.
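    As an illustration of the band-energy idea, here is a minimal sketch of classifying a syllable by the share of spectral energy in each frequency band. The band edges and decision threshold are assumptions for illustration, not the paper's values.

```python
import numpy as np

# Hypothetical band layout (Hz); assumes fs >= 16 kHz.
BANDS_HZ = [(0, 1000), (1000, 4000), (4000, 8000)]

def band_energy_percentages(signal, fs):
    """Percentage of total spectral energy falling in each band of a
    consonant-vowel syllable segment."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    total = power.sum()
    return [100.0 * power[(freqs >= lo) & (freqs < hi)].sum() / total
            for lo, hi in BANDS_HZ]

def is_sibilant(signal, fs, high_band_threshold=40.0):
    # Sibilants concentrate noise energy at high frequencies, so a large
    # share of energy in the top band suggests /s, sˤ, z, ʒ, ʃ/.
    return band_energy_percentages(signal, fs)[-1] > high_band_threshold
```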

    Contributions of temporal encodings of voicing, voicelessness, fundamental frequency, and amplitude variation to audiovisual and auditory speech perception

    Auditory and audio-visual speech perception were investigated using auditory signals of invariant spectral envelope that temporally encoded the presence of voiced and voiceless excitation, variations in amplitude envelope, and F0. In Experiment 1, the contribution of the timing of voicing to consonant identification was compared with the additional effects of variations in F0 and in the amplitude of voiced speech. In audio-visual conditions only, amplitude variation slightly increased accuracy overall and for manner features. F0 variation slightly increased overall accuracy and manner perception in both auditory and audio-visual conditions. Experiment 2 examined consonant information derived from the presence and amplitude variation of voiceless speech in addition to that from voicing, F0, and voiced speech amplitude. Binary indication of voiceless excitation improved accuracy overall and for voicing and manner. The amplitude variation of voiceless speech produced only a small increment in place-of-articulation scores. A final experiment examined audio-visual sentence perception using encodings of voiceless excitation and amplitude variation added to a signal representing voicing and F0. There was a contribution of amplitude variation to sentence perception, but not of voiceless excitation. The timing of voiced and voiceless excitation appears to be the major temporal cue to consonant identity. © 1999 Acoustical Society of America. [S0001-4966(99)01410-1]
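    To illustrate the kind of signal involved, here is a minimal sketch of an excitation that carries only the temporal cues studied (voicing timing, F0, and amplitude envelope). The per-sample layout and constants are assumptions, and the authors' actual synthesis procedure is not reproduced here; the point is only that passing this excitation through one fixed filter keeps the spectral envelope invariant across stimuli.

```python
import numpy as np

FS = 16_000  # sample rate in Hz (assumed)

def temporal_excitation(voiced_mask, f0_track, amp_env, fs=FS):
    """Excitation carrying only temporal cues: a pulse train at F0
    where voiced, low-level noise where voiceless, scaled by the
    amplitude envelope. All arrays are per-sample and equal length."""
    n = len(voiced_mask)
    out = np.zeros(n)
    phase = 0.0
    for i in range(n):
        if voiced_mask[i]:
            phase += f0_track[i] / fs   # advance by one period fraction
            if phase >= 1.0:            # emit one pulse per F0 period
                phase -= 1.0
                out[i] = 1.0
        else:
            out[i] = 0.1 * np.random.randn()
    return out * amp_env
```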

    Coalescent Assimilation Across Word Boundaries in American English and in Polish English

    Coalescent assimilation (CA), whereby the alveolar obstruents /t, d, s, z/ in word-final position merge with word-initial /j/ to produce the postalveolars /tʃ, dʒ, ʃ, ʒ/, is one of the best-known connected speech processes in English. Because it is so common, CA has been discussed in numerous textbook descriptions of English pronunciation, and yet, on comparing them, it is difficult to get a clear picture of which factors make its application likely. This paper investigates the application of CA in American English to establish a) which factors increase the likelihood of its application for each of the four alveolar obstruents, and b) how the plosives /t, d/ are realized allophonically when CA does not apply. To do so, the Buckeye Corpus (Pitt et al. 2007) of spoken American English is analyzed quantitatively. As a second step, these results are compared with Polish English: statistics analogous to those listed above for American English are gathered for Polish English from the PLEC corpus (Pęzik 2012). The last section focuses on the consequences the findings have for teaching based on a native-speaker model. It is argued that a description of the phenomenon that reflects the behavior of speakers of American English more accurately than extant textbook accounts could benefit the acquisition of these patterns.
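    A minimal sketch of how such application-rate counts can be gathered from a phone-aligned corpus follows. The data layout is a toy stand-in; the Buckeye and PLEC formats differ, and the phone labels here are hypothetical.

```python
# Word-final obstruent -> expected postalveolar outcome under CA.
TRIGGERS = {"t": "tʃ", "d": "dʒ", "s": "ʃ", "z": "ʒ"}

def ca_application_rates(word_pairs):
    """word_pairs: iterable of ((final_phone, realized_phone), initial_phone)
    for consecutive word pairs. Returns CA application rate per obstruent
    over the eligible contexts (word-final obstruent before /j/)."""
    counts = {k: [0, 0] for k in TRIGGERS}        # [applied, eligible]
    for (final, realized), initial in word_pairs:
        if final in TRIGGERS and initial == "j":
            counts[final][1] += 1
            if realized == TRIGGERS[final]:
                counts[final][0] += 1
    return {k: (a / e if e else 0.0) for k, (a, e) in counts.items()}
```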

    Cross-linguistic exploration of phonemic representations

    All languages around the world have their own vast sound inventories. Understanding each other through verbal communication requires, first of all, understanding each other’s phonemes. This often overlooked constraint is non-trivial already among native speakers of the same language, given the variability with which we all articulate our phonemes. It becomes even more challenging when interacting with non-native speakers, who have developed neural representations of different sets of phonemes. How can the brain make sense of such diversity? It is remarkable that the sounds produced by the vocal tract, which have evolved to serve as symbols in natural languages, fall almost neatly into two classes with very different characteristics: consonants and vowels. Consonants are complex in nature: beyond acoustically defined formant (resonant) frequencies, additional physical parameters are needed to identify them, such as formant transitions, the delay period in those transitions, energy bursts, and the vibrations of the vocal cords occurring before and during the consonant burst, along with the length of those vibrations. Surprisingly, consonants are very quickly categorized through a quite mysterious form of invariant feature extraction. In contrast to consonants, vowels can be represented in a simple and transparent manner, because, amazingly, only two analog dimensions within a continuous space are essentially enough to characterize a vowel. The first dimension corresponds to the degree to which the vocal tract is open when producing the vowel; the second is the location of the main occlusion. Surprisingly, these anatomically defined production modes match very precisely the first two acoustically defined formant frequencies, namely F1 and F2. While some languages require additional features to specify a vowel, such as length or roundedness, whose nature may be more discrete, for many others F1 and F2 are all there is to it. In this thesis, we use both behavioral measures (phoneme confusion frequencies) and neural measures (the spatio-temporal distribution of phoneme-evoked neural activation) to study the cross-linguistic organization of phoneme perception. In Chapter 2, we study the perception of consonants by replicating and extending a classical study on the sub-phonemic features underlying perceptual differences between phonemes. Comparing the responses of native listeners with those of Italian, Turkish, Hebrew, and (Argentinian) Spanish listeners to a range of American English consonants, we look at the specific patterns of errors that speakers of different languages make, using the metric content index, which was previously used in entirely different contexts with either discrete representations (e.g. in face space) or continuous ones (e.g. of the spatial environment). Beyond the analysis of percent correct scores and transmitted information, we frame the problem in terms of ‘place attractors’, in analogy to those that have been well studied in spatial memory. Through our experimental paradigm, we try to access distinct attractors in different languages. In the same chapter, we provide auditory evoked potentials for some consonant-vowel syllables, which hint at transparent processing of the vowels regulated by the first two formants that characterize them; accordingly, we then turn to investigating vowel trajectories in the vowel manifold.
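    Both quantities mentioned here, percent correct and transmitted information, are derived from a stimulus-by-response confusion matrix. A minimal generic sketch (standard definitions, not the thesis code) is:

```python
import numpy as np

def percent_correct(conf):
    """Fraction of trials on the diagonal of a stimulus x response
    confusion matrix of raw counts."""
    conf = np.asarray(conf, dtype=float)
    return conf.trace() / conf.sum()

def transmitted_information(conf):
    """Mutual information (bits) between stimulus and response,
    estimated from the joint distribution implied by the counts."""
    p = np.asarray(conf, dtype=float)
    p /= p.sum()
    ps = p.sum(axis=1, keepdims=True)   # stimulus marginal
    pr = p.sum(axis=0, keepdims=True)   # response marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (ps * pr))
    return np.nansum(terms)             # zero-count cells contribute 0
```

    Comparing these summaries across listener groups, for the same stimuli, is what lets error patterns rather than raw accuracy carry the cross-linguistic signal.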
We start our exploration of the vowel space in Chapter 3 by addressing a third dimension that is perceptually important for native Turkish speakers, namely rounding. Can native Turkish speakers better navigate vowel trajectories in which the second formant changes over a short time, reflecting rounding, compared to native Italian speakers, who are not required to make such fine discriminations along this dimension? We found no mother-tongue effects. We have found, however, that rounding in vowels can be represented with similar efficiency by fine differences in an F2 peak frequency that is constant in time, or by inverting the temporal dynamics of a changing F2, which makes vowels not mere points in the space but continuous trajectories. We walk through phoneme trajectories every few tens of milliseconds, and it comes to us as naturally as walking in a room, if not more so. Similar to spatial trajectories, in Chapter 4 we create equidistant continuous vowel trajectories on a vowel wheel positioned in the central region of the two-dimensional vowel space, where some languages, like Italian, have no standard vowel categories, and others, like English, do. Is the central region in languages like Italian to be regarded as a flat empty space with no attractors? Is there any reminiscence of their own phoneme memories? We ask whether this central region is flat, or can at least be flattened through extensive training. If so, would we then find a neural substrate that modulates perception in the 2D vowel plane, similar to the grid-cell representation involved in the spatial navigation of empty 2D arenas? Our results are not suggestive of a grid-like representation, but rather point to a modulation of the neural signal by the position of the Italian vowels around the outer contour of the wheel. Therefore, in Chapter 5, we ask how our representation of the vowel space, not only in the central region but in the entirety of its linguistically relevant portion, is deformed by the presence of the standard categories of our vowel repertoire. We use ‘belts’, short stretches along which formant frequencies are varied quasi-continuously, to determine the local metric that best describes, for each language, the vowel manifold as a non-flat space constructed in our brain. As opposed to the ‘consonant planes’ constructed in Chapter 2, which appear to have a largely similar structure across languages, we find that the vowel plane is subjective and language-dependent. In light of these language-specific transformations of the vowel plane, we wonder whether native bilinguals hold multiple maps available simultaneously and use one or the other to interpret linguistic sources depending on context. Or, alternatively, do they construct and use a fusion of the two original maps, one that allows them to efficiently discriminate the vowel contrasts that have to be discriminated in either language? The neural mechanisms underlying physical map switches, known as remapping, have been well studied in the rodent hippocampus; is vowel map alternation governed by similar principles? We compare and show that the perceptual vowel maps of native Norwegian speakers, who are not bilingual but are fluent in English, are unique, probably sculpted by their long-term memory codes, and we leave the curious case of bilinguals for future studies.
Overall, we attempt to investigate phoneme perception in a framework different from the one in which it has been studied in the literature, where it has attracted a large community for many years while remaining largely disconnected from the study of cortical computation. Our aim is to demonstrate that insights into persisting questions in the field may be reached from another well-explored part of cognition.

    Gutturals in phonetic terms

    “Guttural” is a vaguely or variably defined term in the phonology of ancient Semitic languages, especially Tiberian Hebrew. It can include laryngeals, pharyngeals, epiglottals, uvulars, and sometimes postvelars; pharyngealized emphatics should be covered too, though they are not; and often, inexplicably, all rhotics are included, even though only uvular ones should be eligible. In general, “guttural” seems to be a purely phonology-based concept, out of step with phonetic considerations. Sounds of speech, however, are more than abstract nodes in charts; they have material substance, which both affects and is affected by neighboring sounds. Over time, a secondary manifestation can assume the phonological position of a sound, gradually making the sound itself redundant and prone to disappearance. This may well have been the origin of the disputed Semitic *ġ, provided that a secondary articulation, velarization or possibly pharyngealization, took over and became a full-fledged [ɣ]. Teachers who employ the inherited term “gutturals” sometimes tend to present them as imposing [a]-vowels wherever possible. This is a phonetically unsubstantiated claim: laryngeals impose no vocalic colour; uvulars and postvelars would enhance [o] and [u], if anything at all; epiglottals may front the back vowels (i.e. towards [e]) and lower only the front vowels; the inherent [ɑ]-colour of pharyngeals seems to lag behind rather than anticipate (which might be language-specific); and pharyngealized consonants, excluded from gutturals in any case, are observed to move vowels back rather than down. Articulations “behind the tongue”, so crucial for Semitic phonologies, present numerous complexities: they are difficult to observe, frequently substitute for one another, and involve issues of terminology as well as of the interpretation of scripts. Here too, modern phonetic studies can furnish acoustic and physiological data to support hypotheses about the languages of the ancient world.

    The impact of spectrally asynchronous delay on the intelligibility of conversational speech

    Conversationally spoken speech is rampant with rapidly changing and complex acoustic cues that individuals are able to hear, process, and encode to meaning. For many hearing-impaired listeners, a hearing aid is necessary to hear these spectral and temporal acoustic cues of speech. For listeners with mild-to-moderate high-frequency sensorineural hearing loss, open-fit digital signal processing (DSP) hearing aids are the most common amplification option. Open-fit DSP hearing aids introduce a spectrally asynchronous delay to the acoustic signal: audible low-frequency information passes to the eardrum unimpeded, while the aid delivers amplified high-frequency sounds whose onset is delayed relative to the natural pathway of sound. These spectrally asynchronous delays may disrupt the natural acoustic pattern of speech. The primary goal of this study is to measure the effect of spectrally asynchronous delay on the intelligibility of conversational speech for normal-hearing and hearing-impaired listeners. A group of normal-hearing listeners (n = 25) and a group of listeners with mild-to-moderate high-frequency sensorineural hearing loss (n = 25) participated in this study. The acoustic stimuli comprised 200 conversationally spoken recordings of the low-predictability sentences from the revised Speech Perception in Noise test (R-SPIN). These 200 sentences were modified to control for audibility for the hearing-impaired group and so that the acoustic energy above 2 kHz was delayed by 0 ms (control), 4 ms, 8 ms, or 32 ms relative to the low-frequency energy. The data were analyzed to find the effect of each of the four delay conditions on the intelligibility of the final key word of each sentence. Normal-hearing listeners were minimally affected by the asynchronous delay. The hearing-impaired listeners, however, were deleteriously affected by increasing amounts of spectrally asynchronous delay. Although the hearing-impaired listeners performed well overall in their perception of conversationally spoken speech in quiet, the intelligibility of conversationally spoken sentences decreased significantly when the delay values were equal to or greater than 4 ms. Therefore, hearing aid manufacturers need to restrict the amount of delay introduced by DSP so that it does not distort the acoustic patterns of conversational speech.
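    A minimal sketch of imposing such a spectrally asynchronous delay on a signal, assuming a 2 kHz band split and fourth-order Butterworth filters (the study's exact processing chain is not specified here):

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44_100  # sample rate in Hz (assumed)

def asynchronous_delay(x, delay_ms, fs=FS, split_hz=2000):
    """Delay the energy above split_hz by delay_ms relative to the low
    band, mimicking an open-fit aid's amplified high-frequency path
    arriving later than the unprocessed low-frequency path."""
    sos_lo = butter(4, split_hz, btype="low", fs=fs, output="sos")
    sos_hi = butter(4, split_hz, btype="high", fs=fs, output="sos")
    low = sosfilt(sos_lo, x)
    high = sosfilt(sos_hi, x)
    d = int(round(fs * delay_ms / 1000.0))
    delayed_high = np.concatenate([np.zeros(d), high])[: len(x)]
    return low + delayed_high
```

    Running the same sentence through delay_ms values of 0, 4, 8, and 32 reproduces the four stimulus conditions described above.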

    A Study of the Assimilative Behavior of the Voiced Labio-Dental Fricative in American English

    Gradation is one of the main features of colloquial speech. It implies the presence of certain phonological processes that ease the transition between phonemes with different articulatory features. For English, one of these processes is assimilation, whereby the articulation of a segment is modified into that of another segment already existing in the system. Our study takes up Gimson’s (1994) suggestion that /v/ assimilates into /m/ when it is followed by the bilabial nasal. After observing and describing different cases of assimilation, we suggest further possible explanations for this phenomenon and additional assimilative behaviors of /v/. We then conduct an experiment with six American English L1 speakers who evaluate sentences whose articulation includes our suggested proposals. The results show Gimson’s account to be less accurate than expected. Furthermore, we show that /v/ can assimilate into /b/, /ʔ/, and /d/ when it is followed by bilabial, velar, and alveolar phonemes.