Assessing the treatment effects in apraxia of speech: introduction and evaluation of the Modified Diadochokinesis Test
Background: The number of reliable and valid instruments to measure the effects of therapy in apraxia of speech (AoS) is limited. Aims: To evaluate the newly developed Modified Diadochokinesis Test (MDT), a task to assess the effects of rate and rhythm therapies for AoS in a multiple-baseline-across-behaviours design. Methods: The consistency, accuracy and fluency of speech of 24 adults with AoS and 12 unaffected speakers matched for age, gender and educational level were assessed using the MDT. The reliability and validity of the instrument were considered and outcomes compared with those obtained with existing tests. Results: The results revealed that the MDT had strong internal consistency. Scores were influenced by syllable structure complexity, while distinctive features of articulation had no measurable effect. The test-retest and intra- and inter-rater reliabilities were shown to be adequate, and the discriminant validity was good. For convergent validity, mixed outcomes were found: apart from one correlation, the scores on tests assessing functional communication and AoS correlated significantly with the MDT outcome measures. The spontaneous speech phonology measure of the Aachen Aphasia Test (AAT) correlated significantly with the MDT outcome measures, but no correlations were found for the repetition subtest and the spontaneous speech articulation/prosody measure of the AAT. Conclusions & Implications: The study shows that the MDT has adequate psychometric properties, implying that it can be used to measure changes in speech motor control during treatment for apraxia of speech. The results demonstrate the validity and utility of the instrument as a supplement to speech tasks in assessing speech improvement aimed at the level of planning and programming of speech.
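The internal-consistency figure reported in this abstract is the kind of statistic commonly computed as Cronbach's alpha over a subjects-by-items score matrix. The abstract does not name the statistic or its implementation, so the following is only an illustrative sketch (function name and data layout are assumptions):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score),
    with k >= 2 items; sample variances use ddof=1.
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return float((k / (k - 1)) * (1 - item_vars.sum() / total_var))
```

Perfectly correlated items yield alpha = 1, while unrelated items drive alpha toward 0, which is why values near 1 are read as "strong internal consistency".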
Unstressed Vowels in German Learner English: An Instrumental Study
This study investigates the production of vowels in unstressed syllables by advanced German learners of English in comparison with native speakers of Standard Southern British English. Two acoustic properties were measured: duration and formant structure. The results indicate that duration of unstressed vowels is similar in the two groups, though there is some variation depending on the phonetic context. In terms of formant structure, learners produce slightly higher F1 and considerably lower F2, the difference in F2 being statistically significant for each learner. Formant values varied as a function of context and orthographic representation of the vowel.
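Formant frequencies of the kind measured here (F1, F2) are often estimated from linear predictive coding (LPC) coefficients: the angles of the complex poles of the LPC polynomial map to resonance frequencies. The study does not say what tooling it used, so this is only a minimal sketch, assuming the autocorrelation method with Levinson-Durbin recursion:

```python
import numpy as np

def lpc(x, order):
    """LPC coefficients a[0..order] (a[0] = 1) via the autocorrelation
    method and Levinson-Durbin recursion."""
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                     # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i + 1):
            a[j] = a_prev[j] + k * a_prev[i - j]
        err *= 1.0 - k * k                 # updated prediction error
    return a

def formants(x, fs, order):
    """Frequencies (Hz) of the LPC poles in the upper half-plane."""
    roots = np.roots(lpc(x, order))
    roots = roots[np.imag(roots) > 0.01]   # keep one of each conjugate pair
    return np.sort(np.angle(roots) * fs / (2 * np.pi))
```

In practice, formant analysis of vowels also involves pre-emphasis, windowing, and bandwidth-based pruning of spurious poles, which this sketch omits.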
Smoothie or Fruit Salad? Learners’ Descriptions of Accents as Windows to Concept Formation
This paper explores the linguistically naive descriptions which one set of EFL learners provided when identifying and describing accents. First and second-year English majors at a French university were asked to do two tasks. First, they listened to two extracts to determine whether the speaker’s accent sounded more British or American, and to explain which features helped them to decide. Later they answered two questions: a) What do you do when you want to sound more like an American? and b) more like a British person? The analysis of their answers highlights learners’ underlying representations of accents as well as concept formation in relation to English pronunciation. I argue that this cognitive aspect of L2 learning should be addressed explicitly in instruction.
Dschang syllable structure
The syllable structure of Dschang is interesting for a variety of reasons. Most notable is the aspiration which can appear on most consonant types, including voiced stops. I shall argue that aspiration is best viewed as moraic, contributing to the weight of a syllable. An understanding of the syllable structure also gives valuable insights into the phonemic inventory and the distributional asymmetries, and helps to explain some curious morphophonemic vowel alternations in the imperative construction.
Assessment of severe apnoea through voice analysis, automatic speech, and speaker recognition techniques
The electronic version of this article is the complete one and can be found online at: http://asp.eurasipjournals.com/content/2009/1/982531

This study is part of an ongoing collaborative effort between the medical and the signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and non-nasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We have achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry. The activities described in this paper were funded by the Spanish Ministry of Science and Technology as part of the TEC2006-13170-C02-02 Project.
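The GMM-based detection described above can be illustrated in heavily simplified form: one Gaussian per class (a single-component "GMM" with diagonal covariance) fitted to feature vectors, with classification by frame-averaged log-likelihood ratio. The function names, features, and single-component simplification are this sketch's assumptions, not the paper's pipeline:

```python
import numpy as np

def fit_diag_gaussian(X):
    """ML mean and per-dimension variance for feature matrix X of shape (n, d)."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6   # small floor avoids division by zero

def log_likelihood(X, mean, var):
    """Per-row log-density of X under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mean) ** 2 / var, axis=1)

def classify(X, model_pos, model_neg):
    """True if the frame-averaged log-likelihood ratio favours the positive class."""
    llr = log_likelihood(X, *model_pos).mean() - log_likelihood(X, *model_neg).mean()
    return bool(llr > 0)
```

A real system along the paper's lines would use multi-component mixtures trained with EM on spectral features (e.g. cepstral coefficients); the decision rule, however, is the same likelihood comparison shown here.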
Auditory communication in domestic dogs: vocal signalling in the extended social environment of a companion animal
Domestic dogs produce a range of vocalisations, including barks, growls, and whimpers, which are shared with other canid species. The source–filter model of vocal production can be used as a theoretical and applied framework to explain how and why the acoustic properties of some vocalisations are constrained by physical characteristics of the caller, whereas others are more dynamic, influenced by transient states such as arousal or motivation. This chapter thus reviews how and why particular call types are produced to transmit specific types of information, and how such information may be perceived by receivers. As domestication is thought to have caused a divergence in the vocal behaviour of dogs as compared to the ancestral wolf, evidence of both dog–human and human–dog communication is considered. Overall, it is clear that domestic dogs have the potential to acoustically broadcast a range of information, which is available to conspecific and human receivers. Moreover, dogs are highly attentive to human speech and are able to extract speaker identity, emotional state, and even some types of semantic information.
Spelling, phonology and etymology in Hittite historical linguistics, a review article on Kloekhorst, A. Etymological Dictionary of the Hittite Inherited Lexicon (Leiden: 2008)
This review article addresses the representation of glottal stops in Akkadian and Hittite cuneiform.
Speech intelligibility in multilingual spaces
This thesis examines speech intelligibility and multilingual communication in terms of acoustic and perceptual factors. More specifically, the work focused on the impact of room acoustic conditions on the speech intelligibility of four languages representative of a wide range of linguistic properties (English, Polish, Arabic and Mandarin). Firstly, diagnostic rhyme tests (DRT), phonemically balanced (PB) word lists and phonemically balanced sentence lists were compared under four room acoustic conditions defined by their speech transmission index (STI = 0.2, 0.4, 0.6 and 0.8). The results indicated a statistically significant difference between the word intelligibility scores of the languages under all room acoustic conditions apart from the STI = 0.8 condition. English was the most intelligible language under all conditions, and differences with the other languages were larger when conditions were poor (maximum difference of 29% at STI = 0.2, 33% at STI = 0.4 and 14% at STI = 0.6). Results also showed that Arabic and Polish were particularly sensitive to background noise, and that Mandarin was significantly more intelligible than those languages at STI = 0.4. Consonant-to-vowel ratios and the languages’ distinctive features and acoustical properties explained some of the scores obtained. Sentence intelligibility scores confirmed variations between languages, but these variations were statistically significant only at the STI = 0.4 condition (sentence tests being less sensitive to very good and very poor room acoustic conditions). Additionally, the perceived speech intelligibility and soundscape perception associated with these languages were analysed in three multilingual environments: an airport check-in area, a hospital reception area, and a café. Semantic differential analysis showed that the perceived speech intelligibility of each language varies with the type of environment, as well as the type of background noise, reverberation time, and signal-to-noise ratio. Variations between the perceived speech intelligibility of the four languages were only marginally significant (p = 0.051), unlike the objective intelligibility results. The perceived speech intelligibility of English appeared to be most negatively affected by the information content and distracting sounds present in the background noise. Lastly, the study investigated several standards and design guidelines and showed how adjustments could be made to recommended STI values in order to achieve consistent speech intelligibility ratings across languages.
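The STI values used throughout (0.2 to 0.8) derive from octave-band modulation transfer measurements. As a rough sketch of the underlying mapping defined in IEC 60268-16 (modulation transfer m to apparent SNR, clipped to ±15 dB, then to a 0–1 transmission index), here simplified with illustrative equal band weights and no redundancy correction:

```python
import numpy as np

def transmission_index(m):
    """Map modulation transfer values m in (0, 1) to transmission indices in [0, 1]."""
    snr = 10 * np.log10(m / (1 - m))   # apparent signal-to-noise ratio in dB
    snr = np.clip(snr, -15, 15)        # limit to the perceptually relevant range
    return (snr + 15) / 30             # rescale to [0, 1]

def sti(m_values, weights=None):
    """Weighted average of per-band transmission indices.

    Equal weights here are a simplification; IEC 60268-16 specifies
    band-specific weights and redundancy corrections.
    """
    m_values = np.asarray(m_values, dtype=float)
    if weights is None:
        weights = np.full(len(m_values), 1 / len(m_values))
    return float(np.dot(weights, transmission_index(m_values)))
```

Under this mapping, m = 0.5 (apparent SNR of 0 dB) gives a transmission index of exactly 0.5, which is why STI = 0.5 is often treated as the boundary between fair and good conditions.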