
    ACCDIST: A Metric for comparing speakers' accents

    This paper introduces a new metric for the quantitative assessment of the similarity of speakers' accents. The ACCDIST metric is based on the correlation of inter-segment distance tables across speakers or groups. Basing the metric on segment similarity within a speaker ensures that it is sensitive to the speaker's pronunciation system rather than to his or her voice characteristics. The metric is shown to have an error rate of only 11% on the classification of speakers into 14 English regional accents of the British Isles, half the error rate of a metric based directly on spectral information. The metric may also be useful for cluster analysis of accent groups.
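    The core computation can be sketched as follows. This is a minimal illustration, not the paper's exact configuration: the segment labels, feature vectors, and Euclidean within-speaker distances are assumptions; the key idea is that two speakers are compared by correlating their inter-segment distance tables rather than their raw spectra.

    ```python
    import numpy as np
    from itertools import combinations

    def distance_table(segment_means):
        # segment_means: dict mapping segment label -> mean feature vector.
        # Returns the upper triangle of the speaker's inter-segment distance table.
        labels = sorted(segment_means)
        return np.array([np.linalg.norm(segment_means[a] - segment_means[b])
                         for a, b in combinations(labels, 2)])

    def accdist(speaker_a, speaker_b):
        # Pearson correlation between the two speakers' distance tables;
        # close to 1 means similar pronunciation systems.
        da, db = distance_table(speaker_a), distance_table(speaker_b)
        return float(np.corrcoef(da, db)[0, 1])
    ```

    Because the comparison happens in distance-table space, a uniform scaling of one speaker's feature space (e.g. a longer vocal tract) leaves the correlation unchanged, which is what makes the metric voice-independent.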

    Formant frequencies of vowels in 13 accents of the British Isles

    This study is a formant-based investigation of the vowels of male speakers in 13 accents of the British Isles. It provides F1/F2 graphs (obtained with a semi-automatic method) which could be used as starting points for more thorough analyses. The article focuses on both phonetic realization and systemic phenomena, and it also provides detailed information on automatic formant measurements. The aim is to obtain an up-to-date picture of within- and between-accent vowel variation in the British Isles. F1/F2 graphs plot z-scored Bark-transformed formant frequencies, and values in Hertz are also provided. Along with the findings, a number of methodological issues are addressed.
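    The normalisation described (z-scored Bark-transformed formants) can be sketched as below. The Traunmüller approximation of the Bark scale and the per-speaker z-scoring (Lobanov-style) are standard choices assumed here for illustration; the paper's exact pipeline may differ.

    ```python
    import numpy as np

    def hz_to_bark(f_hz):
        # Traunmüller (1990) approximation of the Bark scale
        return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

    def normalise(formants_hz):
        # z-score Bark-transformed formant values within one speaker,
        # removing speaker-specific vocal-tract scaling before plotting
        bark = hz_to_bark(np.asarray(formants_hz, dtype=float))
        return (bark - bark.mean()) / bark.std()
    ```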

    Acoustic model selection using limited data for accent robust speech recognition

    This paper investigates techniques to compensate for the effects of regional accents of British English on automatic speech recognition (ASR) performance. Given a small amount of speech from a new speaker, is it better to apply speaker adaptation, or to use accent identification (AID) to identify the speaker’s accent followed by accent-dependent ASR? Three approaches to accent-dependent modelling are investigated: using the ‘correct’ accent model, choosing a model using supervised (ACCDIST-based) accent identification, and building a model using data from neighbouring speakers in ‘AID space’. All of the methods outperform the accent-independent model, with relative reductions in ASR error rate of up to 44%. Using on average 43 s of speech to identify an appropriate accent-dependent model outperforms using that speech for supervised speaker adaptation, by 7%.

    An integrated dialect analysis tool using phonetics and acoustics

    This study aimed to verify a computational phonetic and acoustic analysis tool created in the MATLAB environment. A dataset was obtained containing 3 broad American dialects (Northern, Western and New England) from the TIMIT database, using words that also appear in the Swadesh list. Each dialect consisted of 20 speakers uttering 10 sentences. Verification using phonetic comparisons between dialects was made by calculating the Levenshtein distance in Gabmap and in the proposed software tool. Agreement between the linguistic distances obtained with each analysis method was found. Each tool showed increasing linguistic distance as a function of increasing geographic distance, in a shape similar to Séguy's curve. The proposed tool was then further developed to include acoustic characterisation of inter-dialect dynamics. Significant variation between dialects was found for the pitch, trajectory length and spectral rate of change for 7 of the vowels investigated. Analysis of the vowel space area using the 4 corner vowels indicated that, for male speakers, geographically closer dialects have smaller variations in vowel space area than those further apart. The female utterances did not show a similar pattern of linguistic distance, likely due to the lack of one corner vowel, /u/, making the vowel space a triangle.
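    The two measurements named above are both short computations. Below is a sketch of the Levenshtein edit distance used for the phonetic verification, plus the shoelace formula for vowel space area from corner-vowel (F1, F2) points; the function names and inputs are illustrative, not the tool's actual API.

    ```python
    def levenshtein(a, b):
        # dynamic-programming edit distance between two transcription strings
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,          # deletion
                               cur[j - 1] + 1,       # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    def vowel_space_area(points):
        # shoelace formula over corner-vowel (F1, F2) points given in order;
        # works for the 4-corner quadrilateral or the 3-corner triangle case
        n = len(points)
        s = sum(points[i][0] * points[(i + 1) % n][1]
                - points[(i + 1) % n][0] * points[i][1]
                for i in range(n))
        return abs(s) / 2.0
    ```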

    The impact of voice on trust attributions

    Trust and speech are both essential aspects of human interaction. On the one hand, trust is necessary for vocal communication to be meaningful. On the other hand, humans have developed ways to infer someone’s trustworthiness from their voice, as well as to signal their own. Yet research on trustworthiness attributions to speakers is scarce and contradictory, and very often relies on explicit measures, which do not predict actual trusting behaviour. Measuring behaviour, however, is essential for an accurate representation of trust. This thesis contains 5 experiments examining the influence of various voice characteristics (accent, prosody, emotional expression and naturalness) on trusting behaviours towards virtual players and robots. The main methodology is the "investment game", a method derived from game theory that allows implicit trustworthiness attributions to be measured over time. Results show that standard accents, high pitch, slow articulation rate and smiling voice generally increase trusting behaviours towards a virtual agent, and that a synthetic voice generally elicits higher trustworthiness judgments towards a robot. The findings also suggest that different voice characteristics influence trusting behaviours with different temporal dynamics. Furthermore, the actual behaviour of the various speaking agents was modified to be more or less trustworthy, and results show that people’s trusting behaviours develop over time accordingly. People also reinforce their trust towards speakers they deem particularly trustworthy when those speakers are indeed trustworthy, but punish them when they are not. This suggests that people’s trusting behaviours might also be influenced by the congruency of their first impressions with the actual experience of the speaker’s trustworthiness, a "congruency effect".
    This has important implications in the context of Human–Machine Interaction, for example when assessing users’ reactions to speaking machines that might not always function properly. Taken together, the results suggest that voice influences trusting behaviour, that first impressions of a speaker’s trustworthiness based on vocal cues might not be indicative of future trusting behaviours, and that trust should be measured dynamically.

    Acoustic Approaches to Gender and Accent Identification

    There has been considerable research on the problems of speaker and language recognition from samples of speech. A less researched problem is that of accent recognition. Although this is similar to language identification, different accents of a language exhibit more fine-grained differences between classes than languages do. This presents a tougher problem for traditional classification techniques. In this thesis, we propose and evaluate a number of techniques for gender and accent classification. These techniques are novel modifications and extensions to state-of-the-art algorithms, and they result in enhanced performance on gender and accent recognition. The first part of the thesis focuses on the problem of gender identification, and presents a technique that gives improved performance in situations where training and test conditions are mismatched. The bulk of the thesis is concerned with the application of the i-Vector technique to accent identification; this is the most successful approach to acoustic classification to have emerged in recent years. We show that it is possible to achieve high-accuracy accent identification without reliance on transcriptions and without utilising phoneme recognition algorithms. The thesis describes various stages in the development of i-Vector-based accent classification that improve on the standard approaches usually applied for speaker or language identification, which are insufficient. We demonstrate that very good accent identification performance is possible with acoustic methods by considering different i-Vector projections, front-end parameters, i-Vector configuration parameters, and an optimised fusion of the resulting i-Vector classifiers obtainable from the same data. We claim to have achieved the best accent identification performance on the test corpus for acoustic methods, with up to 90% identification rate.
    This performance is better than previously reported acoustic-phonotactic systems on the same corpus, and is very close to the performance obtained via transcription-based accent identification. Finally, we demonstrate that the utilisation of our techniques for speech recognition purposes leads to considerably lower word error rates.
    Keywords: Accent Identification, Gender Identification, Speaker Identification, Gaussian Mixture Model, Support Vector Machine, i-Vector, Factor Analysis, Feature Extraction, British English, Prosody, Speech Recognition
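    The scoring stage of an i-Vector classifier can be sketched with cosine scoring against per-accent mean i-vectors. This assumes the extraction pipeline (UBM training, total-variability projection) has already produced the i-vectors; the names and dimensions below are illustrative, and real systems typically add length normalisation and channel compensation (e.g. LDA/WCCN) before scoring.

    ```python
    import numpy as np

    def cosine(u, v):
        # cosine similarity, the standard i-vector scoring function
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def classify_accent(ivector, class_means):
        # assign the accent whose mean i-vector is closest in cosine similarity
        return max(class_means, key=lambda c: cosine(ivector, class_means[c]))
    ```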