
    SPPAS: a tool for the phonetic segmentation of speech

    SPPAS is a tool that produces automatic annotations, including utterance, word, syllable and phoneme segmentations, from a recorded speech sound and its transcription. SPPAS is distributed under the terms of the GNU General Public License. It was successfully applied during the Evalita 2011 campaign to Italian map-task dialogues. It can also deal with French, English and Chinese, and there is an easy way to add other languages. The paper describes the development of resources and free tools, consisting of acoustic models, phonetic dictionaries, and libraries and programs to deal with these data. All of them are publicly available.

    Developing Resources for Automated Speech Processing of Quebec French

    The analysis of the structure of speech nearly always rests on the alignment of the speech recording with a phonetic transcription. Nowadays several tools can perform this speech segmentation automatically. However, none of them handles Quebec French (QF hereafter) properly. Contrary to what might be assumed, the acoustics and phonotactics of QF differ widely from those of France French (FF hereafter). To segment QF adequately, features like the diphthongization of long vowels and the affrication of coronal stops have to be taken into account, so acoustic models for automatic segmentation must be trained on speech samples exhibiting those phenomena. Dictionaries and lexicons must also be adapted to integrate differences in lexical units (such as very frequent words in QF that are not used in FF) and in the phonology of QF (such as the existence of tense and lax high vowels in QF but not in FF). This paper presents the development of linguistic resources to be included in the SPPAS software tool in order to support Text normalization, Phonetization, Alignment and Syllabification. We adapted the existing French lexicon and developed a QF-specific pronunciation dictionary. We then created an acoustic model from the existing ones and adapted it with 5 minutes of manually time-aligned data. These new resources are all freely distributed with SPPAS version 2.7; together they perform the full process of speech segmentation in Quebec French.
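    As a concrete illustration of the dictionary adaptation, a QF-aware pronunciation dictionary has to list both the FF variant and the QF variant of a word. The sketch below shows one way such variants might be stored; the word list, phone symbols and Python layout are illustrative assumptions, not the actual SPPAS resource format.

        # Illustrative QF pronunciation variants (X-SAMPA-like symbols).
        # Affrication: /t d/ -> [ts dz] before the high front vowels /i y/;
        # laxing: /i y u/ -> [I Y U] in closed syllables.
        qf_dict = {
            "petite": ["p @ t i t",      # France French variant
                       "p @ t s I t"],   # QF: affrication + lax [I]
            "tu":     ["t y", "t s y"],
            "dur":    ["d y R", "d z Y R"],
        }

        for word, variants in qf_dict.items():
            print(word, "->", " | ".join(variants))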

    Voice Onset Time Enhanced User System (VOTEUS): a web graphic interface for the analysis of plosives’ release phases

    The paper proposes an up-to-date literature review of the works using AutoVOT, a discriminative large-margin learning algorithm developed for the semi-automatic measurement of voice onset times. In order to expand the accessibility of the tool in linguistic research, we present VOTEUS, a user-friendly graphic interface written in Python. The interface is conceived to assist the researcher throughout the whole annotation process, from the forced alignment of the corpora to the refinement of the AutoVOT tier and the extraction of the durations. The general aim is to speed up this phase of data analysis, providing a significant improvement over prevalent practice to date.
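    For the final duration-extraction step, something along the following lines would read the VOT intervals back out of an aligned annotation file; a minimal sketch using the Python textgrid package, where the file name and the "AutoVOT" tier name are assumptions.

        # pip install textgrid
        import textgrid

        # Hypothetical input: a TextGrid to which AutoVOT has added a tier
        # of predicted VOT intervals (assumed here to be named "AutoVOT").
        tg = textgrid.TextGrid.fromFile("speaker01_aligned.TextGrid")
        vot_tier = next(t for t in tg.tiers if t.name == "AutoVOT")

        for iv in vot_tier:
            if iv.mark:  # skip empty intervals between plosive releases
                print(f"{iv.mark}: {(iv.maxTime - iv.minTime) * 1000:.1f} ms")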

    Unsupervised Phoneme Segmentation Based on Main Energy Change for Arabic Speech, Journal of Telecommunications and Information Technology, 2017, no. 1

    In this paper, a new method for segmenting speech at the phoneme level is presented. For this purpose, the author uses the short-time Fourier transform of the speech signal. The goal is to identify the locations of the main energy changes in frequency over time, which can be interpreted as phoneme boundaries. The frequency range is analysed and energy changes are searched for within individual bands, which adds precision for vowel and consonant segments whose energy is confined to a small number of narrow spectral areas. The method uses only the power spectrum of the signal for segmentation: no parameter adaptation or speaker-specific training is needed in advance, no transcript or prior linguistic knowledge about the phonemes is required, and no voiced/unvoiced decision making is involved. Segmentation results obtained with the proposed method have been compared with a manual segmentation and with three similar segmentation methods; they show that 81% of the boundaries are successfully identified. This research aims to improve the acoustic parametrization for Arabic speech processing systems.
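    The core idea is easy to prototype: compute a short-time power spectrum, track per-band log energies, and pick the frames where the spectral energy distribution changes most as boundary candidates. The sketch below is a reconstruction under assumed parameter values (window size, band count, peak threshold), not the paper's exact algorithm.

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import stft, find_peaks

        fs, x = wavfile.read("utterance.wav")  # hypothetical mono recording
        x = x.astype(np.float64)
        if x.ndim > 1:
            x = x.mean(axis=1)  # fold stereo down to mono

        # 25 ms windows with a 10 ms hop (assumed values).
        f, t, Z = stft(x, fs=fs, nperseg=int(0.025 * fs), noverlap=int(0.015 * fs))
        power = np.abs(Z) ** 2

        # Group frequency bins into a few broad bands; log energy per frame.
        bands = np.array_split(power, 6, axis=0)
        log_e = np.log(np.stack([b.sum(axis=0) for b in bands]) + 1e-10)

        # Boundary score: total absolute change of band energies between frames.
        score = np.abs(np.diff(log_e, axis=1)).sum(axis=0)

        peaks, _ = find_peaks(score, height=score.mean() + score.std(),
                              distance=int(0.03 / (t[1] - t[0])))  # >= 30 ms apart
        print("candidate boundaries (s):", np.round(t[1:][peaks], 3))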

    Prosodic and Voice Quality Cross-Language Analysis of Storytelling Expressive Categories Oriented to Text-To-Speech Synthesis

    For ages, the oral interpretation of tales and stories has been a worldwide tradition tied to entertainment, education, and the perpetuation of culture. During the last decades, some works have focused on the analysis of this particular speaking style, rich in subtle expressive nuances represented by specific acoustic cues. In line with this, there has also been a growing interest in the development of storytelling applications, such as those related to interactive storytelling. This thesis deals with key aspects of audiovisual storytellers: improving the naturalness of expressive synthetic speech by analysing storytelling speech in detail, together with providing better non-verbal language to a speaking avatar by synchronizing that speech with its gestures. To that effect, it is necessary to understand in detail the acoustic characteristics of this particular speaking style and the interaction between speech and gestures.
    Regarding the acoustic characteristics of storytelling speech, the related literature has dealt with prosody, while it has only been suggested that voice quality may play an important role in modelling its subtleties. In this thesis, the role of both prosody and voice quality in indirect storytelling speech is analysed across languages to identify the main expressive categories it is composed of, together with the acoustic parameters that characterize them. To do so, an annotation methodology is proposed for this particular speaking style at the sentence level, based on storytelling discourse modes (narrative, descriptive, and dialogue) and further introducing narrative sub-modes. Considering this annotation methodology, the indirect speech of a story oriented to a young audience (covering the Spanish, English, French, and German versions) is analysed in terms of prosody and voice quality through statistical and discriminant analyses, after classifying the sentence-level utterances of the story into their corresponding expressive categories. The results confirm the existence of storytelling categories containing subtle expressive nuances across the considered languages, beyond the narrators' personal styles. In this sense, evidence is presented suggesting that such storytelling expressive categories are conveyed with subtler speech nuances than basic emotions, based on comparing their acoustic patterns to those obtained from emotional speech data. The analyses also show that prosody and voice quality contribute almost equally to the discrimination among storytelling expressive categories, which are conveyed with similar acoustic patterns across the analysed languages. It is also worth noting the strong agreement observed in the narrators' selection of the expressive category per utterance, even though, to our knowledge, no prior indications were given to them.
    In order to translate all these expressive categories to a corpus-based Text-To-Speech system, a speech corpus would have to be recorded for each category. However, building ad-hoc speech corpora for every specific expressive style is a daunting task. In this work, we introduce an alternative based on an analysis-oriented-to-synthesis methodology designed to derive rule-based models from a small but representative set of utterances, which can be used to generate storytelling speech from neutral speech. Experiments on increasing suspense as a proof of concept show the viability of the proposal in terms of naturalness and storytelling resemblance. Finally, concerning the interaction between speech and gestures, an analysis of synchrony and emphasis is performed, oriented to driving a 3D storytelling avatar. To that effect, strength indicators are defined for speech and gestures. After validating them through perceptual tests, an intensity rule is obtained from their correlation. Moreover, a synchrony rule is derived to determine temporal correspondences between speech and gestures. These analyses have been conducted on aggressive and neutral performances by an actor to cover a broad range of emphatic levels, as a first step towards evaluating the integration of a speaking avatar after the expressive Text-To-Speech system.
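    The discriminant analyses mentioned above can be thought of along the following lines; a minimal sketch with placeholder features and labels, where the real inputs would be the prosodic and voice-quality measures of the annotated sentence-level utterances.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        # Placeholder data: one row per utterance, with prosodic (e.g. F0 mean
        # and range, speech rate) and voice-quality (e.g. jitter, shimmer, HNR)
        # measures; labels are the annotated expressive categories.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))    # 200 utterances x 6 acoustic features
        y = rng.integers(0, 4, size=200) # 4 expressive categories

        lda = LinearDiscriminantAnalysis()
        scores = cross_val_score(lda, X, y, cv=5)
        print(f"category discrimination accuracy: {scores.mean():.2f}")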

    Sociophonetics and class differentiation: A study of working- and middle- class English in Cape Town's coloured community

    This thesis provides a detailed acoustic description of the phonetic variation and changes evident in the monophthongal vowel system of Coloured South African English in Cape Town. The changes are largely a result of South Africa's post-apartheid socio-educational reform. A detailed acoustic description highlights the most salient changes (compared with earlier reports of the variety), indicating the extent of the change amongst working-class and middle-class speakers. The fieldwork conducted for this study consists of sociolinguistic interviews with a total of 40 Coloured speakers (half male, half female) from both working-class and middle-class backgrounds. All speakers were young adults, born between 1983 and 1993, and thus raised and schooled in a period of transition from apartheid to democracy. Each of the middle-class speakers had some experience of attending formerly exclusively White schools, giving them significant contact with White peers and teachers, while the educational careers of the working-class speakers exposed them almost solely to Coloured peers and educators. The acoustic data were processed using forced alignment and automatic formant extraction, methods applied for the first time to any variety of South African English. The results of the analysis were found generally to support the findings of scholars who have documented this variety previously, with some notable exceptions amongst middle-class speakers. The changes are attributable to socio-educational change in the post-apartheid setting, and their directionality approximates trends amongst White South African English speakers. The TRAP, GOOSE and FOOT lexical sets show most change: TRAP is lowering, while GOOSE and FOOT are fronting. Although the changes approximate the vowel quality used by White speakers, middle-class Coloured speakers use an intermediate value between White speakers and working-class Coloured speakers, i.e. they have not fully adopted White norms for any of the vowel classes. Working-class speakers were found to have maintained the monophthongal vowel system traditionally used by Coloured speakers.
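    The forced-alignment-plus-formant-extraction pipeline can be sketched with the praat-parselmouth and textgrid packages; the file names, the "phones" tier name, and the lexical-set-coded vowel labels below are assumptions for illustration, not the thesis's actual setup.

        # pip install praat-parselmouth textgrid
        import parselmouth
        import textgrid

        # Burg formant tracks for one speaker's recording.
        snd = parselmouth.Sound("speaker01.wav")
        formants = snd.to_formant_burg(time_step=0.01, maximum_formant=5500.0)

        # Forced-aligned phone intervals (tier name assumed).
        tg = textgrid.TextGrid.fromFile("speaker01.TextGrid")
        phones = next(t for t in tg.tiers if t.name == "phones")

        for iv in phones:
            if iv.mark in {"TRAP", "GOOSE", "FOOT"}:  # vowel labels by lexical set
                mid = (iv.minTime + iv.maxTime) / 2.0  # vowel midpoint
                f1 = formants.get_value_at_time(1, mid)
                f2 = formants.get_value_at_time(2, mid)
                print(f"{iv.mark}\t{mid:.3f}s\tF1={f1:.0f} Hz\tF2={f2:.0f} Hz")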

    Speech verification for computer assisted pronunciation training

    Computer assisted pronunciation training (CAPT) is an approach that uses computer technology and computer-based resources in teaching and learning pronunciation. It is part of computer assisted language learning (CALL) technology, which has been widely applied to online learning platforms in the past years. This thesis deals with one of the central tasks in CAPT, i.e. speech verification. The goal is to provide a framework that identifies pronunciation errors in speech data of second language (L2) learners and generates feedback with information and instructions for error correction. Furthermore, the framework is supposed to support the adaptation to new L1-L2 language pairs with minimal adjustment and modification. The central result is a novel approach to L2 speech verification, which combines modern language technologies with linguistic expertise. For pronunciation verification, we select a set of L2 speech data, create alias phonemes from the errors annotated by linguists, then train an acoustic model with mixed L2 and gold-standard data and perform HTK (Hidden Markov Toolkit) phoneme recognition to identify the error phonemes. For prosody verification, FD-PSOLA (frequency-domain Pitch Synchronous Overlap and Add) and Dynamic Time Warping are both applied to verify the differences in duration, pitch and stress. Feedback is generated for both verifications. Our feedback is presented to learners not only visually, as in other existing CAPT systems, but also perceptually by synthesizing the learner's own audio; for prosody verification, for example, the gold-standard prosody is transplanted onto the learner's own voice. The framework is self-adaptable under semi-supervision and requires only a certain amount of mixed gold-standard and annotated L2 speech data for bootstrapping. Verified speech data are validated by linguists, re-annotated in case of wrong verification, and used in the next iteration of training. The Mary Annotation Tool (MAT) is developed as an open-source component of MARYTTS for both annotating and validating. To deal with uncertain pauses and interruptions in L2 speech, the silence model in HTK is also adapted and used in all components of the framework where forced alignment is required. Various evaluations are conducted that provide insights into the applicability and potential of our CAPT system. The pronunciation verification shows high accuracy in both precision and recall, and encourages us to acquire more error-annotated L2 speech data to enhance the trained acoustic model. To test the effect of feedback, a progressive evaluation is carried out; it shows that our perceptual feedback helps learners realize errors which they could not otherwise observe from visual feedback and textual instructions. In order to improve the user interface, a questionnaire is also designed to collect the learners' experiences and suggestions.
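    The Dynamic Time Warping step of the prosody verification can be illustrated as follows; a minimal sketch with placeholder pitch contours, where in the thesis the contours would come from the learner's utterance and the gold-standard recording.

        import numpy as np

        def dtw_cost(a: np.ndarray, b: np.ndarray) -> float:
            """Classic dynamic-time-warping alignment cost between two 1-D contours."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(a[i - 1] - b[j - 1])
                    D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m] / (n + m)  # length-normalized cost

        # Placeholder semitone-scaled F0 tracks of unequal length.
        learner = np.array([0.0, 1.2, 2.0, 1.5, 0.5])
        gold    = np.array([0.0, 0.8, 1.9, 2.1, 1.0, 0.4])
        print(f"prosodic distance: {dtw_cost(learner, gold):.2f}")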

    Prosodic detail in Neapolitan Italian

    Recent findings on phonetic detail have been taken as supporting exemplar-based approaches to prosody. Through four experiments on both production and perception of both melodic and temporal detail in Neapolitan Italian, we show that prosodic detail is not incompatible with abstractionist approaches either. Specifically, we suggest that the exploration of prosodic detail leads to a refined understanding of the relationships between richly specified, continuously varying phonetic information on one side and coarse, phonologically structured contrasts on the other, thus offering insights into how pragmatic information is conveyed by prosody.
