15 research outputs found

    An Artificial Intelligence Approach to Concatenative Sound Synthesis

    Sound examples are included with this thesis. Technological advances such as the increase in processing power, hard disk capacity and network bandwidth have opened up many exciting new techniques for synthesising sound, one of which is Concatenative Sound Synthesis (CSS). CSS uses a data-driven method to synthesise new sounds from a large corpus of small sound snippets. The technique closely resembles the art of mosaicing, in which small tiles are arranged together to create a larger image. A ‘target’ sound is often specified by the user so that segments in the database that match those of the target sound can be identified and concatenated to generate the output sound. Whilst the practicality of CSS in synthesising sounds currently looks promising, there are still areas to be explored and improved, in particular the algorithm used to find matching segments in the database. One of the main issues in CSS is the basis of similarity, as there are many perceptual attributes on which sound similarity can be based, for example timbre, loudness, rhythm and tempo. An ideal CSS system needs to be able to decipher which of these perceptual attributes are anticipated by the user and then accommodate them by synthesising sounds that are similar with respect to that particular attribute. Failure to communicate the basis of sound similarity between the user and the CSS system generally results in output that mismatches the sound envisioned by the user. In order to understand how humans perceive sound similarity, several elements that affect sound similarity judgements were first investigated. Of the four elements tested (timbre, melody, loudness, tempo), it was found that the basis of similarity depends on the listener's musical training: musicians base similarity on timbral information, whilst non-musicians rely on melodic information. 
    Thus, for the rest of the study, only features representing timbral information were included, as musicians are the target users for the findings of this study. Another issue with the current state of CSS systems is user control flexibility, in particular during segment matching, where features can be assigned different weights depending on their importance to the search. Typically, the weights (in those existing CSS systems that support a weighting mechanism) can only be assigned manually, resulting in a process that is both labour-intensive and time-consuming. Additionally, this study identified another problem: the lack of a mechanism for handling homosonic and equidistant segments. These conditions arise when too few features are compared, causing otherwise aurally different sounds to be represented by the same sonic values; they can also result from rounding off the values of the extracted features. This study addresses both of these problems through an extended use of Artificial Intelligence (AI). The Analytic Hierarchy Process (AHP) is employed to enable order-dependent feature selection, allowing a weight to be assigned to each audio feature according to its relative importance. Concatenation distance is used to overcome the issues with homosonic and equidistant sound segments. The inclusion of AI results in a more intelligent system that can better handle tedious tasks and minimise human error, allowing users (composers) to worry less about mundane tasks and focus more on the creative aspects of music making. In addition, this study also aims to enhance user control flexibility in a CSS system and improve similarity results. The key factors that affect the synthesis results of CSS were first identified and then included as parametric options which users can control in order to communicate their intended creations to the system. 
    Comprehensive evaluations were carried out to validate the feasibility and effectiveness of the proposed solutions (the timbral feature set, AHP, and concatenation distance). The final part of the study investigates the relationship between perceived sound similarity and perceived sound interestingness. A new framework that integrates all these solutions, the query-based CSS framework, was then proposed, and ConQuer, the proof-of-concept of this study, was developed on the basis of this framework. This study has critically analysed the problems in existing CSS systems; novel solutions have been proposed to overcome them and their effectiveness has been tested and discussed, and these are the main contributions of this study. Malaysian Ministry of Higher Education, Universiti Putra Malaysia.
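The AHP-based weighting described above can be sketched briefly. The three features, the pairwise-comparison judgments and all numeric values below are illustrative assumptions, not taken from the thesis; the weights are derived with the common row geometric-mean approximation to AHP's principal-eigenvector method, and segment matching is shown as a weighted Euclidean distance:

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive feature weights from an AHP pairwise-comparison matrix
    using the row geometric-mean approximation to the principal
    eigenvector."""
    pairwise = np.asarray(pairwise, dtype=float)
    gm = pairwise.prod(axis=1) ** (1.0 / pairwise.shape[1])
    return gm / gm.sum()

def best_match(target, corpus, weights):
    """Return the index of the corpus segment whose feature vector is
    closest to the target under a weighted Euclidean distance."""
    diffs = np.asarray(corpus) - np.asarray(target)
    dists = np.sqrt((weights * diffs ** 2).sum(axis=1))
    return int(np.argmin(dists))

# Hypothetical 3-feature comparison: "feature 1 is twice as important
# as feature 2, three times as important as feature 3", etc.
P = [[1,   2,   3],
     [1/2, 1,   2],
     [1/3, 1/2, 1]]
w = ahp_weights(P)

# Hypothetical normalised feature vectors for a target and three segments.
idx = best_match([0.5, 0.2, 0.8],
                 [[0.9, 0.1, 0.2], [0.4, 0.3, 0.7], [0.1, 0.9, 0.9]],
                 w)
```

The geometric-mean shortcut avoids an eigenvalue solve while agreeing closely with it for consistent matrices, which is why it is a common choice when AHP is embedded in a larger search loop.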

    HMM-based speech synthesis using an acoustic glottal source model

    Parametric speech synthesis has received increased attention in recent years following the development of statistical HMM-based speech synthesis. However, the speech produced using this method still does not sound as natural as human speech, and there is limited parametric flexibility to replicate voice quality aspects, such as breathiness. The hypothesis of this thesis is that speech naturalness and voice quality can be more accurately replicated by an HMM-based speech synthesiser using an acoustic glottal source model, the Liljencrants-Fant (LF) model, to represent the source component of speech instead of the traditional impulse train. Two different analysis-synthesis methods were developed in this thesis to integrate the LF-model into a baseline HMM-based speech synthesiser, which is based on the popular HTS system and uses the STRAIGHT vocoder. The first method, called Glottal Post-Filtering (GPF), consists of passing a chosen LF-model signal through a glottal post-filter to obtain the source signal and then generating speech by passing this source signal through the spectral envelope filter. The system which uses the GPF method (the HTS-GPF system) is similar to the baseline system, but it uses a different source signal instead of the impulse train used by STRAIGHT. The second method, called Glottal Spectral Separation (GSS), generates speech by passing the LF-model signal through the vocal tract filter. The major advantage of the synthesiser which incorporates the GSS method, named HTS-LF, is that the acoustic properties of the LF-model parameters are automatically learnt by the HMMs. In this thesis, an initial perceptual experiment was conducted to compare the LF-model to the impulse train. The results showed that the LF-model was significantly better, both in terms of speech naturalness and in the replication of two basic voice qualities (breathy and tense). 
    In a second perceptual evaluation, the HTS-LF system was rated better than the baseline system, although the difference between the two had been expected to be more significant. A third experiment was conducted to evaluate the HTS-GPF system and an improved HTS-LF system in terms of speech naturalness, voice similarity and intelligibility. The results showed that the HTS-GPF system performed similarly to the baseline. However, the HTS-LF system was significantly outperformed by the baseline. Finally, acoustic measurements were performed on the synthetic speech to investigate the speech distortion in the HTS-LF system. The results indicated that a problem in replicating the rapid variations of the vocal tract filter parameters at transitions between voiced and unvoiced sounds is the most significant cause of speech distortion. This problem motivates future work to further improve the system.
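The source-filter contrast underlying both methods can be illustrated with a toy example. This is neither the LF model nor the STRAIGHT vocoder: the half-sine pulse is a crude stand-in for a smooth glottal waveform, and a single second-order resonance stands in for the whole vocal-tract filter, purely to show how the two excitation types feed the same filter:

```python
import numpy as np

def impulse_train(n, period):
    """Classic vocoder excitation: one unit impulse per pitch period."""
    x = np.zeros(n)
    x[::period] = 1.0
    return x

def glottal_pulse_train(n, period):
    """Smooth half-sine 'open phase' repeated each pitch period -- a
    crude stand-in for an LF-model waveform, not the LF model itself."""
    open_len = period // 2
    pulse = np.sin(np.pi * np.arange(open_len) / open_len)
    x = np.zeros(n)
    for start in range(0, n - open_len, period):
        x[start:start + open_len] = pulse
    return x

def resonator(x, freq_hz, bw_hz, fs):
    """One second-order resonance standing in for the vocal-tract filter."""
    r = np.exp(-np.pi * bw_hz / fs)
    a1, a2 = 2 * r * np.cos(2 * np.pi * freq_hz / fs), -r * r
    y = np.zeros(len(x) + 2)            # two leading zeros = initial state
    for i in range(len(x)):
        y[i + 2] = x[i] + a1 * y[i + 1] + a2 * y[i]
    return y[2:]

fs, period, n = 8000, 80, 1600          # 100 Hz pitch, 0.2 s of signal
buzzy  = resonator(impulse_train(n, period),       500.0, 100.0, fs)
smooth = resonator(glottal_pulse_train(n, period), 500.0, 100.0, fs)
```

The impulse train injects energy at all frequencies equally (the "buzzy" baseline), whereas the smoother pulse has a falling spectral slope, which is the property a parametric glottal model exploits to sound more natural.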

    Polyglot voice design for unit selection speech synthesis

    Current text-to-speech (TTS) systems are increasingly faced with mixed-language textual input. Most TTS systems are designed to allow building synthetic voices for different languages, but each voice is able to "speak" only one language at a time. In order to synthesize mixed-language input, polyglot voices are needed which are able to switch between languages when required by the textual input. A polyglot voice will typically have one basic language and, additionally, the ability to synthesize foreign words when these are encountered in the textual input. The design of polyglot voices for unit selection speech synthesis is still a research question. An inherent problem of unit selection speech synthesis is that synthesis quality is closely related to the contents of the unit database. Concatenation of units not in the database usually results in bad synthesis quality. At the same time, building a database with good coverage of units results in a prohibitively large database if the intended domain of synthesized text is unlimited. Polyglot databases have an additional problem: not only do single-language units have to be stored in the database, but the concatenation points of words from foreign languages also have to be accounted for. This increases the database size even further, so it is worth exploring whether database size can be reduced by including only single-language units in the database and handling multilingual units at synthesis time. The present work is concerned with database design for a polyglot unit selection voice. Its main aim is to examine whether alternative methods for handling multilingual cross-word diphones result in the same or better synthesis quality as including these diphones in the database. Three alternative approaches are suggested, and model polyglot voices are built to test these methods. The languages included in the synthesizer are Bosnian, English and German. 
    The output quality of the synthesized multilingual word boundary is tested on Bosnian-English and Bosnian-German word pairs in a perceptual experiment.
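The unit-selection search that such databases feed into is typically a Viterbi minimisation of summed target and concatenation costs. A minimal sketch follows; the toy corpus (units as recording-id/pitch pairs) and both cost functions are hypothetical illustrations, not the thesis's actual features:

```python
def select_units(targets, candidates, target_cost, concat_cost):
    """Viterbi search over candidate units: minimise the summed target
    cost (unit vs. specification) plus concatenation cost (unit vs. its
    predecessor) -- the standard unit-selection formulation."""
    # best[i][u] = (cheapest total cost ending in unit u, backpointer)
    best = [{u: (target_cost(targets[0], u), None) for u in candidates[0]}]
    for i in range(1, len(targets)):
        layer = {}
        for u in candidates[i]:
            prev, cost = min(((p, c + concat_cost(p, u))
                              for p, (c, _) in best[i - 1].items()),
                             key=lambda pc: pc[1])
            layer[u] = (cost + target_cost(targets[i], u), prev)
        best.append(layer)
    # trace back the cheapest path from the final layer
    u, (total, _) = min(best[-1].items(), key=lambda kv: kv[1][0])
    path = [u]
    for i in range(len(best) - 1, 0, -1):
        u = best[i][u][1]
        path.append(u)
    return path[::-1], total

# Hypothetical toy corpus: units are (recording_id, pitch_in_Hz) pairs.
targets = [100, 110, 120]                          # desired pitch contour
candidates = [[("a", 98), ("b", 105)],
              [("a", 112), ("b", 109)],
              [("a", 121), ("b", 130)]]
tcost = lambda t, u: abs(u[1] - t)                 # pitch mismatch
ccost = lambda p, u: 0.0 if p[0] == u[0] else 2.0  # penalise cross-recording joins
path, total = select_units(targets, candidates, tcost, ccost)
```

With these costs the search prefers staying inside recording "a" even when a competing unit matches the pitch target slightly better, which is exactly the smoothness-versus-fidelity trade-off the weights in a real system tune.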

    Fast Speech in Unit Selection Speech Synthesis

    Moers-Prinz D. Fast Speech in Unit Selection Speech Synthesis. Bielefeld: Universität Bielefeld; 2020. Speech synthesis is part of the everyday life of many people with severe visual disabilities. For those who rely on assistive speech technology, the possibility of choosing a fast speaking rate is reported to be essential. But expressive speech synthesis and other spoken-language interfaces may also require the integration of fast speech. Architectures like formant or diphone synthesis are able to produce synthetic speech at fast speaking rates, but the generated speech does not sound very natural. Unit selection synthesis systems, however, are capable of delivering more natural output. Nevertheless, fast speech has not been adequately implemented in such systems to date. Thus, the goal of the work presented here was to determine an optimal strategy for modeling fast speech in unit selection speech synthesis, to provide potential users with a more natural-sounding alternative for fast speech output.

    Experimental phonetic study of the timing of voicing in English obstruents

    The treatment given to the timing of voicing in three areas of phonetic research -- phonetic taxonomy, speech production modelling, and speech synthesis -- is considered in the light of an acoustic study of the timing of voicing in British English obstruents. In each case, it is found to be deficient. The underlying cause is the difficulty of applying a rigid segmental approach to an aspect of speech production characterised by important inter-articulator asynchronies, coupled with the limited quantitative data available concerning the systematic properties of the timing of voicing in languages. It is argued that the categories and labels used to describe the timing of voicing in obstruents are inadequate for fulfilling the descriptive goals of phonetic theory. One possible alternative descriptive strategy is proposed, based on incorporating aspects of the parametric organisation of speech into the descriptive framework. Within the domain of speech production modelling, no satisfactory account has been given of fine-grained variability in the timing of voicing that cannot be explained in terms of general properties of motor programming and utterance execution. The experimental results support claims in the literature that the phonetic control of an utterance may be somewhat less abstract than has been suggested in some previous reports. A schematic outline is given of one way in which the timing of voicing could be controlled in speech production. The success of a speech synthesis-by-rule system depends to a great extent on a comprehensive encoding of the systematic phonetic characteristics of the target language; only limited success has been achieved in the past thirty years. A set of rules is proposed for generating more naturalistic patterns of voicing in obstruents, reflecting those observed in the experimental component of this study. Consideration is given to strategies for evaluating the effect of fine-grained phonetic rules in speech synthesis.
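A synthesis-by-rule fragment of the kind proposed might look like the sketch below. The conditioning contexts (word-initial aspiration, unaspirated /s/-clusters) are standard textbook facts about English voice-onset time, but every numeric value is an illustrative placeholder, not a measurement from this study:

```python
def vot_ms(plosive, after_s=False, word_initial=True):
    """Toy synthesis-by-rule fragment: choose a voice-onset time (ms)
    for an English voiceless plosive from its context. All values are
    illustrative placeholders, not the thesis's measured data."""
    base = {"p": 60.0, "t": 70.0, "k": 80.0}[plosive]
    if after_s:
        base *= 0.25   # /s/-clusters are unaspirated: much shorter VOT
    elif not word_initial:
        base *= 0.8    # medial position: somewhat reduced aspiration
    return base
```

A rule system of this shape, with context tests feeding scaled timing parameters, is one concrete way the "fine-grained phonetic rules" mentioned above could be encoded and then evaluated perceptually.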

    Segmental foreign accent

    Traditionally, foreign accent has been studied from a holistic perspective, that is, treating it as a whole rather than as a series of individual features occurring simultaneously. Previous studies that have focused on individual features have generally done so at the suprasegmental level (Tajima et al., 1997; Munro & Derwing, 2001; Hahn, 2004, etc.). This thesis analyses foreign accent from a segmental point of view. Given that little research exists in this field, our main objective is to determine whether the results of previous holistic studies can be extrapolated to the segmental level. In order to analyse the segmental level in detail, this thesis presents techniques that make use of new technologies. To gather as much information as possible, the perceptual experiments are carried out with listeners of very different linguistic profiles in terms of first language and knowledge of the second language, and the results are compared with the existing literature. Our results show that some important effects concerning the production and perception of accented segments can go unnoticed in a holistic analysis, and they confirm the need for further studies of minimal units in order to fully understand the effects of foreign accent on communication.

    Conditioning Text-to-Speech synthesis on dialect accent: a case study

    Modern text-to-speech systems are modular in many different ways. In recent years, end-users gained the ability to control speech attributes such as degree of emotion, rhythm and timbre, along with other suprasegmental features. More ambitious objectives are related to modelling a combination of speakers and languages, e.g. to enable cross-speaker language transfer. However, no prior work has been done on the more fine-grained analysis of regional accents. To fill this gap, in this thesis we present practical end-to-end solutions to synthesise speech while controlling within-country variations of the same language, and we do so for six different dialects of the British Isles. In particular, we first conduct an extensive study of the speaker verification field and tweak state-of-the-art embedding models to work with dialect accents. Then, we adapt standard acoustic models and voice conversion systems by conditioning them on dialect accent representations, and finally compare our custom pipelines with a cutting-edge end-to-end architecture from the multilingual world. Results show that the adopted models are suitable and have enough capacity to accomplish the task of regional accent conversion. Indeed, we are able to produce speech closely resembling the selected speaker and dialect accent, with the most accurate synthesis obtained via careful fine-tuning of the multilingual model to the multi-dialect case. Finally, we delineate the limitations of our multi-stage approach and propose practical mitigations to be explored in future work.
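Conditioning an acoustic model on a dialect-accent representation usually amounts to attaching a global accent embedding to every phone or frame encoding before the decoder. A minimal numpy sketch of that wiring; the accent names, table sizes and random embeddings are all hypothetical stand-ins for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned tables: one embedding per dialect accent,
# one per phone symbol (values random here, learned in a real model).
ACCENTS = {"scottish": 0, "welsh": 1, "irish": 2}
accent_table = rng.normal(size=(len(ACCENTS), 8))   # 8-dim accent space
phone_table  = rng.normal(size=(40, 32))            # 32-dim phone space

def condition_on_accent(phone_ids, accent):
    """Concatenate a fixed accent embedding to every phone encoding --
    the usual way a sequence model is conditioned on a global attribute."""
    phones = phone_table[phone_ids]                  # (T, 32)
    acc = accent_table[ACCENTS[accent]]              # (8,)
    acc = np.broadcast_to(acc, (len(phone_ids), 8))  # tile over time: (T, 8)
    return np.concatenate([phones, acc], axis=1)     # (T, 40)

x = condition_on_accent([3, 17, 5], "welsh")
```

Because the accent vector is constant across the sequence, the downstream network can treat it as a bias that shifts its acoustic predictions toward one dialect while the phone encodings still carry the segmental content.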

    Do grafema ao gesto : contributos linguísticos para um sistema de síntese de base articulatória

    Doctoral thesis in Linguistics. Motivated by the central purpose of contributing, in the long term, to the construction of a complete text-to-speech system based on articulatory synthesis, we developed a linguistic model for European Portuguese (EP), built on the TADA system (TAsk Dynamic Application), which aims to derive articulator trajectories automatically from input text. This goal required a set of tasks, namely: 1) the implementation and evaluation of two automatic syllabification systems and two grapheme-to-phoneme (G2P) conversion systems, in order to transform the input text into a format suitable for TADA; 2) the creation of a gestural database for the sounds of EP, so that each phone output by the G2P system could be mapped to a set of articulatory gestures adapted for EP; and 3) a dynamic analysis of nasality, grounded in the principles of Articulatory Phonology (AP) and based on an articulatory and perceptual study. 
    The two automatic syllabification algorithms implemented and tested draw on phonological knowledge of syllable structure: the first is based on finite-state transducers, and the second is a faithful implementation of the proposals of Mateus & d'Andrade (2000). The performance of these algorithms – especially the second – proved similar to that of other systems with the same capabilities. For grapheme-to-phoneme conversion, we followed a methodology based on rewrite rules combined with a machine-learning technique. The evaluation results for this system motivated the subsequent exploration of other automatic methods, and we also assessed the impact of integrating syllabic information into the systems. The gestural description of the sounds of EP, anchored in the theoretical and methodological tenets of AP, was based essentially on the analysis of magnetic resonance imaging (MRI) data, from which all measurements were taken in order to obtain quantitative articulatory parameters. A first validation of the proposed gestural configurations was attempted through a small perceptual test, which identified the main problems underlying the gestural proposal. This work produced, for the first time for EP, an articulatory-based text-to-speech system. 
    The dynamic description of the nasal vowels relied both on the MRI data, for characterising the oral gestures, and on data obtained through electromagnetic articulography (EMA), for studying velum dynamics and their relation to the other articulators. In addition, a perceptual test was performed, using TADA and SAPWindows, to evaluate the sensitivity of Portuguese listeners to variations in velum height and to changes in intergestural coordination. This study supported an abstract (gestural) interpretation of the EP nasal vowels and also clarified crucial aspects of their production and perception.
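As a generic illustration of what a syllabification component computes, the sketch below applies onset maximisation to a toy phone inventory. This is not the thesis's FST-based system nor the Mateus & d'Andrade rules; the vowel set and legal-onset set are hypothetical inputs:

```python
def syllabify(phones, vowels, legal_onsets):
    """Onset-maximisation syllabifier: every vowel is a nucleus, and
    consonants between two nuclei attach to the following syllable for
    as long as they still form a phonotactically legal onset."""
    nuclei = [i for i, p in enumerate(phones) if p in vowels]
    bounds = [0]
    for k in range(len(nuclei) - 1):
        nxt = nuclei[k + 1]
        cut = nxt
        # pull consonants into the next syllable's onset while legal
        while cut - 1 > nuclei[k] and tuple(phones[cut - 1:nxt]) in legal_onsets:
            cut -= 1
        bounds.append(cut)
    bounds.append(len(phones))
    return [phones[b:e] for b, e in zip(bounds, bounds[1:])]

# Toy inventory: /st/ is not listed as a legal onset, so "pasta" -> pas.ta
sylls = syllabify(list("pasta"), vowels={"a"}, legal_onsets={("p",), ("t",)})
```

Swapping in a different `legal_onsets` table is all it takes to change the language's phonotactics, which is why rule-based and FST-based syllabifiers can share this overall shape.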