
    Singing synthesis with an evolved physical model

    A two-dimensional physical model of the human vocal tract is described. Such a system promises increased realism and control in the synthesis of both speech and singing. However, the parameters describing the shape of the vocal tract during use are not easily obtained, even with medical imaging techniques, so a genetic algorithm (GA) is instead applied to the model to find an appropriate configuration. Realistic sounds are produced by this method. An analysis of these sounds, and of the reliability of the technique (its convergence properties), is provided.
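    A minimal sketch of such a GA loop is shown below. The 2D waveguide model itself is beyond a short example, so a hypothetical synthesize stub stands in for it; the spectral fitness function, population size, and mutation rate are illustrative assumptions rather than values from the paper.

```python
# GA search for vocal-tract shape parameters (sketch; `synthesize` is a
# stub standing in for the paper's 2D physical model).
import numpy as np

rng = np.random.default_rng(0)
N_PARAMS = 16            # e.g. cross-sectional areas along the tract (assumed)
POP, GENS, MUT = 40, 200, 0.1

def synthesize(params):
    """Placeholder for the 2D waveguide model: parameters -> audio frame."""
    t = np.linspace(0, 1, 2048, endpoint=False)
    return np.sin(2 * np.pi * 100 * np.abs(params).sum() * t)   # stub only

def fitness(params, target_spec):
    """Negative spectral distance between synthesized and target frames."""
    spec = np.abs(np.fft.rfft(synthesize(params)))
    spec /= spec.max() + 1e-12
    return -np.mean((spec - target_spec) ** 2)

def evolve(target_spec):
    pop = rng.uniform(0.1, 1.0, (POP, N_PARAMS))
    for _ in range(GENS):
        scores = np.array([fitness(p, target_spec) for p in pop])
        elite = pop[np.argsort(scores)[::-1][: POP // 2]]   # truncation selection
        kids = elite[rng.integers(0, len(elite), POP - len(elite))]
        kids = np.clip(kids + rng.normal(0, MUT, kids.shape), 0.01, 2.0)
        pop = np.vstack([elite, kids])
    return pop[np.argmax([fitness(p, target_spec) for p in pop])]
```

    Convergence can then be assessed by tracking the best fitness per generation, mirroring the convergence analysis mentioned in the abstract.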

    Features selection by genetic algorithm optimization with k-nearest neighbour and learning ensemble to predict Parkinson disease

    Among the several approaches to detecting Parkinson's disease, one is based on the speech signal, as impaired speech is a symptom of this disease. In this paper, which focuses on signal analysis, a dataset of voice recordings has been used. In these recordings, the patients were asked to utter the vowels “a”, “o”, and “u”. A discrete wavelet transform (DWT) was applied to the speech signal to obtain the variable-resolution representation that may conceal the most important information about the patients. From the approximation a3, obtained with a Daubechies wavelet at scale 2, level 3, 21 features have been extracted: linear predictive coding (LPC) coefficients, energy, zero-crossing rate (ZCR), mel-frequency cepstral coefficients (MFCC), and wavelet Shannon entropy. For classification, the k-nearest neighbour (KNN) algorithm has been used together with ensemble learning; KNN is a type of instance-based learning that makes decisions based on locally approximated functions. Throughout the learning process, however, the choice of training features can have a significant impact on the overall result, and this is where the genetic algorithm (GA) comes in: it selects the training features that yield the most accurate classification.
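    A minimal sketch of this pipeline is given below, assuming the recordings are already loaded as 1-D numpy arrays. Only a handful of illustrative features (energy, ZCR, Shannon entropy, simple statistics) are computed; the LPC and MFCC features of the paper are omitted for brevity, and the 'db2' wavelet, the GA rates, and the population size are assumptions.

```python
# Hedged sketch: DWT approximation a3 -> simple features -> GA feature
# selection with a KNN fitness. Dataset loading is assumed elsewhere.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def a3_features(signal):
    a3 = pywt.wavedec(signal, 'db2', level=3)[0]        # approximation a3
    energy = float(np.sum(a3 ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(a3))) > 0))
    p = np.abs(a3) / (np.sum(np.abs(a3)) + 1e-12)
    entropy = float(-np.sum(p * np.log2(p + 1e-12)))    # Shannon entropy
    return np.array([energy, zcr, entropy, a3.mean(), a3.std()])

def ga_select(X, y, gens=30, pop_size=20, mut=0.05):
    """Evolve a boolean mask over feature columns; fitness = KNN accuracy."""
    n = X.shape[1]
    pop = rng.integers(0, 2, (pop_size, n), dtype=bool)
    def fit(mask):
        if not mask.any():
            return 0.0
        knn = KNeighborsClassifier(n_neighbors=3)
        return cross_val_score(knn, X[:, mask], y, cv=5).mean()
    for _ in range(gens):
        scores = np.array([fit(m) for m in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        kids = elite[rng.integers(0, len(elite), pop_size - len(elite))].copy()
        kids ^= rng.random(kids.shape) < mut            # bit-flip mutation
        pop = np.vstack([elite, kids])
    scores = np.array([fit(m) for m in pop])
    return pop[np.argmax(scores)]
```

    Here X is the matrix of per-recording feature vectors and y the diagnosis labels; the returned boolean mask is the GA's best feature subset.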

    Evaluation of preprocessors for neural network speaker verification


    A Soft Computing Based Approach for Multi-Accent Classification in IVR Systems

    A speaker's accent is the most important factor affecting the performance of Natural Language Call Routing (NLCR) systems because accents vary widely, even within the same country or community. This variation also occurs when non-native speakers start to learn a second language, the substitution of native-language phonology being a common process. Such substitution leads to fuzziness between the phoneme boundaries and phoneme classes, which reduces out-of-class variation and increases the similarity between different sets of phonemes. This fuzziness is thus the main cause of reduced NLCR system performance. The main requirement for commercial enterprises using an NLCR system is a robust system that provides call understanding and routing to appropriate destinations. The chief motivation for the present work is to develop an NLCR system that eliminates multilayered menus and employs a sophisticated speaker accent-based automated voice response system around the clock. Currently, NLCR systems are not fully equipped with accent classification capability. Our main objective is to develop both speaker-independent and speaker-dependent accent classification systems that understand a caller's query, classify the caller's accent, and route the call to the acoustic model that has been thoroughly trained on a database of speech utterances recorded by such speakers. In the field of accent classification, the dominant approaches are the Gaussian Mixture Model (GMM) and the Hidden Markov Model (HMM). Of the two, GMM is the more widely implemented for accent classification. However, GMM performance depends on the initial partitions and the number of Gaussian mixtures, both of which can reduce performance if poorly chosen. To overcome these shortcomings, we propose a speaker-independent accent classification system based on a distance metric learning approach and an evolution strategy. This approach relies on side information from dissimilar pairs of accent groups to map data points to a new feature space where the Euclidean distances between similar and dissimilar points are at their minimum and maximum, respectively. Finally, a Non-dominated Sorting Evolution Strategy (NSES)-based k-means clustering algorithm is employed on the training data set processed by the distance metric learning approach. The main objectives of the NSES-based k-means approach are to find the cluster centroids as well as the optimal number of clusters for a GMM classifier. For the speaker-dependent application, a new method is proposed based on fuzzy canonical correlation analysis to find appropriate Gaussian mixtures for a GMM-based accent classification system. In our proposed method, we implement a fuzzy clustering approach to minimize the within-group sum-of-square error and canonical correlation analysis to maximize the correlation between the speech feature vectors and the cluster centroids. We conducted a number of experiments using the TIMIT database, the speech accent archive, and the foreign-accented English databases to evaluate the performance of the speaker-independent and speaker-dependent applications. Assessment and analysis of the applications show that our proposed methodologies outperform the HMM, GMM, vector quantization GMM, and radial basis function neural networks.
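    A heavily simplified sketch of the speaker-independent pipeline follows. The NSES is replaced here by a plain silhouette-based search over the number of clusters, and the metric-learning step is a toy linear transform built from dissimilar accent pairs; both are stand-ins for illustration, not the methods evaluated above.

```python
# Simplified stand-in for the described pipeline: pair-based metric
# learning -> k-means (with model selection) -> GMM initialised from the
# k-means centroids.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

def dissimilar_pair_transform(X, pairs):
    """Stretch the space along directions that separate dissimilar pairs."""
    D = np.array([X[i] - X[j] for i, j in pairs])
    cov = D.T @ D / len(D) + 1e-6 * np.eye(X.shape[1])
    evals, evecs = np.linalg.eigh(cov)
    return X @ evecs @ np.diag(np.sqrt(evals)) @ evecs.T

def fit_accent_gmm(X, pairs, k_range=range(2, 10)):
    Z = dissimilar_pair_transform(X, pairs)
    best_k, best_km, best_s = None, None, -1.0
    for k in k_range:                 # stand-in for NSES model selection
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Z)
        s = silhouette_score(Z, km.labels_)
        if s > best_s:
            best_k, best_km, best_s = k, km, s
    gmm = GaussianMixture(n_components=best_k,
                          means_init=best_km.cluster_centers_,
                          random_state=0).fit(Z)
    return gmm
```

    In this sketch, pairs lists index pairs of samples known to come from different accent groups; the fitted GMM then serves as the front end of the accent classifier.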

    Euclidean distances as measures of speaker similarity including identical twin pairs: a forensic investigation using source and filter voice characteristics

    There is a growing consensus that hybrid approaches are necessary for successful speaker characterization in Forensic Speaker Comparison (FSC); hence this study explores the forensic potential of voice features combining source and filter characteristics. The former relate to the action of the vocal folds, while the latter reflect the geometry of the speaker's vocal tract. This set of features has been extracted from pause fillers, which are long enough for robust feature estimation while spontaneous enough to be extracted from voice samples in real forensic casework. Speaker similarity was measured using standardized Euclidean Distances (ED) between pairs of speakers: 54 different-speaker (DS) comparisons, 54 same-speaker (SS) comparisons, and 12 comparisons between monozygotic twins (MZ). Results revealed that the differences between DS and SS comparisons were significant in both high-quality and telephone-filtered recordings, with no false rejections and limited false acceptances; this finding suggests that this set of voice features is highly speaker-dependent and therefore forensically useful. The mean ED for MZ pairs lies between the average ED for SS comparisons and that for DS comparisons, as expected according to the literature on twin voices. Specific cases of MZ speakers with very high ED (i.e. strong dissimilarity) are discussed in the context of sociophonetic and twin studies. A preliminary simplification of the Vocal Profile Analysis (VPA) Scheme is proposed, which enables the quantification of voice quality features in the perceptual assessment of speaker similarity and allows for the calculation of perceptual-acoustic correlations. The adequacy of z-score normalization for this study is also discussed, as well as the relevance of heat maps for detecting the so-called phantoms in recent approaches to the biometric menagerie.
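    The core similarity measure described above can be sketched in a few lines, assuming each speaker sample has already been reduced to a fixed-length feature vector (the source and filter features extracted from pause fillers); function and variable names here are illustrative.

```python
# Standardized Euclidean distance between speaker feature vectors, as used
# for SS/DS/MZ comparisons above. Feature extraction is assumed elsewhere.
import numpy as np
from scipy.spatial.distance import seuclidean

def pairwise_speaker_distances(features, pairs):
    """features: (n_samples, n_features) array; pairs: list of (i, j) indices."""
    V = features.var(axis=0, ddof=1) + 1e-12     # per-feature variances
    return {(i, j): seuclidean(features[i], features[j], V)
            for i, j in pairs}
```

    Comparing the distributions of the resulting scores for same-speaker pairs against different-speaker pairs (e.g. their means) reproduces the SS/DS separation analysis reported in the abstract.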

    Models and analysis of vocal emissions for biomedical applications: 5th International Workshop: December 13-15, 2007, Firenze, Italy

    The proceedings of the MAVEBA Workshop, held every two years, collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images in support of clinical diagnosis and the classification of vocal pathologies. The Workshop is sponsored by the Ente Cassa Risparmio di Firenze, COST Action 2103, the Biomedical Signal Processing and Control journal (Elsevier), and the IEEE Biomedical Engineering Society. Special issues of international journals have been, and will be, published collecting selected papers from the conference.

    Infant Cry Signal Processing, Analysis, and Classification with Artificial Neural Networks

    As a special type of speech and environmental sound, infant cry has been a growing research area over the past two decades, covering cry reason classification, pathological cry identification, and cry detection. In this dissertation, we build a new dataset, explore new feature extraction methods, and propose novel classification approaches to improve infant cry classification accuracy and to identify diseases by learning from infant cry signals. We propose a method that generates weighted prosodic features combined with acoustic features for a deep learning model, improving the performance of asphyxiated infant cry identification. The combined feature matrix captures the diversity of variations within infant cries, and the results outperform all other related studies on asphyxiated infant cry classification. We propose a fast, non-invasive method that uses infant cry signals with convolutional neural network (CNN) based age classification to diagnose abnormal infant vocal tract development as early as 4 months of age. Experiments uncover the patterns and tendencies of vocal tract changes and predict vocal tract abnormality by classifying the cry signals into a younger age category. We propose an approach that generates a hybrid feature set and uses prior knowledge in a multi-stage CNN model for robust infant sound classification. The dominant and auxiliary features within the set help enlarge the coverage while keeping a good resolution for modeling the diversity of variations within infant sounds, and the experimental results show encouraging improvements on two related databases. We propose an approach using a graph convolutional network (GCN) with transfer learning for robust infant cry reason classification. Non-fully connected graphs based on the similarities among the relevant nodes are built to consider the short-term and long-term effects of infant cry signals related to intra-class and inter-class information. With as little as 20% of the training data labeled, our model outperforms the CNN model trained with 80% labeled data in both supervised and semi-supervised settings. Lastly, we apply mel-spectrogram decomposition to infant cry classification and propose a fusion method to further improve classification performance.
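    As an illustration of one building block described above, the sketch below feeds mel-spectrograms to a small CNN classifier. The architecture, input shapes, and hyperparameters are assumptions for illustration, not the dissertation's exact models.

```python
# Mel-spectrogram front end plus a small CNN for cry classification
# (illustrative only; shapes and layer sizes are assumed).
import librosa
import numpy as np
import torch
import torch.nn as nn

def mel_input(path, sr=16000, n_mels=64, frames=128):
    """Load a cry recording and return a (1, 1, n_mels, frames) tensor."""
    y, _ = librosa.load(path, sr=sr)
    m = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    m = librosa.power_to_db(m, ref=np.max)
    m = librosa.util.fix_length(m, size=frames, axis=1)   # pad/trim time axis
    return torch.tensor(m, dtype=torch.float32)[None, None]

class CryCNN(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):
        return self.net(x)
```

    A standard training loop with cross-entropy loss over labeled cry classes would follow; the GCN and multi-stage variants described above build on the same spectrogram input.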

    Adding expressiveness to unit selection speech synthesis and to numerical voice production

    Speech is one of the most natural and direct forms of communication between human beings, as it codifies both a message and paralinguistic cues about the emotional state of the speaker, their mood, or their intention, thus becoming instrumental in pursuing a more natural Human Computer Interaction (HCI). In this context, the generation of expressive speech for the HCI output channel is a key element in the development of assistive technologies or personal assistants, among other applications. Synthetic speech can be generated from recorded speech using corpus-based methods such as Unit-Selection (US), which can achieve high-quality results but whose expressiveness is restricted to that available in the speech corpus. In order to improve the quality of the synthesis output, the current trend is to build ever larger speech databases, especially following the so-called End-to-End synthesis approach based on deep learning techniques. However, recording ad-hoc corpora for each and every desired expressive style can be extremely costly, or even unfeasible if the speaker is unable to properly perform the styles required for a given application (e.g., singing in the storytelling domain). Alternatively, new methods based on the physics of voice production have been developed in the last decade thanks to the increase in computing power. For instance, vowels or diphthongs can be obtained using the Finite Element Method (FEM) to simulate the propagation of acoustic waves through a realistic 3D vocal tract geometry obtained from Magnetic Resonance Imaging (MRI). However, since the main efforts in these numerical voice production methods have been focused on improving the modelling of the voice generation process, little attention has been paid to their expressiveness up to now. Furthermore, the collection of data for such simulations is very costly, besides requiring time-consuming manual postprocessing like that needed to extract 3D vocal tract geometries from MRI. The aim of the thesis is to add expressiveness to a system that generates neutral voice, without having to acquire expressive data from the original speaker. On the one hand, expressive capabilities are added to a Unit-Selection Text-to-Speech (US-TTS) system fed with a neutral speech corpus, to address specific and timely needs in the storytelling domain, such as for singing or in suspenseful situations. To this end, speech is parameterised using a harmonic-based model and subsequently transformed to the target expressive style according to an expert system. A first approach dealing with the synthesis of increasing suspense in storytelling shows the viability of the proposal in terms of naturalness and storytelling quality. Singing capabilities are also added to the US-TTS system through the integration of Speech-to-Singing (STS) transformation modules into the TTS pipeline, and by incorporating an expressive prosody generation module that allows the US module to select units closer to the target singing prosody obtained from the input score.
    This results in a Unit Selection based Text-to-Speech-and-Singing (US-TTS&S) synthesis framework that can generate both speech and singing from the same small neutral speech corpus (~2.6 h). According to the objective results, the score-driven US strategy can reduce the pitch scaling factors required to produce singing from the selected spoken units, but its effectiveness is limited regarding the time-scale requirements due to the short duration of the spoken vowels. Results from the perceptual tests show that although the obtained naturalness is obviously far from that given by a professional singing synthesiser, the framework can address occasional singing needs for synthetic storytelling with a reasonable quality. The incorporation of expressiveness is also investigated in the 3D FEM-based numerical simulation of vowels through modifications of the glottal flow signals following a source-filter approach to voice production. These signals are generated using a Liljencrants-Fant (LF) model controlled with the glottal shape parameter Rd, which allows exploring the tense-lax continuum of phonation besides the spoken vocal range of fundamental frequency values, F0. The contribution of the glottal source to higher order modes in the FEM synthesis of cardinal vowels [a], [i] and [u] is analysed through the comparison of the High Frequency Energy (HFE) values obtained with realistic and simplified 3D geometries of the vocal tract. The simulations indicate that higher order modes are expected to be perceptually relevant according to reference values stated in the literature, particularly for tense phonations and/or high F0s. Conversely, vowels with a lax phonation and/or low F0s can result in inaudible HFE levels, especially if aspiration noise is not present in the glottal source. After this preliminary study, the excitation characteristics of happy and aggressive vowels from a Spanish parallel speech corpus are analysed with the aim of incorporating these tense-voice expressive styles into the numerical production of voice. To that effect, the GlottDNN vocoder is used to analyse F0 and spectral tilt variations associated with the glottal excitation on vowels [a]. These variations are mapped, through comparison with synthetic vowels, onto F0 and Rd values to simulate vowels resembling the happy and aggressive styles. Results show that it is necessary to increase F0 and decrease Rd with respect to neutral speech, with larger variations for the happy style than for the aggressive one, especially for the stressed [a] vowels. The results achieved in the conducted investigations validate the possibility of adding expressiveness to both corpus-based US-TTS synthesis and FEM-based numerical simulation of voice. Nevertheless, there is still room for improvement. For instance, the strategy applied to the numerical voice production could be improved by studying and developing inverse filtering approaches as well as incorporating modifications of the vocal tract, whereas the developed US-TTS&S framework could benefit from advances in voice transformation techniques including voice quality modifications, taking advantage of the experience gained in the numerical simulation of expressive vowels.
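    Two of the quantities discussed above lend themselves to a short sketch: the mapping from the LF shape parameter Rd to the classic R-parameters via Fant's regression formulas, and a simple high-frequency energy (HFE) measure. The HFE definition and the band edge used here are assumptions for illustration; the thesis's exact band definitions may differ.

```python
# (i) Rd -> (Ra, Rk, Rg) via Fant's regression formulas for the LF model;
# (ii) a simple HFE measure (energy above a split frequency, relative to
# the energy below it). Band edge is an assumed value.
import numpy as np

def rd_to_r_params(rd):
    """Fant's Rd regressions; rd roughly in [0.3, 2.7] (low Rd = tense)."""
    ra = (-1.0 + 4.8 * rd) / 100.0
    rk = (22.4 + 11.8 * rd) / 100.0
    rg = rk / (4.0 * (0.11 * rd / (0.5 + 1.2 * rk) - ra))
    return ra, rk, rg

def hfe_db(signal, fs, split_hz=5000.0):
    """High- vs low-band energy ratio in dB (illustrative HFE definition)."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    hi = spec[freqs >= split_hz].sum()
    lo = spec[freqs < split_hz].sum()
    return 10.0 * np.log10((hi + 1e-12) / (lo + 1e-12))
```

    For instance, rd_to_r_params(1.0) yields Rg close to 1, consistent with modal phonation, while lower Rd values correspond to tenser phonation with stronger high-frequency content, matching the tense/lax trends reported above.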