11 research outputs found

    Extraction of Temporal Patterns in Multi-rate and Multi-modal Datasets

    We focus on the problem of analyzing corpora composed of irregularly sampled (multi-rate) heterogeneous temporal data. We propose a novel convolutive multi-rate factorization model for extracting multi-modal patterns from such multi-rate data. Our model builds on previously proposed multi-view (coupled) nonnegative matrix factorization techniques, and extends them by accounting for heterogeneous sample rates and by enabling the patterns to have a duration. We illustrate the proposed methodology on the joint study of audiovisual data for speech analysis.
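
    The model described above extends coupled (multi-view) nonnegative matrix factorization, in which several feature matrices share a single activation matrix. Below is a minimal sketch of that shared-activation baseline with multiplicative updates, assuming two time-aligned views of equal length; the paper's contribution, handling differing sample rates and convolutive patterns with a duration, is not reproduced here.

```python
# A minimal sketch of coupled (multi-view) NMF with a shared activation matrix,
# the baseline the paper builds on. All names, shapes, and the Frobenius-loss
# multiplicative updates are illustrative assumptions.
import numpy as np

def coupled_nmf(V1, V2, n_components, n_iter=200, eps=1e-9):
    """Factorize V1 ~ W1 @ H and V2 ~ W2 @ H with a shared activation matrix H."""
    rng = np.random.default_rng(0)
    F1, T = V1.shape
    F2, _ = V2.shape                      # both views assumed time-aligned here
    W1 = rng.random((F1, n_components))
    W2 = rng.random((F2, n_components))
    H = rng.random((n_components, T))
    for _ in range(n_iter):
        # Multiplicative updates keep all factors nonnegative.
        W1 *= (V1 @ H.T) / (W1 @ H @ H.T + eps)
        W2 *= (V2 @ H.T) / (W2 @ H @ H.T + eps)
        num = W1.T @ V1 + W2.T @ V2       # both views pull on the shared H
        den = (W1.T @ W1 + W2.T @ W2) @ H + eps
        H *= num / den
    return W1, W2, H

# Toy usage: an audio-like and a video-like view sharing the same activations.
V1 = np.abs(np.random.randn(64, 300))     # e.g. audio spectrogram magnitudes
V2 = np.abs(np.random.randn(12, 300))     # e.g. visual/articulatory features
W1, W2, H = coupled_nmf(V1, V2, n_components=8)
```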

    Analysis of Interference between Electromagnetic Articulography and Electroglottograph Systems

    Electromagnetic Articulography (EMA) has become an integral tool for researchers and clinicians who seek to characterize speech kinematics. The position and orientation of the articulators – which include the jaw, lips, and tongue – are recorded by attaching sensors to the articulators and tracking the movement of the sensors through an electromagnetic field. This has been used by researchers and clinicians to better understand dysarthria and to synthesize speech, among other applications. Another speech tool, electroglottography (EGG), is used to analyze the movement of the vocal folds during speech production. This is achieved by measuring the time variation of vocal fold contact and analyzing it with regard to the speech produced. Clinically, EGG is used to identify voice abnormalities, including those without visual or acoustic abnormalities. These systems are not used concurrently because of the electromagnetic field used by the EMA system; NDI and Carstens affirm that metal should be kept out of the field during EMA use. Concurrent use of these systems would provide simultaneous measurements of laryngeal and upper-airway articulatory function, which could increase understanding of motor speech issues. Parameters derived from the EGG signal could also be combined with articulatory parameters to improve synthesized voice quality and synthesize a more realistic voice. The objective of this research is to investigate whether the interference between the EMA and EGG systems is significant and, if so, to characterize it so that the systems can be used simultaneously. The interference was analyzed through several data collections. The first assessed the degree of interference when the EMA sensors were stationary. The second data set was collected using a model that maintained sensor orientation while the sensors moved at a nonzero speed. The final data set was obtained with a model in which the sensors were held in a fixed position while their orientation was changed, with minimal translational velocity. Sources of interference that were present included the EGG system and orthodontia. The resulting data led to the conclusion that the presence of the EGG or of orthodontic appliances does not cause significant interference.
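
    As an illustration of the stationary-sensor assessment mentioned above, interference can be quantified as the positional scatter of a sensor that should not be moving, compared with the EGG inactive and active. The sketch below uses synthetic trajectories; the units, array layout, and comparison are assumptions, not the study's actual protocol or results.

```python
# Hedged sketch: RMS positional scatter of a nominally stationary EMA sensor,
# with and without the EGG running. Synthetic data stand in for recordings.
import numpy as np

def positional_rms(positions_mm):
    """RMS deviation of a nominally stationary sensor from its mean position."""
    centered = positions_mm - positions_mm.mean(axis=0)
    return np.sqrt((centered ** 2).sum(axis=1).mean())

rng = np.random.default_rng(0)
# In practice these would be recorded (samples x 3) sensor trajectories in mm.
egg_off = 0.05 * rng.standard_normal((4000, 3))   # hypothetical: EGG powered off
egg_on = 0.05 * rng.standard_normal((4000, 3))    # hypothetical: EGG powered on

print(f"RMS scatter, EGG off: {positional_rms(egg_off):.3f} mm")
print(f"RMS scatter, EGG on:  {positional_rms(egg_on):.3f} mm")
# A difference that is small relative to the system's stated static accuracy
# would be consistent with the conclusion of no significant interference.
```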

    Software Tools and Analysis Methods for the Use of Electromagnetic Articulography Data in Speech Research

    Recent work with Electromagnetic Articulography (EMA) has shown it to be an excellent tool for characterizing speech kinematics. By tracking the position and orientation of sensors placed on the jaw, lips, teeth, and tongue as they move in an electromagnetic field, information about the movement and coordination of the articulators can be obtained with high temporal resolution. This technique has far-reaching applications for advancing fields related to speech articulation, including recognition, synthesis, motor learning, and clinical assessment. As more EMA data becomes widely available, a growing need exists for software that performs basic processing and analysis functions. The objective of this work is to create and demonstrate the use of new software tools that make full use of the information provided in EMA datasets, with the goal of maximizing the impact of EMA research. A new method for biteplate correction of orientation data is presented, allowing orientation data to be used for articulatory analysis. Two example applications using orientation data are presented: a tool for jaw-angle measurement using a single EMA sensor, and a tongue interpolation tool based on three EMA sensors attached to the tongue. The results demonstrate that combined position and orientation data give a more complete picture of articulation than position data alone, and that orientation data should be incorporated in future work with EMA. A new standalone, GUI-based software tool is also presented for visualization of EMA data. It includes simultaneous real-time playback of kinematic and acoustic data, as well as basic analysis capabilities for both types of data. A comparison of the visualization tool to existing EMA software shows that it provides superior visualization and analysis features comparable to those of existing software. The tool will be included with the Marquette University EMA-MAE database to aid researchers working with this dataset.
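
    One of the tools described above interpolates a tongue contour from three tongue-mounted sensors. A minimal sketch of that idea follows, using only the midsagittal (x, y) positions and an exact quadratic fit; the actual tool also exploits sensor orientation, and the coordinates here are invented for illustration.

```python
# Hedged sketch: interpolate a midsagittal tongue contour through three
# tongue-sensor positions. Coordinates and the quadratic fit are assumptions.
import numpy as np

# Hypothetical midsagittal (x, y) positions in mm: tongue tip, blade, dorsum.
sensors = np.array([[10.0, -2.0],
                    [30.0,  6.0],
                    [50.0,  4.0]])

# Three points determine a quadratic exactly; evaluate it on a dense grid
# between the tip and dorsum sensors to obtain a smooth contour estimate.
coeffs = np.polyfit(sensors[:, 0], sensors[:, 1], deg=2)
x_dense = np.linspace(sensors[0, 0], sensors[-1, 0], 100)
y_dense = np.polyval(coeffs, x_dense)
contour = np.column_stack([x_dense, y_dense])   # (100, 2) interpolated contour
print(contour[:3])
```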

    Design of a Real-Time MRI Video Database of Japanese Articulatory Movements

    Presented at the Language Resources Workshop 2020 (online, September 8–9, 2020), organized by the Center for Corpus Development, National Institute for Japanese Language and Linguistics. We have been building an articulatory movement database intended as a new infrastructure for articulatory-phonetic research on Japanese. Changes in the shape of the entire vocal tract are recorded as real-time MRI video at 14 to 25 frames per second; recordings have so far been completed for 16 speakers of the Tokyo dialect and 5 speakers of the Kinki dialect, with 25–30 minutes of speech per speaker. The data are tagged with the start and end times of each utterance, and metadata on utterance content and speakers can also be used for search.

    A Multimodal real-time MRI articulatory corpus for speech research

    We present MRI-TIMIT: a large-scale database of synchronized audio and real-time magnetic resonance imaging (rtMRI) data for speech research. The database currently consists of speech data acquired from two male and two female speakers of American English. Subjects' upper airways were imaged in the midsagittal plane while reading the same 460-sentence corpus used in the MOCHA-TIMIT corpus [1]. Accompanying acoustic recordings were phonemically transcribed using forced alignment. Vocal tract tissue boundaries were automatically identified in each video frame, allowing for dynamic quantification of each speaker's midsagittal articulation. The database and companion toolset provide a unique resource with which to examine articulatory-acoustic relationships in speech production.
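
    The automatic identification of vocal tract tissue boundaries mentioned above can be pictured as an air–tissue segmentation of each midsagittal frame. The sketch below uses a generic Otsu threshold plus contour tracing on a synthetic frame; it is only an illustrative stand-in, not the MRI-TIMIT toolset's own segmentation method.

```python
# Hedged sketch: air-tissue boundary detection on one midsagittal rtMRI frame
# via Otsu thresholding and contour tracing. Frame size and data are synthetic.
import numpy as np
from skimage import filters, measure

def tissue_boundaries(frame):
    """Return contours separating bright tissue from dark airway in one frame."""
    frame = frame.astype(float)
    frame = (frame - frame.min()) / (np.ptp(frame) + 1e-9)  # normalize to [0, 1]
    threshold = filters.threshold_otsu(frame)                # global tissue/air split
    return measure.find_contours(frame, threshold)           # list of (N, 2) curves

frame = np.random.rand(68, 68)          # placeholder for a real rtMRI frame
contours = tissue_boundaries(frame)
print(f"{len(contours)} boundary contour(s) found")
```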

    Statistical Parametric Methods for Articulatory-Based Foreign Accent Conversion

    Foreign accent conversion seeks to transform utterances from a non-native speaker (L2) to appear as if they had been produced by the same speaker but with a native (L1) accent. Such accent-modified utterances have been suggested to be effective in pronunciation training for adult second language learners. Accent modification involves separating the linguistic gestures and voice-quality cues from the L1 and L2 utterances, then transposing them across the two speakers. However, because of the complex interaction between these two sources of information, their separation in the acoustic domain is not straightforward. As a result, vocoding approaches to accent conversion result in a voice that is different from both the L1 and L2 speakers. In contrast, separation in the articulatory domain is straightforward, since linguistic gestures are readily available via articulatory data. However, because of the difficulty of collecting articulatory data, conventional synthesis techniques based on unit selection are ill-suited for accent conversion, given the small size of articulatory corpora and the inability to interpolate missing native sounds in the L2 corpus. To address these issues, this dissertation presents two statistical parametric methods for accent conversion that operate in the acoustic and articulatory domains, respectively. The acoustic method uses a cross-speaker statistical mapping to generate L2 acoustic features from the trajectories of L1 acoustic features in a reference utterance. Our results show significant reductions in the perceived non-native accent compared to the corresponding L2 utterance. The results also show a strong voice similarity between accent conversions and the original L2 utterance. Our second (articulatory-based) approach consists of building a statistical parametric articulatory synthesizer for a non-native speaker, then driving the synthesizer with the articulatory trajectories of the reference L1 speaker. This statistical approach not only has low data requirements but also has the flexibility to interpolate missing sounds in the L2 corpus. In a series of listening tests, articulatory accent conversions were rated more intelligible and less accented than their L2 counterparts. In the final study, we compare the two approaches: acoustic and articulatory. Our results show that the articulatory approach, despite having direct access to the native linguistic gestures, is less effective in reducing perceived non-native accents than the acoustic approach.
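
    The acoustic method's core step is a cross-speaker mapping from L1 feature trajectories to L2 acoustic features learned on time-aligned frames. The sketch below substitutes a simple ridge regression for the statistical parametric mapping and uses synthetic, pre-aligned features; the dimensions and alignment are assumptions.

```python
# Hedged sketch: frame-wise cross-speaker feature mapping (L1 -> L2) learned on
# time-aligned frames. Ridge regression stands in for the paper's mapping.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Hypothetical time-aligned training frames: rows are frames, columns are
# spectral features (e.g. mel-cepstral coefficients).
X_l1 = rng.standard_normal((2000, 25))                      # reference L1 frames
Y_l2 = X_l1 @ rng.standard_normal((25, 25)) * 0.5 \
       + 0.1 * rng.standard_normal((2000, 25))              # paired L2 frames

mapping = Ridge(alpha=1.0).fit(X_l1, Y_l2)                  # learn L1 -> L2 mapping

# At conversion time, L1 trajectories drive generation of native-accented
# features that retain L2 voice characteristics captured by the mapping.
X_new = rng.standard_normal((300, 25))
Y_converted = mapping.predict(X_new)                        # (300, 25) features
```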

    Adaptation of orofacial clones to the morphology and control strategies of target speakers for speech articulation

    The capacity to produce speech is learned and maintained by means of a perception-action loop that allows speakers to correct their own production as a function of the perceptual feedback they receive. This feedback is auditory and proprioceptive, but not visual. Speech sounds may therefore be complemented by augmented speech systems, i.e. speech accompanied by the virtual display of the shapes of the speech articulators on a computer screen, including those that are normally hidden, such as the tongue or velum. This kind of system has applications in domains such as speech therapy, phonetic correction, and language acquisition in the framework of Computer Aided Pronunciation Training (CAPT). This work was conducted within the development of a visual articulatory feedback system, based on the morphology and articulatory strategies of a reference speaker, which automatically animates a 3D talking head from the speech sound. The motivation of this research was to make the system suitable for several speakers. The twofold objective of this thesis was therefore to acquire knowledge about inter-speaker variability, and to propose vocal tract models for adapting a reference clone, composed of models of the speech articulators' contours (lips, tongue, velum, etc.), to other speakers who may have different morphologies and different articulatory strategies. In order to build articulatory models of the various vocal tract contours, we first acquired data that cover the whole articulatory space of the French language. Midsagittal Magnetic Resonance Images (MRI) of eleven French speakers, each pronouncing 63 articulations, were collected. One of the main contributions of this study is a database that is more detailed and larger than those in the literature, containing information on several vocal tract contours, speakers, and consonants, whereas previous studies are mostly based on vowels. The vocal tract contours visible in the MRI were outlined by hand following the same protocol for all speakers. In order to acquire knowledge about inter-speaker variability, we characterised our speakers in terms of the articulatory strategies of various vocal tract contours: tongue, lips, and velum. We observed that each speaker has his or her own strategy to achieve sounds that are considered equivalent, across speakers, for the purposes of speech communication. By means of principal component analysis (PCA), the variability of the tongue, lip, and velum contours was decomposed into a set of principal movements. We noticed that these movements are performed in different proportions depending on the speaker. For instance, for a given displacement of the jaw, the tongue may globally move in a proportion that depends on the speaker. We also noticed that lip protrusion, lip opening, the influence of the jaw movement on the lips, and the velum's articulatory strategy can also vary according to the speaker. For example, some speakers roll up their uvulas against the tongue to produce the consonant /ʁ/ in vocalic contexts. These findings constitute an important contribution to the knowledge of inter-speaker variability in speech production. In order to extract a set of common articulatory patterns that different speakers employ when producing speech sounds (normalisation), we based our approach on linear models built from articulatory data. Multilinear decomposition methods were applied to the contours of the tongue, lips, and velum. The evaluation of our models was based on two criteria: the explained variance and the Root Mean Square Error (RMSE) between the original and reconstructed articulatory coordinates. Models were also assessed using a leave-one-out cross-validation procedure ...
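
    The evaluation outlined above, explained variance plus leave-one-out RMSE between original and reconstructed coordinates, can be illustrated with a PCA contour model. The sketch below uses synthetic contour data; the number of components, contour sampling, and data are assumptions, not the thesis's actual figures.

```python
# Hedged sketch: PCA model of articulator contours evaluated by explained
# variance and leave-one-out RMSE. Contour data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
# Hypothetical data: 63 articulations x (40 contour points * 2 coordinates).
contours = rng.standard_normal((63, 80))

n_components = 4
errors = []
for train_idx, test_idx in LeaveOneOut().split(contours):
    pca = PCA(n_components=n_components).fit(contours[train_idx])
    recon = pca.inverse_transform(pca.transform(contours[test_idx]))
    errors.append(np.sqrt(np.mean((contours[test_idx] - recon) ** 2)))

full_fit = PCA(n_components=n_components).fit(contours)
print(f"explained variance: {full_fit.explained_variance_ratio_.sum():.2%}")
print(f"leave-one-out RMSE: {np.mean(errors):.3f} (same units as the contours)")
```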

    Multimodal silent speech interfaces for European Portuguese based on articulation

    The concept of silent speech, when applied to Human-Computer Interaction (HCI), describes a system which allows for speech communication in the absence of an acoustic signal. By analyzing data gathered during different parts of the human speech production process, Silent Speech Interfaces (SSI) allow users with speech impairments to communicate with a system. SSI can also be used in the presence of environmental noise, and in situations in which privacy, confidentiality, or non-disturbance are important. Nonetheless, despite recent advances, the performance and usability of silent speech systems still have much room for improvement. Better performance of such systems would enable their application in relevant areas, such as Ambient Assisted Living. It is therefore necessary to extend our understanding of the capabilities and limitations of silent speech modalities and to enhance their joint exploration. Thus, in this thesis, we established several goals: (1) expand SSI language support to European Portuguese (EP); (2) overcome identified limitations of current SSI techniques in detecting EP nasality; (3) develop a multimodal HCI approach for SSI based on non-invasive modalities; and (4) explore more direct measures, acquired from more invasive/obtrusive modalities, in the multimodal SSI for EP, to be used as ground truth for articulation processes, enhancing our comprehension of other modalities. In order to achieve these goals and to support our research in this area, we created a multimodal SSI framework that fosters leveraging modalities and combining information, supporting research in multimodal SSI. The proposed framework goes beyond the data acquisition process itself, including methods for online and offline synchronization, multimodal data processing, feature extraction, feature selection, analysis, classification, and prototyping. Examples of applicability are provided for each stage of the framework. These include articulatory studies for HCI, the development of a multimodal SSI based on less invasive modalities, and the use of ground-truth information coming from more invasive/obtrusive modalities to overcome the limitations of other modalities. In the work presented here, we also apply existing SSI methods to EP for the first time, noting that nasal sounds may cause inferior performance in some modalities. In this context, we propose a non-invasive solution for the detection of nasality based on a single surface electromyography sensor, which could be included in a multimodal SSI.
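
    The proposed nasality detector works from a single surface EMG channel. The sketch below frames the signal, extracts two simple features, and cross-validates a classifier; the features, window sizes, classifier, and synthetic labels are assumptions, not the thesis's actual feature set or model.

```python
# Hedged sketch: nasal vs. oral frame classification from one surface EMG
# channel. Features, windowing, and labels are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def frame_features(emg, frame_len=200, hop=100):
    """Per-frame RMS amplitude and zero-crossing count from one raw EMG channel."""
    feats = []
    for start in range(0, len(emg) - frame_len, hop):
        frame = emg[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        zero_crossings = np.sum(np.abs(np.diff(np.sign(frame)))) / 2
        feats.append([rms, zero_crossings])
    return np.array(feats)

rng = np.random.default_rng(0)
emg = rng.standard_normal(20000)             # placeholder for a recorded channel
X = frame_features(emg)
y = rng.integers(0, 2, size=len(X))          # placeholder nasal (1) / oral (0) labels

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```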

    Korpuslinguistik (Corpus Linguistics)

    The study examines the concept of patterned language use in scientific texts and, based on a data-led corpus analysis, describes scientific style at the formal and pragmatic levels. With theoretical grounding in multiple linguistic sub-disciplines, the book makes a major contribution to text-type research, to the discussion of standards, and to the study of writing.