430 research outputs found

    From Fuzzy Expert System to Artificial Neural Network: Application to Assisted Speech Therapy

    This chapter addresses the following question: what are the advantages of extending a fuzzy expert system (FES) to an artificial neural network (ANN) within a computer-based speech therapy system (CBST)? We briefly describe the key concepts and principles behind the FES and ANN and their applications in assisted speech therapy, and explain the importance of an intelligent system for designing a model appropriate to real-life situations. We present data from a 1-year application of these concepts in the field of assisted speech therapy. Using an artificial intelligence system for improving speech allows the design of a pronunciation training program that can be individualized based on specialty needs, previous experiences, and the child's prior therapeutic progress. Neural networks add great value when dealing with data that do not match previously designed patterns. Using an integrated approach that combines FES and ANN allows our system to accomplish three main objectives: (1) develop a personalized therapy program; (2) gradually replace some human expert duties; (3) use “self-learning” capabilities, a component traditionally reserved for humans. The results demonstrate the viability of the hybrid approach in the context of speech therapy, and it can be extended when designing similar applications.
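The chapter does not publish its rule base or network topology, so the membership function, rule set, and thresholds below are invented purely to illustrate the FES half of such a hybrid: fuzzy rules grade a measured pronunciation error rate, while an ANN would handle inputs that fall outside the designed patterns.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_severity(error_rate):
    """Toy FES rule: map a pronunciation error rate (0..1) to a severity grade
    by taking the grade with the highest membership degree."""
    grades = {
        "mild":     tri_membership(error_rate, 0.0, 0.1, 0.3),
        "moderate": tri_membership(error_rate, 0.2, 0.4, 0.6),
        "severe":   tri_membership(error_rate, 0.5, 0.8, 1.01),
    }
    return max(grades, key=grades.get)
```

A therapy program could then select exercises per grade; the ANN extension described in the chapter would take over where such hand-written rules stop matching the child's data.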

    How Japanese Learners Learn to Produce Authentic English Vowels


    Computational and Numerical Simulations

    Computational and Numerical Simulations is an edited book comprising 20 chapters. The book covers recent research devoted to numerical simulations of physical and engineering systems. It presents both new theories and their applications, bridging theoretical investigations and their practical application by engineers across different branches of science. Numerical simulations play a key role in both theoretical and application-oriented research.

    Rhythmic unit extraction and modelling for automatic language identification

    This paper deals with an approach to Automatic Language Identification based on rhythmic modelling. Besides phonetics and phonotactics, rhythm is one of the most promising features to consider for language identification, even if its extraction and modelling are not straightforward; indeed, one of the main problems to address is what to model. In this paper, a rhythm-extraction algorithm is described: using a vowel detection algorithm, rhythmic units related to syllables are segmented. Several parameters are extracted (consonantal and vowel duration, cluster complexity) and modelled with a Gaussian mixture. Experiments are performed on read speech for 7 languages (English, French, German, Italian, Japanese, Mandarin and Spanish); results reach up to 86 ± 6% correct discrimination between stress-timed, mora-timed and syllable-timed classes of languages, and 67 ± 8% correct language identification on average for the 7 languages with utterances of 21 seconds. These results are commented on and compared with those obtained with a standard acoustic Gaussian mixture modelling approach (88 ± 5% correct identification for the 7-language identification task).
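A minimal sketch of the feature-extraction step described above, assuming the vowel detector has already labelled segments as vowel ("V") or consonant ("C"); the labels and durations are made up, and a plain diagonal Gaussian log-density stands in for the paper's Gaussian mixture models.

```python
import math

def rhythmic_units(segments):
    """Group a (label, duration) sequence into units that end at each vowel,
    returning (consonant_duration, vowel_duration, n_consonants) per unit,
    i.e. a pseudo-syllable of the form C...CV."""
    units, c_dur, n_cons = [], 0.0, 0
    for label, dur in segments:
        if label == "C":
            c_dur += dur
            n_cons += 1
        else:  # a vowel closes the current rhythmic unit
            units.append((c_dur, dur, n_cons))
            c_dur, n_cons = 0.0, 0
    return units

def gaussian_loglik(x, mean, var):
    """Log-density of a diagonal Gaussian over a feature vector; a stand-in
    for scoring one rhythmic-unit feature vector against a language model."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))
```

Classification would then pick the language whose model assigns the highest total log-likelihood to the utterance's units.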

    Modelo acústico de língua inglesa falada por portugueses (Acoustic model of English spoken by Portuguese speakers)

    Master's project report in Informatics Engineering, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2007. In the context of robust speech recognition based on Hidden Markov Models (HMMs), this work describes methodologies and experiments aimed at the recognition of foreign speakers. Speech recognition necessarily involves acoustic models, which reflect the way we pronounce and articulate a language, modelling the sequence of sounds emitted during speech. This modelling rests on minimal speech segments, the phones, for which there is a set of symbols/alphabets representing their pronunciation; articulatory and acoustic phonetics study the representation of these symbols, their articulation and their pronunciation. We can describe words by analysing the units that constitute them, the phones. A speech recognizer interprets the input signal, the speech, as a sequence of coded symbols. To do so, the signal is split into observations of roughly 10 milliseconds each, reducing the analysis window to the interval over which the characteristics of a sound segment do not vary. Acoustic models give us a notion of the probability that a given observation corresponds to a given entity; it is therefore through models of the vocabulary entities to be recognized that these sound fragments can be reassembled. The models developed in this work are based on HMMs, so called because they build on Markov chains (Markov, 1856-1922): sequences of states in which each state is conditioned by the previous one. In our domain, this means building a set of models, one for each class of sounds to be recognized, trained on training data.
    The data consist of audio files and their word-level transcriptions, so that each transcription can be decomposed into phones and aligned with each sound of the corresponding audio file. Using a state model, in which each state represents an observation or described speech segment, the data are progressively regrouped to create increasingly reliable statistical models that represent the speech entities of a given language. Recognition of foreign speakers, whose pronunciations differ from the language for which the recognizer was designed, can be a serious problem for a recognizer's accuracy. This variation can be even more problematic than dialectal variation within a language, because it depends on each speaker's knowledge of the foreign language. Using a small amount of audio from foreign speakers to train new acoustic models, several experiments were carried out with corpora of Portuguese speakers speaking English, of European Portuguese, and of English. Initially, the behaviour of the native English and native Portuguese models was explored separately when tested against the test corpora (native and non-native test sets). Next, another model was trained using, simultaneously, the audio of Portuguese speakers speaking English and that of native English speakers as the training corpus. A further experiment employed adaptation techniques, such as Maximum Likelihood Linear Regression (MLLR), which adapts an initial model to a given speaker characteristic, in this case the foreign accent: with a small amount of data representing the characteristic to be modelled, the technique computes a set of transformations that are applied to the model being adapted.
    Phonetic modelling was also explored, studying how a foreign speaker pronounces the foreign language, in this case a Portuguese speaker speaking English. This study was carried out with the help of a linguist, who defined a set of phones, the result of mapping the English phone inventory onto the Portuguese one, representing the English spoken by Portuguese speakers of a given prestige group; given the wide variability of pronunciations, this group had to be defined according to the speakers' level of literacy. The study was later used to create a new model trained on the corpora of Portuguese speakers speaking English and of native Portuguese speakers, yielding a native Portuguese recognizer in which English terms can also be recognized. Within the speech recognition theme, this project also covered the collection of European Portuguese corpora and the compilation of a European Portuguese lexicon. In corpus acquisition, the author was involved in extracting and preparing telephone speech data for the subsequent training of new European Portuguese acoustic models. The European Portuguese lexicon was compiled with a semi-automatic incremental method: pronunciations were generated automatically for batches of 10,000 words, each batch was reviewed and corrected by a linguist, and each reviewed batch was then used to improve the automatic pronunciation-generation rules. The tremendous growth of technology has increased the need to integrate spoken language technologies into our daily applications, providing easy and natural access to information. These applications are of different natures, with different user interfaces.
    Besides voice-enabled Internet portals or tourist information systems, automatic speech recognition systems can be used in the home, where TVs and other appliances could be voice controlled, discarding keyboard or mouse interfaces, or in mobile phones and palm-sized computers for hands-free and eyes-free manipulation. The development of these systems faces several known difficulties. One of them concerns the recognizer's accuracy when dealing with non-native speakers who have different phonetic pronunciations of a given language. A non-native accent can be more problematic than a dialect variation within the language; the mismatch depends on the individual's speaking proficiency and the speaker's mother tongue. Consequently, when the speaker's native language is not the same as the one used to train the recognizer, there is a considerable loss in recognition performance. In this thesis, we examine the problem of non-native speech in a speaker-independent, large-vocabulary recognizer in which a small amount of non-native data was used for training. Several experiments were performed using hidden Markov models trained with speech corpora containing European Portuguese native speakers, English native speakers, and English spoken by European Portuguese native speakers. Initially, the behaviour of an English native model and a non-native English speakers' model was explored. Then, using different corpus weights for the English native speakers and the English spoken by Portuguese speakers, a model was trained as a pool of accents. Among adaptation techniques, the Maximum Likelihood Linear Regression (MLLR) method was used. We also explored how European Portuguese speakers pronounce the English language, studying the correspondences between the phone sets of the foreign and target languages. The result was a new phone set, the consequence of mapping between the English and Portuguese phone sets.
    Then a new model was trained with data of English spoken by Portuguese speakers and native Portuguese data. Concerning speech recognition, this work has two further purposes: collecting Portuguese corpora and supporting the compilation of a Portuguese lexicon, adopting methods and algorithms to generate phonetic pronunciations automatically. The collected corpora were processed in order to train acoustic models to be used in the Exchange 2007 domain, namely in Outlook Voice Access.
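The core of the MLLR adaptation mentioned above is an affine transform, estimated from a small amount of accented speech, applied to every Gaussian mean of the initial model (mu' = A·mu + b). The sketch below only applies a given transform; the A and b values used here are arbitrary examples, and estimating them by maximum likelihood is the part that HTK-style toolkits perform.

```python
def mllr_adapt_means(means, A, b):
    """Apply mu' = A @ mu + b to each Gaussian mean vector.
    means: list of mean vectors (plain lists); A: square matrix as a list
    of rows; b: bias vector. Returns the adapted mean vectors."""
    def matvec(M, v):
        # Plain-Python matrix-vector product.
        return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]
    return [[x + bi for x, bi in zip(matvec(A, mu), b)] for mu in means]
```

Because a single transform is shared by many Gaussians, even a few minutes of non-native audio can shift the whole model toward the accented speech, which is why the thesis can adapt with little data.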

    Impact of dialect use on a basic component of learning to read

    Can some black-white differences in reading achievement be traced to differences in language background? Many African American children speak a dialect that differs from the mainstream dialect emphasized in school. We examined how use of alternative dialects affects decoding, an important component of early reading and a marker of reading development. Behavioral data show that use of the alternative pronunciations of words in different dialects affects reading aloud in developing readers, with larger effects for children who use more African American English. Mechanisms underlying this effect were explored with a computational model, investigating factors affecting reading acquisition. The results indicate that the achievement gap may be due in part to differences in task complexity: children whose home and school dialects differ are at greater risk for reading difficulties because tasks such as learning to decode are more complex for them.
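One illustrative way to see why a one-to-many spelling-to-sound mapping raises task complexity, as argued above, is to measure the entropy of the pronunciations a learner hears for one written word. This is not the paper's model; the pronunciations and counts below are invented for demonstration.

```python
import math
from collections import Counter

def mapping_entropy(pronunciations):
    """Shannon entropy (bits) of the pronunciation distribution observed for
    one written word; 0 means a deterministic spelling-sound mapping."""
    counts = Counter(pronunciations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# One dialect: "test" is always /test/. Two dialects: final-cluster
# reduction also yields /tes/, so the learner's mapping is one-to-many.
single = mapping_entropy(["test", "test", "test", "test"])  # 0.0 bits
mixed = mapping_entropy(["test", "test", "tes", "tes"])     # 1.0 bit
```

Higher entropy means more pronunciation variants to reconcile with the same spelling, which is one way to formalize the extra decoding burden on children whose home and school dialects differ.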

    DualTalker: A Cross-Modal Dual Learning Approach for Speech-Driven 3D Facial Animation

    In recent years, audio-driven 3D facial animation has gained significant attention, particularly in applications such as virtual reality, gaming, and video conferencing. However, accurately modeling the intricate and subtle dynamics of facial expressions remains a challenge. Most existing studies approach the facial animation task as a single regression problem, an approach that often fails to capture the intrinsic inter-modal relationship between speech signals and 3D facial animation and overlooks their inherent consistency. Moreover, owing to the limited availability of 3D audio-visual datasets, approaches that learn from small samples generalize poorly, which degrades performance. To address these issues, in this study we propose a cross-modal dual-learning framework, termed DualTalker, aimed at improving data-usage efficiency as well as relating cross-modal dependencies. The framework is trained jointly on the primary task (audio-driven facial animation) and its dual task (lip reading) and shares common audio/motion encoder components. Our joint training framework facilitates more efficient data usage by leveraging information from both tasks and explicitly capitalizing on the complementary relationship between facial motion and audio to improve performance. Furthermore, we introduce an auxiliary cross-modal consistency loss to mitigate the potential over-smoothing underlying the cross-modal complementary representations, enhancing the mapping of subtle facial expression dynamics. Through extensive experiments and a perceptual user study conducted on the VOCA and BIWI datasets, we demonstrate that our approach outperforms current state-of-the-art methods both qualitatively and quantitatively. We have made our code and video demonstrations available at https://github.com/sabrina-su/iadf.git
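Structurally, the training objective described above combines three terms: the primary animation loss, the dual lip-reading loss, and the cross-modal consistency term. The sketch below uses plain floats in place of tensors, and the weighting constants are invented, not taken from the paper.

```python
def dual_talker_loss(primary_loss, dual_loss, consistency_loss,
                     w_dual=0.5, w_consist=0.1):
    """Joint objective of a dual-learning setup: audio-driven animation
    (primary) plus lip reading (dual), with an auxiliary cross-modal
    consistency term that counteracts over-smoothing of subtle dynamics.
    The weights w_dual and w_consist are hypothetical hyperparameters."""
    return primary_loss + w_dual * dual_loss + w_consist * consistency_loss
```

Because the two tasks share the audio and motion encoders, gradients from both terms update the same parameters, which is how the dual task improves data efficiency on small 3D audio-visual datasets.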

    The Impact of AI on Teaching and Learning in Higher Education Technology

    Thanks to AI, students may now study whenever and wherever they like. Personalized feedback on assignments, quizzes, and other assessments can be generated using AI algorithms and utilised as a teaching tool to help students succeed. This study examined the impact of artificial intelligence on teaching and learning in higher education, focusing on the effects of new technologies on student learning and on educational institutions. Given the rapid adoption of new technologies in higher education, as well as recent technological advancements, it is possible to forecast the future of higher education in a world where artificial intelligence is ubiquitous. Administration, student support, teaching, and learning can all benefit from the use of these technologies. We identify some challenges that higher education institutions and students may face, and we consider potential research directions.

    Integrating Language Identification to improve Multilingual Speech Recognition

    The process of determining the language of a speech utterance is called Language Identification (LID). This task can be very challenging, as it has to take into account various language-specific aspects, such as phonetic, phonotactic, vocabulary and grammar-related cues. In multilingual speech recognition we try to find the most likely word sequence that corresponds to an utterance whose language is not known a priori. This is a considerably harder task compared to monolingual speech recognition, and it is common to use LID to estimate the current language. In this project we present two general approaches for LID and describe how to integrate them into multilingual speech recognizers. The first approach uses hierarchical multilayer perceptrons to estimate language posterior probabilities given the acoustics, in combination with hidden Markov models. The second approach evaluates the output of a multilingual speech recognizer to determine the spoken language. The research is applied to the MediaParl speech corpus, recorded at the Parliament of the canton of Valais, where people switch from Swiss French to Swiss German or vice versa. Our experiments show that, on that particular data set, LID can be used to significantly improve the performance of multilingual speech recognizers. We also point out that ASR-dependent LID approaches yield the best performance, due to higher-level cues, and that our systems perform much worse on non-native data.
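A hedged sketch of the second approach described above: score the utterance with one recognizer per language, combine each recognizer's score with a language prior, and pick the language with the highest posterior. The scores and priors below are made-up values, not MediaParl results.

```python
import math

def language_posteriors(log_scores, priors):
    """Turn per-language log-scores (e.g. ASR log-likelihoods) into language
    posteriors via a numerically stable softmax over log p(x|L) + log P(L)."""
    logp = {lang: s + math.log(priors[lang]) for lang, s in log_scores.items()}
    m = max(logp.values())  # subtract the max before exponentiating
    exp = {lang: math.exp(v - m) for lang, v in logp.items()}
    z = sum(exp.values())
    return {lang: v / z for lang, v in exp.items()}

# Hypothetical per-language recognizer scores for one utterance.
scores = {"fr": -120.0, "de": -118.0}
post = language_posteriors(scores, {"fr": 0.5, "de": 0.5})
best = max(post, key=post.get)  # the estimated spoken language
```

In a code-switching corpus such as MediaParl, this decision would be made per utterance (or per segment), so the recognizer can follow speakers as they switch between French and German.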

    Design of reservoir computing systems for the recognition of noise corrupted speech and handwriting

