11 research outputs found

    Improving the recognition accuracy of large-vocabulary broadcast news with a morpheme-based continuous speech recognizer [Nagyszótáras híranyagok felismerési pontosságának növelése morfémaalapú, folyamatos beszédfelismerővel]

    We consider morpheme-based speech recognizers to be those that use a language model built on sub-word, morpheme-like units. In our experiments we compared the performance of morpheme-based recognizers built with five different segmentation methods against that of a standard word-based system on a planned-speech task, the reading of broadcast news. We found that with both statistical and rule-based segmentation algorithms, recognition accuracy can be improved substantially at the morpheme level. A particularly low error rate was achieved with a hybrid method that complements the statistical approach with language-specific knowledge. Supplemented with unsupervised speaker adaptation, this allowed us to reduce the word error rate below 20%, which to our knowledge is the lowest result published so far for Hungarian large-vocabulary continuous speech recognition.
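    The five segmenters themselves are not given in the abstract; as a rough illustration of why morph-level lexicons help, the sketch below contrasts word-type and morph-type counts on an artificial corpus, using an invented suffix-stripping rule as a stand-in for the paper's rule-based segmentation (stems, suffixes and word forms are all hypothetical).

```python
from collections import Counter

# Invented suffix list standing in for a real rule-based Hungarian
# segmenter; the systems in the paper are far more sophisticated.
SUFFIXES = ["ban", "nak", "ok", "t"]

def segment(word):
    """Greedily strip one known suffix, yielding stem + suffix morphs."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return [word[:-len(suf)], "+" + suf]
    return [word]

# Artificial word forms: every stem combined with every suffix.
stems = ["város", "hír", "ház"]
corpus = [s + f for s in stems for f in SUFFIXES]

word_vocab = Counter(corpus)
morph_vocab = Counter(m for w in corpus for m in segment(w))

# 12 word types collapse to 7 morph types; at real corpus scale this
# combinatorial effect is what lets a fixed-size lexicon cover far
# more surface forms of a morphologically rich language.
print(len(word_vocab), "word types ->", len(morph_vocab), "morph types")
```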

    Lexical modeling as a function of the plannedness of the utterance in Hungarian speech recognition [Lexikai modellezés a közlés tervezettségének függvényében magyar nyelvű beszédfelismerésnél]

    Language models built on sub-word units, so-called morphs, are often used in large-vocabulary machine speech recognition of morphologically rich languages. Applying them, however, requires extra work and higher system complexity, while the size of the improvement varies. In this paper we attempt to predict the error reduction achievable with morph-based language modeling. To this end we first identify the factors that influence the error reduction and then examine their exact effect experimentally. Based on our results, morph-based models can bring a substantial advantage with small training texts and limited vocabulary sizes. They are also advantageous on databases containing less spontaneous, more planned speech, while a deteriorating signal-to-noise ratio reduces the size of the error reduction, as it does the absolute error. In the final section we present a metric that correlates strongly with the morph-based error reduction measured on our experimental databases. This metric takes into account not only the plannedness of the task but also the amount of training text.
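    The abstract does not spell out the metric itself; the sketch below merely illustrates one of the factors it identifies, namely that with small training texts the out-of-vocabulary (OOV) rate of a held-out text falls much faster when counting is done over morph-like units. The file names and the fixed-prefix splitter are hypothetical placeholders, not the paper's method.

```python
def oov_rate(train_tokens, test_tokens):
    """Fraction of test tokens absent from the training vocabulary."""
    vocab = set(train_tokens)
    return sum(t not in vocab for t in test_tokens) / len(test_tokens)

def morphs(tokens, stem_len=4):
    """Hypothetical splitter: fixed-length 'stem' prefix + remainder,
    so inflected variants of a word share units."""
    out = []
    for t in tokens:
        out.append(t[:stem_len])
        if len(t) > stem_len:
            out.append("+" + t[stem_len:])
    return out

# 'train.txt' and 'test.txt' are placeholder corpus files.
train = open("train.txt", encoding="utf-8").read().split()
test = open("test.txt", encoding="utf-8").read().split()

# With little training text, the morph-level OOV rate drops far below
# the word-level one; the gap narrows as the training text grows.
for n in (1_000, 10_000, 100_000):
    sub = train[:n]
    print(f"{n:>7} tokens | word OOV {oov_rate(sub, test):.3f} "
          f"| morph OOV {oov_rate(morphs(sub), morphs(test)):.3f}")
```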

    Script Independent Morphological Segmentation for Arabic Maghrebi Dialects: An Application to Machine Translation

    This research deals with resource creation for under-resourced languages. We try to adapt existing resources for other, resourced languages to process less-resourced ones. We focus on the Arabic dialects of the Maghreb, namely Algerian, Moroccan and Tunisian. We first adapt a well-known statistical word segmenter to segment Algerian dialect texts written in both Arabic and Latin scripts. We demonstrate that unsupervised morphological segmentation can be applied to Arabic dialects regardless of the script used. Next, we use this kind of segmentation to improve statistical machine translation scores between the three Maghrebi dialects and French, using a parallel multidialectal corpus that includes six Arabic dialects in addition to MSA and French. Regarding word segmentation, the rate of correctly segmented words reached 70% for words written in Latin script and 79% for those written in Arabic script. For machine translation, the unsupervised morphological segmentation helped decrease out-of-vocabulary word rates by at least 35%.
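    The abstract does not name the segmenter; Morfessor is one widely used unsupervised statistical segmenter of this kind, so the sketch below assumes Morfessor 2.0's Python API to show the general workflow (the corpus file and test word are placeholders).

```python
import morfessor  # pip install morfessor

# Train an unsupervised segmentation model on raw dialect text.
# 'algerian_latin.txt' is a placeholder corpus; Morfessor operates on
# character sequences, so the writing script itself does not matter.
io = morfessor.MorfessorIO()
train_data = list(io.read_corpus_file("algerian_latin.txt"))

model = morfessor.BaselineModel()
model.load_data(train_data)
model.train_batch()

# Segment text into morphs before feeding it to the SMT pipeline;
# viterbi_segment returns (morphs, cost) for a given word.
for word in ["placeholder_word"]:
    morph_seq, _cost = model.viterbi_segment(word)
    print(word, "->", " ".join(morph_seq))
```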

    The effect of silence removal and speech segmentation on Turkish automatic speech recognition [Sessizliğin Kaldırılması ve Konuşmanın Parçalara Ayrılması İşleminin Türkçe Otomatik Konuşma Tanıma Üzerindeki Etkisi]

    Automatic speech recognition (ASR) systems are built primarily on acoustic information. Paired speech and text data are used to derive phoneme information from the acoustic signal. Acoustic models trained on such data cannot model all the acoustic information encountered in real life, so certain preprocessing steps are needed to remove acoustic content that degrades the performance of ASR systems. In this study, a method is proposed for removing the silences that occur within speech. The aim of the proposed method is to eliminate the silence information and to split utterances that introduce long dependencies in the acoustic signal into smaller chunks. The silence-free, segmented speech produced by the method is fed as input to a Turkish ASR system, and at the system's output the transcripts corresponding to the input speech chunks are concatenated and presented. The experiments show that silence removal and speech segmentation improve the performance of ASR systems.
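    The exact silence-detection method is not described in the abstract; below is a minimal sketch of the general idea, assuming a simple frame-energy threshold (all parameter values are illustrative).

```python
import numpy as np

def split_on_silence(signal, rate, frame_ms=25, hop_ms=10,
                     threshold_db=-35.0, min_silence_frames=30):
    """Drop silent audio and split the waveform into speech chunks.

    A frame counts as silent when its log energy falls below
    threshold_db relative to the loudest frame; a run of
    min_silence_frames silent frames closes the current chunk.
    """
    frame = int(rate * frame_ms / 1000)
    hop = int(rate * hop_ms / 1000)
    energy_db = np.array([
        10 * np.log10(np.mean(signal[i:i + frame] ** 2) + 1e-12)
        for i in range(0, len(signal) - frame, hop)
    ])
    silent = energy_db < energy_db.max() + threshold_db

    chunks, current, run = [], [], 0
    for i, is_sil in enumerate(silent):
        if is_sil:                      # silent hops are never kept
            run += 1
            if run >= min_silence_frames and current:
                chunks.append(np.concatenate(current))
                current = []
        else:
            run = 0
            current.append(signal[i * hop:i * hop + hop])
    if current:
        chunks.append(np.concatenate(current))
    return chunks

# Synthetic check: 1 s of noise, 1 s of near-silence, 1 s of noise
# should come back as two roughly one-second chunks.
rate = 16000
rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(0, 0.5, rate),
                      rng.normal(0, 0.001, rate),
                      rng.normal(0, 0.5, rate)])
print([round(len(c) / rate, 2) for c in split_on_silence(sig, rate)])
```

    Each resulting chunk would then be decoded separately and the partial transcripts concatenated, as the abstract describes.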

    Large vocabulary recognition for online Turkish handwriting with sublexical units

    We present a system for large vocabulary recognition of online Turkish handwriting, using hidden Markov models. While using a traditional approach for the recognizer, we have identified and developed solutions for the main problems specific to Turkish handwriting recognition. First, since large amounts of Turkish handwriting samples are not available, the system is trained and optimized using the large UNIPEN dataset of English handwriting before being extended to Turkish with a small Turkish dataset. The delayed strokes, which pose a significant source of variation in writing order due to the large number of diacritical marks in Turkish, are removed during preprocessing. Finally, as a solution to the high out-of-vocabulary rates encountered when using a fixed-size lexicon in general-purpose recognition, a lexicon is constructed from sublexical units (stems and endings) learned from a large Turkish corpus. A statistical bigram language model learned from the same corpus is also applied during decoding. The system obtains a 91.7% word recognition rate when tested on a small Turkish handwritten word dataset with a medium-sized lexicon (1,950 words) corresponding to the vocabulary of the test set, and 63.8% with a large, general-purpose lexicon (130,000 words). With the proposed stem+ending lexicon (12,500 words) and bigram language model with lattice expansion, however, a 67.9% word recognition accuracy is obtained, surpassing the result obtained with the general-purpose lexicon while using a much smaller one.
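    The abstract gives the unit inventory sizes but not the learning procedure; the sketch below only illustrates the coverage argument behind a stem+ending lexicon, with tiny invented stem and ending sets (the real inventories are learned from a large Turkish corpus).

```python
# Hypothetical inventories; the paper learns ~12,500 such units
# automatically from a large Turkish corpus.
stems = {"göz", "ev", "araba"}
endings = {"", "ler", "lerde", "lerim", "de", "da"}

def in_vocabulary(word):
    """A word is covered if it splits into a known stem + known ending."""
    return any(word == s + e for s in stems for e in endings)

# 3 stems + 6 endings (9 lexicon entries) license 3 * 6 = 18 surface
# forms, so coverage grows multiplicatively while the lexicon grows
# only additively.
for w in ["evlerde", "gözlerim", "arabada", "evleri"]:
    print(w, in_vocabulary(w))
```

    Such a decomposition also overgenerates (for instance, it licenses concatenations that violate Turkish vowel harmony), which the statistical language model applied during decoding can help penalize.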

    Spoken content retrieval: A survey of techniques and technologies

    Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio and in conversational settings. This survey provides an overview of the field of SCR, encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
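    As a toy illustration of the IR half of an SCR pipeline, the sketch below indexes time-stamped ASR hypotheses so that a text query returns playback positions; the transcripts and timestamps are invented placeholders.

```python
from collections import defaultdict

# Time-stamped ASR output per recording: (hypothesized word, seconds).
transcripts = {
    "rec1": [("budget", 3.2), ("meeting", 3.8), ("budget", 41.0)],
    "rec2": [("weather", 0.9), ("meeting", 12.4)],
}

index = defaultdict(list)           # term -> [(recording, seconds)]
for rec, words in transcripts.items():
    for term, t in words:
        index[term].append((rec, t))

def search(term):
    """Return every (recording, time) where the ASR output has `term`."""
    return index.get(term, [])

print(search("meeting"))  # [('rec1', 3.8), ('rec2', 12.4)]
```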

    Extracting and using prosodic information for Turkish spoken language processing

    The overall aim of this project is to extract the prosodic and lexical features of spoken language (Turkish) and to use these features in the automatic, computer-based processing of speech; more specifically, it covers sentence segmentation of the output of an automatic speech recognition (ASR) system. The text produced by an ASR system lacks punctuation and capitalization as well as basic speech-related parameters such as stress, tone, pitch and pauses, and this loss leads to differences in meaning. Enriching this output, in other words restoring these features, will make such texts easier for humans to read and perceive correctly and easier for machines to process. The aim of this project is to carry out this enrichment and restoration by exploiting the prosodic features of the spoken language.

    Specifically, we examine the extraction and use of prosodic information, in addition to lexical features, for sentence segmentation of Turkish speech. Another outcome of the project is a database of prosodic features at the word and morpheme level, which can be used for other purposes such as morphological disambiguation or word sense disambiguation. Turkish is an agglutinative language, so the text must be analyzed morphologically to determine the root forms and suffixes of the words before further analysis; within the project we therefore also examine the interaction of prosodic features with morphological information. The role of sentence segmentation is to detect sentence boundaries in the stream of words provided by the ASR module for further downstream processing, which helps various language processing tasks such as parsing, machine translation and question answering. We formulate sentence segmentation as a binary classification task: for each position between two consecutive words, the system must decide whether the position marks a boundary between two sentences or whether the two neighboring words belong to the same sentence. Segmentation is performed by combining hidden event language models (HELMs) with discriminative classification methods: the HELM takes the word sequence into account, while the discriminative classifiers, such as decision trees, are based on prosodic features such as pause durations. The approach combines HELMs, which exploit lexical information, with maximum entropy and boosting classifiers that tightly integrate lexical, prosodic, speaker-change and syntactic features. The boosting-based classifier alone performs better than all the other classification schemes; when combined with a hidden event language model, the improvement is even more pronounced.
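    The abstract describes the model combination only at a high level; the sketch below shows one common way such a combination is realized, interpolating boundary posteriors from a lexical model with those of a prosodic classifier. scikit-learn's GradientBoostingClassifier stands in for the paper's boosting classifier, and all feature values, posteriors and weights are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy prosodic features per inter-word position: [pause_sec, pitch_reset];
# label 1 marks a sentence boundary. All values are placeholders.
X = np.array([[0.05, 0.1], [0.62, 0.9], [0.02, 0.0],
              [0.80, 0.7], [0.10, 0.2], [0.55, 0.8]])
y = np.array([0, 1, 0, 1, 0, 1])

booster = GradientBoostingClassifier().fit(X, y)
p_prosodic = booster.predict_proba(X)[:, 1]

# Stand-in for HELM boundary posteriors P(boundary | word sequence);
# in the real system these come from the hidden event language model.
p_lexical = np.array([0.10, 0.70, 0.05, 0.60, 0.20, 0.75])

lam = 0.5  # interpolation weight, tuned on held-out data in practice
p_combined = lam * p_lexical + (1 - lam) * p_prosodic
boundaries = p_combined > 0.5
print(boundaries)  # one boolean decision per inter-word position
```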

    X. Magyar Számítógépes Nyelvészeti Konferencia [10th Hungarian Conference on Computational Linguistics]
