14 research outputs found

    Impact of the degree of supervision on the domain adaptation of a language model from the Web (original title: Impact du degré de supervision sur l'adaptation à un domaine d'un modèle de langage à partir du Web)

    Domain adaptation of a language model aims at re-estimating word-sequence probabilities to better match the peculiarities of a given broad topic of interest. A common strategy for this task is to retrieve adaptation texts from the Internet based on a domain-representative seed text. In this paper, we study the influence of the choice of this seed text on the adaptation process and on the performance of adapted language models in automatic speech recognition. More precisely, the goal of this original study is to analyze the differences between supervised adaptation, in which the seed text is manually generated, and unsupervised adaptation, where the seed text is an automatic transcript. Experiments carried out on videos from a real-world use case show that the differences vary across adaptation scenarios and that the unsupervised approach is globally convincing, especially given its low cost.
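
    A minimal sketch of the general retrieve-and-interpolate strategy the abstract describes (an illustration, not the paper's implementation; `retrieve_documents`, the stopword list, and the interpolation weight are assumptions):

```python
# Sketch of seed-text-driven LM adaptation: pick salient terms from the seed
# text, fetch web documents with them, and interpolate the adapted model with
# a background model. retrieve_documents is a hypothetical stand-in for the
# actual search-and-crawl step.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "that"}

def salient_terms(seed_text: str, k: int = 10) -> list[str]:
    """The k most frequent non-stopword terms of the seed text, used as queries."""
    words = re.findall(r"[a-z']+", seed_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]

def retrieve_documents(terms: list[str]) -> list[str]:
    """Hypothetical retrieval step: query a search engine with the terms
    and download the returned pages as adaptation texts."""
    return ["adaptation text about the seed topic ..."]

def unigram_probs(texts: list[str]) -> dict[str, float]:
    counts = Counter(w for t in texts for w in re.findall(r"[a-z']+", t.lower()))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolate(p_bg: dict, p_web: dict, lam: float = 0.7) -> dict:
    """Linear mix of background and web-adapted models (lam = 0.7 is assumed)."""
    return {w: lam * p_bg.get(w, 0.0) + (1 - lam) * p_web.get(w, 0.0)
            for w in set(p_bg) | set(p_web)}

# The seed can be a manual transcript (supervised) or an ASR output
# (unsupervised); the pipeline is identical either way.
seed = "automatic or manual transcript of the target-domain video"
adapted = interpolate(unigram_probs(["generic background text"]),
                      unigram_probs(retrieve_documents(salient_terms(seed))))
```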

    Language modeling for speech recognition of spoken Cantonese.

    Yeung, Yu Ting. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 84-93). Abstracts in English and Chinese. Contents:
    Acknowledgement; Abstract
    Chapter 1, Introduction: Cantonese Speech Recognition; Objectives; Thesis Outline
    Chapter 2, Fundamentals of Large Vocabulary Continuous Speech Recognition: Problem Formulation; Feature Extraction; Acoustic Models; Decoding; Statistical Language Modeling (N-gram Language Models; N-gram Smoothing; Complexity of Language Model; Class-based Language Model; Language Model Pruning); Performance Evaluation
    Chapter 3, The Cantonese Dialect: Phonology of Cantonese; Orthographic Representation of Cantonese; Classification of Cantonese Speech; Cantonese-English Code-mixing
    Chapter 4, Rule-based Translation Method: Motivations; Transformation-based Learning (Algorithm Overview; Learning of Translation Rules); Performance Evaluation (The Learnt Translation Rules; Evaluation of the Rules; Analysis of the Rules); Preparation of Training Data for Language Modeling; Discussion
    Chapter 5, Language Modeling for Cantonese: Training Data (Text Corpora; Preparation of Formal Cantonese Text Data); Training of Language Models (for Standard Chinese, Formal Cantonese, and Colloquial Cantonese); Evaluation of Language Models (Speech Corpora for Evaluation; Perplexities of Formal and Colloquial Cantonese Language Models); Speech Recognition Experiments (Speech Corpora; Experimental Setup; Results on Formal and Colloquial Cantonese Models); Analysis of Results; Discussion (Cantonese Language Modeling; Interpolated Language Models; Class-based Language Models)
    Chapter 6, Towards Language Modeling of Code-mixing Speech: Data Collection (Data Collection; Filtering of Collected Data; Processing of Collected Data); Clustering of Chinese and English Words; Language Modeling for Code-mixing Speech (Language Models from Collected Data; Class-based Language Models; Performance Evaluation of Code-mixing Language Models); Speech Recognition Experiments with Code-mixing Language Models (Experimental Setup; Monolingual Cantonese Recognition; Code-mixing Speech Recognition); Discussion (Data Collection from the Internet; Speech Recognition of Code-mixing Speech)
    Chapter 7, Conclusions and Future Work: Conclusions (Rule-based Translation Method; Cantonese Language Modeling; Code-mixing Language Modeling); Future Work (Rule-based Translation; Training Data; Code-mixing Speech)
    Appendix A, Equation Derivation: A.1 Relationship between Average Mutual Information and Perplexity
    Bibliography

    Improving Trigram Language Modeling with the World Wide Web

    We propose a novel method for using the World Wide Web to acquire trigram estimates for statistical language modeling. We submit an N-gram as a phrase query to web search engines. The search engines return the number of web pages containing the phrase, from which the N-gram count is estimated. The N-gram counts are then used to form web-based trigram probability estimates. We discuss the properties of such estimates, and methods to interpolate them with traditional corpus-based trigram estimates. We show that the interpolated models significantly improve speech recognition word error rate on a small test set.
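
    As a worked form of the estimate, one plausible reconstruction from the abstract (the page-hit approximation and the single interpolation weight are assumptions; the paper discusses several interpolation methods):

```latex
% Plausible reconstruction from the abstract, not the paper's exact formulas.
% Web-based trigram estimate from page-hit counts C_web:
\[
  \hat{P}_{\mathrm{web}}(w_3 \mid w_1 w_2) \approx
    \frac{C_{\mathrm{web}}(w_1\, w_2\, w_3)}{C_{\mathrm{web}}(w_1\, w_2)}
\]
% Linear interpolation with the corpus-based trigram model, with the
% weight lambda tuned on held-out data:
\[
  P(w_3 \mid w_1 w_2) =
    \lambda\, P_{\mathrm{corpus}}(w_3 \mid w_1 w_2)
    + (1-\lambda)\, \hat{P}_{\mathrm{web}}(w_3 \mid w_1 w_2),
  \qquad 0 \le \lambda \le 1
\]
```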

    An investigation of grammar design in natural-language speech-recognition.

    With the growing interest in and demand for human-machine interaction, much work on speech recognition has been carried out over the past three decades. Although a variety of approaches have been proposed, including stochastic (statistical) techniques, grammar-based techniques, and techniques integrating linguistic features, recognition accuracy and robustness remain among the major problems to be addressed. At the state of the art, most commercial speech products are built on grammar-based speech recognition technology. In this thesis, we investigate a number of features involved in grammar design for natural-language speech recognition. We hypothesize that, for the same domain: a semantic grammar, which directly encodes some semantic constraints into the recognition grammar, achieves better accuracy but less robustness; a syntactic grammar defines a larger language and therefore has better robustness but less accuracy; and a word-sequence grammar, which encodes neither semantics nor syntax, defines the largest language and is thus the most robust, but has very poor recognition accuracy. In this Master's thesis, we claim that proper grammar design can achieve an appropriate compromise between recognition accuracy and robustness. The hypothesis was tested in experiments using the IBM Voice-Server SDK, which consists of a VoiceXML browser, IBM ViaVoice speech recognition and text-to-speech (TTS) engines, sample applications, and other tools for developing and testing VoiceXML applications. The experimental grammars are written in the Java Speech Grammar Format (JSGF), and the test applications are written in VoiceXML. The tentative experimental results suggest that grammar design is a promising area for further study. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2003 .S555. Source: Masters Abstracts International, Volume: 43-01, page: 0244. Adviser: Richard A. Frost. Thesis (M.Sc.)--University of Windsor (Canada), 2004
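
    To make the hypothesized accuracy-robustness trade-off concrete, here is a small illustrative sketch (ours, not the thesis's; the toy flight-booking vocabulary is invented): each relaxation of the grammar enlarges the language it accepts.

```python
from itertools import product

# Toy illustration (not from the thesis): three grammars of decreasing
# constraint for utterances like "fly from boston to denver".
CITIES = ["boston", "denver", "chicago"]
VERBS = ["fly", "go", "travel"]

# Semantic grammar: also encodes the domain constraint origin != destination.
semantic = {f"{v} from {a} to {b}" for v in VERBS
            for a, b in product(CITIES, CITIES) if a != b}

# Syntactic grammar: only requires the verb-from-NP-to-NP pattern.
syntactic = {f"{v} from {a} to {b}" for v in VERBS
             for a, b in product(CITIES, CITIES)}

# Word-sequence grammar: any 5-word string over the vocabulary.
vocab = CITIES + VERBS + ["from", "to"]
word_seq_size = len(vocab) ** 5  # too many to enumerate; count instead

print(len(semantic), len(syntactic), word_seq_size)
# 18 < 27 << 32768: each relaxation accepts more strings, so it is more
# robust to unexpected input, but it constrains the recognizer less,
# which tends to hurt accuracy.
```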

    An investigation of the electrolytic plasma oxidation process for corrosion protection of pure magnesium and magnesium alloy AM50.

    In this study, silicate and phosphate electrolytic plasma oxidation (EPO) coatings were produced on pure magnesium using an AC power source. It was found that the silicate coatings possess good wear resistance, while the phosphate coatings provide better corrosion protection. A Design of Experiments (DOE) technique, the Taguchi method, was used to systematically investigate the effect of the EPO process parameters on the corrosion protection properties of coated magnesium alloy AM50 using a DC power source. The experimental design consisted of four factors (treatment time, current density, and KOH and NaAlO2 concentrations), each at three levels. Potentiodynamic polarization measurements were conducted to determine the corrosion resistance of the coated samples. The optimized processing parameters were a treatment time of 12 minutes, a current density of 12 mA/cm², 0.9 g/L KOH, and 15.0 g/L NaAlO2. The percentage contribution of each factor, determined by analysis of variance (ANOVA), implies that the KOH concentration is the most significant factor affecting the corrosion resistance of the coatings, while treatment time is a major factor affecting coating thickness. (Abstract shortened by UMI.) Dept. of Electrical and Computer Engineering. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .M323. Source: Masters Abstracts International, Volume: 44-03, page: 1479. Thesis (M.A.Sc.)--University of Windsor (Canada), 2005
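
    A sketch of the experimental design as described: a Taguchi L9(3^4) orthogonal array covers four three-level factors in nine runs, and each factor's ANOVA percentage contribution is its between-level sum of squares as a share of the total. The response data are not reproduced here; the code is illustrative only.

```python
# Standard L9(3^4) orthogonal array (levels coded 0..2) for the four factors:
# treatment time, current density, KOH concentration, NaAlO2 concentration.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

def percent_contribution(responses: list[float]) -> list[float]:
    """ANOVA-style percentage contribution of each factor: the factor's
    between-level sum of squares over the total sum of squares, given one
    measured response (e.g. corrosion resistance) per L9 run."""
    grand = sum(responses) / len(responses)
    total_ss = sum((r - grand) ** 2 for r in responses)
    contributions = []
    for f in range(4):  # one column of the array per factor
        level_ss = 0.0
        for lvl in range(3):
            vals = [r for run, r in zip(L9, responses) if run[f] == lvl]
            level_ss += len(vals) * (sum(vals) / len(vals) - grand) ** 2
        contributions.append(100 * level_ss / total_ss)
    return contributions
```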

    Using the Web Infrastructure for Real Time Recovery of Missing Web Pages

    Given the dynamic nature of the World Wide Web, missing web pages, or 404 "Page Not Found" responses, are part of our web browsing experience. It is our intuition that information on the web is rarely completely lost; it is just missing. In whole or in part, content often moves from one URI to another and hence just needs to be (re-)discovered. We evaluate several methods for a just-in-time approach to web page preservation. We investigate the suitability of lexical signatures and web page titles for rediscovering missing content. It is understood that web pages change over time, which implies that the performance of these two methods depends on the age of the content. We therefore conduct a temporal study of the decay of lexical signatures and titles and estimate their half-life. We further propose the use of tags that users have created to annotate pages, as well as the most salient terms derived from a page's link neighborhood. We utilize the Memento framework to discover previous versions of web pages and to execute the above methods. We provide a workflow, including a set of parameters, that is most promising for the (re-)discovery of missing web pages. We introduce Synchronicity, a web browser add-on that implements this workflow. It works while the user is browsing and detects the occurrence of 404 errors automatically. When activated by the user, Synchronicity offers a total of six methods to either rediscover the missing page at its new URI or discover an alternative page that satisfies the user's information need. Synchronicity depends on user interaction, which enables it to provide results in real time
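
    A minimal sketch of one of the methods investigated, the lexical signature (the TF-IDF recipe and the parameter k below are illustrative assumptions, not the dissertation's exact formulation):

```python
# Sketch of a lexical signature: the k terms of a page that best
# discriminate it, scored by TF-IDF, then issued as a search-engine
# query to rediscover the page at its new URI.
import math
import re
from collections import Counter

def lexical_signature(page_text: str,
                      doc_freq: dict[str, int],
                      n_docs: int,
                      k: int = 5) -> list[str]:
    """Top-k terms by TF-IDF; doc_freq maps a term to the number of
    documents in some background corpus that contain it."""
    terms = re.findall(r"[a-z0-9]+", page_text.lower())
    tf = Counter(terms)
    def tfidf(t: str) -> float:
        return tf[t] * math.log(n_docs / (1 + doc_freq.get(t, 0)))
    return sorted(tf, key=tfidf, reverse=True)[:k]

# The resulting terms form a single query, e.g.
#   " ".join(lexical_signature(text, df, n))
# Page titles, user-created tags, and link-neighborhood terms can be
# issued as alternative queries in exactly the same way.
```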

    Dynamic language modeling for European Portuguese

    Doctoral thesis (Doutoramento em Engenharia Informática). Most of today's methods for the transcription and indexing of broadcast audio data are manual. Broadcasters process thousands of hours of audio and video data on a daily basis in order to transcribe that data, extract semantic information, and interpret and summarize the content of those documents. Developing automatic and efficient support for these manual tasks has been a great challenge, and over the last decade there has been growing interest in using automatic speech recognition as a tool to provide automatic transcription and indexing of broadcast news, with random and relevant access to large broadcast news databases. However, because of the topic changes over time that characterize this kind of task, the appearance of new events leads to high out-of-vocabulary (OOV) word rates and consequently to degraded recognition performance. This is especially true for highly inflected languages like European Portuguese. Several innovative techniques can be exploited to reduce those errors. News-show-specific information, such as topic-based lexicons and the pivot's working script, as well as other sources such as the written news made available daily on the Internet, can be added to the information sources employed by the automatic speech recognizer.
In this thesis we explore the use of additional sources of information for vocabulary optimization and language model adaptation of a European Portuguese broadcast news transcription system. The thesis makes three main contributions: a novel approach to vocabulary selection that uses Part-Of-Speech (POS) tags to compensate for word usage differences across the various training corpora; language model adaptation frameworks, run on a daily and unsupervised basis, for single-stage and multistage recognition approaches; and a new method for including new words in the system vocabulary without requiring additional adaptation data or global re-estimation of the language model
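
    The abstract does not detail the new-word method; one common technique consistent with its description (tying a new word to an in-vocabulary word of the same POS class and moving probability mass, so no retraining is needed) can be sketched as follows. All names and the splitting rule are illustrative assumptions:

```python
# Hedged sketch (not the thesis's exact algorithm): add a new word w_new
# to an existing LM without retraining, by splitting the unigram mass of a
# similar in-vocabulary word of the same POS class.

def add_word(unigram: dict[str, float], w_new: str, w_similar: str,
             share: float = 0.5) -> None:
    """Give w_new a fraction of w_similar's unigram mass; the model stays
    normalized because the mass is moved, not created."""
    assert w_similar in unigram and w_new not in unigram
    moved = unigram[w_similar] * share
    unigram[w_similar] -= moved
    unigram[w_new] = moved

lm = {"lisboa": 0.004, "porto": 0.003}
add_word(lm, "fukushima", "lisboa", share=0.3)  # new place name from the news
print(lm)  # mass is redistributed; total probability is unchanged
```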