
    Fast speaker independent large vocabulary continuous speech recognition [online]


    Code-Switched Urdu ASR for Noisy Telephonic Environment using Data Centric Approach with Hybrid HMM and CNN-TDNN

    Call centers have huge amounts of audio data that can be mined for valuable business insights, and transcription of phone calls is a tedious manual task. An effective automatic speech recognition (ASR) system can accurately transcribe these calls, enabling search through call history for specific context and content, automatic call monitoring, and improved quality of service through keyword search and sentiment analysis. ASR for call centers requires extra robustness, as telephonic environments are generally noisy. Moreover, many low-resource languages on the verge of extinction could be preserved with the help of ASR technology. Urdu is the 10th most widely spoken language in the world, with 231,295,440 speakers worldwide, yet it remains a resource-constrained language in ASR. Regional call-center conversations are held in the local language mixed with English numbers and technical terms, which generally causes a "code-switching" problem. Hence, this paper describes an implementation framework for a resource-efficient automatic speech recognition (speech-to-text) system for code-switched Urdu in a noisy call-center environment, using a Chain Hybrid HMM and CNN-TDNN. The hybrid HMM-DNN approach allowed us to exploit the advantages of neural networks with less labelled data. Adding a CNN to the TDNN has been shown to work better in noisy environments, because the CNN's additional frequency dimension captures extra information from noisy speech, improving accuracy. We collected data from various open sources and labelled some of the unlabelled data after analysing its general context and content, covering Urdu as well as commonly used words from other languages, primarily English. We achieved a WER of 5.2% in both noisy and clean conditions, on isolated words and numbers as well as on continuous spontaneous speech. Comment: 32 pages, 19 figures, 2 tables, preprint
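    The 5.2% figure above is a word error rate (WER), the standard edit-distance metric for ASR. The abstract does not publish a scoring script, so the following is only a minimal sketch of how the metric itself is computed, via Levenshtein distance over word tokens:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("a b c d", "a x c"))  # one substitution + one deletion over 4 words -> 0.5
```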

    Mining of Textual Data from the Web for Speech Recognition

    The preliminary goals of this project were to become familiar with language modeling for speech recognition and with techniques for acquiring text data from the Web. The text introduces basic speech recognition techniques and describes statistical language modeling in detail, paying particular attention to criteria for evaluating the quality of language models and speech recognition systems. It also covers data mining models and techniques, especially information retrieval. Problems specific to acquiring data from the Web are discussed, and Google search is introduced by way of contrast. Part of the project was the design and implementation of a system for acquiring text from the Web, which is described in detail. The main goal of the work, however, was to determine whether data acquired from the Web can bring any improvement to recognition systems. The described techniques therefore seek the optimal way to use Web data to improve sample language models as well as models deployed in real recognition systems.
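    The statistical language models and evaluation criteria the abstract refers to can be illustrated with a toy n-gram model. The sketch below trains a bigram model with add-one smoothing and scores held-out text by perplexity (lower is better); the tiny corpus is an invented assumption, not the thesis data:

```python
import math
from collections import Counter

def train_bigram(sentences):
    """Collect unigram and bigram counts with sentence boundary markers."""
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(toks[:-1])           # contexts for bigram probabilities
        bigrams.update(zip(toks, toks[1:]))
    vocab = {t for s in sentences for t in s.split()} | {"</s>"}
    return unigrams, bigrams, len(vocab)

def perplexity(sentences, model):
    """2 ** (-average log2 probability per token), with add-one smoothing."""
    unigrams, bigrams, v = model
    log_prob, n = 0.0, 0
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        for a, b in zip(toks, toks[1:]):
            p = (bigrams[(a, b)] + 1) / (unigrams[a] + v)  # Laplace smoothing
            log_prob += math.log2(p)
            n += 1
    return 2 ** (-log_prob / n)

model = train_bigram(["we recognize speech", "we recognize the web"])
print(perplexity(["we recognize speech"], model))
```

Web-mined text would enter such a model simply as additional training sentences, which is the update the thesis experiments measure.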

    Spoken content retrieval: A survey of techniques and technologies

    Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio and in conversational settings. This survey provides an overview of the field of SCR, encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
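    The core SCR pipeline the survey describes, running IR over ASR transcripts, can be sketched in a few lines. Below, invented example transcripts are indexed with TF-IDF and ranked against a query, exactly as one would rank text documents; the transcripts, query, and smoothed-IDF formula are illustrative assumptions only:

```python
import math
from collections import Counter

def tfidf_rank(transcripts, query):
    """Rank transcript indices by a simple TF-IDF score against the query."""
    docs = [Counter(t.lower().split()) for t in transcripts]
    n = len(docs)
    def idf(term):
        df = sum(1 for d in docs if term in d)      # document frequency
        return math.log((n + 1) / (df + 1)) + 1     # smoothed inverse doc freq
    q_terms = query.lower().split()
    scores = [sum(d[t] * idf(t) for t in q_terms) for d in docs]
    return sorted(range(n), key=lambda i: -scores[i])

transcripts = [
    "uh the meeting moved to friday",        # spontaneous, conversational
    "quarterly sales figures were strong",
    "please review the sales meeting notes",
]
print(tfidf_rank(transcripts, "sales meeting"))  # transcript 2 ranks first
```

In a real SCR system the transcripts would be noisy ASR output, which is why the survey stresses the interaction between recognition errors and retrieval effectiveness.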

    Language modeling for speech recognition of spoken Cantonese.

    Yeung, Yu Ting. Thesis (M.Phil.), Chinese University of Hong Kong, 2009. Includes bibliographical references. Abstracts in English and Chinese.

    Table of contents: Acknowledgement; Abstract;
    1 Introduction
      1.1 Cantonese Speech Recognition
      1.2 Objectives
      1.3 Thesis Outline
    2 Fundamentals of Large Vocabulary Continuous Speech Recognition
      2.1 Problem Formulation
      2.2 Feature Extraction
      2.3 Acoustic Models
      2.4 Decoding
      2.5 Statistical Language Modeling
        2.5.1 N-gram Language Models
        2.5.2 N-gram Smoothing
        2.5.3 Complexity of Language Model
        2.5.4 Class-based Language Model
        2.5.5 Language Model Pruning
      2.6 Performance Evaluation
    3 The Cantonese Dialect
      3.1 Phonology of Cantonese
      3.2 Orthographic Representation of Cantonese
      3.3 Classification of Cantonese Speech
      3.4 Cantonese-English Code-mixing
    4 Rule-based Translation Method
      4.1 Motivations
      4.2 Transformation-based Learning
        4.2.1 Algorithm Overview
        4.2.2 Learning of Translation Rules
      4.3 Performance Evaluation
        4.3.1 The Learnt Translation Rules
        4.3.2 Evaluation of the Rules
        4.3.3 Analysis of the Rules
      4.4 Preparation of Training Data for Language Modeling
      4.5 Discussion
    5 Language Modeling for Cantonese
      5.1 Training Data
        5.1.1 Text Corpora
        5.1.2 Preparation of Formal Cantonese Text Data
      5.2 Training of Language Models
        5.2.1 Language Models for Standard Chinese
        5.2.2 Language Models for Formal Cantonese
        5.2.3 Language Models for Colloquial Cantonese
      5.3 Evaluation of Language Models
        5.3.1 Speech Corpora for Evaluation
        5.3.2 Perplexities of Formal Cantonese Language Models
        5.3.3 Perplexities of Colloquial Cantonese Language Models
      5.4 Speech Recognition Experiments
        5.4.1 Speech Corpora
        5.4.2 Experimental Setup
        5.4.3 Results on Formal Cantonese Models
        5.4.4 Results on Colloquial Cantonese Models
      5.5 Analysis of Results
      5.6 Discussion
        5.6.1 Cantonese Language Modeling
        5.6.2 Interpolated Language Models
        5.6.3 Class-based Language Models
    6 Towards Language Modeling of Code-mixing Speech
      6.1 Data Collection
        6.1.1 Data Collection
        6.1.2 Filtering of Collected Data
        6.1.3 Processing of Collected Data
      6.2 Clustering of Chinese and English Words
      6.3 Language Modeling for Code-mixing Speech
        6.3.1 Language Models from Collected Data
        6.3.2 Class-based Language Models
        6.3.3 Performance Evaluation of Code-mixing Language Models
      6.4 Speech Recognition Experiments with Code-mixing Language Models
        6.4.1 Experimental Setup
        6.4.2 Monolingual Cantonese Recognition
        6.4.3 Code-mixing Speech Recognition
      6.5 Discussion
        6.5.1 Data Collection from the Internet
        6.5.2 Speech Recognition of Code-mixing Speech
    7 Conclusions and Future Work
      7.1 Conclusions
        7.1.1 Rule-based Translation Method
        7.1.2 Cantonese Language Modeling
        7.1.3 Code-mixing Language Modeling
      7.2 Future Work
        7.2.1 Rule-based Translation
        7.2.2 Training Data
        7.2.3 Code-mixing Speech
    A Equation Derivation
      A.1 Relationship between Average Mutual Information and Perplexity
    Bibliography
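    Section 5.6.2 of the thesis discusses interpolated language models, which combine probability estimates from two sources (here, a formal and a colloquial register) with a mixing weight. A minimal sketch of linear interpolation follows; the probability tables and Cantonese bigrams are toy stand-ins, not data from the thesis:

```python
def interpolate(p_formal, p_colloquial, lam=0.5):
    """Return P(word | history) as a lam-weighted mix of two bigram tables."""
    def p(word, history):
        return (lam * p_formal.get((history, word), 0.0)
                + (1 - lam) * p_colloquial.get((history, word), 0.0))
    return p

# Toy bigram probability tables keyed by (history, word).
formal = {("去", "學校"): 0.4}                        # formal register
colloquial = {("去", "學校"): 0.1, ("去", "街"): 0.5}  # colloquial register
p = interpolate(formal, colloquial, lam=0.7)
print(p("學校", "去"))  # 0.7 * 0.4 + 0.3 * 0.1, i.e. about 0.31
```

The weight lam is typically tuned on held-out data to minimize perplexity, which is how such mixtures are usually evaluated.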

    Artificial Neural Network methods applied to sentiment analysis

    Sentiment Analysis (SA) is the study of opinions and emotions conveyed by text. This field of study has commercial applications, for example in market research (e.g., "What do customers like and dislike about a product?") and consumer behavior (e.g., "Which book will a customer buy next after writing a positive review about book X?"). Private users can benefit from SA through automatic movie or restaurant recommendations, or through applications on the computer or smartphone that adapt to the user's current mood. In this thesis we put forward research on artificial neural network (NN) methods applied to SA. Many challenges arise, such as sarcasm, domain dependency, and data scarcity, that a successful system needs to address.

    In the first part of this thesis we perform a linguistic analysis of a word ("hard") in the light of SA. We show that sentiment-specific word sense disambiguation is necessary to distinguish fine nuances of polarity (positive vs. negative sentiment), and that commonly available resources are not sufficient for this. The introduced Contextually Enhanced Sentiment Lexicon (CESL) is used to label occurrences of "hard" in a real dataset with their senses. This allows us to train a Support Vector Machine (SVM) with deep learning features that predicts the polarity of a single occurrence of the word given only its context words. We show that the proposed features improve the results compared to existing standard features. Since the labeling effort is not negligible, we propose a clustering approach that reduces the manual effort to a minimum.

    The deep learning features that help predict fine-grained, context-dependent polarity are computed by a neural network language model (NNLM), namely a variant of the Log-Bilinear Language model (LBL). Improving this model might also improve polarity classification. We therefore propose non-linear versions of the LBL and of the vectorized Log-Bilinear Language model (vLBL), because non-linear models are generally considered more powerful. In a parameter study on a language modeling task, we show that the non-linear versions indeed perform better than their linear counterparts. However, the difference is small, except in settings where the model has only few parameters, which may be the case when little training data is available and the model therefore needs to be smaller to avoid overfitting.

    An alternative to the fine-grained polarity classification used above is to train classifiers that make the distinction automatically. Because of the complexity of the task, the challenges of SA in general, and certain domain-specific issues (e.g., when using Twitter text), existing systems have much room to improve. Often statistical classifiers are used with simple bag-of-words (BOW) features or with count features derived from sentiment lexicons. We introduce a linguistically informed convolutional neural network (lingCNN) that builds on the fact that there has been much research on language in general and on sentiment lexicons in particular. lingCNN makes use of two types of linguistic features: word-based and sentence-based. Word-based features comprise features derived from sentiment lexicons, such as polarity or valence (the strength of the polarity), and general knowledge about language, such as a negation-based feature. Sentence-based features are likewise based on lexicon counts and valences. The combination of both feature types is superior to the original model without them. Especially when little training data is available, as can be the case for under-resourced languages, lingCNN proves significantly better (by up to 12 macro-F1 points).

    Although linguistic features in the form of sentiment lexicons are beneficial, their use raises a new set of problems. Most lexicons, especially those for low-resource languages, contain only infinitive forms of words, whereas the text to be classified is unnormalized. Hence, we want to answer the question whether morphological information is necessary for SA, or whether a system that discards this information, and can therefore make better use of the lexicons, actually has an advantage. Our approach is to first stem or lemmatize a dataset and then perform polarity classification on it. On Czech and English datasets we show that normalization leads to better results. As a positive side effect, we can compute better word embeddings by first normalizing the training corpus; this works especially well for morphologically rich languages. On word similarity datasets for English, German, and Spanish we show that our embeddings improve performance, and a new WordNet-based evaluation confirms these results on five languages (Czech, English, German, Hungarian, and Spanish). A further benefit of this evaluation is that it can be applied to many other languages, as the only required resource is a WordNet.

    In the last part of the thesis, we use a recently introduced method to create an ultradense sentiment space out of generic word embeddings. This method allows us to compress 400-dimensional word embeddings down to 40 or even just 4 dimensions while obtaining similar results on a polarity classification task. Training speed increases by a factor of 44, while the difference in classification performance is not significant.
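    The lexicon-derived sentence features described for lingCNN (polarity counts, summed valence, a negation flag) are simple to compute. The sketch below illustrates the idea; the tiny lexicon, its valence values, and the negation word list are invented for illustration and are not the thesis's actual resources:

```python
# Toy sentiment lexicon mapping words to valence (signed strength of polarity).
LEXICON = {"good": 1.0, "great": 1.5, "bad": -1.0, "terrible": -1.8}
NEGATIONS = {"not", "never", "no"}

def sentence_features(sentence):
    """Sentence-level features: [positive hits, negative hits, valence sum, negation flag]."""
    toks = sentence.lower().split()
    pos = sum(1 for t in toks if LEXICON.get(t, 0.0) > 0)
    neg = sum(1 for t in toks if LEXICON.get(t, 0.0) < 0)
    valence = sum(LEXICON.get(t, 0.0) for t in toks)
    negated = any(t in NEGATIONS for t in toks)
    return [pos, neg, valence, float(negated)]

print(sentence_features("not a great movie"))  # [1, 0, 1.5, 1.0]
```

In lingCNN such a vector would be concatenated with the CNN's learned sentence representation before the final classification layer; the negation flag hints at why lexicon counts alone can mislead.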