
    Unsupervised spoken keyword spotting and learning of acoustically meaningful units

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 103-106).
    The problem of keyword spotting in audio data has been explored for many years. Typically, researchers use supervised methods to train statistical models to detect keyword instances. However, such supervised methods require large quantities of annotated data that are unlikely to be available for the majority of the world's languages. This thesis addresses this lack-of-annotation problem and presents two completely unsupervised spoken keyword spotting systems that do not require any transcribed data. In the first system, a Gaussian Mixture Model is trained, without any transcription information, to label speech frames with a Gaussian posteriorgram. Given several spoken samples of a keyword, a segmental dynamic time warping algorithm is used to compare the Gaussian posteriorgrams of the keyword samples and the test utterances. The keyword detection result is then obtained by ranking the distortion scores of all the test utterances. In the second system, to avoid the need for spoken samples, a Joint-Multigram model is used to build a mapping from keyword text samples to Gaussian component indices. A keyword instance in the test data can then be detected by calculating the similarity score between the Gaussian component index sequences of the keyword samples and the test utterances. The two proposed systems are evaluated on the TIMIT and MIT Lecture corpora. The results demonstrate the viability and effectiveness of both systems. Furthermore, encouraged by the success of using unsupervised methods for keyword spotting, we present a preliminary investigation of the unsupervised detection of acoustically meaningful units in speech.
    by Yaodong Zhang. S.M.
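    The posteriorgram-plus-DTW comparison in the first system can be sketched roughly as follows. This is a minimal illustration assuming a fitted scikit-learn GaussianMixture and a plain (non-segmental) DTW over a negative-log inner-product distance between posterior vectors; it is not the thesis's exact segmental DTW, and all names and parameters are illustrative.

```python
# Minimal sketch: Gaussian posteriorgrams compared with a basic DTW.
# Assumes MFCC-like frame features; the thesis's segmental DTW and
# distance details are simplified here.
import numpy as np
from sklearn.mixture import GaussianMixture

def posteriorgram(gmm: GaussianMixture, frames: np.ndarray) -> np.ndarray:
    """Per-frame posterior probabilities over the GMM components."""
    return gmm.predict_proba(frames)          # shape: (n_frames, n_components)

def dtw_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Dynamic time warping over a -log inner-product frame distance."""
    d = -np.log(np.clip(p @ q.T, 1e-10, None))     # local distance matrix
    acc = np.full((len(p) + 1, len(q) + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, len(p) + 1):
        for j in range(1, len(q) + 1):
            acc[i, j] = d[i - 1, j - 1] + min(acc[i - 1, j],
                                              acc[i, j - 1],
                                              acc[i - 1, j - 1])
    return acc[len(p), len(q)] / (len(p) + len(q))  # length-normalised score

# Usage: fit the GMM on unlabelled speech frames, then rank test utterances
# by their DTW distortion against a keyword example's posteriorgram.
# gmm = GaussianMixture(n_components=50).fit(all_training_frames)
# scores = [dtw_distance(posteriorgram(gmm, kw), posteriorgram(gmm, utt))
#           for utt in test_utterances]
```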

    Productivity Measurement of Call Centre Agents using a Multimodal Classification Approach

    Call centre channels play a cornerstone role in business communications and transactions, especially in challenging business situations. Operational efficiency, service quality, and resource productivity are core aspects of call centres' competitive advantage in rapid market competition. Performance evaluation in call centres is challenging due to subjective human evaluation, the manual sorting of massive call volumes, and inequality in evaluations caused by different raters. These challenges harm operational efficiency and lead to frustrated customers. This study aims to automate performance evaluation in call centres using various deep learning approaches. Calls recorded in a call centre are modelled and classified into high- or low-performance evaluations, categorised as productive or nonproductive calls. The proposed conceptual model uses a deep learning network approach to model the recorded calls as text and speech. It is based on the following: 1) focus on the technical part of agent performance, 2) objective evaluation of the corpus, 3) extension of features for both text and speech, and 4) combination of the best accuracy from text and speech data using a multimodal structure. Accordingly, a diarisation algorithm separates the parts of the call where the agent is talking from those where the customer is. Manual annotation is also necessary to divide the modelling corpus into productive and nonproductive calls (supervised training). Krippendorff's alpha was applied to avoid subjectivity in the manual annotation. Arabic speech recognition is then developed to transcribe the speech into text. The text features are word embeddings produced by an embedding layer. For the speech features, several attempts are made to use Mel Frequency Cepstral Coefficients (MFCC) augmented with Low-Level Descriptors (LLD) to improve classification accuracy. The data modelling architectures for speech and text are based on CNNs, BiLSTMs, and an attention layer. The multimodal approach then combines the generated models to improve performance accuracy by concatenating the text and speech models using a joint representation methodology. The main contributions of this thesis are:
    • Developing an Arabic speech recognition method for automatic transcription of speech into text.
    • Designing several DNN architectures to improve performance evaluation using speech features based on MFCC and LLD.
    • Developing a Max Weight Similarity (MWS) function to outperform the SoftMax function used in the attention layer.
    • Proposing a multimodal approach that combines the text and speech models for the best performance evaluation.
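    The joint-representation idea, concatenating a text branch and a speech branch before the final classifier, can be sketched as below. This is a minimal Keras sketch under assumed shapes and layer sizes; the thesis's actual architectures, attention layer and MWS function are not reproduced here.

```python
# Minimal sketch of a multimodal joint representation: a text branch and a
# speech (MFCC/LLD) branch are concatenated before the classifier.
# All shapes, layer sizes and names are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def build_multimodal_classifier(vocab_size=10000, max_words=200,
                                n_frames=500, n_speech_feats=39):
    # Text branch: word tokens -> embedding layer -> BiLSTM.
    text_in = layers.Input(shape=(max_words,), dtype="int32", name="text_tokens")
    t = layers.Embedding(vocab_size, 128)(text_in)
    t = layers.Bidirectional(layers.LSTM(64))(t)

    # Speech branch: MFCC/LLD frames -> 1D CNN -> global pooling.
    speech_in = layers.Input(shape=(n_frames, n_speech_feats), name="speech_frames")
    s = layers.Conv1D(64, 5, activation="relu")(speech_in)
    s = layers.GlobalMaxPooling1D()(s)

    # Joint representation and binary productive/nonproductive output.
    joint = layers.Concatenate()([t, s])
    out = layers.Dense(1, activation="sigmoid")(joint)

    model = keras.Model([text_in, speech_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```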

    Design of an Offline Handwriting Recognition System Tested on the Bangla and Korean Scripts

    This dissertation presents a flexible and robust offline handwriting recognition system that is tested on the Bangla and Korean scripts. Offline handwriting recognition is one of the most challenging and yet-to-be-solved problems in machine learning. While a few popular scripts (like Latin) have received a lot of attention, many other widely used scripts (like Bangla) have seen very little progress. Features of Bangla such as connectedness and vowels structured as diacritics make it a challenging script to recognize. A simple and robust design for offline recognition is presented which not only works reliably but can also be used for almost any alphabetic writing system. The framework has been rigorously tested on Bangla, and experiments on the Korean script, whose two-dimensional arrangement of characters makes it a challenge to recognize, demonstrate how it can be adapted to other scripts. The base of this design is a character spotting network which detects the locations of different script elements (such as characters and diacritics) in an unsegmented word image. A transcript is formed from the detected classes based on their corresponding location information. This is the first reported lexicon-free offline recognition system for Bangla and achieves a Character Recognition Accuracy (CRA) of 94.8%. It is also one of the most flexible architectures ever presented. Recognition of Korean was achieved with a 91.2% CRA. In addition, a powerful technique of autonomous tagging was developed which can drastically reduce the effort of preparing a dataset for any script. The combination of the character spotting method and autonomous tagging brings the entire offline recognition problem very close to a singular solution. Additionally, a database named the Boise State Bangla Handwriting Dataset was developed. This is one of the richest offline datasets currently available for Bangla, and it has been made publicly accessible to accelerate research progress. Many other tools were developed, and experiments were conducted to more rigorously validate this framework by evaluating the method against external datasets (CMATERdb 1.1.1, the Indic Word Dataset, and REID2019: Early Indian Printed Documents). Offline handwriting recognition is an extremely promising technology, and the outcome of this research moves the field significantly ahead.
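    The step of forming a transcript from the detected classes and their locations can be sketched roughly as follows. This is a simplified stand-in that assumes each detection is a (label, confidence, left-x) tuple from a character-spotting detector and orders characters left to right; the dissertation's handling of diacritics and Korean's two-dimensional layout is not covered.

```python
# Minimal sketch: assemble a transcript from character-spotting detections
# on an unsegmented word image. Detection format and thresholds are
# illustrative assumptions.
from typing import List, Tuple

Detection = Tuple[str, float, float]   # (class label, confidence, left x)

def detections_to_transcript(detections: List[Detection],
                             min_conf: float = 0.5) -> str:
    """Keep confident detections and order them left to right."""
    kept = [d for d in detections if d[1] >= min_conf]
    kept.sort(key=lambda d: d[2])              # sort by horizontal position
    return "".join(label for label, _, _ in kept)

# Usage with hypothetical detections on a Bangla word image:
# detections_to_transcript([("ক", 0.97, 12.0), ("লা", 0.91, 48.0)])
```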

    Evaluation of preprocessors for neural network speaker verification


    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computers analyse images as effectively as the human visual system does. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools that are sometimes only for entertainment but quite often significantly increase our safety. The range of practical applications of image processing algorithms is particularly wide. Moreover, the rapid growth in computing power and efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need for novel approaches.

    A machine learning taxonomic classifier for science publications

    Integrated master's dissertation in Engineering and Management of Information Systems.
    The evolution of scientific production, together with the growing cross-domain collaboration of knowledge and the increasing co-authorship of scientific works, remains supported by manual, highly subjective classification processes that are prone to misinterpretation. The very taxonomy on which this classification process is based is not consensual: governmental organizations resort to taxonomies that do not keep up with changes in scientific areas, while indexers and repositories seek to track those changes. We find a reality distinct from what is expected, in which the domains under which scientific work is recorded can easily misrepresent the work itself. The taxonomy applied today by governmental bodies, such as the one that regulates scientific production in Portugal, is insufficient and limiting: it promotes classification into areas that are merely close to the intended ones and therefore has great potential for error. An automatic classification process based on machine learning algorithms presents itself as a possible solution to the subjectivity problem in classification, and while it does not solve the issue of taxonomy mismatch, this work demonstrates the possibility with proven results. In this work, we propose a classification taxonomy and develop a process based on machine learning algorithms to solve the classification problem. We also present a set of directions for future work towards a classification that is increasingly representative of the evolution of science, one that is not intended to be airtight but flexible and perhaps increasingly based on phenomena rather than disciplines alone.
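    The kind of automatic classification described can be illustrated with a very small sketch. The dissertation does not specify its algorithms here, so TF-IDF features with a linear SVM are used only as an assumed baseline, and the example texts and taxonomy labels are invented.

```python
# Minimal sketch of automatic taxonomic classification of publications from
# their abstracts. Features, model and labels are illustrative assumptions,
# not the dissertation's actual pipeline.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

abstracts = [
    "Deep neural networks for automatic speech recognition ...",
    "Protein folding dynamics studied by molecular simulation ...",
]
labels = ["Computer Science", "Biochemistry"]   # illustrative taxonomy classes

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                           LinearSVC())
classifier.fit(abstracts, labels)

print(classifier.predict(["Convolutional networks for image classification"]))
```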

    Bounded Support Finite Mixtures for Multidimensional Data Modeling and Clustering

    Data is ever increasing with today's many technological advances, in terms of both quantity and dimensionality. Such growth has posed various challenges for statistical and data analysis methods and hence requires the development of new, powerful models for transforming the data into useful information. It was therefore necessary to explore and develop new ideas and techniques to keep pace with challenging learning applications in data analysis, modeling and pattern recognition. Finite mixture models have received considerable attention due to their ability to model high-dimensional data effectively and efficiently. In mixtures, the choice of distribution is a critical issue, and it has been observed that in many real-life applications the data exist in a bounded support region, whereas the distributions adopted to model the data have unbounded support. It was therefore proposed to define bounded support distributions in mixtures and to introduce a modified parameter estimation procedure that takes the bounded support of the underlying distributions into account. The main goal of this thesis is to introduce bounded support mixtures, their parameter estimation, automatic determination of the number of mixture components, and the application of mixtures in feature extraction techniques, to improve the overall learning pipeline. Five different unbounded support distributions are selected for applying the idea of bounded support mixtures and modified parameter estimation using maximum likelihood via Expectation-Maximization (EM). The probability density functions selected for this thesis are the Gaussian, Laplace, generalized Gaussian, asymmetric Gaussian and asymmetric generalized Gaussian distributions, chosen for their flexibility and broad applications in speech and image processing. The proposed bounded support mixtures are applied to various speech and image datasets to create learning applications that demonstrate the effectiveness of the proposed approach. Mixtures of bounded Gaussian and bounded Laplace distributions are also applied in feature extraction and data representation techniques, which further improves the learning and modeling capability of the underlying models. The proposed feature representation via bounded support mixtures is applied to both speech and image datasets to examine its performance. Automatic selection of the number of mixture components is very important in clustering, and parameter learning is highly dependent on model selection; it is proposed for the mixtures of bounded Gaussian and bounded asymmetric generalized Gaussian distributions using the minimum message length criterion. The proposed model selection criterion and parameter learning are applied simultaneously to speech and image datasets for both models to examine the model selection performance in clustering.
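    The core bounded-support idea, renormalising each component's density over the bounded region and using those densities in the EM responsibilities, can be sketched for a one-dimensional Gaussian mixture as follows. This is a minimal sketch under assumed names and a shared interval [a, b]; the thesis's full multivariate derivations, M-step and other distributions are omitted.

```python
# Minimal sketch of a bounded-support Gaussian mixture E-step: each
# component's density is truncated and renormalised to [a, b], and the
# responsibilities use these bounded densities.
import numpy as np
from scipy.stats import norm

def bounded_gaussian_pdf(x, mu, sigma, a, b):
    """Gaussian density truncated and renormalised to the support [a, b]."""
    inside = (x >= a) & (x <= b)
    mass = norm.cdf(b, mu, sigma) - norm.cdf(a, mu, sigma)   # mass inside [a, b]
    return np.where(inside, norm.pdf(x, mu, sigma) / mass, 0.0)

def e_step(x, weights, mus, sigmas, a, b):
    """Responsibilities of each bounded component for each data point."""
    dens = np.stack([w * bounded_gaussian_pdf(x, m, s, a, b)
                     for w, m, s in zip(weights, mus, sigmas)], axis=1)
    return dens / dens.sum(axis=1, keepdims=True)

# Usage on toy data bounded in [0, 1]:
# x = np.random.uniform(0, 1, size=500)
# resp = e_step(x, weights=[0.5, 0.5], mus=[0.3, 0.7],
#               sigmas=[0.1, 0.1], a=0.0, b=1.0)
```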

    Text Mining for Protein-Protein Docking

    Scientific publications are a rich but underutilized source of structural and functional information on proteins and protein interactions. Although the scientific literature is intended for a human audience, text mining makes it amenable to algorithmic processing. It can focus on extracting information relevant to protein binding modes, providing specific residues that are likely to be at the binding site for a given pair of proteins. Knowledge of such residues is a powerful guide for the structural modeling of protein-protein complexes. This work combines and extends two well-established areas of research: the non-structural identification of protein-protein interactors, and the structure-based detection of functional (small-ligand) sites on proteins. Text-mining-based constraints for protein-protein docking are a unique research direction that had not been explored prior to this study. Although text mining by itself is unlikely to produce docked models, it is useful for scoring docking predictions. Our results show that, despite the presence of false positives, text mining significantly improves docking quality. To purge false positives from the mined residues, this work explores, alongside the basic text mining, enhanced text mining techniques using various language processing tools: from simple dictionaries to WordNet (a generic word ontology), parse trees, word vectors and deep recursive neural networks. The results significantly increase confidence in the generated docking constraints and provide guidelines for the future development of this modeling approach. With the rapid growth of the body of publicly available biomedical literature and new, evolving text-mining methodologies, the approach will become more powerful and better suited to the needs of the biomedical community.
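    The use of mined residues to score docking predictions can be illustrated with a small sketch. The study's exact scoring function is not given here; the sketch simply rewards docked models that place text-mined residues at the predicted interface, and the residue numbering, interface definition and weighting are assumptions.

```python
# Minimal sketch of re-scoring docking models with text-mined residues:
# models that place mined residues at the predicted interface get a bonus.
# All inputs and the weighting scheme are illustrative assumptions.
from typing import Set

def text_mining_score(interface_residues: Set[int],
                      mined_residues: Set[int]) -> float:
    """Fraction of mined residues that land at the model's interface."""
    if not mined_residues:
        return 0.0
    return len(interface_residues & mined_residues) / len(mined_residues)

def rescore(docking_score: float, interface_residues: Set[int],
            mined_residues: Set[int], weight: float = 1.0) -> float:
    """Combine the original docking score with the text-mining bonus."""
    return docking_score + weight * text_mining_score(interface_residues,
                                                      mined_residues)

# Usage with hypothetical residue numbers:
# rescore(docking_score=-42.0, interface_residues={15, 37, 88, 102},
#         mined_residues={37, 102, 200})
```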