61 research outputs found

    An Empirical Evaluation of Zero Resource Acoustic Unit Discovery

    Acoustic unit discovery (AUD) is the process of automatically identifying a categorical acoustic unit inventory from speech and producing the corresponding acoustic unit tokenizations. AUD provides an important avenue for unsupervised acoustic model training in a zero-resource setting, where expert-provided linguistic knowledge and transcribed speech are unavailable. To further facilitate the zero-resource AUD process, in this paper we demonstrate that acoustic feature representations can be significantly improved by (i) performing linear discriminant analysis (LDA) in an unsupervised, self-trained fashion, and (ii) leveraging resources from other languages by building a multilingual bottleneck (BN) feature extractor for effective cross-lingual generalization. Moreover, we perform comprehensive evaluations of AUD efficacy on multiple downstream speech applications; their correlated performance suggests that AUD evaluations are feasible with alternative language resources when only a subset of the usual evaluation resources is available in typical zero-resource applications.
    Comment: 5 pages, 1 figure; accepted for publication at ICASSP 201
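    The self-trained LDA step can be pictured as follows: an initial unsupervised tokenization supplies the class labels that LDA would normally get from transcriptions. The sketch below is a minimal illustration of that idea, assuming k-means cluster labels stand in for the first-pass acoustic unit tokens; the feature dimensionality and cluster count are illustrative, not the paper's settings.

```python
# Minimal sketch of unsupervised, self-trained LDA for acoustic features.
# Pseudo-labels from k-means stand in for a first-pass acoustic unit
# tokenization; the 39-dim features and 50 units are illustrative values.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
frames = rng.standard_normal((10_000, 39))   # stand-in for MFCC(+delta) frames

# Step 1: first-pass "tokenization" -- cluster frames into pseudo acoustic units.
pseudo_units = KMeans(n_clusters=50, n_init=10, random_state=0).fit_predict(frames)

# Step 2: self-trained LDA -- treat the pseudo units as class labels and
# project the frames onto the resulting discriminant directions.
lda = LinearDiscriminantAnalysis(n_components=30)
lda_frames = lda.fit_transform(frames, pseudo_units)

# The projected features (or a re-tokenization computed from them) would then
# be fed back into the acoustic unit discovery model.
print(lda_frames.shape)   # (10000, 30)
```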

    On User Modelling for Personalised News Video Recommendation

    In this paper, we introduce a novel approach for modelling user interests. Our approach captures users' evolving information needs, identifies different aspects of those needs, and recommends relevant news items to the users. We present our approach within the context of personalised news video retrieval. A news video data set is used for experimentation, and we employ a simulated user evaluation.
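    One simple way to realise "evolving information needs" is a time-decayed interest profile: each interaction strengthens the terms it contains while older evidence fades. The snippet below is a hedged sketch of that idea only; the decay factor, the bag-of-words item representation, and the dot-product ranking are assumptions, not the authors' model.

```python
# Hedged sketch: an exponentially decayed term-weight profile updated from
# watched news items and used to rank candidate items. All values invented.
from collections import Counter, defaultdict

DECAY = 0.8   # assumed: older interactions contribute less

def update_profile(profile, watched_terms):
    """Decay existing weights, then add evidence from the latest watched item."""
    for term in list(profile):
        profile[term] *= DECAY
    for term, count in Counter(watched_terms).items():
        profile[term] += count
    return profile

def score(profile, item_terms):
    """Dot-product relevance of a candidate news item against the profile."""
    return sum(profile.get(term, 0.0) for term in item_terms)

profile = defaultdict(float)
update_profile(profile, ["election", "economy", "debate"])
update_profile(profile, ["economy", "inflation"])

candidates = {"budget story": ["economy", "budget"], "sports story": ["football"]}
print(max(candidates, key=lambda name: score(profile, candidates[name])))  # budget story
```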

    Distantly Labeling Data for Large Scale Cross-Document Coreference

    Cross-document coreference, the problem of resolving entity mentions across multi-document collections, is crucial to automated knowledge base construction and data mining tasks. However, the scarcity of large labeled data sets has hindered supervised machine learning research for this task. In this paper we develop and demonstrate an approach based on "distantly labeling" a data set from which we can train a discriminative cross-document coreference model. In particular, we build a dataset of more than a million person mentions extracted from 3.5 years of New York Times articles, leverage Wikipedia for distant labeling with a generative model (and measure the reliability of such labeling), and then train and evaluate a conditional random field coreference model with factors on cross-document entities as well as mention pairs. This coreference model obtains high accuracy in resolving mentions and entities that are not present in the training data, indicating applicability to non-Wikipedia data. Given the large amount of data, our work also demonstrates the scalability of our approach.
    Comment: 16 pages, submitted to ECML 201
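    The distant-labelling idea can be illustrated with a toy example: two mentions receive a positive label when both can be tied to the same Wikipedia entity, a negative label when they tie to different entities, and no label otherwise. The dictionary and mentions below are invented, and the paper's actual labelling uses a generative model over New York Times mentions rather than an exact-match lookup.

```python
# Toy illustration of distant labelling for cross-document coreference.
# WIKI_TITLES is an invented stand-in for a Wikipedia-derived alias table.
WIKI_TITLES = {
    "barack obama": "Barack_Obama",
    "president obama": "Barack_Obama",
    "michelle obama": "Michelle_Obama",
}

def distant_label(mention_a, mention_b):
    """Return 1 (coreferent), 0 (not coreferent), or None (no distant evidence)."""
    title_a = WIKI_TITLES.get(mention_a.lower())
    title_b = WIKI_TITLES.get(mention_b.lower())
    if title_a is None or title_b is None:
        return None
    return int(title_a == title_b)

pairs = [("Barack Obama", "President Obama"),
         ("Barack Obama", "Michelle Obama"),
         ("Barack Obama", "B. Obama")]
print([(a, b, distant_label(a, b)) for a, b in pairs])
# [('Barack Obama', 'President Obama', 1), ('Barack Obama', 'Michelle Obama', 0),
#  ('Barack Obama', 'B. Obama', None)]
```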

    Semantic User Modelling for Personal News Video Retrieval

    There is a need for personalised news video retrieval due to the explosion of news material available through broadcast and other channels. In this work we introduce a semantic user modelling technique to capture users' evolving information needs. Our approach exploits the Linked Open Data Cloud to capture and organise users' interests, and the organised interests are used to retrieve and recommend news stories. The system monitors user interaction with its interface and uses this information to capture users' evolving interests in the news. New relevant material is fetched and presented to the user based on those interests. A user-centred evaluation was conducted, and the results show the promise of our approach.
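    As an illustration of drawing on the Linked Open Data Cloud, an interest detected from the user's interactions can be expanded with related concepts from DBpedia and then matched against incoming news stories. The sketch below assumes the public DBpedia SPARQL endpoint and the SPARQLWrapper library (and needs network access); the paper does not specify its exact tooling or vocabulary.

```python
# Hedged sketch: expand a user interest with DBpedia categories via SPARQL.
# The endpoint, library, and dct:subject expansion are assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

def related_categories(resource: str, limit: int = 5):
    """Fetch DBpedia categories for a resource the user has shown interest in."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        PREFIX dct: <http://purl.org/dc/terms/>
        SELECT ?category WHERE {{
            <http://dbpedia.org/resource/{resource}> dct:subject ?category
        }} LIMIT {limit}
    """)
    bindings = sparql.query().convert()["results"]["bindings"]
    return [b["category"]["value"] for b in bindings]

# Expand an interest inferred from the user's viewing history.
print(related_categories("Climate_change"))
```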

    A Comparative Study of Machine Learning Approaches - SVM and LS-SVM using a Web Search Engine Based Application

    Semantic similarity refers to the concept by which a set of documents, or words within those documents, are assigned a weight based on their meaning. Accurate measurement of such similarity plays an important role in Natural Language Processing and Information Retrieval tasks such as Query Expansion and Word Sense Disambiguation. Page counts and snippets retrieved by search engines help to measure the semantic similarity between two words: different similarity scores are calculated for the queried conjunctive word, and a lexical pattern extraction algorithm identifies the patterns in the snippets. Two machine learning approaches, the Support Vector Machine and the Latent Structural Support Vector Machine, are used to measure semantic similarity between two words by combining the similarity scores from page counts with clusters of patterns retrieved from the snippets. A comparative study is made between the similarity results from both approaches. The SVM distinguishes between synonymous and non-synonymous words using a maximum-margin hyperplane; the LS-SVM gives a more accurate result by modelling the latent values in the dataset.
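    The page-count side of the feature vector is commonly built from co-occurrence statistics such as WebJaccard, WebOverlap, WebDice, and WebPMI; the abstract does not name the exact measures, so the sketch below should be read as an assumption about this line of work. The hit counts and total index size N are invented, and the resulting scores would be combined with the snippet pattern clusters before training the SVM or LS-SVM.

```python
# Sketch of page-count similarity scores for a word pair (assumed measures).
import math

N = 1e10   # assumed number of pages indexed by the search engine

def web_jaccard(p, q, pq):
    return 0.0 if pq == 0 else pq / (p + q - pq)

def web_overlap(p, q, pq):
    return 0.0 if pq == 0 else pq / min(p, q)

def web_dice(p, q, pq):
    return 0.0 if pq == 0 else 2 * pq / (p + q)

def web_pmi(p, q, pq):
    return 0.0 if pq == 0 else math.log2((pq / N) / ((p / N) * (q / N))) / math.log2(N)

# Illustrative hit counts for "car", "automobile", and the conjunctive query
# "car AND automobile".
p, q, pq = 2.5e8, 1.2e8, 4.0e7
features = [f(p, q, pq) for f in (web_jaccard, web_overlap, web_dice, web_pmi)]
print(features)   # page-count part of the feature vector for one word pair
```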

    Person Name Recognition in ASR Outputs Using Continuous Context Models

    The detection and characterization, in audiovisual documents, of speech utterances where person names are pronounced is an important cue for spoken content analysis. This paper tackles the problem of retrieving spoken person names from the 1-best ASR outputs of broadcast TV shows. Our assumption is that a person name is a latent variable produced by the lexical context it appears in; a spoken name can therefore be derived from ASR outputs even if it was never proposed by the speech recognition system. A new context model is proposed to capture the lexical and structural information surrounding a spoken name. The fundamental hypothesis of this study has been validated on broadcast TV documents available in the context of the REPERE challenge.
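    The core assumption, that a name is recoverable from its surrounding words, can be shown with a deliberately simple stand-in: score candidate names by how well the ASR context window matches contexts observed for each name in training data. The names, context words, and bag-of-words scoring below are invented; the paper's continuous context models are considerably richer than this.

```python
# Toy illustration: infer a (possibly out-of-vocabulary) spoken person name
# from the lexical context in the ASR output. All data below is invented.
from collections import Counter

# Context words observed around each name in training transcripts.
NAME_CONTEXTS = {
    "francois hollande": Counter(["president", "elysee", "government", "france"]),
    "zinedine zidane": Counter(["coach", "match", "real", "madrid", "football"]),
}

def infer_name(asr_window):
    """Rank candidate names by overlap between the ASR context and training contexts."""
    window = Counter(word.lower() for word in asr_window)
    def overlap(name):
        return sum(min(window[w], NAME_CONTEXTS[name][w]) for w in window)
    return max(NAME_CONTEXTS, key=overlap)

# ASR window where the name itself was misrecognised or out of vocabulary.
window = ["the", "president", "met", "the", "government", "in", "france"]
print(infer_name(window))   # francois hollande
```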

    Domain-independent entity coreference in RDF graphs
