    A Unified Multilingual Semantic Representation of Concepts

    Semantic representation lies at the core of several applications in Natural Language Processing. However, most existing semantic representation techniques cannot be used effectively to represent individual word senses. We put forward a novel multilingual concept representation, called MUFFIN, which not only enables accurate representation of word senses in different languages, but also provides multiple advantages over existing approaches. MUFFIN represents a given concept in a unified semantic space irrespective of the language of interest, enabling cross-lingual comparison of different concepts. We evaluate our approach on two evaluation benchmarks, semantic similarity and Word Sense Disambiguation, reporting state-of-the-art performance on several standard datasets.
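
    Because MUFFIN places concepts from any language in one shared vector space, cross-lingual comparison reduces to ordinary vector similarity. The minimal Python sketch below illustrates such a comparison with cosine similarity; the concept IDs and vector values are invented for illustration and are not taken from MUFFIN itself.

        import numpy as np

        def cosine(u, v):
            """Cosine similarity between two concept vectors."""
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        # Hypothetical unified-space vectors for two concepts looked up in
        # different languages (IDs and values invented for this sketch).
        concept_vectors = {
            "concept:car":       np.array([0.12, 0.85, 0.03, 0.41]),  # from English "car"
            "concept:bicicleta": np.array([0.15, 0.80, 0.05, 0.38]),  # from Spanish "bicicleta"
        }

        sim = cosine(concept_vectors["concept:car"], concept_vectors["concept:bicicleta"])
        print(f"cross-lingual concept similarity: {sim:.3f}")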

    A Framework for the Construction of Monolingual and Cross-lingual Word Similarity Datasets

    Despite being one of the most popular tasks in lexical semantics, word similarity has often been limited to the English language. Other languages, even widely spoken ones such as Spanish, do not have a reliable word similarity evaluation framework. We put forward robust methodologies for extending existing English datasets to other languages, at both monolingual and cross-lingual levels. We propose an automatic standardization for the construction of cross-lingual similarity datasets, and provide an evaluation demonstrating its reliability and robustness. Based on our procedure, and taking the RG-65 word similarity dataset as a reference, we release two high-quality Spanish and Farsi (Persian) monolingual datasets, and fifteen cross-lingual datasets for six languages: English, Spanish, French, German, Portuguese and Farsi.
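
    Datasets built with this procedure are typically used to score a similarity model by ranking correlation against the human ratings. The sketch below shows that standard protocol with Spearman's rho; the word pairs, gold ratings and model scores are invented, and only the RG-65-style 0-4 rating scale comes from the abstract.

        from scipy.stats import spearmanr

        # Invented English-Spanish pairs with gold ratings on a 0-4 scale;
        # a real cross-lingual RG-65 dataset contains 65 such pairs.
        gold = [
            ("car",  "automóvil", 3.9),
            ("food", "fruta",     2.7),
            ("noon", "cuerda",    0.1),
        ]

        # Stand-in scores for whatever similarity model is being evaluated.
        model_scores = {
            ("car",  "automóvil"): 0.91,
            ("food", "fruta"):     0.55,
            ("noon", "cuerda"):    0.08,
        }

        human  = [rating for _, _, rating in gold]
        system = [model_scores[(w1, w2)] for w1, w2, _ in gold]

        # Spearman's rho compares rankings, so the two scales need not match.
        rho, _ = spearmanr(human, system)
        print(f"Spearman correlation: {rho:.3f}")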

    Using the Outlier Detection Task to Evaluate Distributional Semantic Models

    In this article, we define the outlier detection task and use it to compare neural-based word embeddings with transparent count-based distributional representations. Using the English Wikipedia as a text source to train the models, we observed that embeddings outperform count-based representations when their contexts are made up of bags of words. However, there are no sharp differences between the two models if the word contexts are defined as syntactic dependencies. In general, syntax-based models tend to perform better than those based on bags of words for this specific task. Similar experiments were carried out for Portuguese, with similar results. The test datasets we have created for the outlier detection task in English and Portuguese are freely available. This work was supported by a 2016 BBVA Foundation Grant for Researchers and Cultural Creators and by Project TELEPARES, Ministry of Economy and Competitiveness (FFI2014-51978-C2-1-R). It has received financial support from the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2016–2019, ED431G/08) and the European Regional Development Fund (ERDF).
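
    In the outlier detection task, a model receives a small set of words from one semantic category plus one intruder and must single out the intruder. A common scoring scheme, sketched below with toy vectors standing in for trained embeddings, ranks each word by its average similarity to the others (its compactness) and predicts the lowest-scoring word as the outlier.

        import numpy as np

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        def detect_outlier(vectors):
            """Predict the word whose average similarity to the rest is lowest."""
            def compactness(w):
                others = [v for o, v in vectors.items() if o != w]
                return np.mean([cosine(vectors[w], v) for v in others])
            return min(vectors, key=compactness)

        # Toy vectors: a tight cluster of fruit words plus one topical outlier.
        rng = np.random.default_rng(0)
        fruit_axis = rng.normal(size=50)
        vectors = {w: fruit_axis + 0.1 * rng.normal(size=50)
                   for w in ("apple", "pear", "plum", "grape")}
        vectors["helicopter"] = rng.normal(size=50)

        print(detect_outlier(vectors))  # expected: helicopter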

    Computing semantic relatedness using DBpedia

    Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach for computing the semantic relatedness of terms using the knowledge base of DBpedia, a community effort to extract structured information from Wikipedia. Several approaches to extracting semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti, which extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site related to alternative music, and the results of that experiment are reported in this paper.
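
    The abstract does not spell out Shakti's weighting function, so the sketch below only illustrates the general idea of path-based relatedness on a toy graph: enumerate the simple paths connecting two concept nodes and sum a weight that decays with path length, so that short paths contribute more. The graph, decay factor and concept names are all invented.

        from collections import deque

        # Tiny stand-in for an ontological graph extracted from DBpedia;
        # edges are treated as undirected for the sketch.
        graph = {
            "Punk_rock":        {"Rock_music", "Ramones"},
            "Rock_music":       {"Punk_rock", "Indie_rock"},
            "Indie_rock":       {"Rock_music", "Alternative_rock"},
            "Alternative_rock": {"Indie_rock"},
            "Ramones":          {"Punk_rock"},
        }

        def paths(src, dst, max_edges=4):
            """Enumerate simple paths of up to max_edges edges from src to dst."""
            queue = deque([[src]])
            while queue:
                path = queue.popleft()
                if path[-1] == dst:
                    yield path
                elif len(path) <= max_edges:
                    for nxt in graph[path[-1]] - set(path):
                        queue.append(path + [nxt])

        def relatedness(a, b, decay=0.5):
            """Sum a length-decayed weight over all connecting paths."""
            return sum(decay ** (len(p) - 1) for p in paths(a, b))

        print(relatedness("Punk_rock", "Alternative_rock"))  # one 3-edge path -> 0.125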

    Automatic extraction of concepts from texts and applications

    The extraction of relevant terms from texts is an extensively researched task in Text Mining. Relevant terms have been applied in areas such as Information Retrieval or document clustering and classification. However, relevance has a rather fuzzy nature, since the classification of some terms as relevant or not relevant is not consensual. For instance, while words such as "president" and "republic" are generally considered relevant by human evaluators, and words like "the" and "or" are not, terms such as "read" and "finish" gather no consensus about their semantics and informativeness. Concepts, on the other hand, have a less fuzzy nature. Therefore, instead of deciding on the relevance of a term during the extraction phase, as most extractors do, I propose to first extract from texts what I have called generic concepts (all concepts) and postpone the decision about relevance to downstream applications, according to their needs. For instance, a keyword extractor may assume that the most relevant keywords are the most frequent concepts in the documents. Moreover, most statistical extractors are incapable of extracting single-word and multi-word expressions using the same methodology. These factors led to the development of the ConceptExtractor, a statistical and language-independent methodology which is explained in Part I of this thesis. In Part II, I show that the automatic extraction of concepts has great applicability. For instance, for the extraction of keywords from documents, using the Tf-Idf metric only on concepts yields better results than using Tf-Idf without concepts, especially for multi-word expressions. In addition, since concepts can be semantically related to other concepts, this allows us to build implicit document descriptors. These applications led to published work. Finally, I present some work that, although not yet published, is briefly discussed in this document. Fundação para a Ciência e a Tecnologia - SFRH/BD/61543/200
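
    As an illustration of the keyword application, the sketch below restricts Tf-Idf scoring to a given concept vocabulary using scikit-learn. The concept list and documents are invented, and the fixed list merely stands in for the output of the statistical, language-independent ConceptExtractor.

        from sklearn.feature_extraction.text import TfidfVectorizer

        # Hypothetical output of a concept extractor: single- and multi-word
        # concepts found in the corpus.
        concepts = ["president", "republic", "prime minister", "election"]

        docs = [
            "The president of the republic met the prime minister.",
            "The prime minister called an early election.",
        ]

        # Scoring only concepts postpones the relevance decision to this
        # downstream application, as the thesis proposes.
        vec = TfidfVectorizer(vocabulary=concepts, ngram_range=(1, 2))
        tfidf = vec.fit_transform(docs)

        for doc_id, row in enumerate(tfidf.toarray()):
            ranked = sorted(zip(concepts, row), key=lambda t: -t[1])
            print(doc_id, [term for term, score in ranked if score > 0])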

    A question-answering machine learning system for FAQs

    With the increase in usage of and dependence on the internet for gathering information, it is now essential to retrieve information efficiently according to users’ needs. Question Answering (QA) systems aim to fulfill this need by trying to provide the most relevant answer to a user’s query expressed in natural language text or speech. Virtual assistants like Apple Siri and automated FAQ systems have become very popular, and the push to develop efficient, advanced and convenient QA systems keeps reaching new limits. Within the field of QA systems, this thesis addresses the problem of finding the FAQ question that is most similar to a user’s query. Finding semantic similarities between database question banks and natural language text is its foremost step. The work explores unsupervised approaches to measuring semantic similarity for developing a closed-domain QA system. To meet this objective, modern sentence representation techniques, such as BERT and FLAIR GloVe, are coupled with various similarity measures (cosine, Euclidean and Manhattan) to identify the best model. The developed models were tested with three FAQ datasets and the SemEval 2015 dataset for the English language; the best results were obtained by coupling BERT embeddings with the Euclidean distance measure, with a performance of 85.956% on one FAQ dataset. The model was also tested for the Portuguese language with the SNS24 dataset of the Portuguese health support phone line.
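
    Below is a minimal sketch of the retrieval step, using the sentence-transformers library to obtain BERT-based sentence embeddings and the Euclidean distance that performed best in the experiments. The model name and FAQ entries are placeholders rather than the thesis's actual setup.

        import numpy as np
        from sentence_transformers import SentenceTransformer

        # Any pretrained BERT-based sentence encoder works for the sketch.
        model = SentenceTransformer("all-MiniLM-L6-v2")

        faq = [
            "How do I renew my health card?",
            "What are the emergency line opening hours?",
            "How can I book a vaccination appointment?",
        ]
        faq_vecs = model.encode(faq)

        def best_match(query):
            """Return the FAQ question nearest to the query in embedding space."""
            q = model.encode([query])[0]
            dists = np.linalg.norm(faq_vecs - q, axis=1)  # Euclidean distance
            return faq[int(np.argmin(dists))]

        print(best_match("When is the emergency phone line open?"))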

    Evaluation of Distributional Models with the Outlier Detection Task

    In this article, we define the outlier detection task and use it to compare neural-based word embeddings with transparent count-based distributional representations. Using the English Wikipedia as a text source to train the models, we observed that embeddings outperform count-based representations when their contexts are made up of bags of words. However, there are no sharp differences between the two models if the word contexts are defined as syntactic dependencies. In general, syntax-based models tend to perform better than those based on bags of words for this specific task. Similar experiments were carried out for Portuguese, with similar results. The test datasets we have created for the outlier detection task in English and Portuguese are released.

    Web 2.0, language resources and standards to automatically build a multilingual named entity lexicon

    This paper proposes to advance the current state of the art in automatic Language Resource (LR) building by taking into consideration three elements: (i) the knowledge available in existing LRs, (ii) the vast amount of information available from the collaborative paradigm that has emerged from the Web 2.0 and (iii) the use of standards to improve interoperability. We present a case study in which a set of LRs for different languages (WordNet for English and Spanish and Parole-Simple-Clips for Italian) are extended with Named Entities (NE) by exploiting Wikipedia and the aforementioned LRs. The practical result is a multilingual NE lexicon connected to these LRs and to two ontologies: SUMO and SIMPLE. Furthermore, the paper addresses interoperability, an important problem currently affecting the Computational Linguistics area, by making use of the ISO LMF standard to encode this lexicon. The different steps of the procedure (mapping, disambiguation, extraction, NE identification and post-processing) are comprehensively explained and evaluated. The resulting resource contains 974,567, 137,583 and 125,806 NEs for English, Spanish and Italian, respectively. Finally, in order to check the usefulness of the constructed resource, we apply it in a state-of-the-art Question Answering system and evaluate its impact; the NE lexicon improves the system’s accuracy by 28.1%. Compared to previous approaches to building NE repositories, the current proposal represents a step forward in terms of automation, language independence, the number of NEs acquired and the richness of the information represented.
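
    For readers unfamiliar with LMF, the sketch below builds a schematic LMF-style entry for one named entity in Python. The element and feature names follow common LMF conventions, but the exact schema, the entity and its SUMO mapping are illustrative and not taken from the paper's lexicon.

        import xml.etree.ElementTree as ET

        # Schematic LMF-style lexical entry for one named entity; the real
        # lexicon follows the full ISO LMF schema and links senses to both
        # the SUMO and SIMPLE ontologies.
        entry = ET.Element("LexicalEntry")
        lemma = ET.SubElement(entry, "Lemma")
        ET.SubElement(lemma, "feat", att="writtenForm", val="Rome")
        sense = ET.SubElement(entry, "Sense")
        ET.SubElement(sense, "feat", att="externalReference", val="SUMO:City")
        ET.SubElement(sense, "feat", att="source", val="Wikipedia")

        print(ET.tostring(entry, encoding="unicode"))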