
    A Proposal for a Semantics of Syntactic Dependencies

    The main aim of this article is to propose a formal model of the process of semantic interpretation of syntactic dependencies. We define a syntactic dependency as a binary operation that takes as arguments the denotations of two related words (head and modifier) and returns a reorganization of those denotations. We assume that this binary operation plays an essential role in the process of semantic interpretation.
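The proposal can be sketched with a toy implementation in which a denotation is a weighted set of candidate senses; the sense labels and the compatibility relation below are invented for illustration, not taken from the article:

```python
# Toy model of the proposal: a word's denotation is a weighted set of
# candidate senses, and a syntactic dependency is a binary operation
# that returns both denotations re-ranked against each other.

def restrict(target, partner, compatible):
    """Boost each sense of `target` by the weight of compatible partner senses."""
    return {s: w + sum(pw for ps, pw in partner.items() if compatible(s, ps))
            for s, w in target.items()}

def dependency(head, modifier, compatible):
    """The binary operation: returns the pair of mutually restricted denotations."""
    return (restrict(head, modifier, compatible),
            restrict(modifier, head, compatible))

bank = {"financial": 1.0, "riverside": 1.0}   # ambiguous head
river = {"waterway": 1.0}                     # modifier
compatible = lambda s, p: {s, p} == {"riverside", "waterway"}
new_bank, new_river = dependency(bank, river, compatible)
# After composing "river bank", the geographic sense of the head dominates.
```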

    Compositional Distributional Semantics with Syntactic Dependencies and Selectional Preferences

    This article describes a compositional model based on syntactic dependencies, designed to build contextualized word vectors by following linguistic principles related to the concept of selectional preferences. The compositional strategy proposed in the current work has been evaluated on a syntactically controlled, multilingual dataset and compared with Transformer BERT-like models such as Sentence BERT, the state of the art in sentence similarity. For this purpose, we created two new test datasets for Portuguese and Spanish on the basis of the one defined for English, containing expressions with noun-verb-noun transitive constructions. The results we have obtained show that the linguistic-based compositional approach is competitive with Transformer models. This work has received financial support from the DOMINO project (PGC2018-102041-B-I00, MCIU/AEI/FEDER, UE), the eRisk project (RTI2018-093336-B-C21), the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2016-2019, ED431G/08; Groups of Reference: ED431C 2020/21; and ERDF 2014-2020: Call ED431G 2019/04) and the European Regional Development Fund (ERDF).

    The Meaning of Syntactic Dependencies

    This paper discusses the semantic content of syntactic dependencies. We assume that syntactic dependencies play a central role in the process of semantic interpretation. They are defined as selective functions on word denotations. Among their properties, special attention is paid to their ability to make interpretation co-compositional and incremental. To describe the semantic properties of dependencies, the paper focuses on two particular linguistic tasks: word sense disambiguation and attachment resolution. The latter task is performed using a strategy based on automatic acquisition from corpora.
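The corpus-based attachment strategy can be illustrated with a minimal sketch; the co-occurrence table below is a hand-made stand-in for frequencies acquired automatically from corpora:

```python
from collections import Counter

# Toy attachment resolution: attach an ambiguous prepositional phrase to
# the verb or to the noun, whichever head co-occurs more often with the
# preposition in a (here, invented) table of corpus counts.
cooc = Counter({("eat", "with"): 8, ("salad", "with"): 2,
                ("drink", "with"): 2, ("coffee", "with"): 5})

def resolve_attachment(verb, noun, prep):
    return "verb" if cooc[(verb, prep)] >= cooc[(noun, prep)] else "noun"

print(resolve_attachment("eat", "salad", "with"))    # "eat with a fork" reading
print(resolve_attachment("drink", "coffee", "with")) # "coffee with milk" reading
```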

    Using the Outlier Detection Task to Evaluate Distributional Semantic Models

    In this article, we define the outlier detection task and use it to compare neural-based word embeddings with transparent count-based distributional representations. Using the English Wikipedia as a text source to train the models, we observed that embeddings outperform count-based representations when their contexts are made up of bags of words. However, there are no sharp differences between the two models if the word contexts are defined as syntactic dependencies. In general, syntax-based models tend to perform better than those based on bags of words for this specific task. Similar experiments were carried out for Portuguese, with similar results. The test datasets we have created for the outlier detection task in English and Portuguese are freely available. This work was supported by a 2016 BBVA Foundation Grant for Researchers and Cultural Creators and by the project TELEPARES, Ministry of Economy and Competitiveness (FFI2014-51978-C2-1-R). It has also received financial support from the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2016–2019, ED431G/08) and the European Regional Development Fund (ERDF).
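The task itself is easy to sketch: given a set of related words plus an intruder, rank the words by how similar each is, on average, to the rest. The 2-d vectors below are illustrative stand-ins for real embeddings:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def detect_outlier(vectors):
    """Return the word with the lowest average similarity to the others."""
    def avg_sim(w):
        others = [v for x, v in vectors.items() if x != w]
        return sum(cosine(vectors[w], v) for v in others) / len(others)
    return min(vectors, key=avg_sim)

# Three fruits and one intruder, in a toy 2-d space.
vecs = {"apple": (1.0, 0.1), "pear": (0.9, 0.2),
        "plum": (1.0, 0.0), "truck": (0.0, 1.0)}
print(detect_outlier(vecs))  # truck
```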

    The role of syntactic dependencies in compositional distributional semantics

    This article provides a preliminary semantic framework for Dependency Grammar in which lexical words are semantically defined as contextual distributions (sets of contexts), while syntactic dependencies are compositional operations on word distributions. More precisely, any syntactic dependency uses the contextual distribution of the dependent word to restrict the distribution of the head, and makes use of the contextual distribution of the head to restrict that of the dependent word. The interpretation of composite expressions and sentences, which are analyzed as trees of binary dependencies, is performed by restricting the contexts of words dependency by dependency, in a left-to-right incremental way. Consequently, the meaning of the whole composite expression or sentence is not a single representation, but a list of contextualized senses, namely the restricted distributions of its constituent (lexical) words. We report the results of two large-scale corpus-based experiments on two different natural language processing applications: paraphrasing and compositional translation. This work is funded by Project TELPARES, Ministry of Economy and Competitiveness (FFI2014-51978-C2-1-R), and by the program "Ayuda Fundación BBVA a Investigadores y Creadores Culturales 2016".
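A minimal sketch of the restriction mechanism, with contextual distributions reduced to plain sets of context words (the lexicon below is invented for the example; real distributions would come from a corpus):

```python
# Toy contextual distributions: each word denotes the set of contexts it
# occurs in. A dependency restricts both the head's and the dependent's
# set to their shared contexts; a sentence is interpreted dependency by
# dependency, leaving each word with a contextualized sense.

lexicon = {
    "bank":  {"money", "loan", "water", "shore"},
    "river": {"water", "shore", "fish"},
}

def apply_dependency(dists, head, dependent):
    shared = dists[head] & dists[dependent]
    if shared:  # with plain sets, mutual restriction collapses to intersection
        dists[head] = shared
        dists[dependent] = shared

dists = {w: set(c) for w, c in lexicon.items()}
apply_dependency(dists, "bank", "river")  # composing "river bank"
print(sorted(dists["bank"]))              # only the riverside contexts survive
```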

    Using comparable corpora to filter bilingual dictionaries generated by transitivity

    This article proposes a method for building new bilingual dictionaries from existing ones and the exploitation of comparable corpora. More precisely, a new bilingual dictionary with pairs in two target languages is built in two steps. First, a noisy dictionary is generated by transitivity, by crossing two existing dictionaries that link each of the two target languages to an intermediary one. The crossing gives rise to a noisy resource because of the ambiguity of words in the intermediary language. Second, spurious translation pairs are filtered out by making use of a set of bilingual lexicons automatically extracted from comparable corpora. The quality of the filtered dictionary is very high, close to that of dictionaries built manually by lexicographers. We also report a case study where a new, noise-free English-Portuguese dictionary with more than 7,000 bilingual entries was automatically generated by our method.
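The two steps can be sketched as follows; the dictionary entries and the corpus-extracted lexicon are invented for illustration:

```python
# Step 1: generate a noisy English-Portuguese dictionary by transitivity,
# crossing an English-Spanish and a Spanish-Portuguese dictionary.
en_es = {"bank": ["banco", "orilla"], "letter": ["carta", "letra"]}
es_pt = {"banco": ["banco"], "orilla": ["margem"],
         "carta": ["carta"], "letra": ["letra"]}

noisy = {(en, pt) for en, pivots in en_es.items()
         for es in pivots for pt in es_pt.get(es, [])}

# Step 2: filter out spurious pairs (caused by pivot-word ambiguity)
# with a bilingual lexicon that, in the real method, is extracted
# automatically from comparable corpora.
corpus_lexicon = {("bank", "banco"), ("letter", "carta")}
filtered = noisy & corpus_lexicon
print(sorted(filtered))
```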

    Comparing Supervised Machine Learning Strategies and Linguistic Features to Search for Very Negative Opinions

    In this paper, we examine the performance of several classifiers in the process of searching for very negative opinions. More precisely, we carry out an empirical study that analyzes the influence of three types of linguistic features (n-grams, word embeddings, and polarity lexicons) and their combinations when they are used to feed different supervised machine learning classifiers: Naive Bayes (NB), Decision Tree (DT), and Support Vector Machine (SVM). The experiments we have carried out show that SVM clearly outperforms NB and DT in all datasets, whether the features are used individually or in combination. This research was funded by the project TelePares (MINECO, ref. FFI2014-51978-C2-1-R), the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2016-2019, ED431G/08) and the European Regional Development Fund (ERDF).
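Two of the three feature types can be sketched as a single feature dictionary of the kind fed to such classifiers; the tokenizer and the polarity lexicon are simplistic stand-ins (a real setup would add word embeddings and pass the result to NB, DT, or SVM):

```python
# Toy feature extraction combining word n-grams with a polarity-lexicon
# count. The lexicon entries are invented for the example.
NEGATIVE_LEXICON = {"awful", "terrible", "worst"}

def extract_features(text):
    tokens = text.lower().split()
    feats = {f"uni={t}": 1 for t in tokens}                              # unigrams
    feats.update({f"bi={a}_{b}": 1 for a, b in zip(tokens, tokens[1:])}) # bigrams
    feats["neg_lexicon_count"] = sum(t in NEGATIVE_LEXICON for t in tokens)
    return feats

feats = extract_features("the worst awful service")
```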

    A lexicon based method to search for extreme opinions

    Studies in sentiment analysis and opinion mining have focused on many aspects related to opinions, namely polarity classification by making use of positive, negative, or neutral values. However, most studies have overlooked the identification of extreme opinions (the most negative and most positive opinions), in spite of their vast significance in many applications. We use an unsupervised approach to search for extreme opinions, based on the automatic construction of a new lexicon containing the most negative and most positive words.
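A minimal sketch of lexicon-based extreme-opinion search; the lexicon entries and weights below are invented (in the article, the lexicon is built automatically):

```python
# Toy extreme-opinion lexicon: ordinary polarity words get weight +/-1,
# "extreme" words get +/-2; a text is flagged as a most-negative opinion
# when its summed score falls below a threshold.
EXTREME_LEXICON = {"horrendous": -2, "abysmal": -2, "bad": -1,
                   "superb": 2, "flawless": 2, "good": 1}

def score(text):
    return sum(EXTREME_LEXICON.get(t, 0) for t in text.lower().split())

def is_most_negative(text, threshold=-2):
    return score(text) <= threshold

print(is_most_negative("an abysmal and horrendous film"))  # True
print(is_most_negative("a bad film"))                      # False
```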

    Methods on Natural Language Processing for Information Retrieval

    In this article, we describe how different methods based on Natural Language Processing (NLP) can be integrated into Information Retrieval (IR) systems. More precisely, we study NLP strategies such as lemmatization, PoS tagging, named entity recognition, and dependency parsing. A large-scale evaluation on Spanish document collections allows us to verify whether these strategies, combined with less complex NLP techniques (e.g., tokenization and stopword removal), improve the quality of IR systems. The results reported at the end of the paper show that NLP-based strategies yield significant improvements.
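The shallow techniques mentioned (tokenization, stopword removal) combined with a lemmatization step can be sketched as an indexing pipeline; the stopword list and lemma table below are tiny illustrative stand-ins:

```python
STOPWORDS = {"the", "a", "of", "in"}
LEMMAS = {"systems": "system", "retrieving": "retrieve", "retrieval": "retrieve"}

def analyze(text):
    """Tokenize, drop stopwords, and map tokens to lemmas."""
    tokens = [t for t in text.lower().split() if t not in STOPWORDS]
    return [LEMMAS.get(t, t) for t in tokens]

def build_index(docs):
    """Inverted index from analyzed terms to document ids."""
    index = {}
    for doc_id, text in docs.items():
        for term in analyze(text):
            index.setdefault(term, set()).add(doc_id)
    return index

index = build_index({1: "the retrieval systems", 2: "a system of retrieving"})
print(sorted(index["retrieve"]))  # both documents match after lemmatization
```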

    Evaluating Contextualized Vectors Generated from Large Language Models and Compositional Strategies

    In this article, we compare contextualized vectors derived from large language models with those generated by means of dependency-based compositional techniques. For this purpose, we make use of a word-in-context similarity task. As all experiments are conducted for the Galician language, we created a new Galician evaluation dataset for this specific semantic task. The results show that compositional vectors derived from syntactic approaches based on selectional preferences are competitive with the contextual embeddings derived from neural-based large language models. This research was funded by the project "Nós: Galician in the society and economy of artificial intelligence", an agreement between the Xunta de Galicia and the University of Santiago de Compostela; by grant ED431G2019/04 from the Galician Ministry of Education, University and Professional Training and the European Regional Development Fund (ERDF/FEDER program), together with Groups of Reference ED431C 2020/21; and by Ramón y Cajal grant RYC2019-028473-I and grant ED431F 2021/01 (Galician Government).