174 research outputs found

    Boosting terminology extraction through crosslingual resources

    Terminology Extraction is an important Natural Language Processing task with multiple applications in many areas. The task has been approached from different points of view using different techniques. Language- and domain-independent systems have been proposed as well. Our contribution in this paper focuses on improving Terminology Extraction using crosslingual resources, specifically Wikipedia, and on the use of a variant of PageRank for scoring the candidate terms. Peer Reviewed. Postprint (published version).
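    The general idea of scoring candidate terms with a PageRank variant can be sketched as follows; the co-occurrence graph construction, damping factor, and toy terms below are illustrative assumptions, not the paper's exact formulation:

```python
# Hypothetical sketch: score candidate terms by running plain PageRank
# over a term co-occurrence graph. Graph construction and damping are
# illustrative assumptions, not the authors' method.
from collections import defaultdict

def pagerank(edges, damping=0.85, iterations=50):
    """Iterative PageRank over an undirected co-occurrence graph."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        new_rank = {}
        for v in nodes:
            # Each neighbour u distributes its rank evenly over its edges.
            incoming = sum(rank[u] / len(graph[u]) for u in graph[v])
            new_rank[v] = (1 - damping) / n + damping * incoming
        rank = new_rank
    return rank

# Toy co-occurrence edges between candidate terms.
edges = [("neural network", "deep learning"),
         ("deep learning", "backpropagation"),
         ("neural network", "backpropagation"),
         ("neural network", "perceptron")]
scores = pagerank(edges)
best = max(scores, key=scores.get)  # highest-scoring candidate term
```

    Better-connected candidates accumulate higher scores, so hub-like terms of the domain rise to the top of the ranking.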

    TALP-UPC at MediaEval 2014 Placing Task: Combining geographical knowledge bases and language models for large-scale textual georeferencing

    This paper describes our georeferencing approaches, experiments, and results at the MediaEval 2014 Placing Task evaluation. The task consists of predicting the most probable geographical coordinates of Flickr images and videos using their associated visual, audio, and metadata features. Our approaches used only the textual metadata annotations and tagsets of Flickr users. We used four approaches for this task: 1) an approach based on Geographical Knowledge Bases (GeoKB), 2) the Hiemstra Language Model (HLM) approach with Re-Ranking, 3) a combination of the GeoKB and the HLM (GeoFusion), and 4) a combination of the GeoFusion with an HLM model derived from the georeferenced pages of the English Wikipedia. The HLM approach with Re-Ranking showed the best performance within the 10m to 1km distance range. The GeoFusion approaches achieved the best results within the margins of error from 10km to 5000km. This work has been supported by the Spanish Research Department (SKATER Project: TIN2012-38584-C06-01). TALP Research Center is recognized as a Quality Research Group (2014 SGR 1338) by AGAUR, the Research Department of the Catalan Government. Peer Reviewed. Postprint (published version).
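    The Hiemstra Language Model scores a document by linearly interpolating document and collection term probabilities. A minimal sketch of that scoring idea, assuming a typical interpolation weight and toy per-location term counts (not the actual MediaEval data):

```python
# Illustrative sketch of Hiemstra's linear-interpolation language model,
# scoring candidate locations by their tag "documents". The lambda value
# and toy data are assumptions for illustration only.
from math import log

LAMBDA = 0.15  # weight on the document model (a common Hiemstra default)

def hlm_score(query_terms, doc, collection, lam=LAMBDA):
    """Log-probability of the query under the interpolated model."""
    doc_len = sum(doc.values())
    coll_len = sum(collection.values())
    score = 0.0
    for t in query_terms:
        p_doc = doc.get(t, 0) / doc_len
        p_coll = collection.get(t, 0) / coll_len
        smoothed = lam * p_doc + (1 - lam) * p_coll
        if smoothed == 0.0:
            return float("-inf")  # term unseen anywhere
        score += log(smoothed)
    return score

# Toy term-count "documents", one per candidate location.
docs = {
    "Paris":  {"eiffel": 5, "tower": 4, "seine": 3},
    "London": {"thames": 4, "tower": 5, "bridge": 3},
}
collection = {"eiffel": 5, "tower": 9, "seine": 3, "thames": 4, "bridge": 3}
query = ["eiffel", "tower"]
best = max(docs, key=lambda d: hlm_score(query, docs[d], collection))
```

    Collection smoothing keeps the score finite for tags missing from a location's model, which matters for the short, noisy tagsets typical of Flickr metadata.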

    Spanish named entity recognition in the biomedical domain

    Named Entity Recognition in the clinical domain and in languages other than English is made difficult by the absence of complete dictionaries, the informality of texts, the polysemy of terms, the lack of agreement on entity boundaries, and the scarcity of corpora and other available resources. We present a Named Entity Recognition method for poorly resourced languages. The method was tested on Spanish radiology reports and compared with a conditional random fields system. Peer Reviewed. Postprint (author's final draft).

    Mejora de la extracción de terminología usando recursos translingües

    Terminology Extraction is an important Natural Language Processing task with multiple applications in many areas. The task has been approached from different points of view using different techniques. Language- and domain-independent systems have been proposed as well. Our contribution in this paper focuses on improving Terminology Extraction using crosslingual resources, specifically Wikipedia, and on the use of a variant of PageRank for scoring the candidate terms. The research described in this article has been partially funded by the Spanish MINECO in the framework of project SKATER: Scenario Knowledge Acquisition by Textual Reading (TIN2012-38584-C06-01).

    A Machine learning approach to POS tagging

    We have applied inductive learning of statistical decision trees and relaxation labelling to the Natural Language Processing (NLP) task of morphosyntactic disambiguation (Part-of-Speech Tagging). The learning process is supervised and obtains a language model oriented to resolving POS ambiguities. This model consists of a set of statistical decision trees expressing the distribution of tags and words in relevant contexts. The acquired language models are complete enough to be used directly as sets of POS disambiguation rules, and include more complex contextual information than the simple collections of n-grams usually used in statistical taggers. We have implemented a simple and fast tagger that has been tested and evaluated on the Wall Street Journal (WSJ) corpus with remarkable accuracy. However, better results can be obtained by translating the trees into rules to feed a flexible tagger based on relaxation labelling. Along these lines, we describe a tagger that is able to use information of any kind (n-grams, automatically acquired constraints, linguistically motivated hand-written constraints, etc.), and in particular to incorporate the machine-learned decision trees. At the same time, we address the problem of tagging when only a small amount of training material is available, which is crucial in any process of constructing an annotated corpus from scratch. We show that quite high accuracy can be achieved with our system in this situation. Postprint (published version).
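    The idea of decision trees that encode tag choices in relevant contexts can be illustrated with a toy, hand-written tree for a single noun/verb ambiguity class; the ambiguity class, contextual tests, and tags below are invented examples, not the statistical trees learned in the paper:

```python
# Toy sketch of decision-tree-style POS disambiguation: an ambiguous word
# class gets a small tree of contextual tests that selects a tag. All
# rules here are invented for illustration.
AMBIGUOUS = {"plans", "runs", "walks"}  # toy plural-noun / 3sg-verb class

def disambiguate(word, prev_tag):
    """Resolve the noun/verb ambiguity from the previous tag (toy tree)."""
    if word not in AMBIGUOUS:
        return None  # unambiguous words would be handled by the lexicon
    # Root test of the hand-written tree: what precedes the word?
    if prev_tag in ("DT", "JJ", "PRP$"):  # determiner/adjective/possessive
        return "NNS"                      # "the plans" -> plural noun
    if prev_tag in ("NNP", "PRP", "NN"):  # a likely subject precedes it
        return "VBZ"                      # "she plans" -> 3sg verb
    return "NNS"                          # fallback: majority tag

tag_after_det = disambiguate("plans", "DT")       # noun reading
tag_after_pronoun = disambiguate("plans", "PRP")  # verb reading
```

    A learned tree would attach a tag probability distribution to each leaf rather than a single tag, which is what lets the trees feed a relaxation labelling tagger as weighted constraints.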

    TASS2018: Medical knowledge discovery by combining terminology extraction techniques with machine learning classification

    In this paper we present the procedure followed to complete the run submitted by the UPF-UPC team to the TASS 2018 Task 3 challenge. This procedure may be classified, according to the organization's codes, as H-KB-S, as it combines a knowledge-based methodology with supervised methods. Our pipeline includes: i) a standard pre-processing of the documents using the Freeling tool suite (POS tagging and dependency parsing); ii) the use of a CRF sequence labelling tool for completing both subtask A (key phrase identification) and subtask B (key phrase classification); and iii) addressing subtask C (setting semantic relationships) with a hybrid approach that uses two Logistic Regression classifiers, followed by shallow lexical relation extractors for entity/entity pairs related by is-a and same-as relations. Peer Reviewed. Postprint (published version).

    Topic modeling for entity linking using keyphrase

    This paper proposes an Entity Linking system that applies a topic modeling ranking. We apply a novel approach in order to provide new relevant elements to the model. These elements are keyphrases related to the queries and gathered from a huge Wikipedia-based knowledge resource. Peer Reviewed. Postprint (author's final draft).

    Georeferencing textual annotations and tagsets with geographical knowledge and language models

    This paper describes generic approaches for georeferencing, with high precision, multilingual textual annotations and sets of tags from metadata associated with textual or multimedia content. We present four approaches based on: 1) Geographical Knowledge, 2) Language Modelling (LM), 3) Language Modelling with Re-Ranking predictions, and 4) Fusion of Geographical Knowledge predictions with the other approaches. The resources employed were the Geonames geographical gazetteer, the TFIDF and BM25 Information Retrieval algorithms, the Hiemstra Language Modelling (HLM) algorithm, stopword lists for several languages, and an electronic English dictionary. The best results in georeferencing accuracy are achieved with the HLM Re-Ranking approach and its fusion with Geographical Knowledge. These strategies outperformed the best accuracy reported by the state-of-the-art systems that participated in the official MediaEval 2010 Placing task. Our best result achieves 68.53% accuracy when georeferencing up to a distance of 100 km. Postprint (author's final draft).

    TALP at GikiCLEF 2009

    This paper describes our experiments in Geographical Information Retrieval with the Wikipedia collection in the context of our participation in the GikiCLEF 2009 Multilingual task in English and Spanish. Our system, called gikiTALP, follows a very simple approach that uses standard Information Retrieval with the Sphinx full-text search engine and some Natural Language Processing techniques, without Geographical Knowledge. Peer Reviewed. Postprint (author's final draft).

    TALP at MediaEval 2010 placing task: geographical focus detection of Flickr textual annotations

    This paper describes our geographical text analysis and geotagging experiments in the context of the Multimedia Placing Task at the MediaEval 2010 evaluation. The task consists of predicting the most probable coordinates of Flickr videos. We used a Natural Language Processing approach that tries to match geographical place names in the textual annotations of Flickr users. The resources employed for this task were the Geonames geographical gazetteer, stopword lists for several languages, and an electronic English dictionary. We used two geographical focus disambiguation strategies: one based on population heuristics, and another that combines geographical knowledge with population heuristics. The second strategy achieves the best results. Using the stopword lists and the English dictionary as a filter for ambiguous place names also improves the results. Postprint (published version).
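    The population heuristic for geographical focus disambiguation can be sketched as follows; the gazetteer entries and field names below are illustrative, not actual Geonames records:

```python
# Sketch of the population heuristic: among gazetteer candidates sharing
# an ambiguous place name, prefer the most populous one. Data and record
# layout are invented for illustration.
def resolve_place(name, gazetteer):
    """Return the most populous candidate for an ambiguous place name."""
    candidates = gazetteer.get(name.lower(), [])
    if not candidates:
        return None
    return max(candidates, key=lambda c: c["population"])

gazetteer = {
    "paris": [
        {"name": "Paris", "country": "FR", "population": 2_138_551,
         "lat": 48.85, "lon": 2.35},
        {"name": "Paris", "country": "US", "population": 24_171,
         "lat": 33.66, "lon": -95.56},  # Paris, Texas
    ],
}
best = resolve_place("Paris", gazetteer)
```

    A combined strategy would first restrict the candidate list using other geographical evidence in the annotation (e.g. a co-occurring country or region name) and fall back to population only among the survivors.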