7 research outputs found

    The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources

    We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of this multidisciplinary corpus and highlight our findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of the encyclopedic links and lexicographic senses returned by Babelfy for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts, as well as their semantic disambiguation in a setting as wide-ranging as STEM, are feasible. Comment: Published in LREC 2020. Publication URL https://www.aclweb.org/anthology/2020.lrec-1.268/; Dataset DOI https://doi.org/10.25835/001754
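The extract-classify-resolve pipeline the abstract describes can be sketched in miniature. This is not the STEM-ECR authors' code: the entity types, the toy lexicon standing in for a trained BERT extractor, and the stubbed Wikipedia link target are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScientificEntity:
    surface: str              # text span as it appears in the abstract
    label: str                # coarse concept type, e.g. PROCESS, METHOD
    link: Optional[str] = None  # encyclopedic target, once resolved

# Toy lexicon standing in for a trained BERT-based extractor/classifier.
TOY_LEXICON = {
    "annealing": "PROCESS",
    "regression": "METHOD",
    "graphene": "MATERIAL",
    "spectra": "DATA",
}

def extract_and_classify(text):
    """Steps 1+2: find known entity mentions and assign a concept type."""
    entities = []
    for token in text.lower().replace(",", " ").split():
        if token in TOY_LEXICON:
            entities.append(ScientificEntity(token, TOY_LEXICON[token]))
    return entities

def resolve(entity):
    """Step 3: link the mention to an authoritative source (stubbed)."""
    entity.link = f"https://en.wikipedia.org/wiki/{entity.surface.capitalize()}"
    return entity

doc = "We study annealing of graphene and fit a regression to the spectra"
resolved = [resolve(e) for e in extract_and_classify(doc)]
print([(e.surface, e.label) for e in resolved])
```

In the actual dataset the resolution step is a human annotation procedure against encyclopedic and lexicographic sources rather than a string-rewriting stub.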

    Unifying context with labeled property graph: A pipeline-based system for comprehensive text representation in NLP

    Extracting valuable insights from vast amounts of unstructured digital text presents significant challenges across diverse domains. This research addresses this challenge by proposing a novel pipeline-based system that generates domain-agnostic and task-agnostic text representations. The proposed approach leverages labeled property graphs (LPG) to encode contextual information, facilitating the integration of diverse linguistic elements into a unified representation. The proposed system enables efficient graph-based querying and manipulation by addressing the crucial aspect of comprehensive context modeling and fine-grained semantics. The effectiveness of the proposed system is demonstrated through the implementation of NLP components that operate on LPG-based representations. Additionally, the proposed approach introduces specialized patterns and algorithms to enhance specific NLP tasks, including nominal mention detection, named entity disambiguation, event enrichments, event participant detection, and temporal link detection. The evaluation of the proposed approach, using the MEANTIME corpus comprising manually annotated documents, provides encouraging results and valuable insights into the system's strengths. The proposed pipeline-based framework serves as a solid foundation for future research, aiming to refine and optimize LPG-based graph structures to generate comprehensive and semantically rich text representations, addressing the challenges associated with efficient information extraction and analysis in NLP.
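A labeled property graph differs from a plain RDF graph in that both nodes and edges carry a label plus a free-form property map. The following minimal sketch, with hypothetical node labels (Token, Mention, Entity) and relation names chosen for illustration, shows how a sentence fragment might be encoded in such a representation:

```python
# Minimal labeled-property-graph (LPG) store: nodes and edges both carry
# a label plus a free-form property map.
class LPG:
    def __init__(self):
        self.nodes = {}   # id -> (label, properties)
        self.edges = []   # (src, relation, dst, properties)

    def add_node(self, node_id, label, **props):
        self.nodes[node_id] = (label, props)

    def add_edge(self, src, rel, dst, **props):
        self.edges.append((src, rel, dst, props))

    def neighbours(self, node_id, rel=None):
        return [dst for s, r, dst, _ in self.edges
                if s == node_id and (rel is None or r == rel)]

# Encode one fragment: a token, the nominal mention it belongs to,
# and the disambiguated entity that mention refers to.
g = LPG()
g.add_node("t1", "Token", text="Rome", index=0)
g.add_node("m1", "Mention", kind="nominal")
g.add_node("e1", "Entity", uri="dbpedia:Rome")
g.add_edge("m1", "COMPOSED_OF", "t1")
g.add_edge("m1", "REFERS_TO", "e1")

print(g.neighbours("m1", "REFERS_TO"))
```

Because relations are first-class and queryable, tasks like named entity disambiguation or temporal link detection reduce to adding or traversing typed edges over the same unified graph.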

    KD SENSO-MERGER: An architecture for semantic integration of heterogeneous data

    This paper presents KD SENSO-MERGER, a novel Knowledge Discovery (KD) architecture that is capable of semantically integrating heterogeneous data from various sources of structured and unstructured data (i.e. geolocations, demographic, socio-economic, user reviews, and comments). This goal drives the main design approach of the architecture. It works by building internal representations that adapt and merge knowledge across multiple domains, ensuring that the knowledge base is continuously updated. To deal with the challenge of integrating heterogeneous data, this proposal puts forward the corresponding solutions: (i) knowledge extraction, addressed via a plugin-based architecture of knowledge sensors; (ii) data integrity, tackled by an architecture designed to deal with uncertain or noisy information; (iii) scalability, this is also supported by the plugin-based architecture as only relevant knowledge to the scenario is integrated by switching-off non-relevant sensors. Also, we minimize the expert knowledge required, which may pose a bottleneck when integrating a fast-paced stream of new sources. As proof of concept, we developed a case study that deploys the architecture to integrate population census and economic data, municipal cartography, and Google Reviews to analyze the socio-economic contexts of educational institutions. The knowledge discovered enables us to answer questions that are not possible through individual sources. 
Thus, companies or public entities can discover patterns of behavior or relationships that would otherwise not be visible, allowing them to extract valuable information for the decision-making process. This research is supported by the University of Alicante, Spain, the Spanish Ministry of Science and Innovation, the Generalitat Valenciana, Spain, and the European Regional Development Fund (ERDF) through the following funding: At the national level, the following projects were granted: TRIVIAL (PID2021-122263OB-C22); and CORTEX (PID2021-123956OB-I00), funded by MCIN/AEI/10.13039/501100011033 and, as appropriate, by ‘‘ERDF A way of making Europe’’, by the ‘‘European Union’’ or by the ‘‘European Union NextGenerationEU/PRTR’’. At the regional level, the Generalitat Valenciana (Conselleria d’Educacio, Investigacio, Cultura i Esport), Spain, granted funding for NL4DISMIS (CIPROM/2021/21).
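The plugin-based architecture of "knowledge sensors" described above, where only sensors relevant to a scenario are enabled, can be sketched as a simple registry pattern. The sensor names, source formats, and merge logic here are illustrative assumptions, not the KD SENSO-MERGER implementation:

```python
# Registry of "knowledge sensors": each sensor wraps one heterogeneous
# source and can be switched off when not relevant to the scenario.
SENSORS = {}

def sensor(name):
    def register(fn):
        SENSORS[name] = fn
        return fn
    return register

@sensor("census")
def census_sensor(raw):
    # Structured demographic source.
    return {"population": raw.get("pop")}

@sensor("reviews")
def reviews_sensor(raw):
    # Unstructured user-review source, reduced to one aggregate.
    return {"avg_rating": sum(raw["stars"]) / len(raw["stars"])}

def integrate(raw_by_source, enabled):
    """Merge the output of the enabled sensors into one knowledge record."""
    merged = {}
    for name in enabled:            # switched-off sensors are simply skipped
        merged.update(SENSORS[name](raw_by_source[name]))
    return merged

record = integrate(
    {"census": {"pop": 12000}, "reviews": {"stars": [4, 5, 3]}},
    enabled=["census", "reviews"],
)
print(record)  # {'population': 12000, 'avg_rating': 4.0}
```

Scalability then follows from the design: adding a source means registering one more sensor, and narrowing a scenario means shortening the `enabled` list rather than changing the integration code.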

    Human Language Technologies: Key Issues for Representing Knowledge from Textual Information

    Ontologies are appropriate structures for capturing and representing the knowledge about a domain or task. However, their design and subsequent population are both difficult tasks, normally addressed in a manual or semi-automatic manner. The goal of this article is to define and extend a task-oriented ontology schema that semantically represents the information contained in texts. This information can be extracted using Human Language Technologies, and throughout this work the whole process to design such an ontology schema is described. We also describe an algorithm to automatically populate ontologies based on our Human Language Technology oriented schema, avoiding the unnecessary duplication of instances and producing the required information in a more compact and useful format, ready to exploit. Tangible results are provided, such as permanent online access points to the ontology schema, an example bucket (i.e. ontology instance repository) based on a real scenario, and a documentation Web page. This research work has been partially funded by the University of Alicante, Generalitat Valenciana, Spanish Government, Ministerio de Educación, Cultura y Deporte and Ayudas Fundación BBVA a equipos de investigación científica 2016 through the projects TIN2015-65100-R, TIN2015-65136-C2-2-R, PROMETEU/2018/089, “Plataforma inteligente para recuperación, análisis y representación de la información generada por usuarios en Internet” (GRE16-01) and “Análisis de Sentimientos Aplicado a la Prevención del Suicidio en las Redes Sociales” (ASAP).
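The key detail in the population algorithm, avoiding duplicate instances, amounts to looking up a canonical key before creating a new individual. This sketch uses a hypothetical in-memory store and a deliberately naive canonicalization (lowercasing and trimming); the paper's actual schema and matching criteria are richer:

```python
# Duplicate-free ontology population: before adding an instance, look it
# up by a canonical key; if it already exists, merge properties into the
# existing individual instead of creating a second one.
class Ontology:
    def __init__(self):
        self.instances = {}  # (canonical label, class) -> instance data

    @staticmethod
    def canonical(label):
        # Naive canonicalization for illustration only.
        return label.strip().lower()

    def add_instance(self, label, cls, **props):
        key = (self.canonical(label), cls)
        if key in self.instances:
            self.instances[key].update(props)   # merge, don't duplicate
        else:
            self.instances[key] = {"label": label, "class": cls, **props}
        return self.instances[key]

onto = Ontology()
onto.add_instance("Alicante", "City", country="Spain")
onto.add_instance("alicante ", "City", population=337000)  # same individual
print(len(onto.instances))  # 1
```

The second call enriches the existing individual rather than creating a near-duplicate, which is what keeps the populated ontology compact.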

    Generic semantics-based task-oriented dialogue system framework for human-machine interaction in industrial scenarios

    285 p. In Industry 5.0, workers and their well-being are central to the production process. In this context, task-oriented dialogue systems allow operators to delegate the simplest tasks to industrial systems while they work on more complex ones. Moreover, the ability to interact naturally with these systems reduces the cognitive load of using them and fosters user acceptance. However, most existing solutions do not allow natural communication, and current techniques for building such systems require large amounts of training data, which are scarce in this kind of scenario. As a result, task-oriented dialogue systems in the industrial domain are highly specific, which limits their ability to be modified or reused in other scenarios, tasks that entail great effort in terms of time and cost. Given these challenges, this thesis combines Semantic Web Technologies with Natural Language Processing techniques to develop KIDE4I, a semantic task-oriented dialogue system for industrial environments that enables natural communication between humans and industrial systems. The modules of KIDE4I are designed to be generic, allowing straightforward adaptation to new use cases. The modular ontology TODO is the core of KIDE4I; it models the domain and the dialogue process and stores the generated traces. KIDE4I has been implemented and adapted for use in four industrial use cases, demonstrating that the adaptation process is not complex and benefits from the reuse of resources.

    Frame-Based Ontology Population with PIKES

    We present an approach for ontology population from natural language English texts that extracts RDF triples according to FrameBase, a Semantic Web ontology derived from FrameNet. Processing is decoupled in two independently-tunable phases. First, text is processed by several NLP tasks, including Semantic Role Labeling (SRL), whose results are integrated in an RDF graph of mentions, i.e., snippets of text denoting some entity/fact. Then, the mention graph is processed with SPARQL-like rules using a specifically created mapping resource from NomBank/PropBank/FrameNet annotations to FrameBase concepts, producing a knowledge graph whose content is linked to DBpedia and organized around semantic frames, i.e., prototypical descriptions of events and situations. A single RDF/OWL representation is used where each triple is related to the mentions/tools it comes from. We implemented the approach in PIKES, an open source tool that combines two complementary SRL systems and provides a working online demo. We evaluated PIKES on a manually annotated gold standard, assessing precision/recall in (i) populating the FrameBase ontology, and (ii) extracting semantic frames modeled after standard predicate models, for comparison with state-of-the-art tools for the Semantic Web. We also evaluated (iii) sampled precision and execution times on a large corpus of 110K Wikipedia-like pages.
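The second phase, rewriting SRL annotations in the mention graph into frame-style triples via a mapping resource, can be illustrated with plain tuples in place of RDF and a hypothetical two-entry mapping standing in for the NomBank/PropBank/FrameNet-to-FrameBase resource. The predicate names and frame concepts below are illustrative, not PIKES's actual vocabulary:

```python
# Phase 1 output (simplified): an RDF-like mention graph, where mention m1
# carries a PropBank predicate sense and its semantic role fillers.
mention_graph = [
    ("m1", "srl:predicate", "sell.01"),        # PropBank sense
    ("m1", "srl:arg0", "dbpedia:Acme_Corp"),   # role A0: seller
    ("m1", "srl:arg1", "dbpedia:Widget"),      # role A1: thing sold
]

# Hypothetical (sense, role) -> frame-concept mapping, standing in for the
# real NomBank/PropBank/FrameNet -> FrameBase mapping resource.
MAPPING = {
    ("sell.01", "srl:arg0"): "frame:Commerce_sell.Seller",
    ("sell.01", "srl:arg1"): "frame:Commerce_sell.Goods",
}

def apply_rules(graph):
    """Phase 2 (sketch): rewrite SRL triples into frame-based triples."""
    sense = {s: o for s, p, o in graph if p == "srl:predicate"}
    triples = []
    for s, p, o in graph:
        key = (sense.get(s), p)
        if key in MAPPING:
            triples.append((s, MAPPING[key], o))
    return triples

print(apply_rules(mention_graph))
```

The decoupling matters: the NLP phase and the rule phase can be tuned or swapped independently, and each output triple stays traceable to the mention it came from via its subject.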