    Automated construction and analysis of political networks via open government and media sources

    We present a tool to generate real-world political networks from user-provided lists of politicians and news sites. Additional output includes visualizations, interactive tools, and maps that allow a user to better understand the politicians and their surrounding environments as portrayed by the media. As a case study, we construct a comprehensive list of current Texas politicians, select news sites that convey a spectrum of political viewpoints covering Texas politics, and examine the results. We propose a "Combined" co-occurrence distance metric to better reflect the relationship between two entities. A topic modeling technique is also proposed as a novel, automated way of labeling communities that exist within a politician's "extended" network.
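    The abstract does not define the "Combined" metric precisely; as a rough sketch of the general idea — blending co-occurrence evidence at more than one text granularity into a single distance — something like the following could serve (the `alpha` weighting, the substring matching, and the sentence-level granularity are illustrative assumptions, not the authors' method):

    ```python
    def combined_cooccurrence_distance(entity_a, entity_b, articles, alpha=0.5):
        """Toy distance in [0, 1] between two entities: blends the rate of
        article-level co-mentions with the rate of sentence-level co-mentions.
        `articles` is a list of articles, each a list of sentence strings."""
        article_hits = sentence_hits = sentence_total = 0
        for sentences in articles:
            has_a = any(entity_a in s for s in sentences)
            has_b = any(entity_b in s for s in sentences)
            if has_a and has_b:
                article_hits += 1
            sentence_hits += sum(1 for s in sentences
                                 if entity_a in s and entity_b in s)
            sentence_total += len(sentences)
        article_rate = article_hits / max(len(articles), 1)
        sentence_rate = sentence_hits / max(sentence_total, 1)
        # High combined co-occurrence -> small distance.
        return 1.0 - (alpha * article_rate + (1 - alpha) * sentence_rate)
    ```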

    Enabling automatic provenance-based trust assessment of web content


    A Survey on Event-based News Narrative Extraction

    Narratives are fundamental to our understanding of the world, providing us with a natural structure for knowledge representation over time. Computational narrative extraction is a subfield of artificial intelligence that makes heavy use of information retrieval and natural language processing techniques. Despite the importance of computational narrative extraction, relatively little scholarly work exists on synthesizing previous research and strategizing future research in the area. In particular, this article focuses on extracting news narratives from an event-centric perspective. Extracting narratives from news data has multiple applications in understanding the evolving information landscape. This survey presents an extensive study of research in the area of event-based news narrative extraction. In particular, we screened over 900 articles, which yielded 54 relevant articles. These articles are synthesized and organized by representation model, extraction criteria, and evaluation approaches. Based on the reviewed studies, we identify recent trends, open challenges, and potential research lines.

    Recommender system to support comprehensive exploration of large scale scientific datasets

    Databases of scientific entities, such as chemical compounds, diseases, and astronomical objects, are growing in size and complexity, reaching billions of items per database. Researchers need new and innovative tools to assist them in choosing among these items. This work proposes the use of Recommender System (RS) approaches to help researchers find items of interest. We identified the lack of standard, open-access datasets with information about the preferences of users as one of the major challenges for applying RS in scientific fields. To overcome this challenge, we developed a methodology called LIBRETTI - LIterature Based RecommEndaTion of scienTific Items - whose goal is to create datasets related to scientific fields. These datasets are built from scientific literature, the major knowledge resource that science has. The LIBRETTI methodology enabled the development and testing of new recommender algorithms specific to each field. Besides LIBRETTI, the main contributions of this thesis are standard and sequence-aware recommendation datasets in the fields of Astronomy, Chemistry, and Health (related to the COVID-19 disease), a hybrid semantic recommender system for chemical compounds in large-scale datasets, a hybrid approach based on sequential enrichment (SeEn) for sequence-aware recommendations, and a multi-field semantic-based pipeline for recommending biomedical entities related to the COVID-19 disease.
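    The abstract leaves the dataset-construction details to the thesis itself; purely as an illustration of the LIBRETTI idea — deriving implicit-feedback triples from mentions of scientific items in the literature — a sketch could look like this (the author-as-user framing, the mention counts as ratings, and the upstream entity extraction are all assumptions):

    ```python
    from collections import Counter, defaultdict

    def literature_to_ratings(papers):
        """Build an implicit-feedback dataset from a corpus.

        papers: iterable of (author_id, [entity, ...]) pairs, where the
        entity lists would come from named-entity recognition over the
        full text (that NER step is out of scope here). Returns
        (author, entity, count) triples usable by standard
        collaborative-filtering libraries."""
        counts = defaultdict(Counter)
        for author, entities in papers:
            counts[author].update(set(entities))   # one "interaction" per paper
        return [(a, e, n) for a, items in counts.items()
                          for e, n in items.items()]

    # Example: two chemists repeatedly mentioning compounds in their papers.
    papers = [("chemist_1", ["aspirin", "caffeine"]),
              ("chemist_1", ["aspirin"]),
              ("chemist_2", ["caffeine", "ethanol"])]
    print(literature_to_ratings(papers))
    ```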

    Using Knowledge Anchors to Facilitate User Exploration of Data Graphs

    This paper investigates how to facilitate users' exploration through data graphs for knowledge expansion. Our work focuses on knowledge utility – increasing users' domain knowledge while exploring a data graph. We introduce a novel exploration support mechanism underpinned by the subsumption theory of meaningful learning, which postulates that new knowledge is grasped by starting from familiar concepts in the graph, which serve as knowledge anchors from which links to new knowledge are made. A core algorithmic component for operationalising the subsumption theory for meaningful learning to generate exploration paths for knowledge expansion is the automatic identification of knowledge anchors in a data graph (KADG). We present several metrics for identifying KADG, which are evaluated against familiar concepts in human cognitive structures. A subsumption algorithm that utilises KADG to generate exploration paths for knowledge expansion is presented and applied in the context of a semantic data browser in a music domain. The resultant exploration paths are evaluated in a task-driven experimental user study against free data graph exploration. The findings show that exploration paths based on subsumption and using knowledge anchors lead to a significantly higher increase in the users' conceptual knowledge and better usability than free exploration of data graphs. The work opens a new avenue in semantic data exploration that investigates the link between learning and knowledge exploration. This extends the value of exploration and enables broader applications of data graphs in systems where the end users are not experts in the specific domain.
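    The paper's actual KADG metrics are grounded in human cognitive structures and are not reproduced in the abstract; the toy scorer below only conveys the flavour of anchor identification — favouring concepts that are both well connected in the graph and already familiar to users (combining degree with an externally supplied familiarity score is an assumption, not the paper's formulation):

    ```python
    import networkx as nx

    def knowledge_anchors(graph, familiarity, top_k=5):
        """Rank nodes of a data graph as candidate knowledge anchors.
        `familiarity` maps node -> [0, 1] and would come from external
        evidence about what users already know."""
        degrees = dict(graph.degree())
        max_deg = max(degrees.values(), default=1) or 1
        score = {n: (degrees[n] / max_deg) * familiarity.get(n, 0.0)
                 for n in graph.nodes}
        return sorted(score, key=score.get, reverse=True)[:top_k]

    # Example on a tiny music-domain graph.
    g = nx.Graph([("guitar", "string instrument"),
                  ("violin", "string instrument"),
                  ("string instrument", "instrument")])
    print(knowledge_anchors(g, {"guitar": 0.9, "violin": 0.8,
                                "string instrument": 0.4}))
    ```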

    Intelligent Support for Exploration of Data Graphs

    This research investigates how to support a user's exploration through data graphs generated from semantic databases in a way that leads to expanding the user's domain knowledge. To be effective, approaches to facilitate exploration of data graphs should take into account the utility from a user's point of view. Our work focuses on knowledge utility – how useful exploration paths through a data graph are for expanding the user's knowledge. The main goal of this research is to design an intelligent support mechanism to direct the user to 'good' exploration paths through big data graphs for knowledge expansion. We propose a new exploration support mechanism underpinned by the subsumption theory for meaningful learning, which postulates that new knowledge is grasped by starting from familiar concepts in the graph, which serve as knowledge anchors from which links to new knowledge are made. A core algorithmic component for adapting the subsumption theory to generate exploration paths is the automatic identification of Knowledge Anchors in a Data Graph (KADG). Several metrics for identifying KADG, and the corresponding algorithms implementing them, have been developed and evaluated against human cognitive structures. A subsumption algorithm that utilises KADG to generate exploration paths for knowledge expansion is presented and evaluated in the context of a semantic data browser in a musical instrument domain. The resultant exploration paths are evaluated in a controlled user study to examine whether they increase the users' knowledge compared to free exploration. The findings show that exploration paths using knowledge anchors and subsumption lead to a significantly higher increase in the users' conceptual knowledge. The approach can be adopted in applications providing data graph exploration to facilitate learning and sense-making by lay users who are not fully familiar with the domain presented in the data graph.
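    As with the companion paper above, the path-generation details are omitted from the abstract; the breadth-first sketch below captures only the loose intuition of subsumption-based exploration — each new concept is introduced adjacent to one already seen — while the real algorithm orders candidates by the thesis's metrics (the adjacency-dict interface and the fixed path length are assumptions):

    ```python
    from collections import deque

    def exploration_path(adj, anchor, max_len=6):
        """Breadth-first walk outward from a knowledge anchor.
        adj: dict mapping a concept to its neighbouring concepts."""
        path, seen, queue = [], {anchor}, deque([anchor])
        while queue and len(path) < max_len:
            node = queue.popleft()
            path.append(node)
            for nxt in adj.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return path

    # Example: start from the familiar concept "guitar" in an instrument graph.
    adj = {"guitar": ["string instrument", "lute"],
           "string instrument": ["bowed instrument"],
           "lute": [], "bowed instrument": ["violin"]}
    print(exploration_path(adj, "guitar"))
    ```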

    Knowledge graph exploration for natural language understanding in web information retrieval

    In this thesis, we study methods to leverage information from fully-structured knowledge bases (KBs), in particular the encyclopedic knowledge graph (KG) DBpedia, for different text-related tasks from the areas of information retrieval (IR) and natural language processing (NLP). The key idea is to apply entity linking (EL) methods that identify mentions of KB entities in text, and then exploit the structured information within KGs. Developing entity-centric methods for text understanding using KG exploration is the focus of this work. We aim to show that structured background knowledge is a means for improving performance in different IR and NLP tasks that traditionally only make use of the unstructured text input itself. Thereby, the KB entities mentioned in text act as the connection between the unstructured text and the structured KG. We focus in particular on how to best leverage the knowledge contained in fully-structured (RDF) KGs like DBpedia with their labeled edges/predicates – in contrast to the previous Wikipedia-based work we build upon, which typically relies on unlabeled graphs only. The contribution of this thesis can be structured along its three parts.

    In Part I, we apply EL and semantify short text snippets with KB entities. While retrieving only types and categories from DBpedia for each entity, we are able to leverage this information to create semantically coherent clusters of text snippets. This pipeline of connecting text to background knowledge via the mentioned entities is reused in all following chapters.

    In Part II, we focus on semantic similarity and extend the idea of semantifying text with entities by proposing, in Chapter 5, a model that represents whole documents by their entities. In this model, comparing documents semantically with each other is viewed as the task of comparing the semantic relatedness of the respective entities, which we address in Chapter 4. We propose an unsupervised graph weighting schema and show that weighting the DBpedia KG leads to better results on an existing entity ranking dataset. The exploration of weighted KG paths also turns out to be useful when disambiguating the entities from an open information extraction (OIE) system in Chapter 6. With this weighting schema, the integration of KG information for computing semantic document similarity in Chapter 5 becomes the task of comparing two KG subgraphs with each other, which we address by approximate subgraph matching. Based on a well-established evaluation dataset for semantic document similarity, we show that our unsupervised method achieves competitive performance similar to other state-of-the-art methods. Our results from this part indicate that KGs can contain helpful background knowledge, in particular when exploring KG paths, but that selecting the relevant parts of the graph remains an important yet difficult challenge.

    In Part III, we shift to the task of relevance ranking and first study in Chapter 7 how to best retrieve KB entities for a given keyword query. Again combining text with KB information, we extract entities from the top-k retrieved, query-specific documents and then link the documents to two different KBs, namely Wikipedia and DBpedia. In a learning-to-rank setting, we study extensively which features from the text, the Wikipedia KB, and the DBpedia KG can be helpful for ranking entities with respect to the query. Experimental results on two datasets, which build upon existing TREC document retrieval collections, indicate that the document-based mention frequency of an entity and the Wikipedia-based query-to-entity similarity are both important features for ranking. The KG paths, in contrast, play only a minor role in this setting, even when integrated with a semantic kernel extension. In Chapter 8, we further extend the integration of query-specific text documents and KG information by extracting not only entities but also relations from text. In this exploratory study based on a self-created relevance dataset, we find that not all extracted relations are relevant with respect to the query, but that they often contain information not present in the DBpedia KG. The main insight from this part is that in a query-specific setting, established IR methods for document retrieval provide an important source of information even for entity-centric tasks, and that a close integration of relevant text documents and background knowledge is promising.

    Finally, in the concluding chapter, we argue that future research should further address the integration of KG information with entities and relations extracted from (specific) text documents, as their potential seems not yet fully explored. The same holds true for better KG exploration, which has gained scientific interest in recent years. Both aspects will likely remain interesting problems in the coming years, not least because of the growing importance of KGs for web search and knowledge modeling in industry and academia.
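    The approximate subgraph matching used for semantic document similarity is not spelled out in the abstract; a common greedy simplification of entity-based document comparison, shown below, matches each entity to its most related counterpart and averages symmetrically — the `relatedness` callable stands in for scores that would come from the thesis's weighted DBpedia paths (its exact form is an assumption):

    ```python
    def document_similarity(entities_a, entities_b, relatedness):
        """Greedy stand-in for approximate subgraph matching: each entity
        in one document is matched to its most related entity in the
        other, and the two directional scores are averaged."""
        def one_way(src, dst):
            if not src or not dst:
                return 0.0
            return sum(max(relatedness(a, b) for b in dst) for a in src) / len(src)
        return 0.5 * (one_way(entities_a, entities_b)
                      + one_way(entities_b, entities_a))

    # Example with a hand-made relatedness lookup.
    rel = {("Berlin", "Germany"): 0.8, ("Germany", "Berlin"): 0.8}
    sim = document_similarity(["Berlin"], ["Germany"],
                              lambda a, b: rel.get((a, b), 1.0 if a == b else 0.0))
    print(sim)  # 0.8
    ```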