
    Semantic and Knowledge Engineering Using ENVRI RM

    The ENVRI Reference Model provides architects and engineers with the means to describe the architecture and operational behaviour of environmental and Earth science research infrastructures (RIs) in a standardised way, using standard terminology. This terminology, and the relationships between specific classes of concepts, can be used as the basis for machine-actionable specifications of RIs or RI subsystems. Open Information Linking for Environmental RIs (OIL-E) is a framework for capturing architectural and design knowledge about environmental and Earth science RIs, intended to help harmonise vocabulary, promote collaboration and identify common standards and technologies across different research infrastructure initiatives. At its heart is an ontology derived from the ENVRI Reference Model. Using this ontology, RI descriptions can be published as linked data, allowing discovery, querying and comparison using established Semantic Web technologies. It can also be used as an upper ontology by which to connect descriptions of RI entities (whether they be datasets, equipment, processes, etc.) that use other, more specific terminologies. The ENVRI Knowledge Base uses OIL-E to capture information about environmental and Earth science RIs in the ENVRI community for query and comparison. The Knowledge Base can be used to identify the technologies and standards used for particular activities and services, and as a basis for evaluating research infrastructure subsystems and behaviours against certain criteria, such as compliance with the FAIR data principles.
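The kind of query the Knowledge Base answers can be illustrated with a toy, stdlib-only triple store. In practice this would be SPARQL over the OIL-E ontology; every class and property name below (`ri:A`, `oile:usesStandard`, and so on) is invented for illustration and is not an actual OIL-E term:

```python
# Toy triple store standing in for an OIL-E knowledge base.
# All identifiers here are illustrative, not actual OIL-E terms.
triples = {
    ("ri:A", "rdf:type", "oile:ResearchInfrastructure"),
    ("ri:A", "oile:usesStandard", "OAI-PMH"),
    ("ri:B", "rdf:type", "oile:ResearchInfrastructure"),
    ("ri:B", "oile:usesStandard", "OGC SensorThings"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None is a wildcard,
    mirroring a single SPARQL triple pattern."""
    return [(ts, tp, to) for (ts, tp, to) in sorted(triples)
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# "Which RIs declare which standards?" is the kind of comparison
# query the ENVRI Knowledge Base supports across RI descriptions.
standards = {s: o for (s, _, o) in match(p="oile:usesStandard")}
print(standards)
```

Publishing RI descriptions as linked data means exactly this kind of pattern matching scales across infrastructures, since all descriptions share the OIL-E vocabulary.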

    Message Passing for Complex Question Answering over Knowledge Graphs

    Question answering over knowledge graphs (KGQA) has evolved from simple single-fact questions to complex questions that require graph traversal and aggregation. We propose a novel approach for complex KGQA that uses unsupervised message passing, which propagates confidence scores obtained by parsing an input question and matching terms in the knowledge graph to a set of possible answers. First, we identify entity, relationship, and class names mentioned in a natural language question, and map these to their counterparts in the graph. Then, the confidence scores of these mappings propagate through the graph structure to locate the answer entities. Finally, these are aggregated depending on the identified question type. This approach can be efficiently implemented as a series of sparse matrix multiplications mimicking joins over small local subgraphs. Our evaluation results show that the proposed approach outperforms the state of the art on the LC-QuAD benchmark. Moreover, we show that the performance of the approach depends only on the quality of the question interpretation results, i.e., given a correct relevance score distribution, our approach always produces a correct answer ranking. Our error analysis reveals correct answers missing from the benchmark dataset and inconsistencies in the DBpedia knowledge graph. Finally, we provide a comprehensive evaluation of the proposed approach accompanied by an ablation study and an error analysis, which showcase the pitfalls for each of the question answering components in more detail. Comment: Accepted in CIKM 201
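The core propagation step can be sketched as a matrix-vector product (a dense NumPy toy with a single invented relation; the paper's implementation uses sparse matrices over a large knowledge graph such as DBpedia, and all entities and scores below are made up for illustration):

```python
import numpy as np

# Toy KG with four entities and one relation, "capital_of".
# A[i, j] = 1 encodes the triple (entity_i, capital_of, entity_j).
entities = ["Germany", "Berlin", "France", "Paris"]
A_capital_of = np.zeros((4, 4))
A_capital_of[1, 0] = 1  # (Berlin, capital_of, Germany)
A_capital_of[3, 2] = 1  # (Paris, capital_of, France)

# Question: "What is the capital of Germany?"
# Entity linking yields a confidence distribution over entities;
# relation matching yields a confidence for each relation.
entity_conf = np.array([0.9, 0.0, 0.1, 0.0])  # mostly "Germany"
rel_conf = 1.0

# One message-passing step: propagate entity confidence along the
# relation adjacency, weighted by the relation confidence. With
# scipy.sparse matrices this is the "join over a local subgraph".
answer_scores = rel_conf * (A_capital_of @ entity_conf)

best = entities[int(np.argmax(answer_scores))]
print(best)  # → "Berlin"
```

This also makes the paper's robustness claim concrete: the final ranking is a deterministic function of the input confidence scores, so a correct relevance distribution always yields a correct answer ranking.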

    Towards human-like compositional generalization with neural models

    The human language system exhibits systematic compositionality: the ability to produce and understand a potentially infinite number of novel linguistic expressions by systematically combining known atomic components. This type of systematic compositionality is central to the human ability to learn from limited data and make compositional generalizations. There has been a long-standing debate over whether this systematicity can be captured by connectionist architectures. Recent years have witnessed a resurgence of interest in this problem with the revival of neural networks. In particular, neural sequence-to-sequence models, as a powerful workhorse of natural language processing (NLP), have been successfully applied to various NLP tasks. However, despite widespread adoption, there is mounting evidence that neural sequence-to-sequence models are deficient in compositional generalization. In this thesis, we investigate how to improve the compositional generalization of neural sequence-to-sequence models, in pursuit of building systems with human-like systematic compositionality. First, assuming that connectionist architectures are fundamentally incapable of acquiring this systematic compositionality, which is, in contrast, an inherent part of symbolic (e.g., grammar-based) systems, we attempt to marry symbolic structure with neural models to combine the best of both worlds. We present a two-stage decoding strategy to augment neural sequence-to-sequence models (connectionist architecture) with semantic tagging (symbolic structure), in which an input utterance is tagged with semantic symbols representing the meaning of individual words. Experimental results demonstrate that our framework improves compositional generalization for semantic parsing across datasets and model architectures.
Second, despite superior compositional generalization, it has not yet been empirically established that symbolic models are appropriate for handling the noise and complexity of natural language, as evidenced by their sub-par performance in practical applications. Therefore, tackling compositional generalization via purely architectural modification has the potential to maintain the robustness and flexibility of neural models required to process real language. We thus attempt to devise a more competent neural model than standard sequence-to-sequence models for compositional generalization. To approach this problem, we design Dangle, a new neural network architecture for sequence-to-sequence modeling that learns more disentangled representations for better compositional generalization than the Transformer model. Empirical results on both semantic parsing and machine translation verify that our proposal leads to more disentangled representations and better generalization, outperforming competitive baselines and more specialized techniques. Up to this point, we assessed the proposed model on synthetic benchmarks that isolate compositional generalization. However, real-world settings involve both complex natural language and compositional generalization. We thus move on to apply disentangled sequence-to-sequence models to real-world compositional generalization challenges. Before doing so, we first propose a methodology for identifying compositional patterns in real-world data and create a new machine translation benchmark that better represents practical generalization requirements than existing artificial challenges. Then we introduce two key modifications to Dangle that encourage the model to learn disentangled representations more efficiently. We evaluate the proposed model on existing real-world benchmarks and on the benchmark created in this thesis.
Experimental results demonstrate that our new architecture achieves better generalization performance across tasks and datasets and is adept at handling real-world challenges.
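The first stage of the two-stage strategy, tagging input words with semantic symbols, can be illustrated with a deliberately simplified sketch. The lexicon, tag set, and logical form below are invented for illustration; in the thesis both stages are learned neural components, not lookup tables:

```python
# Hypothetical word-to-symbol lexicon; a trained tagger plays this
# role in the actual two-stage framework.
SEM_LEXICON = {
    "book": "ACTION:reserve",
    "flight": "TYPE:flight",
    "boston": "CITY:boston",
    "denver": "CITY:denver",
    "a": "O", "from": "O", "to": "O",
}

def tag(utterance):
    """Stage 1: label each word with a semantic symbol ('O' = none)."""
    return [(w, SEM_LEXICON.get(w, "O")) for w in utterance.lower().split()]

def decode(tagged):
    """Stage 2 (stand-in): assemble a logical form from the tags.
    In the thesis a neural seq2seq decoder consumes the tagged
    sequence instead of this hand-written rule."""
    action = next(t.split(":")[1] for _, t in tagged if t.startswith("ACTION"))
    cities = [t.split(":")[1] for _, t in tagged if t.startswith("CITY")]
    return f"{action}(from={cities[0]}, to={cities[1]})"

lf = decode(tag("book a flight from Boston to Denver"))
print(lf)  # reserve(from=boston, to=denver)
```

The intuition is that the symbolic tags factor out word meaning, so the decoder only has to learn how symbols combine, which is exactly the compositional skill that plain seq2seq models struggle to acquire.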

    Graph Data-Models and Semantic Web Technologies in Scholarly Digital Editing

    This volume is based on the selected papers presented at the Workshop on Scholarly Digital Editions, Graph Data-Models and Semantic Web Technologies, held at the University of Lausanne in June 2019. The Workshop was organized by Elena Spadini (University of Lausanne) and Francesca Tomasi (University of Bologna), and sponsored by the Swiss National Science Foundation through a Scientific Exchange grant, and by the Centre de recherche sur les lettres romandes of the University of Lausanne. The Workshop comprised two full days of vibrant discussions among the invited speakers, the authors of the selected papers, and other participants. The acceptance rate following the open call for papers was around 60%. All authors – both selected and invited speakers – were asked to provide a short paper two months before the Workshop. The authors were then paired up, and each pair exchanged papers. Paired authors prepared questions for one another, which were to be addressed during the talks at the Workshop; in this way, conversations started well before the Workshop itself. After the Workshop, the papers underwent a second round of peer review before inclusion in this volume. This time, the relevance of the papers was not under discussion, but reviewers were asked to appraise specific aspects of each contribution, such as its originality or level of innovation, its methodological accuracy and knowledge of the literature, as well as more formal parameters such as completeness, clarity, and coherence. The bibliography of all of the papers is collected in the public Zotero group library GraphSDE2019, which has been used to generate the reference list for each contribution in this volume. The invited speakers came from a wide range of backgrounds (academic, commercial, and research institutions) and represented the different actors involved in the remediation of our cultural heritage in the form of graphs and/or in a semantic web environment.
Georg Vogeler (University of Graz) and Ronald Haentjens Dekker (Royal Dutch Academy of Sciences, Humanities Cluster) brought the Digital Humanities research perspective; the work of Hans Cools and Roberta Laura Padlina (University of Basel, National Infrastructure for Editions), as well as of Tobias Schweizer and Sepideh Alassi (University of Basel, Digital Humanities Lab), focused on infrastructural challenges and the development of conceptual and software frameworks to support researchers’ needs; Michele Pasin’s contribution (Digital Science, Springer Nature) was informed by his experiences in both academic research, and in commercial technology companies that provide services for the scientific community. The Workshop featured not only the papers of the selected authors and of the invited speakers, but also moments of discussion between interested participants. In addition to the common Q&A time, during the second day one entire session was allocated to working groups delving into topics that had emerged during the Workshop. Four working groups were created, with four to seven participants each, and each group presented a short report at the end of the session. Four themes were discussed: enhancing TEI from documents to data; ontologies for the Humanities; tools and infrastructures; and textual criticism. All of these themes are represented in this volume. The Workshop would not have been of such high quality without the support of the members of its scientific committee: Gioele Barabucci, Fabio Ciotti, Claire Clivaz, Marion Rivoal, Greta Franzini, Simon Gabay, Daniel Maggetti, Frederike Neuber, Elena Pierazzo, Davide Picca, Michael Piotrowski, Matteo Romanello, Maïeul Rouquette, Elena Spadini, Francesca Tomasi, Aris Xanthos – and, of course, the support of all the colleagues and administrative staff in Lausanne, who helped the Workshop to become a reality. The final versions of these papers underwent a single-blind peer review process.
We want to thank the reviewers: Helena Bermudez Sabel, Arianna Ciula, Marilena Daquino, Richard Hadden, Daniel Jeller, Tiziana Mancinelli, Davide Picca, Michael Piotrowski, Patrick Sahle, Raffaele Viglianti, Joris van Zundert, and others who preferred not to be named personally. Your input enhanced the quality of the volume significantly! It is sad news that Hans Cools passed away during the production of the volume. We are proud to document a recent state of his work, and will miss him and his ability to implement the vision of a digital scholarly edition based on graph data-models and semantic web technologies. The production of the volume would not have been possible without the thorough copy-editing and proofreading by Lucy Emmerson and the support of the IDE team, in particular Bernhard Assmann, the TeX-master himself. This volume is sponsored by the University of Bologna and by the University of Lausanne. Bologna, Lausanne, Graz, July 2021. Francesca Tomasi, Elena Spadini, Georg Vogeler

    Evolutionary genomics : statistical and computational methods

    This open access book addresses the challenge of analyzing and understanding the evolutionary dynamics of complex biological systems at the genomic level, and elaborates on some promising strategies that would bring us closer to uncovering the vital relationships between genotype and phenotype. After a few educational primers, the book continues with sections on sequence homology and alignment, phylogenetic methods to study genome evolution, methodologies for evaluating selective pressures on genomic sequences, genomic evolution in light of protein domain architecture and transposable elements, population genomics and other omics, and discussions of current bottlenecks in handling and analyzing genomic data. Written for the highly successful Methods in Molecular Biology series, chapters include the kind of detail and expert implementation advice that lead to the best results. Authoritative and comprehensive, Evolutionary Genomics: Statistical and Computational Methods, Second Edition aims to serve both novices in biology with strong statistics and computational skills, and molecular biologists with a good grasp of standard mathematical concepts, in moving this important field of study forward.

    Towards Interoperable Research Infrastructures for Environmental and Earth Sciences

    This open access book summarises the latest developments on data management in the EU H2020 ENVRIplus project, which brought together more than 20 environmental and Earth science research infrastructures into a single community. It provides readers with a systematic overview of the common challenges faced by research infrastructures, and of how a ‘reference model guided’ engineering approach can be used to achieve greater interoperability among such infrastructures in the environmental and Earth sciences. The 20 contributions in this book are structured in 5 parts on the design, development, deployment, operation and use of research infrastructures. Part one provides an overview of the state of the art of research infrastructures and relevant e-Infrastructure technologies, part two discusses the reference model guided engineering approach, the third part presents the software and tools developed for common data management challenges, the fourth part demonstrates the software via several use cases, and the last part discusses sustainability and future directions.
