123 research outputs found

    Accommodating prepositional phrases in a highly modular natural language query interface to semantic web triplestores using a novel event-based denotational semantics for English and a set of functional parser combinators

    The Semantic Web is an emerging component of the set of technologies that will be known as Web 3.0 in the future. With the large changes it brings to how information is stored and represented to users, there is a need to re-evaluate how this information can be queried. Specifically, there is a need for Natural Language Interfaces that allow users to easily query for information on the Semantic Web. While there has been previous work in this area, existing solutions suffer from the problem that they do not support prepositional phrases in queries (e.g., "in 1958" or "with a key"). To address this, we improve on an existing semantics for event-based triplestores that supports prepositional phrases and demonstrate a novel method of handling the word "by", treating it directly as a preposition in queries. We then show how this new semantics can be integrated with a parser constructed as an executable attribute grammar to create a highly modular and extensible Natural Language Interface to the Semantic Web that supports prepositional phrases in queries.
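
    As a rough illustration of the event-based treatment described above, the sketch below reifies each fact around an event node so that a prepositional phrase such as "in 1958" contributes just one more constraint on the set of matching events. The entities, property names, and data are invented for illustration; this is not the authors' implementation, which uses parser combinators and an executable attribute grammar rather than plain Python.

```python
# Illustrative sketch only: invented entities and property names.
# Each fact is reified around an event node, so contextual modifiers
# such as "in 1958" attach extra triples to the same event.
triples = {
    ("e1", "type",    "directing"),
    ("e1", "agent",   "clint_eastwood"),
    ("e1", "patient", "film_a"),
    ("e1", "year",    "1958"),
    ("e2", "type",    "directing"),
    ("e2", "agent",   "clint_eastwood"),
    ("e2", "patient", "film_b"),
    ("e2", "year",    "2004"),
}

def events_where(prop, value):
    """Denotation of a property/value constraint: the set of matching events."""
    return {e for (e, p, v) in triples if p == prop and v == value}

# "directed by Clint Eastwood" denotes a set of directing events ...
directed_by_eastwood = events_where("type", "directing") & events_where("agent", "clint_eastwood")

# ... and the prepositional phrase "in 1958" intersects it with further events.
in_1958 = events_where("year", "1958")

# "What film was directed by Clint Eastwood in 1958?" -> patients of the surviving events.
answer = {v for (e, p, v) in triples
          if p == "patient" and e in (directed_by_eastwood & in_1958)}
print(answer)  # {'film_a'}
```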

    Scalable, Efficient and Precise Natural Language Processing in the Semantic Web

    The Internet of Things (IoT) is an emerging phenomenon in the public space. Users with accessibility needs could especially benefit from these "smart" devices if they were able to interact with them through speech. This thesis presents a Compositional Semantics and framework for developing extensible and expressive Natural Language Query Interfaces to the Semantic Web, addressing privacy and auditability needs in the process. This could be particularly useful in healthcare or legal applications, where confidentiality of information is a key concern.

    Integrating institutional repositories into the Semantic Web

    The Web has changed the face of scientific communication, and the Semantic Web promises new ways of adding value to research material by making it more accessible to automatic discovery, linking, and analysis. Institutional repositories contain a wealth of information which could benefit from the application of this technology. In this thesis I describe the problems inherent in the informality of traditional repository metadata, and propose a data model based on the Semantic Web which will support more efficient use of this data, with the aim of streamlining scientific communication and promoting efficient use of institutional research output.
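
    As a loose illustration of the kind of data model proposed, the sketch below uses the rdflib library and Dublin Core terms to describe one invented repository item as machine-processable linked data; the URIs and values are assumptions, not taken from the thesis.

```python
# A minimal sketch, assuming rdflib is installed; identifiers and values
# below are invented for illustration.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, RDF

g = Graph()
item = URIRef("http://repository.example.org/item/123")

# Describe one repository record with typed, linkable metadata instead of
# free-text fields, so other Semantic Web tools can discover and link it.
g.add((item, RDF.type, DCTERMS.BibliographicResource))
g.add((item, DCTERMS.title, Literal("An example eprint")))
g.add((item, DCTERMS.creator, URIRef("http://repository.example.org/person/jane-doe")))
g.add((item, DCTERMS.issued, Literal("2007")))

print(g.serialize(format="turtle"))
```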

    Semantic representation of engineering knowledge: pre-study


    Learning from Visualizing and Interacting with the Semantic Web Dog Food

    Semantic Web conferences such as WWW and ISWC fostered a collaborative effort for the leveraging of Linked Data about conferences, people, papers and talks. This effort gave birth to the Semantic Web Conference Corpus, a.k.a. the Semantic Web Dog Food Corpus. Many other conferences and journals contributed afterwards to this corpus, so that it is today a representative semantic data archive about our research community's activities and progression. These metadata are consistent with Linked Data principles and therefore can be semantically processed by the machine. Although it is a matchless source of scientific knowledge for our community, it is difficult for the researcher, as a human, to browse this corpus, which contains more than 180k unique triples. This paper presents our effort to provide a user-friendly Web application, based on the Semantic Web Dog Food corpus, that shows the topic trends in Semantic Web research. The application was made freely available to researchers as end users. In this work we identify specific issues and barriers encountered when building the system, discuss how these were approached in this software, and how the lessons learnt can drive future implementations fostering the Web of Data.
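
    The following is a hedged sketch of the kind of trend query such an application might run over the corpus. The endpoint URL, the SWRC/Dublin Core vocabulary, and the property names are assumptions based on common usage of the Dog Food corpus; they may differ from the actual deployment, and the historical endpoint may no longer be online.

```python
# Hedged sketch: endpoint, ontology prefixes, and property names are assumed.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://data.semanticweb.org/sparql")  # assumed endpoint
sparql.setQuery("""
    PREFIX swrc: <http://swrc.ontoware.org/ontology#>
    PREFIX dc:   <http://purl.org/dc/elements/1.1/>
    SELECT ?year (COUNT(?paper) AS ?papers)
    WHERE {
        ?paper a swrc:InProceedings ;
               dc:subject "ontology" ;
               swrc:year  ?year .
    }
    GROUP BY ?year
    ORDER BY ?year
""")
sparql.setReturnFormat(JSON)

# Print how many papers tagged with a given keyword appear per year.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["year"]["value"], row["papers"]["value"])
```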

    Training and hackathon on building biodiversity knowledge graphs

    Knowledge graphs have the potential to unite disconnected digitized biodiversity data, and there are a number of efforts underway to build biodiversity knowledge graphs. More generally, the recent popularity of knowledge graphs, driven in part by the advent and success of the Google Knowledge Graph, has breathed life into the ongoing development of semantic web infrastructure and prototypes in the biodiversity informatics community. We describe a one-week training event and hackathon that focused on applying three specific knowledge graph technologies (the Neptune graph database, Metaphactory, and Wikidata) to a diverse set of biodiversity use cases. We give an overview of the training, the projects that were advanced throughout the week, and the critical discussions that emerged. We believe that the main barriers towards adoption of biodiversity knowledge graphs are the lack of understanding of knowledge graphs and the lack of adoption of shared unique identifiers. Furthermore, we believe an important advancement in the outlook of knowledge graph development is the emergence of Wikidata as an identifier broker and as a scoping tool. To remedy the current barriers towards biodiversity knowledge graph development, we recommend continued discussions at workshops and conferences, which we expect to increase awareness and adoption of knowledge graph technologies.
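
    A minimal sketch of the "Wikidata as identifier broker" idea: resolving a taxon name to its Wikidata item via the public query service. The query uses property P225 (taxon name); the example species is arbitrary, and this is an illustration rather than a description of any hackathon project.

```python
# Minimal sketch: look up the Wikidata item whose taxon name (P225)
# matches a given scientific name, using the public query service.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                       agent="biodiversity-kg-example/0.1")
sparql.setQuery("""
    SELECT ?item ?itemLabel WHERE {
        ?item wdt:P225 "Puma concolor" .
        SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
```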

    An extensible natural-language query interface to the DBpedia Triple-store

    DBpedia is a triple-based binary-relational database which contains 3 billion (and counting) facts derived from Wikipedia. Ideally, individuals should be able to access semantic web triple-store data through natural-language queries. Several attempts have been made to create natural-language (NL) query interfaces to DBpedia. However, no one has yet built a wide-coverage natural-language query processor for DBpedia. DBpedia does not currently encode contextual data representing the time, location or other properties of binary relationships. This means that NL queries cannot contain prepositional phrases such as the phrase "in 2004" in the query "what film was directed by Clint Eastwood in 2004". Existing NL query interfaces to DBpedia cannot handle prepositional phrases; nor can they be extended to do so when used with triple-stores other than DBpedia which can accommodate contextual data. In this thesis, we investigate an alternative approach to querying DBpedia in which NL queries are treated as expressions of the lambda calculus which are evaluated directly with respect to the triple-store using a compositional and extensible denotational semantics of English.
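
    To illustrate the lambda-calculus reading of queries described above, the hedged sketch below (not the thesis implementation; all names and facts are invented) treats each phrase as a function over a set of reified facts, so that phrases compose by set intersection and "by Clint Eastwood" and "in 2004" are handled uniformly as constraints on events.

```python
# Hedged sketch: invented facts and property names, not the thesis semantics.
facts = {
    ("ev1", "type", "directing"), ("ev1", "agent", "clint_eastwood"),
    ("ev1", "object", "million_dollar_baby"), ("ev1", "year", "2004"),
}

# Each phrase denotes a function from the fact set to a set of events,
# mirroring how lambda-calculus terms are evaluated against the store.
directed = lambda kb: {e for (e, p, v) in kb if p == "type" and v == "directing"}
by       = lambda who: lambda kb: {e for (e, p, v) in kb if p == "agent" and v == who}
in_year  = lambda y:   lambda kb: {e for (e, p, v) in kb if p == "year" and v == y}

# "what film was directed by Clint Eastwood in 2004"
events = directed(facts) & by("clint_eastwood")(facts) & in_year("2004")(facts)
answer = {v for (e, p, v) in facts if p == "object" and e in events}
print(answer)  # {'million_dollar_baby'}
```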

    The Application of Advanced Knowledge Technologies for Emergency Response

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties. Making sense of the current state of an emergency and of the response to it is vital if appropriate decisions are to be made. This task involves the acquisition, interpretation and management of information. In this paper we present an integrated system that applies recent ideas and technologies from the fields of Artificial Intelligence and semantic web research to support sense- and decision-making at the tactical response level, and demonstrate it with reference to a hypothetical large-scale emergency scenario. We offer no end-user evaluation of this system; rather, we intend that it should serve as a visionary demonstration of the potential of these technologies for emergency response.

    Workset Creation for Scholarly Analysis: Recommendations and Prototyping Project Reports

    This document assembles and describes the outcomes of the four prototyping projects undertaken as part of the Workset Creation for Scholarly Analysis (WCSA) research project (2013–2015). Each prototyping project team provided its own final report. These reports are assembled together and included in this document. Based on the totality of results reported, the WCSA project team also provides a set of overarching recommendations for HTRC implementation and adoption of research conducted by the prototyping project teams. The work described here was made possible through the generous support of The Andrew W. Mellon Foundation (Grant Ref # 21300666).
