75 research outputs found

    On the emergent Semantic Web and overlooked issues

    The emergent Semantic Web, despite being in its infancy, has already received a lot of attention from academia and industry. This has resulted in an abundance of prototype systems and discussion, most of which is centred around the underlying infrastructure. However, when we critically review the work done to date, we realise that there is little discussion with respect to the vision of the Semantic Web. In particular, there is an observed dearth of discussion on how to deliver knowledge sharing in an environment such as the Semantic Web in an effective and efficient manner. There are many overlooked issues, ranging from agents and trust to hidden assumptions made with respect to knowledge representation and robust reasoning in a distributed environment. These issues could potentially hinder further development if not considered at the early stages of designing Semantic Web systems. In this perspectives paper, we aim to help engineers and practitioners of the Semantic Web by raising awareness of these issues.

    Data linkage for querying heterogeneous databases

    Extracting Knowledge Bases from table-structured Web resources applied to the semantic-based requirements engineering methodology softwiki

    Project carried out through a mobility programme at UniversitÀt Leipzig, FakultÀt fĂŒr Mathematik und Informatik, Institut fĂŒr Informatik, Betriebliche Informationssysteme. Over the last years, the use of the Internet has evolved drastically from merely consulting content to publishing, sharing and modifying it, turning the Internet into a social net in which the possibilities to collaborate and communicate grow bigger every day. A good example is Wiki systems, which are collaborative, content-focused platforms in which the work of a community is the key to good performance. Another of the biggest web technology developments of the Internet nowadays is the so-called Semantic Web, a Web in which every piece of data has its context clearly specified and machines are able to understand it. The OntoWiki project merges both Semantic Web and Wiki technology, enabling the definition, modification and visualization of agile, distributed knowledge engineering scenarios. Profiting from the complex extension system of OntoWiki, the SoftWiki platform was born. Thanks to this tool and the associated Agile Requirements Engineering methodology, potentially very large and spatially separate stakeholder groups are able to gather, semantically enrich, classify and aggregate software requirements in an easy manner.

    Originally created from the desire to import non-semantic requirement data from the Google Code Issues platform into SoftWiki, the CSVLoad extension for OntoWiki enables importing plain data out of CSV table files into OntoWiki with the help of an administrator-defined RDF semantic template, defined with a modified subset of the Turtle (N3) language with support for input and mapping values. The use of CSVLoad and the already defined Google Code Issues template makes importing the requirements of a project hosted on Google Code into SoftWiki (in other words, into a SWORE ontology) very easy.

    Some platforms permit exporting only a part (or in some cases none) of their information in standard formats like CSV or RDF. Instead, they just show their data in HTML documents, which makes creating general, effective plain-to-semantic importing tools an extremely difficult (and in some cases impossible) task, forcing developers to build custom-made tools. The Gcode extension is a tool specifically built to extract additional requirements information from the Google Code Issues platform HTML code and, together with the CSVLoad tool, it turns importing all the requirements information from Google Code Issues into SoftWiki into an easy, automatic process. By comparing both extensions, their input data and features, the advantages of using structured, view-independent data over view-representation-embedded data (e.g. data in an HTML document) become clear. But this data needs a next step, semantic mark-up, so that computers are able to know the context of the information in an expandable, flexible environment.
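
    To make the plain-to-semantic step concrete, the following is a minimal Python sketch of the idea behind CSVLoad: a column-to-property template drives the conversion of CSV rows into RDF triples. It is a sketch under stated assumptions, not CSVLoad's actual implementation: the namespace, column names and Requirement class are hypothetical, and the dictionary stands in for the Turtle-based semantic template an administrator would define.

        # Minimal CSV-to-RDF import sketch in the spirit of CSVLoad, using rdflib.
        # The namespace, column names and property mapping are illustrative
        # assumptions, not SoftWiki's actual SWORE vocabulary.
        import csv
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/requirements/")  # hypothetical namespace

        # Stands in for the administrator-defined semantic template.
        TEMPLATE = {
            "summary": EX.summary,
            "status": EX.status,
            "priority": EX.priority,
        }

        def import_csv(path):
            g = Graph()
            g.bind("ex", EX)
            with open(path, newline="", encoding="utf-8") as f:
                for row in csv.DictReader(f):
                    # Mint one subject per CSV row (here keyed by an 'id' column).
                    subject = EX["issue-" + row["id"]]
                    g.add((subject, RDF.type, EX.Requirement))
                    for column, prop in TEMPLATE.items():
                        if row.get(column):
                            g.add((subject, prop, Literal(row[column])))
            return g

        if __name__ == "__main__":
            print(import_csv("issues.csv").serialize(format="turtle"))

    Run against a CSV export with id, summary, status and priority columns, this yields Turtle that a semantic wiki can consume, which is exactly the structured, view-independent representation the abstract contrasts with HTML scraping.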

    Automated and foundational verification of low-level programs

    Formal verification is a promising technique to ensure the reliability of low-level programs like operating systems and hypervisors, since it can show the absence of whole classes of bugs and prevent critical vulnerabilities. However, to realize the full potential of formal verification for real-world low-level programs, one has to overcome several challenges, including: (1) dealing with the complexities of realistic models of real-world programming languages; (2) ensuring the trustworthiness of the verification, ideally by providing foundational proofs (i.e., proofs that can be checked by a general-purpose proof assistant); and (3) minimizing the manual effort required for verification by providing a high degree of automation. This dissertation presents multiple projects that advance formal verification along these three axes: RefinedC provides the first approach for verifying C code that combines foundational proofs with a high degree of automation via a novel refinement and ownership type system. Islaris shows how to scale verification of assembly code to realistic models of modern instruction set architectures, in particular Armv8-A and RISC-V. DimSum develops a decentralized approach for reasoning about programs that consist of components written in multiple different languages (e.g., assembly and C), as is common for low-level programs. RefinedC and Islaris rest on Lithium, a novel proof engine for separation logic that combines automation with foundational proofs.

    This research was supported in part by a Google PhD Fellowship, in part by awards from Android Security's ASPIRE program and from Google Research, and in part by a European Research Council (ERC) Consolidator Grant for the project "RustBelt", funded under the European Union's Horizon 2020 Framework Programme (grant agreement no. 683289).
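
    For flavour, the following is a textbook separation-logic triple of the kind such tools establish; this is standard notation, not RefinedC's concrete annotation syntax. The points-to assertion p ↩ v expresses exclusive ownership of the memory cell at p holding value v, which is what enables modular reasoning about heap-manipulating C code.

        % A standard separation-logic specification for an in-place increment:
        % the precondition claims ownership of the cell at p with contents v,
        % and the postcondition returns that ownership with the updated value.
        % (A fully precise C-level spec would also require v < INT_MAX to
        % rule out signed overflow.)
        \[ \{\, p \mapsto v \,\} \quad \texttt{*p = *p + 1;} \quad \{\, p \mapsto v + 1 \,\} \]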

    Standpoint Logic: A Logic for Handling Semantic Variability, with Applications to Forestry Information

    It is widely accepted that most natural language expressions do not have precise, universally agreed definitions that fix their meanings. Except in the case of certain technical terminology, humans use terms in a variety of ways that are adapted to different contexts and perspectives. Hence, even when conversation participants share the same vocabulary and agree on fundamental taxonomic relationships (such as subsumption and mutual exclusivity), their views on the specific meaning of terms may differ significantly. Moreover, even individuals themselves may not hold permanent points of view, but rather adopt different semantics depending on the particular features of the situation and what they wish to communicate.

    In this thesis, we analyse logical and representational aspects of the semantic variability of natural language terms. In particular, we aim to provide a formal language adequate for reasoning in settings where different agents may adopt particular standpoints or perspectives, thereby narrowing the semantic variability of vague language predicates in different ways. For that purpose, we present standpoint logic, a framework for interpreting languages in the presence of semantic variability. We build on supervaluationist accounts of vagueness, which explain linguistic indeterminacy in terms of a collection of possible interpretations of the terms of the language (precisifications). This is extended by adding the notion of standpoint, which intuitively corresponds to a particular point of view on how to interpret vague terminology, and may be taken by a person or institution in a relevant context. A standpoint is modelled by a set of precisifications compatible with that point of view and does not need to be fully precise. In this way, standpoint logic allows one to articulate fine-grained and structured stipulations of the varieties of interpretation that can be given to a vague concept or a set of related concepts, and also provides means to express relationships between different systems of interpretation.

    After the specification of precisifications and standpoints and the consideration of the relevant notions of truth and validity, a multi-modal logic language for describing standpoints is presented. The language includes a modal operator for each standpoint, such that □_s φ means that a proposition φ is unequivocally true according to the standpoint s, i.e. φ is true at all precisifications compatible with s. We provide the logic with a Kripke semantics and examine the characteristics of its intended models. Furthermore, we prove the soundness, completeness and decidability of standpoint logic with an underlying propositional language, and show that its satisfiability problem is NP-complete. We subsequently illustrate how this language can be used to represent logical properties and connections between alternative partial models of a domain and different accounts of the semantics of terms.

    As a proof of concept, we explore the application of our formal framework to the domain of forestry and, in particular, focus on the semantic variability of 'forest'. In this scenario, the problems arising from the assignment of different meanings have been repeatedly reported in the literature, and they are especially relevant in the context of the unprecedented scale of publicly available geographic data, where information and databases, even when ostensibly linked to ontologies, may present substantial semantic variation, which obstructs interoperability and confounds knowledge exchange.
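
    In symbols, the semantic clause the abstract describes can be written as follows, where Π is the set of precisifications and σ(s) ⊆ Π is the set of precisifications compatible with standpoint s (the notation is assumed here for illustration; the thesis may use different symbols):

        % Truth of a standpoint-boxed formula: \Box_s \phi holds at a
        % precisification \pi iff \phi holds at every precisification
        % compatible with the standpoint s.
        \[ \mathcal{M}, \pi \models \Box_{s}\, \phi
           \quad\text{iff}\quad
           \forall \pi' \in \sigma(s) :\ \mathcal{M}, \pi' \models \phi \]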

    30th International Conference on Information Modelling and Knowledge Bases

    Information modelling is becoming a more and more important topic for researchers, designers, and users of information systems. The amount and complexity of information itself, the number of abstraction levels of information, and the size of databases and knowledge bases are continuously growing. Conceptual modelling is one of the sub-areas of information modelling. The aim of this conference is to bring together experts from different areas of computer science and other disciplines who have a common interest in understanding and solving problems of information modelling and knowledge bases, as well as in applying the results of research to practice. We also aim to recognize and study new areas of modelling and knowledge bases to which more attention should be paid. Therefore philosophy and logic, cognitive science, knowledge management, linguistics and management science are relevant areas, too. The conference will have three categories of presentations: full papers, short papers and position papers.

    Technology Directions for the 21st Century

    The Office of Space Communications (OSC) is tasked by NASA to conduct a planning process to meet NASA's science mission and other communications and data processing requirements. A set of technology trend studies was undertaken by Science Applications International Corporation (SAIC) for OSC to identify quantitative data that can be used to predict the performance of electronic equipment in the future and to assist in the planning process. Only commercially available, off-the-shelf technology was included. For each technology area considered, the current state of the technology is discussed, future applications that could benefit from use of the technology are identified, and likely future developments of the technology are described. The impact of each technology area on NASA operations is presented together with a discussion of the feasibility and risk associated with its development. An approximate timeline is given for the next 15 to 25 years to indicate the anticipated evolution of capabilities within each of the technology areas considered. This volume contains four chapters: one each on technology trends for database systems, computer software, neural and fuzzy systems, and artificial intelligence. The principal study results are summarized at the beginning of each chapter.
    • 

    corecore