130 research outputs found

    Uncertainty in Knowledge Provenance

    Knowledge Provenance (KP) is proposed to address the problem of how to determine the validity and origin of information/knowledge on the web by modeling information sources and dependencies, as well as trust structures. Four levels of KP are introduced: Static, Dynamic, Uncertain, and Judgmental. In order to give a formal and explicit specification of the fundamental concepts of KP, a static KP ontology is defined in this paper. With web and telecommunication technologies making information easy to produce and disseminate, knowledge/information validity becomes a crucial issue. Questions to answer include: Can this information be trusted? Can its creator be trusted? What does it depend on to be true? The proposed approach is used to determine the validity of uncertain information.
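
    The static level described here suggests a simple recursive reading: an assertion is taken as valid only if its creator is trusted and every assertion it depends on is valid in turn. The Python sketch below illustrates that reading under assumed names; Proposition, is_valid, and trusted_creators are illustrative, not terms from the KP ontology.

        # Illustrative sketch of a static validity check over sources and dependencies.
        # All names here are assumptions for illustration, not the paper's ontology terms.
        from dataclasses import dataclass, field

        @dataclass
        class Proposition:
            text: str
            creator: str
            depends_on: list = field(default_factory=list)  # propositions this one relies on

        def is_valid(prop, trusted_creators):
            """Valid only if the creator is trusted and all dependencies are themselves valid."""
            if prop.creator not in trusted_creators:
                return False
            return all(is_valid(dep, trusted_creators) for dep in prop.depends_on)

        source = Proposition("Sensor A is calibrated", creator="lab.example.org")
        claim = Proposition("Reading R is accurate", creator="alice", depends_on=[source])
        print(is_valid(claim, trusted_creators={"alice", "lab.example.org"}))  # True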

    Knowledge, provenance and psychological explanation

    Analytic theories of knowledge have traditionally maintained that the provenance of a true belief is critically important to deciding whether it is knowledge. However, a comparably widespread view is that it is our beliefs alone, regardless of their (potentially dubious) provenance, which feature in psychological explanation, including the explanation of action: thus, that knowledge itself and as such is irrelevant in psychological explanation. The paper gives initial reasons why the ‘beliefs alone’ view of explanation should be resisted: arguments deriving ultimately from the Meno indicate that the provenance of a true belief may be relevant to the explanation of action. However, closer scrutiny of these arguments shows that they are incapable of according provenance anything like as central a role in action explanation as provenance has traditionally been given in the theory of knowledge. A consideration of the history of science suggests, in any case, that all knowledge has a compromised provenance if one looks back any significant distance. It is concluded that the importance of the provenance of our beliefs is something that has been seriously over-emphasised in epistemology.

    Information provenance for open distributed collaborative system

    In autonomously managed distributed systems for collaboration, provenance can facilitate reuse of information that is interchanged, enable repetition of successful experiments, and provide evidence for trust mechanisms that certain information existed at a certain period during the collaboration. In this paper, we propose a domain-independent information provenance architecture for open collaborative distributed systems. The proposed system uses XML for interchanging information and RDF to track information provenance. The use of XML and RDF also ensures that information is universally acceptable, even among heterogeneous nodes. Our proposed information provenance model can work with any operating system or workflow.
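
    As a rough illustration of the XML-plus-RDF approach described above, the sketch below records in RDF who exchanged a given XML document and when, so that collaborating nodes can later show that the information existed at a certain point in the collaboration. The namespace, property names, and the choice of rdflib are assumptions for illustration, not the paper's schema.

        # Hypothetical provenance record for an interchanged XML document.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF, XSD

        EX = Namespace("http://collab.example.org/provenance#")  # assumed vocabulary

        g = Graph()
        g.bind("ex", EX)

        doc = URIRef("http://collab.example.org/docs/experiment-42.xml")
        g.add((doc, RDF.type, EX.InterchangedDocument))
        g.add((doc, EX.sentBy, URIRef("http://nodes.example.org/lab-a")))
        g.add((doc, EX.receivedBy, URIRef("http://nodes.example.org/lab-b")))
        g.add((doc, EX.sentAt, Literal("2009-05-01T10:30:00", datatype=XSD.dateTime)))
        g.add((doc, EX.contentDigest, Literal("sha256:ab12...")))  # evidence the content existed as sent

        print(g.serialize(format="turtle"))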

    A Conceptual Trust Framework for Semantic Web Agents


    Structuring and extracting knowledge for the support of hypothesis generation in molecular biology

    Background: Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The requirement for automated support is exemplified by the difficulty of considering all relevant facts that are contained in the millions of documents available from PubMed. The Semantic Web provides tools for sharing prior knowledge, while information retrieval and information extraction techniques enable its extraction from literature. Their combination makes prior knowledge available for computational analysis and inference. While some tools provide complete solutions that limit control over the modeling and extraction processes, we seek a methodology that gives the experimenter control over these critical processes.

    Results: We describe progress towards automated support for the generation of biomolecular hypotheses. Semantic Web technologies are used to structure and store knowledge, while a workflow extracts knowledge from text. We designed minimal proto-ontologies in OWL for capturing different aspects of a text mining experiment: the biological hypothesis, text and documents, text mining, and workflow provenance. The models fit a methodology that allows focus on the requirements of a single experiment while supporting reuse and posterior analysis of extracted knowledge from multiple experiments. Our workflow is composed of services from the 'Adaptive Information Disclosure Application' (AIDA) toolkit as well as a few others. The output is a semantic model with putative biological relations, with each relation linked to the corresponding evidence.

    Conclusion: We demonstrated a 'do-it-yourself' approach for structuring and extracting knowledge in the context of experimental research on biomolecular mechanisms. The methodology can be used to bootstrap the construction of semantically rich biological models using the results of knowledge extraction processes. Models specific to particular experiments can be constructed that, in turn, link with other semantic models, creating a web of knowledge that spans experiments. Mapping mechanisms can link to other knowledge resources such as OBO ontologies or SKOS vocabularies. AIDA Web Services can be used to design personalized knowledge extraction procedures. In our example experiment, we found three proteins (NF-kappaB, p21, and Bax) potentially playing a role in the interplay between nutrients and epigenetic gene regulation.
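
    The abstract above describes putative relations that each carry a link to their textual evidence. The sketch below shows one way such a relation could be stored in RDF; the class and property names, and the specific triple layout, are invented for illustration and are not the AIDA or proto-ontology vocabulary.

        # Illustrative only: a mined relation plus a pointer back to its evidence.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF, RDFS

        BIO = Namespace("http://example.org/proto/bio#")      # assumed proto-ontology terms
        PROV = Namespace("http://example.org/proto/prov#")    # assumed provenance terms

        g = Graph()
        g.bind("bio", BIO)
        g.bind("prov", PROV)

        relation = URIRef("http://example.org/relations/rel-001")
        g.add((relation, RDF.type, BIO.PutativeRelation))
        g.add((relation, BIO.subject, Literal("NF-kappaB")))
        g.add((relation, BIO.predicate, Literal("regulates")))
        g.add((relation, BIO.object, Literal("p21")))

        # Every relation keeps a link back to the sentence it was extracted from.
        evidence = URIRef("http://example.org/evidence/doc-001-sentence-7")
        g.add((relation, PROV.hasEvidence, evidence))
        g.add((evidence, RDFS.comment, Literal("Hypothetical sentence from a mined abstract")))

        print(g.serialize(format="turtle"))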

    From Data to Knowledge Graphs: A Multi-Layered Method to Model User's Visual Analytics Workflow for Analytical Purposes

    The importance of knowledge generation drives much of Visual Analytics (VA). User tracking and behavior graphs have shown the value of understanding users' knowledge generation while performing VA workflows. Work on theoretical models, ontologies, and provenance analysis has described in detail the means to structure and understand the connection between knowledge generation and VA workflows. Yet two concepts are typically intermixed: the temporal aspect, which indicates sequences of events, and the atemporal aspect, which indicates the workflow state space. Works that do separate these concepts do not discuss how to analyze the user's recorded knowledge-gathering process against the VA workflow itself. This paper presents the Visual Analytic Knowledge Graph (VAKG), a conceptual framework that generalizes existing knowledge models and ontologies by focusing on how humans relate to computer processes temporally and how this relates to the workflow's state space. Our proposal structures this relationship as a 4-way temporal knowledge graph, with specific emphasis on modeling the human and computer aspects of VA as separate but interconnected graphs, for analytical purposes among others. We compare VAKG with relevant literature to show that VAKG can serve VA applications both as a provenance model and as a state space graph, enabling analytics of domain-specific processes, usage patterns, and users' knowledge-gain performance. We also interviewed two domain experts to check, in the wild, whether real practice and our contributions are aligned. Comment: 9 pgs, submitted to VIS 202
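
    One way to picture the separation VAKG draws between the temporal event sequence and the atemporal state space is as two small graphs plus cross-links, as in the sketch below. The node names, the use of networkx, and the coverage metric at the end are illustrative assumptions, not the paper's formalism.

        # Rough sketch: temporal user events vs. atemporal workflow state space.
        import networkx as nx

        workflow = nx.DiGraph()   # atemporal: reachable states of the VA application
        workflow.add_edge("overview", "filtered")
        workflow.add_edge("filtered", "detail-view")

        human = nx.DiGraph()      # temporal: what the user actually did, in order
        human.add_edge(("t0", "open dataset"), ("t1", "apply filter"))
        human.add_edge(("t1", "apply filter"), ("t2", "inspect outlier"))

        # Cross-links: each user event points at the workflow state it produced.
        reaches = {
            ("t0", "open dataset"): "overview",
            ("t1", "apply filter"): "filtered",
            ("t2", "inspect outlier"): "detail-view",
        }

        # Example analysis: how much of the state space did this session visit?
        visited = set(reaches.values())
        print(len(visited) / workflow.number_of_nodes())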

    Food for Thought: The St. Paul Farmers' Market's Contribution to a Livable City


    From Science to e-Science to Semantic e-Science: A Heliophysics Case Study

    The past few years have witnessed unparalleled efforts to make scientific data web accessible. The Semantic Web has proven invaluable in this effort; however, much of the literature is devoted to system design, ontology creation, and the trials and tribulations of current technologies. In order to fully develop the nascent field of Semantic e-Science we must also evaluate systems in real-world settings. We describe a case study within the field of Heliophysics and provide a comparison of the evolutionary stages of data discovery, from manual to semantically enabled. We describe the socio-technical implications of moving toward automated and intelligent data discovery. In doing so, we highlight how this process enhances what is currently being done manually in various scientific disciplines. Our case study illustrates that Semantic e-Science is more than just semantic search. The integration of search with web services, relational databases, and other cyberinfrastructure is a central tenet of our case study and one that we believe has applicability as a generalized research area within Semantic e-Science. This case study illustrates a specific example of the benefits, and limitations, of semantically replicating data discovery. We show examples of significant reductions in time and effort enabled by Semantic e-Science; yet we argue that a "complete" solution requires integrating semantic search with other research areas such as data provenance and web services.
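
    The integration argued for above, semantic search feeding conventional web services and databases, can be sketched roughly as below. The SPARQL endpoint URL, the vocabulary, and the retrieval service are hypothetical placeholders, not systems from the case study.

        # Illustrative only: semantic discovery handing results to a plain data service.
        from SPARQLWrapper import SPARQLWrapper, JSON
        import requests

        sparql = SPARQLWrapper("http://example.org/heliophysics/sparql")  # hypothetical endpoint
        sparql.setQuery("""
            PREFIX ex: <http://example.org/helio#>
            SELECT ?instrument ?dataset WHERE {
                ?instrument a ex:Magnetometer ;
                            ex:produces ?dataset .
            }
        """)
        sparql.setReturnFormat(JSON)
        results = sparql.query().convert()

        # Each semantically discovered dataset is fetched through an ordinary web service.
        for row in results["results"]["bindings"]:
            dataset_uri = row["dataset"]["value"]
            resp = requests.get("http://example.org/data-service", params={"dataset": dataset_uri})
            print(dataset_uri, resp.status_code)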

    Capturing, Harmonizing and Delivering Data and Quality Provenance

    Satellite remote sensing data have proven to be vital for various scientific and application needs. However, the usability of these data depends not only on the data values but also on the ability of data users to assess and understand the quality of these data for various applications and for comparing or jointly using data from different sensors and models. In this paper, we describe some aspects of capturing, harmonizing and delivering this information to users in the framework of distributed web-based data tools.
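
    As a concrete, though hypothetical, illustration of what a harmonized quality/provenance record delivered alongside the data values might contain, consider the sketch below; the field names and example values are assumptions, not a schema from the paper.

        # Minimal sketch of a harmonized quality/provenance record served with the data.
        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class QualityProvenance:
            variable: str           # geophysical variable the values describe
            sensor: str             # instrument that observed it
            processing_level: str   # product level, e.g. "L2" or "L3"
            algorithm_version: str  # version of the processing algorithm
            quality_flag: str       # harmonized quality summary ("good", "suspect", ...)
            lineage: list           # upstream inputs the product was derived from

        record = QualityProvenance(
            variable="aerosol_optical_depth",
            sensor="MODIS",
            processing_level="L3",
            algorithm_version="6.1",
            quality_flag="good",
            lineage=["L2 granules", "cloud mask"],
        )

        print(json.dumps(asdict(record), indent=2))  # delivered together with the data values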