
    An information retrieval approach to ontology mapping

    In this paper, we present a heuristic mapping method and a prototype mapping system that support the process of semi-automatic ontology mapping for the purpose of improving semantic interoperability in heterogeneous systems. The approach is based on the idea of semantic enrichment, i.e., using instance information to enrich the original ontology and to calculate similarities between concepts in two ontologies. The functional settings for the mapping system are discussed and the evaluation of the prototype implementation of the approach is reported.
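
    The enrichment step lends itself to a compact illustration. The sketch below is not the authors' system; it is one plausible Python reading of "enrich each concept with its instance text, then compare", with invented concept names, instance strings and threshold.

    ```python
    # Sketch of instance-based concept similarity for ontology mapping.
    # All concept names, instance texts and the threshold are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Each concept is "semantically enriched" with text drawn from its instances.
    onto_a = {
        "Author":  "john smith wrote the semantic web primer",
        "Journal": "journal of web semantics published by elsevier",
    }
    onto_b = {
        "Writer":     "j smith is the author of the semantic web primer",
        "Periodical": "web semantics journal elsevier issues quarterly",
    }

    names_a, names_b = list(onto_a), list(onto_b)
    matrix = TfidfVectorizer().fit_transform(list(onto_a.values()) + list(onto_b.values()))
    sim = cosine_similarity(matrix[: len(names_a)], matrix[len(names_a):])

    THRESHOLD = 0.3  # tuning parameter; the paper's functional settings may differ
    for i, ca in enumerate(names_a):
        for j, cb in enumerate(names_b):
            if sim[i, j] >= THRESHOLD:
                print(f"candidate mapping: {ca} <-> {cb} (score {sim[i, j]:.2f})")
    ```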

    The 3rd DBCLS BioHackathon: improving life science data integration with Semantic Web technologies

    BACKGROUND: BioHackathon 2010 was the third in a series of meetings hosted by the Database Center for Life Science (DBCLS) in Tokyo, Japan. The overall goal of the BioHackathon series is to improve the quality and accessibility of life science research data on the Web by bringing together representatives from public databases, analytical tool providers, and cyber-infrastructure researchers to jointly tackle important challenges in the area of in silico biological research. RESULTS: The theme of BioHackathon 2010 was the 'Semantic Web', and all attendees gathered with the shared goal of producing Semantic Web data from their respective resources, and/or consuming or interacting with those data using their tools and interfaces. We discussed topics including guidelines for designing semantic data and the interoperability of resources, and consequently developed tools and clients for analysis and visualization. CONCLUSION: We provide a meeting report from BioHackathon 2010, in which we describe the discussions, decisions, and breakthroughs made as we moved towards compliance with Semantic Web technologies - from source provider, through middleware, to the end consumer.
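
    To give a flavour of what consuming such Semantic Web data can look like, here is a small hypothetical SPARQL query issued from Python. The endpoint URL is a placeholder and the UniProt-style vocabulary is purely indicative; the report itself does not prescribe this query.

    ```python
    # Hypothetical example of consuming life-science RDF over SPARQL;
    # the endpoint URL is a placeholder and the query is illustrative only.
    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("https://sparql.example.org/biodata")  # placeholder
    endpoint.setQuery("""
        PREFIX up: <http://purl.uniprot.org/core/>
        SELECT ?protein ?name WHERE {
            ?protein a up:Protein ;
                     up:recommendedName/up:fullName ?name .
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["protein"]["value"], "-", row["name"]["value"])
    ```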

    Collaborative hybrid agent provision of learner needs using ontology based semantic technology

    This paper describes the use of Intelligent Agents and Ontologies to implement knowledge navigation and learner choice when interacting with complex information locations. The paper is in two parts: the first looks at how Agent Based Semantic Technology can be used to give users a more personalised experience as individuals; the second generalises this technology to allow users to work with agents in hybrid group scenarios. In the context of university learners, the paper outlines how we employ an Ontology of Student Characteristics to personalise information retrieval specifically suited to an individual’s needs. Choice is not a simple “show me your hand and make me a match” but a deliberative artificial intelligence (AI) that uses an ontologically informed agent society to consider the weighted solution paths before choosing the most appropriate. The aim is to enrich the student experience and significantly re-route the student’s journey. The paper uses knowledge-level interoperation of agents to personalise the learning space of students and deliver to them the information and knowledge that suit them best, in the presentation/format most appropriate for their needs. The paper then generalises this Semantic Technology Framework using shared vocabulary libraries that enable individuals to work in groups with other agents, which might be other people or AIs. The task undertaken is a formal assessment, but the interaction mode is one of informal collaboration. Pedagogically, this addresses fairness between students: each can be guaranteed the same experience (as provided by the same set of Agents) while still gaining an individual mark. This is achieved by forming a hybrid group of a learner and AI Software Agents. Different agent architectures are discussed and a worked example presented. The work thus aims both at matching the student’s needs and at allowing them to work in an Agent Based Synthetic Group, which in turn opens up new areas of potential collaborative technology.
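
    To make the "weighted solution paths" idea concrete, the toy sketch below scores candidate learning routes against a student profile. The characteristics, weights and routes are invented and merely stand in for what the paper's ontology and agent society would supply.

    ```python
    # Toy sketch of deliberative choice over weighted solution paths.
    # Characteristics, weights and routes are hypothetical placeholders.
    student = {"visual_learner": 0.8, "prefers_examples": 0.6, "reading_speed": 0.3}

    # Each candidate path records how well it serves each characteristic.
    paths = {
        "video_lecture_route":   {"visual_learner": 0.9, "prefers_examples": 0.4},
        "worked_examples_route": {"prefers_examples": 0.9, "reading_speed": 0.5},
        "textbook_route":        {"reading_speed": 0.9},
    }

    def weigh(path_profile, profile):
        """Weighted match between a solution path and the student profile."""
        return sum(profile.get(c, 0.0) * w for c, w in path_profile.items())

    best = max(paths, key=lambda p: weigh(paths[p], student))
    print("chosen path:", best)
    ```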

    Collaborative Semantic Content Management: an Ongoing Case Study for Imaging Applications

    This paper presents a collaborative solution for knowledge management, implemented as a semantic content management system (CMS) for knowledge sharing between users with different backgrounds. The CMS is enriched with semantic annotations, enabling content to be categorized, retrieved and published on the Web thanks to the Linked Open Data (LOD) principle, which enables the linking of data inside existing resources using a standardized URI mechanism. Annotations are made collaboratively, as a social process. Users with different backgrounds express their knowledge using structured natural language. The user knowledge is captured through an ontological approach and can be further transformed into RDF(S) classes and properties. Ontologies are at the heart of our CMS and naturally co-evolve with their communities of use to provide a new way of sharing knowledge inside the network. The ontology is modeled following the so-called DOGMA (Developing Ontology-Grounded Methods and Applications) paradigm, grounded in natural language. The approach will be demonstrated on a use case concerning the semantic annotation of anatomical data (e.g. medical images).
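
    A minimal sketch of the annotation-to-RDF(S) step, assuming a made-up imaging vocabulary rather than the project's actual DOGMA-derived terms:

    ```python
    # Minimal sketch: capturing a user's structured annotation of a medical
    # image as RDF(S) triples. The namespace and terms are hypothetical.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/imaging#")  # placeholder namespace
    g = Graph()
    g.bind("ex", EX)

    # "Image img-42 depicts the left femur", expressed as triples.
    g.add((EX.Femur, RDF.type, RDFS.Class))
    g.add((EX.depicts, RDF.type, RDF.Property))
    g.add((EX["img-42"], EX.depicts, EX.Femur))
    g.add((EX["img-42"], RDFS.label, Literal("left femur X-ray")))

    print(g.serialize(format="turtle"))
    ```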

    Towards an Ontology for Data-driven Discovery of New Materials

    Materials scientists and nano-technologists are struggling with the challenge of managing the large volumes of multivariate, multidimensional and mixed-media data sets generated by the experimental, characterisation, testing and post-processing steps associated with their search for new materials. In addition, they need to access large publicly available databases containing crystallographic structure data, thermodynamic data, phase stability data and ionic conduction data. Materials scientists are demanding data integration tools that enable them to search across these disparate databases and to correlate their experimental data with the public databases, in order to identify fertile new areas for searching. Systematic data integration and analysis tools are required to generate targeted experimental programs that reduce duplication of costly compound preparation, testing and characterisation. This paper presents MatOnto, an extensible ontology based on the DOLCE upper ontology, that aims to represent structured knowledge about materials, their structure and properties, and the processing steps involved in their composition and engineering. The primary aim of MatOnto is to provide a common, extensible model for the exchange, re-use and integration of materials science data and experimentation.
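
    As a rough illustration of the kind of model described, the fragment below hangs a few invented materials-science classes under a DOLCE-style upper class. It is not the actual MatOnto schema.

    ```python
    # Illustrative ontology fragment in the spirit of MatOnto; all class and
    # property names here are invented, and the real model is far larger.
    from rdflib import Graph, Namespace, RDF, RDFS, OWL

    DOLCE = Namespace("http://www.loa-cnr.it/ontologies/DOLCE-Lite.owl#")
    MAT = Namespace("http://example.org/matonto#")  # placeholder namespace

    g = Graph()
    g.bind("mat", MAT)
    for cls in (MAT.Material, MAT.CrystalStructure, MAT.ProcessingStep):
        g.add((cls, RDF.type, OWL.Class))

    # A material is treated as a physical object in the upper ontology.
    g.add((MAT.Material, RDFS.subClassOf, DOLCE["physical-object"]))
    g.add((MAT.hasStructure, RDF.type, OWL.ObjectProperty))
    g.add((MAT.hasStructure, RDFS.domain, MAT.Material))
    g.add((MAT.hasStructure, RDFS.range, MAT.CrystalStructure))

    print(g.serialize(format="turtle"))
    ```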

    Knowledge formalization in experience feedback processes: an ontology-based approach

    Because of the current trend towards integration and interoperability of industrial systems, their size and complexity continue to grow, making it more difficult to analyze, understand and solve the problems that arise in their organizations. Continuous improvement methodologies are powerful tools for understanding and solving problems, controlling the effects of changes and, finally, capitalizing knowledge about changes and improvements. These tools require suitably representing the knowledge relating to the system concerned. Consequently, knowledge management (KM) is an increasingly important source of competitive advantage for organizations. In particular, the capitalization and sharing of knowledge resulting from experience feedback play an essential role in the continuous improvement of industrial activities. In this paper, the contribution deals with semantic interoperability and concerns the structuring and formalization of an experience feedback (EF) process aimed at transforming information or understanding gained through experience into explicit knowledge. The reuse of such knowledge has proved to have a significant impact on achieving the missions of companies. However, the means of describing the knowledge objects of an experience generally remain informal. Based on an experience feedback process model and conceptual graphs, this paper takes a domain ontology as a framework for the clarification of explicit knowledge and know-how, the aim of which is to obtain lessons-learned descriptions that are significant, correct and applicable.
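
    A deliberately simple sketch of what a formalized experience record might carry; the fields are invented placeholders for concepts that the paper's domain ontology and conceptual graphs would define.

    ```python
    # Toy structure for an experience-feedback record; the fields are
    # hypothetical stand-ins for ontology-defined concepts, not the
    # paper's conceptual-graph formalism.
    from dataclasses import dataclass, field

    @dataclass
    class Experience:
        context: str          # the situation in which the problem occurred
        problem: str          # what went wrong
        analysis: str         # diagnosed cause
        solution: str         # corrective action taken
        lessons: list[str] = field(default_factory=list)  # reusable knowledge

    exp = Experience(
        context="assembly line 3, night shift",
        problem="recurring misalignment of part housing",
        analysis="fixture wear beyond tolerance",
        solution="replace fixture; add wear check to maintenance plan",
        lessons=["inspect fixtures every 500 cycles"],
    )
    print(exp.lessons)
    ```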

    Manufacturing systems interoperability in dynamic change environments

    The benefits of rapid (i.e. nearly real-time) data- and information-enabled decision making at all levels of a manufacturing enterprise are clearly documented: the ability to plan accurately, react quickly and even pre-empt situations can save industries billions of dollars in waste. As the pace of industry increases with automation and technology, so does the need for accurate data, information and knowledge. As the required pace of information collection, processing and exchange changes, so too do the challenges of achieving and maintaining interoperability as systems develop: this thesis focuses on the particular challenge of interoperability between systems defined in different timeframes, which may use very different terminology. The thesis is directed at improving the ability to assess the requirement for systems to interoperate, and their suitability to do so, as new systems emerge to support this need for change. A novel solution concept is proposed that assesses the requirement and suitability of systems for interoperability. The solution concept provides a mechanism for describing systems consistently and unambiguously, even if they are developed in different timeframes. Having resolved the issue of semantic consistency through time, the analysis of the systems against logical rules for system interoperability becomes possible. The solution concept uses a Core Concept ontology as the foundation for a multi-level heavyweight ontology. The multiple levels allow increasing specificity (to ensure accuracy), while the heavyweight (i.e. computer-interpretable) nature provides the semantic and logical rigour required. A detailed investigation has been conducted to test the solution concept in a suitably dynamic environment: manufacturing systems, and in particular the emerging field of Manufacturing Intelligence systems. A definitive definition of the Manufacturing Intelligence domain, constraining interoperability logic, and a multi-level domain ontology have been defined and used to successfully prove the solution concept. Using systems from different timeframes, the solution concept testing successfully identified systems that needed to interoperate and whether they were suitable for interoperation, and provided feedback on the reasons for unsuitability, which were validated as correct against real-world observations.
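
    The requirement-and-suitability assessment can be caricatured in a few lines. The toy rule below, with invented system descriptions, checks whether one system produces a core concept that another consumes and whether the two share an exchange format; the thesis's heavyweight ontology and logic are of course far more rigorous.

    ```python
    # Toy rendering of the interoperability assessment: invented system
    # descriptions mapped to shared core concepts, plus one simple rule.
    systems = {
        "SchedulerV1": {"produces": {"WorkOrder"}, "consumes": set(),
                        "formats": {"XML"}},
        "MES_2024":   {"produces": set(), "consumes": {"WorkOrder"},
                       "formats": {"JSON"}},
    }

    def check(a, b):
        """Assess requirement and suitability for a to interoperate with b."""
        shared = systems[a]["produces"] & systems[b]["consumes"]
        if not shared:
            return "no requirement to interoperate"
        if systems[a]["formats"] & systems[b]["formats"]:
            return f"required and suitable (shared concepts: {shared})"
        return f"required but unsuitable: no common format for {shared}"

    print(check("SchedulerV1", "MES_2024"))
    ```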

    Provenance explorer: Customized provenance views using semantic inferencing

    This paper presents Provenance Explorer, a secure provenance visualization tool designed to dynamically generate customized views of scientific data provenance that depend on the viewer's requirements and/or access privileges. Using RDF and graph visualizations, it enables scientists to view the data, states and events associated with a scientific workflow in order to understand the scientific methodology and validate the results. Initially, Provenance Explorer presents a simple, coarse-grained view of the scientific process or experiment. However, the GUI allows permitted users to expand links between nodes (input states, events and output states) to reveal more fine-grained information about particular sub-events and their inputs and outputs. Access control is implemented using Shibboleth to identify and authenticate users and XACML to define access control policies. The system also provides a platform for publishing scientific results: it enables users to select particular nodes within the visualized workflow and drag-and-drop them into an RDF package for publication or e-learning. The direct relationships between the individual components selected for such packages are inferred by the rule-inference engine.
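
    The final inference step has a simple core: when a user packages only selected nodes, a direct relationship between them can be derived by transitively closing the recorded step-wise links. The sketch below uses an invented vocabulary, not the tool's actual RDF schema.

    ```python
    # Sketch of inferring a direct relationship between selected nodes by
    # closing a step-wise provenance chain; vocabulary and data are invented.
    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/prov#")  # placeholder namespace
    g = Graph()
    # raw_data -> processed -> figure, each step recorded separately
    g.add((EX.processed, EX.derivedFrom, EX.raw_data))
    g.add((EX.figure, EX.derivedFrom, EX.processed))

    selected = {EX.figure, EX.raw_data}  # nodes dragged into the package

    # Materialize the transitive closure first, then add the inferred
    # direct link between the selected endpoints.
    reachable = list(g.transitive_objects(EX.figure, EX.derivedFrom))
    for node in reachable:
        if node in selected and node != EX.figure:
            g.add((EX.figure, EX.derivedFrom, node))

    print(g.serialize(format="turtle"))
    ```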

    Provenance Explorer: A Tool for Viewing Provenance Trails and Constructing Scientific Publication Packages
