
    Ontological Foundations for Geographic Information Science

    We propose as a UCGIS research priority the topic of “Ontological Foundations for Geographic Information.” Under this umbrella we unify several interrelated research subfields, each of which deals with different perspectives on geospatial ontologies and their roles in geographic information science. While each of these subfields could be addressed separately, we believe it is important to address ontological research in a unitary, systematic fashion, embracing conceptual issues concerning what would be required to establish an exhaustive ontology of the geospatial domain, issues relating to the choice of appropriate methods for formalizing ontologies, and considerations regarding the design of ontology-driven information systems. This integrated approach is necessary because there is a strong dependency between the methods used to specify an ontology and the conceptual richness, robustness and tractability of the ontology itself. Likewise, information system implementations are needed as testbeds for the usefulness of every aspect of an exhaustive ontology of the geospatial domain. None of the current UCGIS research priorities provides such an integrative perspective, and therefore the topic of “Ontological Foundations for Geographic Information Science” is unique.
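    To make the idea of a formalized geospatial ontology fragment concrete, the following is a minimal sketch in Python with rdflib; the namespace, class and property names are hypothetical illustrations, not taken from the UCGIS proposal.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

# Hypothetical namespace for a geospatial ontology fragment.
GEO = Namespace("http://example.org/geo#")

g = Graph()
g.bind("geo", GEO)

# A small taxonomy of geographic feature types.
g.add((GEO.GeographicFeature, RDF.type, OWL.Class))
g.add((GEO.Watercourse, RDFS.subClassOf, GEO.GeographicFeature))
g.add((GEO.Mountain, RDFS.subClassOf, GEO.GeographicFeature))

# A datatype property attaching elevation values to features.
g.add((GEO.elevation, RDF.type, OWL.DatatypeProperty))
g.add((GEO.elevation, RDFS.domain, GEO.GeographicFeature))
g.add((GEO.elevation, RDFS.range, XSD.double))

# Serialize the fragment as Turtle for inspection.
print(g.serialize(format="turtle"))
```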

    On a Java based implementation of ontology evolution processes based on Natural Language Processing

    An architecture described by Burzagli et al. (2010) can serve as a basis for the design of a Collective Knowledge Management System. The system can be used to exploit the strengths of collective intelligence and bridge the gap between two expressions of web intelligence, i.e., the Semantic Web and Web 2.0. A key component of the architecture is the Ontology Evolution Manager, made up of an Annotation Engine and a Feed Adapter, which interprets textual contributions that represent human intelligence (such as posts on social networking tools) using automatic learning techniques and inserts the knowledge contained therein into a structure described by an ontology. This opens up interesting scenarios for the collective knowledge management system, which could be used to provide up-to-date information describing a given domain of interest, to augment that information automatically, thus coping with information evolution, and to make it available for browsing and searching by an ontology-driven engine. This report describes a Java-based implementation of the Ontology Evolution Manager within the architecture outlined above.
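    The report's implementation is in Java; purely to illustrate the idea of ontology evolution from text (recognise entities in a contribution and insert them as individuals of ontology classes), here is a hedged Python/rdflib sketch. The namespace, the Restaurant class and the extract_entities stand-in are hypothetical and do not reflect the actual Annotation Engine.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

KB = Namespace("http://example.org/ckms#")  # hypothetical namespace
g = Graph()
g.bind("kb", KB)
g.add((KB.Restaurant, RDF.type, RDFS.Class))

def extract_entities(post: str):
    """Stand-in for the Annotation Engine: the real system uses NLP and
    automatic learning, not a keyword match."""
    if "restaurant" in post.lower():
        yield ("Restaurant", post)

def evolve_ontology(graph: Graph, post: str) -> None:
    # Insert each recognised entity as an individual of the matching class,
    # keeping the source text as a comment for traceability.
    for class_name, source_text in extract_entities(post):
        individual = KB[f"entity{len(graph)}"]
        graph.add((individual, RDF.type, KB[class_name]))
        graph.add((individual, RDFS.comment, Literal(source_text)))

evolve_ontology(g, "Great new restaurant just opened near the station!")
print(g.serialize(format="turtle"))
```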

    The Management of Debris Flow in Disaster Prevention using an Ontology-based Knowledge Management System

    In recent years, government, academia and business have applied different information technologies to disaster prevention, and diverse web sites have been developed. Although these web sites provide a large amount of data about disaster prevention, they are knowledge-poor in nature. Disaster prevention, however, is a knowledge-intensive task, and a well-designed knowledge management system can overcome this shortcoming. Ontology design, in turn, plays a key role in building a successful knowledge management system. In this paper, we introduce a three-stage life cycle for ontology design to support debris-flow disaster prevention services and propose a framework for an ontology-based knowledge management system built on the KAON API environment. In addition, by appealing to the technology of component reuse, the system is developed at lower cost, so knowledge workers can focus on the design of the ontology and knowledge objects. The objectives of the proposed system are to facilitate knowledge accumulation, reuse and dissemination for the management of disaster prevention. This work is expected to move the traditional disaster management of debris flow towards knowledge-driven decision support services.
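    The paper's system is built on the KAON API, whose calls are not described in the abstract; the sketch below therefore only illustrates the general idea of a debris-flow concept hierarchy that a knowledge management service can query, using Python with rdflib and invented class and comment text.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

DF = Namespace("http://example.org/debrisflow#")  # hypothetical namespace
g = Graph()
g.bind("df", DF)

# A toy slice of a disaster-prevention concept hierarchy.
g.add((DF.MitigationMeasure, RDF.type, RDFS.Class))
g.add((DF.CheckDam, RDFS.subClassOf, DF.MitigationMeasure))
g.add((DF.EarlyWarningSystem, RDFS.subClassOf, DF.MitigationMeasure))
g.add((DF.CheckDam, RDFS.comment,
       Literal("Structure that traps sediment in a debris-flow channel.")))

# A knowledge worker asks: which mitigation measures does the ontology know?
q = """
PREFIX df: <http://example.org/debrisflow#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?measure ?note WHERE {
    ?measure rdfs:subClassOf df:MitigationMeasure .
    OPTIONAL { ?measure rdfs:comment ?note . }
}
"""
for row in g.query(q):
    print(row.measure, row.note)
```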

    Genesis-DB: a database for autonomous laboratory systems

    Artificial intelligence (AI)-driven laboratory automation - combining robotic labware and autonomous software agents - is a powerful trend in modern biology. We developed Genesis-DB, a database system designed to support AI-driven autonomous laboratories by providing software agents access to large quantities of structured domain information. In addition, we present a new ontology for modeling data and metadata from autonomously performed yeast microchemostat cultivations in the framework of the Genesis robot scientist system. We show an example of how Genesis-DB enables the research life cycle by modeling yeast gene regulation, guiding future hypothesis generation and the design of experiments. Genesis-DB supports AI-driven discovery through automated reasoning, and its design is portable, generic, and easily extensible to other AI-driven molecular biology laboratory data and beyond.
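    The abstract does not give the Genesis-DB API, so the following is only a hedged Python/rdflib sketch of the underlying idea: cultivation metadata stored as structured triples that a software agent can query. The namespace, class and property names (MicrochemostatCultivation, strain, dilutionRate, temperatureC) are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

LAB = Namespace("http://example.org/genesis#")  # hypothetical namespace
g = Graph()
g.bind("lab", LAB)

# Record one autonomously performed microchemostat cultivation.
run = LAB.cultivation_0001
g.add((run, RDF.type, LAB.MicrochemostatCultivation))
g.add((run, LAB.strain, Literal("BY4741")))
g.add((run, LAB.dilutionRate, Literal(0.1, datatype=XSD.double)))
g.add((run, LAB.temperatureC, Literal(30.0, datatype=XSD.double)))

# A software agent retrieves all cultivations of a given strain.
q = """
PREFIX lab: <http://example.org/genesis#>
SELECT ?run ?rate WHERE {
    ?run a lab:MicrochemostatCultivation ;
         lab:strain "BY4741" ;
         lab:dilutionRate ?rate .
}
"""
for row in g.query(q):
    print(row.run, row.rate)
```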

    Healthcare Professional Roles: The Ontology Model for E-Learning

    The paper presents the MEDeLEARN project, an ontology-driven virtual learning environment for Medical Information System training. The current training environment for healthcare professionals in the use of essential medical information systems in a large urban training hospital is based on conventional instructor-led training sessions. Problems arise due to the demanding nature of the hospital working environment, causing training to be cancelled or curtailed. This mode of training delivery is deemed inefficient and ineffective, with the danger of serious errors occurring as a consequence. The project investigates whether a virtual learning environment can address the competency gap that exists in the training of healthcare professionals in the use of medical information systems. It explores the role of andragogy (adult learning) in the design and development of reusable SCORM-conformant Learning Objects (LOs) for this medical domain. The system architecture of MEDeLEARN comprises the competency model (an ontology) and a content repository composed of metadata and learning objects.
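    As a rough illustration of how a competency ontology might be linked to learning-object metadata in such an architecture, here is a minimal Python/rdflib sketch; the namespace, the competency and learning-object names, and the addressesCompetency/scormPackage properties are hypothetical, not MEDeLEARN's actual model.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

MED = Namespace("http://example.org/medelearn#")  # hypothetical namespace
g = Graph()
g.bind("med", MED)

# Competency model: a competency a healthcare professional must hold.
g.add((MED.OrderLabTest, RDF.type, MED.Competency))
g.add((MED.OrderLabTest, RDFS.label,
       Literal("Order a laboratory test in the information system")))

# Content repository: a SCORM learning object addressing that competency.
lo = MED.lo_ordering_basics
g.add((lo, RDF.type, MED.LearningObject))
g.add((lo, MED.addressesCompetency, MED.OrderLabTest))
g.add((lo, MED.scormPackage, Literal("ordering_basics_v1.zip")))

# The virtual learning environment looks up learning objects for a
# competency gap and recommends them to the learner.
for subject in g.subjects(MED.addressesCompetency, MED.OrderLabTest):
    print("Recommend:", subject)
```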

    Modeling Big Data based Systems through Ontological Trading

    One of the great challenges the information society faces is dealing with the huge amount of information generated and handled daily on the Internet. Today, progress in Big Data proposals attempts to solve this problem, but there are certain limitations to information search and retrieval, due basically to the large volumes handled, the heterogeneity of the information and its dispersion among a multitude of sources. In this article, a formal framework is defined to facilitate the design and development of an Environmental Management Information System that works with large amounts of heterogeneous data. Nevertheless, this framework can be applied to other information systems that work with Big Data, since it does not depend on the type of data and can be used in other domains. The framework is based on an Ontological Web-Trading Model (OntoTrader), which follows Model-Driven Engineering and Ontology-Driven Engineering guidelines to separate the system architecture from its implementation. The proposal is accompanied by a case study, SOLERES-KRS, an Environmental Knowledge Representation System designed and developed using Software Agents and Multi-Agent Systems.
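    To make the trading idea concrete, here is a toy Python sketch of a matchmaking ("trading") service that registers heterogeneous data sources and matches a query against the concepts they export. It is not OntoTrader's interface: the class names, the environmental examples and the exact matching logic are assumptions, and a real ontological trader would also use subsumption reasoning rather than exact concept matching.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceOffer:
    """A data source exported to the trader (names are illustrative)."""
    name: str
    concepts: set[str] = field(default_factory=set)

class Trader:
    """Toy trading service: match a query's concepts against registered offers."""
    def __init__(self) -> None:
        self.offers: list[ServiceOffer] = []

    def export(self, offer: ServiceOffer) -> None:
        # A source advertises the ontology concepts it can answer queries about.
        self.offers.append(offer)

    def lookup(self, required_concepts: set[str]) -> list[ServiceOffer]:
        # Return every offer that covers all requested concepts.
        return [o for o in self.offers if required_concepts <= o.concepts]

trader = Trader()
trader.export(ServiceOffer("air-quality-store", {"Pollutant", "Station", "Measurement"}))
trader.export(ServiceOffer("land-use-maps", {"Parcel", "LandUse"}))
print([o.name for o in trader.lookup({"Pollutant", "Measurement"})])
```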

    An ontology framework for developing platform-independent knowledge-based engineering systems in the aerospace industry

    This paper presents the development of a novel knowledge-based engineering (KBE) framework for implementing platform-independent, knowledge-enabled product design systems within the aerospace industry. The aim of the KBE framework is to strengthen the structure, reuse and portability of the knowledge consumed within KBE systems, in view of supporting the cost-effective and long-term preservation of that knowledge. The proposed KBE framework uses an ontology-based approach for semantic knowledge management and adopts a model-driven architecture style from the software engineering discipline. Its main phases are (1) capture of the knowledge required for the KBE system; (2) construction of the ontology model of the KBE system; (3) selection and implementation of the platform-independent model (PIM) technology; and (4) integration of the PIM KBE knowledge with a computer-aided design system. A rigorous methodology is employed, comprising five qualitative phases: requirement analysis for the KBE framework, identification of the software and ontological engineering elements, integration of both sets of elements, a proof-of-concept prototype demonstrator and, finally, validation by experts. A case study investigating four primitive three-dimensional geometry shapes is used to quantify the applicability of the KBE framework in the aerospace industry. Additionally, experts within the aerospace and software engineering sectors validated the strengths/benefits and limitations of the KBE framework. The major benefits of the developed approach are the reduction in the man-hours required to develop KBE systems within the aerospace industry and the maintainability and abstraction of the knowledge required to develop them. The approach strengthens knowledge reuse and eliminates platform-specific approaches to developing KBE systems, ensuring the long-term preservation of KBE knowledge.
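    As a loose illustration of the platform-independent/platform-specific split for one primitive shape of the kind used in the case study, here is a short Python sketch; the parameter names, the Cylinder class and the create_cylinder command are hypothetical, not taken from the paper or from any real CAD API.

```python
import math
from dataclasses import dataclass

@dataclass
class Cylinder:
    """Platform-independent model (PIM) of one primitive shape; the
    parameters would be populated from the ontology model in the framework."""
    radius_mm: float
    height_mm: float

    def volume_mm3(self) -> float:
        # Derived quantity available independently of any CAD platform.
        return math.pi * self.radius_mm ** 2 * self.height_mm

def to_cad_script(shape: Cylinder) -> str:
    """Platform-specific step: translate the PIM into a command for some
    hypothetical CAD scripting interface."""
    return f"create_cylinder(r={shape.radius_mm}, h={shape.height_mm})"

pim = Cylinder(radius_mm=50.0, height_mm=200.0)
print(round(pim.volume_mm3(), 1))
print(to_cad_script(pim))
```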

    Flexible scientific data management for plant phenomics research

    In this paper, we expand on the design and implementation of the Phenomics Ontology Driven Data repository [1] (PODD) with respect to the capture, storage and retrieval of data and metadata generated at the High Resolution Plant Phenomics Centre (Canberra, Australia). PODD is a schema-driven Semantic Web database which uses the Resource Description Framework (RDF) model to store semi-structured information. RDF allows PODD to process information about a range of phenomics experiments without needing to define a universal schema for all of the different structures. To illustrate the process, exemplar datasets were generated using a medium-throughput, high-resolution, three-dimensional digitisation system purpose-built for studying plant structure and function simultaneously under specific environmental conditions. The High Performance Compute (HPC), storage and data collection publication aspects of the workflow, and their realisation in CSIRO infrastructure, are also discussed along with their relationship to PODD.
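    To illustrate why an RDF store can absorb experiments with differing structures without a universal schema, here is a minimal Python/rdflib sketch; the namespace and the property names (species, imagingSystem, co2_ppm) are invented examples, not PODD's actual vocabulary.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

POD = Namespace("http://example.org/podd#")  # hypothetical namespace
g = Graph()
g.bind("podd", POD)

# Two experiments with different attribute sets: RDF needs no shared schema.
e1 = POD.experiment_001
g.add((e1, RDF.type, POD.PhenomicsExperiment))
g.add((e1, POD.species, Literal("Triticum aestivum")))
g.add((e1, POD.imagingSystem, Literal("3D digitiser")))

e2 = POD.experiment_002
g.add((e2, RDF.type, POD.PhenomicsExperiment))
g.add((e2, POD.species, Literal("Hordeum vulgare")))
g.add((e2, POD.co2_ppm, Literal(550, datatype=XSD.integer)))

# Retrieve every experiment and its species, whatever else each records.
q = """
PREFIX podd: <http://example.org/podd#>
SELECT ?e ?s WHERE { ?e a podd:PhenomicsExperiment ; podd:species ?s . }
"""
for row in g.query(q):
    print(row.e, row.s)
```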