
    Quantitative assessment of concept maps for conceptualizing domain ontologies: a case of Quran

    Graphical knowledge representation formalisms that establish a shared representational vocabulary for conceptualizing a universe of discourse are a promising approach in ontology engineering and knowledge management. Concept maps were initially used in education and learning, later spreading to other areas thanks to their flexible and intuitive nature, and have also proven useful for improving communication in corporate environments. In the field of ontologies, concept maps have been explored as a means of facilitating different aspects of ontology development; an essential reason is the structural resemblance between concept maps and the hierarchical structure of ontologies. This research quantitatively evaluates four hypotheses about the effectiveness of using concept maps for ontology conceptualization. The Quran was selected as the study domain, and the study was conducted in collaboration with experts from the Centre of Quranic Research, Universiti Malaya, Kuala Lumpur, Malaysia. The results show that concept mapping was easy to learn and apply for the majority of participants. Most participants also reported improved domain knowledge of the vocabulary describing the organizational structure of the Quran, namely Juz, Surah, Ayat, tafsir, Malay translation, English translation, and the relationships among these entities. Concept maps thus instilled an element of learning in the conceptualization process and gave participants a platform for immediately resolving conflicting opinions and ambiguities in the terms used.
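
    The structural resemblance noted above can be made concrete: both a concept map and an ontology skeleton reduce to labeled triples. Below is a minimal sketch of that reduction, assuming an illustrative set of propositions for the Quran domain; the relation names and the partOf encoding are our own simplification, not the study's actual tooling.

```python
# Concept map propositions for the Quran domain (illustrative wording)
concept_map = [
    ("Quran", "is divided into", "Juz"),
    ("Juz", "contains", "Surah"),
    ("Surah", "contains", "Ayat"),
    ("Ayat", "has interpretation", "Tafsir"),
    ("Ayat", "has translation", "Malay translation"),
    ("Ayat", "has translation", "English translation"),
]

def to_ontology_axioms(propositions):
    """Split concept-map propositions into a part-whole hierarchy skeleton
    and a set of object-property assertions, the two staples of an
    ontology's structure."""
    hierarchy, properties = [], []
    for subject, relation, obj in propositions:
        if relation in ("is divided into", "contains"):
            hierarchy.append((obj, "partOf", subject))
        else:
            properties.append((subject, relation.replace(" ", "_"), obj))
    return hierarchy, properties

hierarchy, properties = to_ontology_axioms(concept_map)
print(hierarchy)   # [('Juz', 'partOf', 'Quran'), ('Surah', 'partOf', 'Juz'), ...]
print(properties)  # [('Ayat', 'has_interpretation', 'Tafsir'), ...]
```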

    An ontology-based approach towards coupling task and path planning for the simulation of manipulation tasks

    This work deals with the simulation and validation of complex manipulation tasks under strong geometric constraints in virtual environments. The targeted applications relate to the Industry 4.0 framework: as up-to-date products become ever more integrated and economic competition increases, industrial companies need to validate, from the design stage onward, not only the static CAD models of their products but also the tasks (e.g., assembly or maintenance) related to their Product Lifecycle Management (PLM). The scientific community has looked at this issue from two points of view:
    - Task planning decomposes a manipulation task into a sequence of primitive actions (i.e., a task plan).
    - Path planning computes collision-free trajectories, notably for the manipulated objects. It traditionally uses purely geometric data, which leads to classical limitations (possibly high computational times, low relevance of the proposed trajectory to the task, or outright failure); recent work has shown the value of using data at higher levels of abstraction.
    Joint task and path planning approaches found in the literature usually perform a classical task planning step and then check the feasibility of the path planning requests associated with the primitive actions of the task plan. The link between task and path planning needs improvement, notably because of the lack of feedback from the path planning level to the task planning level:
    - The path planning information used to question the task plan is usually limited to motion feasibility, where richer information, such as the relevance or complexity of the proposed path, would be needed.
    - Path planning queries traditionally use purely geometric data and/or "blind" path planning methods (e.g., RRT), and no task-related information is used at the path planning level.
    Our work focuses on using task-level information at the path planning level. The path planning algorithm considered is RRT; we chose a probabilistic algorithm because we target path planning for the simulation and validation of complex tasks under strong geometric constraints. We propose an ontology-based approach that uses task-level information to specify path planning queries for the primitive actions of a task plan. First, we propose an ontology that conceptualizes knowledge about the 3D environment in which the simulated task takes place. This environment is modeled as a closed part of 3D Cartesian space cluttered with mobile and fixed obstacles (treated as rigid bodies), represented by a digital model with a multilayer architecture involving semantic, topologic and geometric data. The originality of the proposed ontology lies in conceptualizing heterogeneous knowledge about both the obstacle and free-space models. Second, we exploit this ontology to automatically generate a path planning query for each primitive action of a task plan: through a reasoning process over the primitive actions instantiated in the ontology, we infer the start and goal configurations as well as task-related geometric constraints. Finally, a multi-level path planner is called to generate the corresponding trajectory. The contributions of this work have been validated by full simulations of several manipulation tasks under strong geometric constraints. The results demonstrate that using task-related information gives better control over the RRT algorithm used to check motion feasibility for the primitive actions of a task plan, leading to lower computational times and more relevant trajectories.
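
    A minimal sketch of the pipeline described above, under assumed names: a primitive action instantiated in the ontology yields a path planning query (start and goal configurations plus task-related constraints), which is handed to an RRT-style planner. The ontology lookup, the 2D configuration space, and the planner below are simplified stand-ins for the thesis's multilayer model and multi-level planner.

```python
import math
import random

ontology = {
    # primitive action -> task-level facts inferred from the ontology
    # (names and values are illustrative, not the thesis's actual model)
    "insert_pin": {
        "start": (0.1, 0.1),
        "goal": (0.9, 0.8),
        "constraints": {"clearance": 0.02},  # tight insertion tolerance
    },
}

def build_query(action):
    """Derive a path planning query from an instantiated primitive action."""
    facts = ontology[action]
    return facts["start"], facts["goal"], facts["constraints"]

def rrt(start, goal, is_free, iters=5000, step=0.05, goal_tol=0.05):
    """Barebones 2D RRT on the unit square; in the full approach, the
    task-related constraints would bias sampling and steering."""
    tree = {start: None}  # node -> parent
    for _ in range(iters):
        sample = (random.random(), random.random())
        near = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue  # collides with an obstacle
        tree[new] = near
        if math.dist(new, goal) < goal_tol:
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None  # motion infeasible within the iteration budget

start, goal, constraints = build_query("insert_pin")
path = rrt(start, goal, is_free=lambda p: True)  # trivially free space for the demo
```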

    A semantic and agent-based approach to support information retrieval, interoperability and multi-lateral viewpoints for heterogeneous environmental databases

    Data stored in individual autonomous databases often needs to be combined and interrelated. For example, in the Inland Water (IW) environmental monitoring domain, the spatial and temporal variation of measurements of different water quality indicators stored in different databases is of interest. Data from multiple sources is more complex to combine when metadata is lacking in a computational form and when the syntax and semantics of the stored data models are heterogeneous. The main information retrieval (IR) requirements are query transparency and data harmonisation for data interoperability, and support for multiple user views. A combined Semantic Web based and agent based distributed system framework has been developed to support these IR requirements, implemented using the Jena ontology and JADE agent toolkits. The semantic part supports the interoperability of autonomous data sources by merging their intensional data, using a Global-As-View (GAV) approach, into a global semantic model represented in DAML+OIL and in OWL; this model is used to mediate between different local database views. The agent part provides semantic services to import, align and parse semantic metadata instances, to support data mediation and to reason about data mappings during alignment. The framework has been applied to support information retrieval, interoperability and multi-lateral viewpoints for four European environmental agency databases. An extended GAV approach has been developed and applied to handle queries that can be reformulated over multiple user views of the stored data. This allows users to retrieve data in a conceptualisation better suited to them, rather than having to understand the entire detailed global-view conceptualisation. User viewpoints are derived from the global ontology or from existing viewpoints of it, which reduces the number of potential conceptualisations and their associated mappings to a more computationally manageable level. Whereas an ad hoc framework based upon a conventional distributed programming language and a rule framework could support user views and adaptation to them, a more formal framework has the benefit of supporting reasoning about consistency, equivalence, containment and conflict resolution when traversing data models. A preliminary formulation of the formal model has been undertaken, based upon extending a Datalog-type algebra with hierarchical, attribute and instance-value operators. These operators can be applied to support compositional mapping and consistency checking of data views. The multiple-viewpoint system was implemented as a Java-based application consisting of two sub-systems, one for viewpoint adaptation and management, the other for query processing and query result adjustment.
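
    The GAV mediation step can be illustrated with a small sketch: each global concept is defined as a view over the local sources, so a query posed against the global semantic model unfolds mechanically into one query per source. The source names, tables, and column mappings below are hypothetical, and real mediation over DAML+OIL/OWL models would go through Jena rather than raw SQL.

```python
# Global concept -> list of (source, local table, global-to-local column map)
gav_mappings = {
    "WaterQualityMeasurement": [
        ("agency_A_db", "wq_samples", {"site": "station_id", "value": "no3_mgl"}),
        ("agency_B_db", "nitrate_obs", {"site": "loc", "value": "reading"}),
    ],
}

def unfold(global_concept, wanted_columns):
    """Rewrite a query over a global concept into one query per local source."""
    queries = []
    for source, table, colmap in gav_mappings[global_concept]:
        local_cols = ", ".join(colmap[c] for c in wanted_columns)
        queries.append((source, f"SELECT {local_cols} FROM {table}"))
    return queries

for source, sql in unfold("WaterQualityMeasurement", ["site", "value"]):
    print(source, "->", sql)
# agency_A_db -> SELECT station_id, no3_mgl FROM wq_samples
# agency_B_db -> SELECT loc, reading FROM nitrate_obs
```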

    Verification of knowledge shared across design and manufacture using a foundation ontology

    Seamless computer-based knowledge sharing between the departments of a manufacturing enterprise helps prevent unnecessary design revisions. A lack of interoperability between independently developed knowledge bases, however, is a major impediment to building a seamless knowledge sharing system. Interoperability, the ability to overcome semantic and syntactic differences during computer-based knowledge sharing, can be enhanced through the use of ontologies. Ontologies, in computer science terms, are hierarchical structures of knowledge stored in a computer-based knowledge base, and they are widely accepted as an interoperable medium that provides a non-subjective way of storing and sharing knowledge across diverse domains. Some semantic and syntactic differences, however, still crop up when such ontological knowledge bases are developed independently. A case study in an aerospace components manufacturing company suggests that the shape features of a component are perceived differently by the design and manufacturing departments. These differences cause further misunderstanding and misinterpretation when computer-based knowledge sharing systems are used across the two domains. Foundation or core ontologies can be used to overcome these differences and to ensure seamless knowledge sharing, because they provide a common grounding for the domain ontologies used by individual departments. This common grounding can be used by mediation and knowledge verification systems to authenticate the meaning of knowledge understood across different domains. This research therefore proposes a knowledge verification framework for developing a system capable of verifying knowledge between domain ontologies that are developed from a common core or foundation ontology. The framework uses ontology logic to standardize how concepts from a foundation and core-concepts ontology are used in domain ontologies, and then applies the same principles to verify the knowledge being shared. The Knowledge Frame Language, which is based on Common Logic, is used to formalize the example ontologies. The ontology editor used for browsing and querying ontologies is the Integrated Ontology Development Environment (IODE) by Highfleet Inc. An ontological product modelling technique is also developed in this research to test the proposed framework in a manufacturability analysis scenario. The proposed framework is validated through a Java API specially developed for this purpose, using real industrial examples from the case study.
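
    A minimal sketch of the common-grounding check, with invented concept names: because both departments' domain ontologies specialize concepts from the same foundation ontology, two differently named shape features can be verified against each other by testing whether their groundings share a foundation ancestor. The actual framework does this with Common Logic reasoning in IODE; plain dictionaries stand in for that here.

```python
foundation_parent = {  # concept -> parent in the foundation/core ontology
    "DesignFeature": "ShapeFeature",
    "MachiningFeature": "ShapeFeature",
    "ShapeFeature": "Artifact",
}
grounding = {  # (domain, concept) -> foundation concept it specializes
    ("design", "Boss"): "DesignFeature",
    ("manufacture", "CircularProtrusion"): "MachiningFeature",
}

def ancestry(concept):
    """Concept plus all its ancestors in the foundation ontology."""
    chain = {concept}
    while concept in foundation_parent:
        concept = foundation_parent[concept]
        chain.add(concept)
    return chain

def common_grounding(c1, c2):
    """Foundation concepts shared by the groundings of two domain concepts;
    a non-empty result means the shared knowledge can be verified."""
    return ancestry(grounding[c1]) & ancestry(grounding[c2])

print(common_grounding(("design", "Boss"), ("manufacture", "CircularProtrusion")))
# -> {'ShapeFeature', 'Artifact'}: both terms bottom out in the same
#    foundation concepts, so their meaning can be authenticated
```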

    Crisp, fuzzy, and probabilistic faceted semantic search

    This dissertation presents contributions to the development of the faceted semantic search (FSS) paradigm. First, two fundamental solutions to FSS, which have been widely used since their development, are presented: the projection of search facets from annotation ontologies using logical rules, and the logic rule-based generation of recommendation links for search items based on the semantic relations of those items. The rest of the dissertation then focuses on the following deficiencies of FSS: the lack of capabilities for modeling uncertainty, the inability to rank search results by relevance, and the usability problems that result from naively using annotation ontology concepts as search categories. Two sets of solutions to these problems are presented. The first is a fuzzy faceted semantic search (FFSS) framework, which extends the crisp set basis of FSS to fuzzy sets. This framework rests on two main ingredients: weighted annotations, which determine the membership degrees of search items in annotation concepts, and fuzzy mappings of separate end-user categories onto the annotation concepts. The second is a probabilistic faceted semantic search (PFSS) framework, which incorporates weighted annotations, modeling of uncertainty in Semantic Web taxonomies, sophisticated mappings of end-user facets onto annotation ontologies, and the combination of evidence from multiple ranking schemes. These ranking methods were analyzed empirically. According to the preliminary evaluation, both ranking methods significantly improve the quality of search results compared to crisp FSS, and both outperformed a currently used heuristic ranking method, although in the case of FFSS this difference did not reach statistical significance.
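
    A minimal sketch of the FFSS ranking ingredients, with made-up degrees: weighted annotations give each item a membership degree in annotation concepts, a fuzzy mapping carries end-user categories onto those concepts, and items are ranked by composing the two. Max-product composition is used here as one common fuzzy choice; the dissertation's exact combination scheme may differ.

```python
annotations = {  # search item -> {annotation concept: membership degree}
    "doc1": {"Baroque": 0.9, "Sculpture": 0.4},
    "doc2": {"Baroque": 0.3, "Painting": 0.8},
}
facet_mapping = {  # end-user category -> {annotation concept: mapping degree}
    "17th-century art": {"Baroque": 1.0, "Painting": 0.2},
}

def rank(category):
    """Rank items by their fuzzy membership in an end-user facet category."""
    mapping = facet_mapping[category]
    scores = {}
    for item, degrees in annotations.items():
        # max-product composition of annotation degree and mapping degree
        scores[item] = max(
            (degrees.get(concept, 0.0) * w for concept, w in mapping.items()),
            default=0.0,
        )
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank("17th-century art"))
# [('doc1', 0.9), ('doc2', 0.3)] -- doc2's best path is Baroque: 0.3 * 1.0
```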

    Capture and Maintenance of Constraints in Engineering Design

    The thesis investigates two domains, initially a kite domain and then part of a more demanding Rolls-Royce domain (jet engine design). Four main types of refinement rules are proposed that use the associated application conditions and a domain ontology to support the maintenance of constraints. The refinement rules have been implemented in ConEditor, and the extended system is known as ConEditor+. With the help of ConEditor+, the thesis demonstrates that an explicit representation of application conditions, together with the corresponding constraints and the domain ontology, can be used to detect inconsistencies, redundancy, subsumption and fusion between pairs of constraints, reduce the number of spurious inconsistencies, and prevent the identification of inappropriate refinements of redundancy, subsumption and fusion.
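
    One of the refinement checks named above, redundancy through subsumption, can be sketched on simple interval constraints: a constraint is redundant if another constraint applies in at least the same contexts and is at least as restrictive. The interval representation, the set-of-tags application conditions, and the domain values below are illustrative, not ConEditor+'s internal model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Constraint:
    variable: str
    low: float
    high: float            # allowed interval for the variable
    condition: frozenset   # application condition as a set of context tags

def makes_redundant(c1, c2):
    """c2 is redundant given c1: c1 applies wherever c2 does (its condition
    demands no more context) and permits no more values than c2 allows."""
    return (c1.variable == c2.variable
            and c1.condition <= c2.condition              # broader applicability
            and c2.low <= c1.low and c1.high <= c2.high)  # tighter interval

general = Constraint("blade_angle", 10.0, 30.0, frozenset({"jet_engine"}))
specific = Constraint("blade_angle", 0.0, 45.0,
                      frozenset({"jet_engine", "fan_stage"}))

print(makes_redundant(general, specific))  # True: the looser, narrower-scope
# constraint adds nothing beyond the general one and can be flagged
print(makes_redundant(specific, general))  # False
```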

    Alignment Incoherence in Ontology Matching

    Ontology matching is the process of generating alignments between ontologies. An alignment is a set of correspondences, each of which links concepts and properties from one ontology to concepts and properties from another ontology. Alignments are thus the key component for enabling the integration of knowledge bases described by different ontologies. For several reasons, alignments often contain erroneous correspondences. Some of these errors can result in logical conflicts with other correspondences; in such a case the alignment is referred to as incoherent. The relevance of alignment incoherence and strategies to resolve it are at the center of this thesis. After an introduction to the syntax and semantics of ontologies and alignments, the importance of alignment coherence is discussed from different perspectives. On the one hand, it is argued that alignment incoherence always coincides with the incorrectness of correspondences. On the other hand, it is demonstrated that using incoherent alignments causes severe problems for different types of applications. The main part of this thesis is concerned with techniques for resolving alignment incoherence, i.e., how to find a coherent subset of an incoherent alignment that is to be preferred over other coherent subsets. The underlying theory is the theory of diagnosis; in particular, two specific types of diagnoses, referred to as local optimal and global optimal diagnoses, are proposed. Computing a diagnosis is a challenge for two reasons. First, different types of reasoning techniques are required to determine that an alignment is incoherent and to find the subsets (conflict sets) that cause the incoherence. Second, given a set of conflict sets, computing a global optimal diagnosis is a hard problem. This thesis suggests several algorithms to solve these problems efficiently. In the last part of the thesis, the algorithms are applied to the following scenarios:
    - evaluating alignments by computing their degree of incoherence;
    - repairing incoherent alignments by computing different types of diagnoses;
    - selecting a coherent alignment from a rich set of matching hypotheses;
    - supporting the manual revision of an incoherent alignment.
    The experimental results make clear that it is possible to create a coherent alignment without a negative impact on alignment quality. Moreover, the results show that taking alignment incoherence into account has a positive impact on the precision of the alignment, and that the proposed approach can help a human save effort in the revision process.
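
    A minimal sketch of diagnosis-style repair, with an invented alignment: given the conflict sets (minimal incoherent subsets of the alignment), repeatedly remove the lowest-confidence correspondence from each unresolved conflict until none remains intact. This greedy hitting-set strategy conveys the flavor of a local optimal diagnosis; the thesis's actual algorithms use reasoning to find the conflict sets and stronger guarantees for the global optimal case.

```python
alignment = {  # correspondence id -> matcher confidence (invented values)
    "A#Person=B#Human": 0.95,
    "A#Book=B#Publication": 0.80,
    "A#Book=B#Person": 0.40,   # logically clashes with the two above
}
conflict_sets = [
    {"A#Person=B#Human", "A#Book=B#Person"},
    {"A#Book=B#Publication", "A#Book=B#Person"},
]

def greedy_diagnosis(alignment, conflicts):
    """Return a set of correspondences whose removal leaves no conflict
    set intact, preferring to drop low-confidence correspondences."""
    removed = set()
    for conflict in conflicts:
        if conflict & removed:  # already resolved by an earlier removal
            continue
        removed.add(min(conflict, key=lambda c: alignment[c]))
    return removed

print(greedy_diagnosis(alignment, conflict_sets))  # {'A#Book=B#Person'}
# Removing this single low-confidence correspondence resolves both
# conflicts, leaving a coherent alignment.
```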