
    The consistent representation of scientific knowledge : investigations into the ontology of karyotypes and mitochondria

    PhD Thesis. Ontologies are widely used in life sciences to model scientific knowledge. The engineering of these ontologies is well-studied and there are a variety of methodologies and techniques, some of which have been re-purposed from software engineering methodologies and techniques. However, due to the complex nature of bio-ontologies, they are not resistant to errors and mistakes. This is especially true for more expressive and/or larger ontologies. In order to improve on this issue, we explore a variety of software engineering techniques that were re-purposed in order to aid ontology engineering. This exploration is driven by the construction of two light-weight ontologies, The Mitochondrial Disease Ontology and The Karyotype Ontology. These ontologies have specific and useful computational goals, as well as providing exemplars for our methodology. This thesis discusses the modelling decisions undertaken as well as the overall success of each ontological model. Due to the added knowledge capture steps required for the mitochondrial knowledge, The Karyotype Ontology is further developed than The Mitochondrial Disease Ontology. Specifically, this thesis explores the use of a pattern-driven and programmatic approach to bio-medical ontology engineering. During the engineering of our biomedical ontologies, we found many of the components of each model were similar in logical and textual definitions. This was especially true for The Karyotype Ontology. In software engineering a common technique to avoid replication is to abstract through the use of patterns. Therefore we utilised localised patterns to model these highly repetitive models. There are a variety of possible tools for the encoding of these patterns, but we found ontology development using Graphical User Interface (GUI) tools to be time-consuming due to the necessity of manual GUI interaction when the ontology needed updating. With the development of Tawny-OWL, a programmatic tool for ontology construction, we are able to overcome this issue, with the added benefit of using a single syntax to express both simple and patternised parts of the ontology. Lastly, we briefly discuss how other methodologies and tools from software engineering, namely unit tests, diffing, version control and Continuous Integration (CI), were re-purposed and how they aided the engineering of our two domain ontologies. Together, this knowledge increases our understanding of ontology engineering techniques. By re-purposing software engineering methodologies, we have aided the construction, quality and maintainability of two novel ontologies, and have demonstrated their applicability more generally.
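
    The programmatic, pattern-driven approach summarised above can be pictured with a short sketch. The thesis itself uses Tawny-OWL (a Clojure library); the analogue below uses the Python library owlready2 instead, and the ontology IRI and class names are illustrative assumptions rather than the thesis's actual model.

        # A minimal sketch of pattern-driven, programmatic ontology construction
        # (an owlready2 analogue of what Tawny-OWL enables; the IRI and class
        # names are hypothetical).
        import types
        from owlready2 import get_ontology, Thing

        onto = get_ontology("http://example.org/karyotype-sketch.owl")

        with onto:
            class Chromosome(Thing):
                pass

            # Localised pattern: the 22 autosomes plus X and Y are modelled
            # identically, so the subclasses are generated in a loop instead of
            # being added one by one through a GUI editor.
            for label in [str(n) for n in range(1, 23)] + ["X", "Y"]:
                types.new_class(f"Chromosome{label}", (Chromosome,))

        onto.save(file="karyotype-sketch.owl", format="rdfxml")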

    Generating operation specifications from UML class diagrams: A model transformation approach

    One of the more tedious and complex tasks during the specification of conceptual schemas (CSs) is modeling the operations that define the system behavior. This paper aims to simplify this task by providing a method that automatically generates a set of basic operations that complement the static aspects of the CS and suffice to perform all typical life-cycle create/update/delete changes on the population of the elements of the CS. Our method guarantees that the generated operations are executable, i.e., their executions produce a consistent state with respect to the most typical structural constraints that can be defined in CSs (e.g. multiplicity constraints). In particular, our method takes as input a CS expressed as a Unified Modeling Language (UML) class diagram (optionally defined using a profile to enrich the specification of associations) and generates an extended version of the CS that includes all necessary operations to start operating the system. If desired, these basic operations can be later used as building blocks for creating more complex ones. We show the formalization and implementation of our method by means of model-to-model transformations. Our approach is particularly relevant in the context of Model Driven Development approaches. © 2011 Elsevier B.V. All rights reserved. The authors want to thank the anonymous referees of this journal for their interesting suggestions. This work has been partly supported by the MICINN under projects TIN2008-00444, Grupo Consolidado and TIN2010-18011, and by the Generalitat Valenciana under the project OKA PROMETEO/2009/015, and co-financed with the European Regional Development Fund. Albert Albiol, M.; Cabot Sagrera, J.; Gómez Seoane, C.; Pelechano Ferragud, V. (2011). Generating operation specifications from UML class diagrams: A model transformation approach. Data and Knowledge Engineering. 70(4):365-389. https://doi.org/10.1016/j.datak.2011.01.003
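
    As a rough illustration of what such generated operations look like, the sketch below derives create/delete and attribute-setter signatures from a toy class description. It is not the paper's model-to-model transformation; the tiny metamodel and the naming scheme are assumptions made for the example.

        # Illustrative only: derive basic life-cycle operation signatures from a
        # minimal class description (the paper formalises this as model-to-model
        # transformations over UML class diagrams).
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Attribute:
            name: str
            type: str

        @dataclass
        class UmlClass:
            name: str
            attributes: List[Attribute] = field(default_factory=list)

        def generate_basic_operations(cls: UmlClass) -> List[str]:
            params = ", ".join(f"{a.name}: {a.type}" for a in cls.attributes)
            ops = [f"create{cls.name}({params})",
                   f"delete{cls.name}(self: {cls.name})"]
            # One setter per attribute keeps the generated behaviour fine-grained.
            ops += [f"set{a.name.capitalize()}(self: {cls.name}, {a.name}: {a.type})"
                    for a in cls.attributes]
            return ops

        employee = UmlClass("Employee",
                            [Attribute("name", "String"), Attribute("salary", "Real")])
        for op in generate_basic_operations(employee):
            print(op)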

    Ontological Model for Xhosa Beadwork in Marginalised Rural Communities: A Case of the Eastern Cape

    In South Africa, computational ontologies have gained traction and are increasingly viewed as one of the viable solutions to address the problem of the fragmented and unstructured nature of indigenous knowledge (IK), particularly in marginalized rural communities. The continual existence of IK in tacit form has impeded the use of IK as a potential resource that can catalyze socio-economic and cultural development in South Africa. This study was, therefore, designed to address part of this challenge by developing a Xhosa Beadwork Ontology (XBO) with the goal of structuring the domain knowledge into a reusable body of knowledge. Such a reusable body of knowledge promotes efficient sharing of a common understanding of Xhosa Beadwork in a computational form. The XBO is in OWL 2 DL. The development of the XBO was informed by the NeOn methodology and the iterative-incremental ontology development life cycle within the ambit of Action Research (AR). The XBO was developed around personal-ornamentation Xhosa Beadwork, consisting of the Necklace, Headband, Armlet, Waistband, Bracelet, and Anklet. In this study, the evaluation of the XBO focused on ascertaining that the created ontology is a comprehensive representation of Xhosa Beadwork and is of the required standard. In addition, the XBO was documented as a human-understandable and readable resource and was published. The outcome of the study indicates that the XBO is an adequate, shareable and reusable semantic artifact that can indeed support the formalization and preservation of IK in the domain of Xhosa Beadwork.

    The design and implementation of Malaysian indigenous herbs knowledge management system based on ontology model

    This paper introduces the design and implementation of an Ontology-Based Malaysian Herbal Knowledge Based System (MiHerbs). The ontology model used in this research is based on a previous research titled 'Malaysia Indigenous Herbs Knowledge Representation'. That research proposed an ontology-based knowledge representation model of Malaysian indigenous herbs using the Web Ontology Language (OWL). The model can be used to encode and store knowledge in a "knowledge base" such as a database, repository or library. The model can also enhance search formulation in the information retrieval of herbal knowledge, making it easy, fast and accurate. However, the back-end database, which is based on OWL, needs to be transformed into a relational database format. The transformation from OWL to a relational database follows the OWL2DB algorithm guideline and is discussed further in this research. Assisted by the System Development Life Cycle (SDLC) methodology, MiHerbs is expected to help herbal research agencies, the private sector and government store and share their herbal-related information via the system, providing easy information access to the public and people around the world.
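
    The OWL-to-relational step can be pictured with a small sketch: each OWL class becomes a table, datatype properties become columns, and object properties become join tables. This is only the general class-to-table idea, not the OWL2DB algorithm itself, and the herb classes and properties below are invented for illustration.

        # Illustrative class-to-table mapping (not the actual OWL2DB algorithm;
        # the classes and properties are hypothetical).
        owl_classes = {
            "Herb": {"scientificName": "TEXT", "localName": "TEXT"},
            "Ailment": {"description": "TEXT"},
        }
        object_properties = [("treats", "Herb", "Ailment")]  # (property, domain, range)

        ddl = []
        for cls, datatype_props in owl_classes.items():
            cols = ["id INTEGER PRIMARY KEY"] + [f"{p} {t}" for p, t in datatype_props.items()]
            ddl.append(f"CREATE TABLE {cls} ({', '.join(cols)});")

        # Each object property becomes a join table with foreign keys to the
        # tables of its domain and range classes.
        for prop, domain, range_ in object_properties:
            ddl.append(
                f"CREATE TABLE {prop} ("
                f"{domain.lower()}_id INTEGER REFERENCES {domain}(id), "
                f"{range_.lower()}_id INTEGER REFERENCES {range_}(id));"
            )

        print("\n".join(ddl))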

    Early aspects: aspect-oriented requirements engineering and architecture design

    This paper reports on the third Early Aspects: Aspect-Oriented Requirements Engineering and Architecture Design Workshop, which was held in Lancaster, UK, on March 21, 2004. The workshop included a presentation session and working sessions in which particular topics on early aspects were discussed. The primary goal of the workshop was to focus on the challenges of defining methodical software development processes for aspects from early on in the software life cycle, and to explore the potential of proposed methods and techniques to scale up to industrial applications.

    Developing techniques for enhancing comprehensibility of controlled medical terminologies

    A controlled medical terminology (CMT) is a collection of concepts (or terms) that are used in the medical domain. Typically, a CMT also contains attributes of those concepts and/or relationships between those concepts. Electronic CMTs are extremely useful and important for communication between and integration of independent information systems in healthcare, because data in this area is highly fragmented. A single query in this area might involve several databases, e.g., a clinical database, a pharmacy database, a radiology database, and a lab test database. Unfortunately, the extensive sizes of CMTs, often containing tens of thousands of concepts and hundreds of thousands of relationships between pairs of those concepts, impose steep learning curves for new users of such CMTs. In this dissertation, we address the problem of helping a user to orient himself in an existing large CMT. In order to help a user comprehend a large, complex CMT, we need to provide abstract views of the CMT. However, at this time, no tools exist for providing a user with such abstract views. One reason for the lack of tools is the absence of a good theory on how to partition an overwhelming CMT into manageable pieces. In this dissertation, we try to overcome the described problem by using a three-pronged approach. (1) We use the power of Object-Oriented Databases to design a schema extraction process for large, complex CMTs. The schema resulting from this process provides an excellent, compact representation of the CMT. (2) We develop a theory and a methodology for partitioning a large OODB schema, modeled as a graph, into small meaningful units. The methodology relies on the interaction between a human and a computer, making optimal use of the human's semantic knowledge and the computer's speed. Furthermore, the theory and methodology developed for the schema-level partitioning are also adapted to the object level of a CMT. (3) We use purely structural similarities for partitioning CMTs, eliminating the need for a human expert in the partitioning methodology mentioned above. Two large medical terminologies are used as our test beds, the Medical Entities Dictionary (MED) and the Unified Medical Language System (UMLS), which itself contains a number of terminologies.
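
    The third, purely structural prong can be sketched very simply: concepts that exhibit exactly the same set of relationship types fall into the same partition. The toy concepts below are assumptions for illustration and are not drawn from the MED or the UMLS.

        # Illustrative structural partitioning: group concepts by the exact set
        # of relationship types they participate in (toy data, not MED/UMLS).
        from collections import defaultdict

        concept_relationships = {
            "Aspirin":        {"is-a", "has-dose-form", "treats"},
            "Ibuprofen":      {"is-a", "has-dose-form", "treats"},
            "Myocardium":     {"is-a", "part-of"},
            "Left ventricle": {"is-a", "part-of"},
        }

        partitions = defaultdict(list)
        for concept, rel_types in concept_relationships.items():
            partitions[frozenset(rel_types)].append(concept)

        for rel_types, concepts in partitions.items():
            print(sorted(rel_types), "->", concepts)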

    Lightweight and static verification of UML executable models

    Executable models play a key role in many software development methods by facilitating the (semi)automatic implementation/execution of the software system under development. This is possible because executable models promote a complete and fine-grained specification of the system behaviour. In this context, where models are the basis of the whole development process, the quality of the models has a high impact on the final quality of the software systems derived from them. Therefore, the existence of methods to verify the correctness of executable models is crucial. Otherwise, the quality of the executable models (and in turn the quality of the final system generated from them) will be compromised. In this paper a lightweight and static verification method to assess the correctness of executable models is proposed. This method allows us to check whether the operations defined as part of the behavioural model can be executed without breaking the integrity of the structural model, and it returns meaningful feedback that helps repair the detected inconsistencies.
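
    The flavour of constraint being checked can be conveyed with a toy example: an operation whose net effect removes links on an association end with a non-zero lower bound is a candidate integrity violation. The miniature metamodel and the single rule below are assumptions for illustration, not the paper's verification method.

        # Illustrative multiplicity check on an operation's net effect
        # (a toy stand-in for the kind of structural constraint verified).
        from dataclasses import dataclass

        @dataclass
        class AssociationEnd:
            name: str
            lower: int        # minimum multiplicity
            upper: object     # maximum multiplicity, e.g. 1 or "*"

        @dataclass
        class OperationEffect:
            end: AssociationEnd
            links_created: int
            links_deleted: int

        def may_violate_lower_bound(effect: OperationEffect) -> bool:
            """Flag operations that remove links on an end with lower bound > 0
            without creating replacements."""
            net = effect.links_created - effect.links_deleted
            return effect.end.lower > 0 and net < 0

        works_for = AssociationEnd("worksFor", lower=1, upper=1)
        fire = OperationEffect(works_for, links_created=0, links_deleted=1)
        print(may_violate_lower_bound(fire))  # True: deleting the only link breaks [1..1]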

    Performance-related ontologies and semantic web applications for on-line performance assessment of intelligent systems

    Several techniques and applications have been proposed to aid the decision-making process in the system performance domain. Most of these techniques depict the performance model of systems through annotations of performance measurements expressed in specific software description languages. However, the semantic representation of performance information enables its subsequent machine-processable logical interpretation and, therefore, the application of inference rules about a particular domain. Moreover, ontologies ease the interchange and reuse of knowledge of particular domains, e.g. system performance. In this work, we propose a performance ontology together with the system performance analysis technique as an example of framework building for intelligent applications based on the semantic web. The paper also shows the construction of performance rules through OWL to automatically infer new performance constraints and QoS knowledge about the system during execution.
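
    A flat rule table gives a feel for the kind of inference described, even though the paper expresses its rules over an OWL performance ontology; the metric names and thresholds below are invented for illustration.

        # Illustrative rule-based inference over performance measurements
        # (a plain-Python stand-in for OWL-based performance rules; the metrics
        # and thresholds are hypothetical).
        measurements = {"responseTimeMs": 180, "cpuUtilisation": 0.92, "throughputRps": 410}

        rules = [
            # (condition over the measurements, inferred performance assertion)
            (lambda m: m["responseTimeMs"] > 150, "ViolatesLatencyTarget"),
            (lambda m: m["cpuUtilisation"] > 0.90, "CpuSaturated"),
            (lambda m: m["throughputRps"] >= 400 and m["cpuUtilisation"] > 0.90,
             "ScaleOutRecommended"),
        ]

        inferred = [fact for condition, fact in rules if condition(measurements)]
        print(inferred)  # ['ViolatesLatencyTarget', 'CpuSaturated', 'ScaleOutRecommended']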

    Conformance checking in UML artifact-centric business process models

    Business artifacts have appeared as a new paradigm to capture the information required for the complete execution and reasoning of a business process. Likewise, conformance checking is gaining popularity as a crucial technique that enables evaluating whether recorded executions of a process match its corresponding model. In this paper, conformance checking techniques are incorporated into a general framework to specify business artifacts. By relying on the expressive power of an artifact-centric specification, BAUML, which combines UML state and activity diagrams (among others), the problem of conformance checking can be mapped into the Petri net formalism and its results explained in terms of the original artifact-centric specification. In contrast to most existing approaches, ours incorporates data constraints into the Petri nets, thus achieving conformance results which are more precise. We have also implemented a plug-in, within the ProM framework, which is able to translate a BAUML specification into a Petri net to perform conformance checking. This shows the feasibility of our approach.
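
    The replay idea at the heart of conformance checking can be sketched with a three-transition net: a recorded trace fits if every event is enabled when it occurs and the run ends in the final marking. The net and traces below are toy assumptions; the paper additionally folds data constraints into nets derived from BAUML specifications.

        # Illustrative token replay on a tiny Petri net (toy example, not the
        # paper's BAUML-derived nets or its ProM plug-in).
        from collections import Counter

        # transition -> (places consumed, places produced)
        net = {
            "register": (Counter({"start": 1}), Counter({"p1": 1})),
            "approve":  (Counter({"p1": 1}),    Counter({"p2": 1})),
            "ship":     (Counter({"p2": 1}),    Counter({"end": 1})),
        }

        def replay(trace, initial_marking):
            marking = Counter(initial_marking)
            for event in trace:
                consume, produce = net[event]
                if any(marking[p] < n for p, n in consume.items()):
                    return False               # transition not enabled: deviation
                marking -= consume
                marking += produce
            return marking == Counter({"end": 1})  # must reach the final marking

        print(replay(["register", "approve", "ship"], {"start": 1}))  # True: fits
        print(replay(["register", "ship"], {"start": 1}))             # False: skipped 'approve'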