Deploying ontologies in software design
In this thesis we will be concerned with the relation between ontologies and software design. Ontologies are studied in the artificial intelligence community as a means to explicitly represent standardised domain knowledge in order to enable knowledge sharing and reuse. We deploy ontologies in software design with emphasis on a traditional software engineering theme: error detection. In particular, we identify a type of error that is often difficult to detect: conceptual errors. These are related to the description of the domain in which the system will operate, and detecting them requires subjective knowledge about correct forms of domain description. Ontologies provide these forms of domain description, and we are interested in applying them and verifying their correctness (chapter 1). After presenting an in-depth analysis of the fields of ontologies and software testing as conceived and implemented by the software engineering and artificial intelligence communities (chapter 2), we discuss an approach which enabled us to deploy ontologies in the early phases of software development (i.e., specification) in order to detect conceptual errors (chapter 3). The approach is based on the provision of ontological axioms, which are used to verify the conformance of specification constructs to the underpinning ontology. To facilitate the integration of an ontology with the applications that adopt it, we developed an architecture and built tools to implement this form of conceptual error checking (chapter 4). We apply and evaluate the architecture in a variety of contexts to identify potential uses (chapter 5). An implication of this method of deploying ontologies to reason about the correctness of applications is that it raises our trust in the given ontologies. However, when the ontologies themselves are erroneous, we might fail to reveal pernicious discrepancies. To cope with this problem we extended the architecture to a multi-layer form (chapter 4), which gives us the ability to check the ontologies themselves for correctness. We apply this multi-layer architecture to capture errors found in a complex lattice of ontologies (chapter 6). We further elaborate on the weaknesses of ontology evaluation methods and employ a technique stemming from software engineering, that of experience management, to facilitate ontology testing and deployment (chapter 7). The work presented in this thesis aims to improve practice in ontology use and to identify areas in which ontologies could be of benefit beyond the advocated ones of knowledge sharing and reuse (chapter 8).
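The core idea of the abstract above, checking specification constructs against ontological axioms to catch conceptual errors, can be illustrated with a small sketch. This is not the thesis's actual architecture or tooling; all names (`Construct`, the `axioms` table, the example attributes) are invented for illustration.

```python
# Hypothetical sketch: ontological axioms as predicates over specification
# constructs; a violation corresponds to a "conceptual error".
from dataclasses import dataclass, field

@dataclass
class Construct:
    """A typed element drawn from a software specification."""
    name: str
    kind: str                 # e.g. "process", "resource"
    attributes: dict = field(default_factory=dict)

# Each axiom pairs a description with a predicate that any construct of the
# given kind must satisfy (invented example: processes consume and produce).
axioms = {
    "process": [
        ("a process must consume a resource", lambda c: "consumes" in c.attributes),
        ("a process must produce a resource", lambda c: "produces" in c.attributes),
    ],
}

def check(construct: Construct) -> list:
    """Return human-readable axiom violations for one construct."""
    return [
        f"{construct.name}: {desc}"
        for desc, pred in axioms.get(construct.kind, [])
        if not pred(construct)
    ]

bad = Construct("Assemble", "process", {"consumes": "parts"})  # no 'produces'
print(check(bad))  # one violation: the 'produces' axiom fails
```

The point of the sketch is only that conformance checking reduces to evaluating domain-level predicates that a plain type checker cannot express.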
Ontology-based information standards development
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Standards may be argued to be important enablers for achieving interoperability, as they aim to provide unambiguous specifications for the error-free exchange of documents and information. By implication, therefore, it is important to model and represent the concept of a standard in a clear, precise and unambiguous way. Although standards development organisations usually provide guidelines for the process of developing and approving standards, these are usually more concerned with the administrative aspects of the process. As a consequence, the state of the art lacks practical support for developing the structure and content of a standard specification. In short, there is no systematic development method currently available (a) for developing the conceptual model underpinning a standard, and/or (b) to guide a group of stakeholders in developing a standard specification.
Semantic interoperability is considered an essential factor for effective interoperation, the ability to achieve it effectively and efficiently being strongly equated with quality by some. Semantics require that the meaning of terms, their relationships, and the restrictions and rules in a standard be clearly defined in the early stages of standard development and act as a basis for the later stages. This research proposes that ontology can help standards developers and stakeholders to address the issues of improving conceptual models and providing a robust and shared understanding of the domain. This thesis presents OntoStanD, a comprehensive ontology-based standards development methodology, which utilises the best practices of existing ontology creation methods.
The potential value of OntoStanD is in providing a comprehensive, clear and unambiguous method for developing robust information standards, which are more test-friendly and of higher quality. OntoStanD also facilitates standards conformance testing and change management, impacts interoperability, and assists in improved communication among the standards development team. Last, OntoStanD provides an approach that is repeatable, teachable and potentially general enough for creating any kind of information standard.
Fujitsu Laboratories of Europe Ltd, Google Anita Borg Memorial Scholarship
Ontology as Product-Service System: Lessons Learned from GO, BFO and DOLCE
This paper defends a view of the Gene Ontology (GO) and of Basic Formal Ontology (BFO) as examples of what the manufacturing industry calls product-service systems. This means that they are products (the ontologies) bundled with a range of ontology services such as updates, training, help desk, and permanent identifiers. The paper argues that GO and BFO are contrasted in this respect with DOLCE, which approximates more closely to a scientific theory or a scientific publication. The paper provides a detailed overview of ontology services and concludes with a discussion of some implications of the product-service system approach for the understanding of the nature of applied ontology. Ontology developer communities are compared in this respect with developers of scientific theories and of standards (such as W3C). For each of these we can ask: what kinds of products do they develop, and what kinds of services do they provide for the users of these products?
Enhancing the correctness of BPMN models
While some of the OMG's metamodels include a formal specification of well-formedness rules, using OCL, the BPMN metamodel specification states those rules only in natural language. Although several BPMN tools claim to support, at least partly, the OMG's BPMN specification, we found that mainstream BPMN tools do not enforce most of the prescribed BPMN rules. Furthermore, our verification of publicly available BPMN process models showed that a significant percentage of them fail to comply with the well-formedness rules of the BPMN specification. Enforcing process model correctness is relevant both for the sake of better-quality process modeling and to obtain models amenable to enactment. In this chapter we propose supplementing the BPMN metamodel with well-formedness rules expressed as OCL invariants in order to enforce the correctness of BPMN models.
Mining Useful Information from Big Data Models Through Semantic-based Process Modelling and Analysis
Over the past few decades, most existing methods for analysing large, growing knowledge bases, particularly Big Data, have focused on building algorithms and/or technologies to help the knowledge bases extend automatically or semi-automatically. Indeed, a vast number of the systems that construct such large knowledge bases grow continuously, and most often they do not contain all of the facts about each process instance or element that can be found within the process base. As a consequence, the resultant process models tend to be vague or to contain missing values. In view of this challenge, the work in this paper demonstrates that a well-designed information retrieval system or process mining (PM) method should present the results or discovered patterns in a formal and structured format that can be interpreted as domain knowledge. To this end, the work introduces a process mining approach that supports further enhancement of existing information systems or knowledge bases through conceptual data analysis. In turn, the paper proposes a semantic-based process mining and analysis method (or, better still, an information retrieval and extraction system) that is capable of detecting patterns or unobserved behaviours within any given knowledge base by making use of the underlying semantics or properties (metadata) that describe the available data. Thus, the proposed approach is grounded in semantic modelling and process mining techniques. The work illustrates the method using a learning-process case study. The goal is to discover user interaction patterns within a learning execution environment and to respond by making decisions based on semantic analysis of the captured users' data. Practically, the method applies semantic annotation and ontological representation of the learning-process domain data and the resultant models in order to discover patterns automatically by means of semantic reasoning.
Theoretically, the process mining and modelling method shows that one way of addressing the common challenge with computationally intelligent systems is an effective, well-designed, fit-for-purpose system that meets the requirements and needs of the intended users. In other words, this paper applies effective reasoning methods to make inferences over a process knowledge base (e.g. a learning process), leading to the automated discovery of learning patterns or behaviour.
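The semantic-annotation idea in the abstract above can be sketched in a few lines: events in a learning log are tagged with ontology concepts, and a small subsumption hierarchy lets a query over an abstract concept match concretely labelled events. This is an invented toy, not the paper's system; the concept names, hierarchy, and log are all hypothetical.

```python
# Toy subsumption hierarchy: child concept -> parent concept.
subsumption = {
    "WatchVideo": "ConsumeContent",
    "ReadText": "ConsumeContent",
    "SubmitQuiz": "Assessment",
}

def ancestors(concept):
    """Walk the hierarchy upward, collecting all parent concepts."""
    result = []
    while concept in subsumption:
        concept = subsumption[concept]
        result.append(concept)
    return result

def matches(event_concept, query_concept):
    """True if the event's concept is, or specialises, the query concept."""
    return query_concept == event_concept or query_concept in ancestors(event_concept)

# Annotated learning log: (user, concept) pairs.
log = [("u1", "WatchVideo"), ("u1", "SubmitQuiz"), ("u2", "ReadText")]

# Who consumed content, regardless of the concrete activity label?
print(sorted({u for u, c in log if matches(c, "ConsumeContent")}))  # -> ['u1', 'u2']
```

The payoff of the semantic layer is visible even at this scale: the query never mentions `WatchVideo` or `ReadText`, yet both users are found, which is the "reasoning over metadata" that a purely syntactic log search cannot do.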
Collaborative ontology engineering: a survey
Building ontologies in a collaborative and increasingly community-driven
fashion has become a central paradigm of modern ontology engineering. This
understanding of ontologies and ontology engineering processes is the result
of intensive theoretical and empirical research within the Semantic Web
community, supported by technology developments such as Web 2.0. Over 6 years
after the publication of the first methodology for collaborative ontology
engineering, it is generally acknowledged that, in order to be useful, but
also economically feasible, ontologies should be developed and maintained in a
community-driven manner, with the help of fully-fledged environments providing
dedicated support for collaboration and user participation. Wikis, and similar
communication and collaboration platforms enabling ontology stakeholders to
exchange ideas and discuss modeling decisions are probably the most important
technological components of such environments. In addition, process-driven
methodologies assist the ontology engineering team throughout the ontology
life cycle, and provide empirically grounded best practices and guidelines for
optimizing ontology development results in real-world projects. The goal of
this article is to analyze the state of the art in the field of collaborative
ontology engineering. We will survey several of the most outstanding
methodologies, methods and techniques that have emerged in the last years, and
present the most popular development environments, which can be utilized to
carry out, or facilitate specific activities within the methodologies. A
discussion of the open issues identified concludes the survey and provides a
roadmap for future research and development in this lively and promising
field.
FLACOS'08 Workshop proceedings
The 2nd Workshop on Formal Languages and Analysis of Contract-Oriented Software (FLACOS'08) is held in Malta. The aim of the workshop is to bring together researchers and practitioners working on language-based solutions to contract-oriented software development. The workshop is partially funded by the Nordunet3 project "COSoDIS" (Contract-Oriented Software Development for Internet Services) and it attracted 25 participants. The program consists of 4 regular papers and 10 invited participant presentations.