Formal REA model at operational level
Despite the considerable attention the Resource-Event-Agent (REA) framework has gained among researchers in enterprise modeling, it still lacks a comprehensive formal description. Most formalization approaches to REA use only UML or other graphical representations. This paper aims to define the REA ontology at the operational level using the tools of formal logic. The general approach to the formal logic description of REA was motivated by LTAP, introduced by Ito, Hagihara and Yonezaki. After the basic REA concepts are presented, the semantics and the logical language LREA are defined, including axioms for the REA operational level. Future research is briefly described in the conclusion.
Keywords: REA framework; formal models; modal logic
A Detailed Comparison of UML and OWL
As models and ontologies assume an increasingly central role in software and information systems engineering, the question of how exactly they compare, and how they can sensibly be used together, assumes growing importance. However, no study to date has systematically and comprehensively compared the two technology spaces, and a large variety of bridging and integration ideas have been proposed in recent years without any detailed analysis of whether they are sound or useful. In this paper, we address this problem by providing a detailed and comprehensive comparison of the two technology spaces in terms of their flagship languages – UML and OWL – each a de facto and de jure standard in its respective space. To fully analyze the end user experience, we perform the comparison at two levels – one considering the underlying boundary assumptions and philosophy adopted by each language, the other considering their detailed features. We also consider all relevant auxiliary languages such as OCL. The resulting comparison clarifies the relationship between the two technologies and provides a solid foundation for deciding how to use them together or integrate them.
Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality
We survey diverse approaches to the notion of information, from Shannon entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov complexity are presented: randomness and classification. The survey is divided into two parts published in the same volume. Part II is dedicated to the relation between logic and information systems, within the scope of Kolmogorov algorithmic information theory. We present a recent application of Kolmogorov complexity: classification using compression, an idea with provocative implementations by authors such as Bennett, Vitanyi and Cilibrasi. This stresses how Kolmogorov complexity, besides being a foundation of randomness, is also related to classification. Another approach to classification is also considered: the so-called "Google classification". It uses another original and attractive idea which is connected to classification using compression and to Kolmogorov complexity from a conceptual point of view. We present and unify these different approaches to classification in terms of Bottom-Up versus Top-Down operational modes, for which we point out the fundamental principles and the underlying duality. We look at the way these two dual modes are used in different approaches to information systems, particularly the relational model for databases introduced by Codd in the 1970s. This allows us to point out diverse forms of a fundamental duality. These operational modes are also reinterpreted in the context of the comprehension schema of axiomatic set theory ZF. This leads us to develop how Kolmogorov complexity is linked to intensionality, abstraction, classification and information systems.
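The classification-by-compression idea can be made concrete with the normalized compression distance (NCD), which approximates the uncomputable Kolmogorov complexity C(x) by the length of a real compressor's output. A minimal sketch in Python using zlib; the compressor choice and the test strings are illustrative assumptions, not the authors' exact implementation:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length of data, a practical stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 means 'similar', near 1 'unrelated'."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a  = b"the quick brown fox jumps over the lazy dog " * 20
b_ = b"the quick brown fox jumps over the lazy cat " * 20
r  = b"kx93:qpz!vm@74hw&c2j#ns8$bt5yd^gf1" * 25  # unrelated low-overlap text

print(ncd(a, b_))  # similar texts: compressing them together saves space
print(ncd(a, r))   # unrelated texts: concatenation compresses no better
```

Clustering a corpus by its pairwise NCD matrix is the essence of the compression-based classification attributed above to Bennett, Vitanyi and Cilibrasi.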
Ontology-driven conceptual modeling: A systematic literature mapping and review
Ontology-driven conceptual modeling (ODCM) is still a relatively new research domain in the field of information systems, and there is still much discussion on how research in ODCM should be performed and what its focus should be. Therefore, this article critically surveys the existing literature in order to assess the kind of research that has been performed over the years, analyze the nature of the research contributions, and establish the current state of the art by positioning, evaluating and interpreting relevant research related to ODCM. To understand and identify gaps and research opportunities, our literature study comprises both a systematic mapping study and a systematic review study. The mapping study structures and classifies the area under investigation in order to give a general overview of the research performed in the field. A review study, on the other hand, is a more thorough and rigorous inquiry and provides recommendations based on the strength of the evidence found. Our results indicate several research gaps that should be addressed, and we further identify several research opportunities that are possible areas for future research.
An Approach to Conceptual Schema Evolution
In this work we analyse the conceptual foundations of user-centric content management. Content management often involves the integration of content that was created from different points of view. Current modeling techniques, and especially current systems, lack sufficient support for handling these situations. Although schema integration is undecidable in general, we introduce a conceptual model together with a modeling and maintenance methodology that simplifies content integration in many practical situations. We define a conceptual model based on the Higher-Order Entity Relationship Model that combines the advantages of schema-oriented modeling techniques, such as ER modeling, with element-driven paradigms, such as approaches to semistructured data management. This model is ready to support contextual reasoning based on local model semantics. For the special case of schema evolution based on schema versioning, we derive the compatibility relation between local models by tracking dependencies of schema revisions. Additionally, we discuss implementation facets, such as storage aspects for structurally flexible content and the generation of adaptive user interfaces based on a conceptual interaction model.
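The idea of deriving compatibility by tracking dependencies of schema revisions can be sketched over a revision graph. The concrete relation below, compatible when one version's revision history contains the other, is an illustrative assumption for the sketch, not the paper's exact definition:

```python
def ancestors(rev: str, parents: dict) -> set:
    """All revisions reachable from rev via parent links, including rev itself."""
    seen, stack = set(), [rev]
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            stack.extend(parents.get(r, []))
    return seen

def compatible(r1: str, r2: str, parents: dict) -> bool:
    """Compatible iff one schema version evolved from the other (sketch only)."""
    return r1 in ancestors(r2, parents) or r2 in ancestors(r1, parents)

# A small revision DAG: v2 and v3 branch from v1; v4 merges both branches.
parents = {"v2": ["v1"], "v3": ["v1"], "v4": ["v2", "v3"]}
print(compatible("v2", "v4", parents))  # True: v4's history contains v2
print(compatible("v2", "v3", parents))  # False: sibling branches
```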
On a Graph-Based Semantics for UML Class and Object Diagrams
In this paper we propose a formal extension of type graphs with notions that are commonplace in the UML and have long proven their worth in that context: namely, inheritance, multiplicity, containment and the like. We believe the absence of a comprehensive and commonly agreed upon formalisation of these notions to be an important and, unfortunately, often ignored omission. Since our eventual aim (shared by many researchers) is to give unambiguous, formal semantics to the UML using the theory of graphs and graph transformation, in this paper we propose a set of definitions to repair this omission. With respect to previous work in this direction, our aim is to arrive at more comprehensive and at the same time simpler definitions.
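A minimal sketch of the kind of structure being formalized: a type graph with inheritance and edge multiplicities, plus a conformance check for instance graphs. All names, and the restriction to single inheritance, are illustrative assumptions, not the paper's definitions:

```python
from dataclasses import dataclass

@dataclass
class TypeGraph:
    node_types: set
    parent: dict   # child type -> parent type (single inheritance for simplicity)
    edges: dict    # edge label -> (source type, target type, min, max)

    def conforms(self, sub: str, sup: str) -> bool:
        """Does type sub inherit (transitively) from sup, or equal it?"""
        while sub is not None:
            if sub == sup:
                return True
            sub = self.parent.get(sub)
        return False

def check_instance(tg: TypeGraph, nodes: dict, edges: list) -> bool:
    """nodes: node id -> declared type; edges: (label, src id, tgt id) triples.
    True iff edge typing and outgoing-edge multiplicities are respected."""
    for label, src, tgt in edges:
        st, tt, _, _ = tg.edges[label]
        if not (tg.conforms(nodes[src], st) and tg.conforms(nodes[tgt], tt)):
            return False
    for label, (st, _, lo, hi) in tg.edges.items():
        for n, t in nodes.items():
            if tg.conforms(t, st):
                count = sum(1 for l, s, _ in edges if l == label and s == n)
                if not (lo <= count <= hi):
                    return False
    return True

# A UML-flavoured example: every Car has exactly 4 wheels; SportsCar is a Car.
tg = TypeGraph(
    node_types={"Car", "SportsCar", "Wheel"},
    parent={"SportsCar": "Car"},
    edges={"wheels": ("Car", "Wheel", 4, 4)},
)
nodes = {"c1": "SportsCar", "w1": "Wheel", "w2": "Wheel", "w3": "Wheel", "w4": "Wheel"}
edges = [("wheels", "c1", f"w{i}") for i in range(1, 5)]
print(check_instance(tg, nodes, edges))  # SportsCar inherits Car's constraint
```

Note that the multiplicity on `wheels` applies to the subtype `SportsCar` through inheritance, which is exactly the interaction between inheritance and multiplicity that a formalisation must pin down.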
A Survey of Top-Level Ontologies - to inform the ontological choices for a Foundation Data Model
The Centre for Digital Built Britain has been tasked through the Digital Framework Task Group to develop an Information Management Framework (IMF) to support the development of a National Digital Twin (NDT), as set out in "The Pathway to an Information Management Framework" (Hetherington, 2020). A key component of the IMF is a Foundation Data Model (FDM), built upon a top-level ontology (TLO), as a basis for ensuring consistent data across the NDT. This document captures the results collected from a broad survey of top-level ontologies, conducted by the IMF technical team. It focuses on the core ontological choices made in their foundations and the pragmatic engineering consequences these have on how the ontologies can be applied and further scaled. This document will provide the basis for discussions on a suitable TLO for the FDM. It is also expected that these top-level ontologies will provide a resource whose components can be harvested and adapted for inclusion in the FDM.
Subjects, Models, Languages, Transformations
Discussions about model-driven approaches tend to be hampered by terminological confusion. This is at least partially caused by a lack of formal precision in defining the basic concepts, including that of "model" and "thing being modelled" - which we call subject in this paper. We propose a minimal criterion that a model should fulfil: essentially, it should come equipped with a clear and unambiguous membership test; in other words, a notion of which subjects it models. We then go on to discuss a certain class of models of models that we call languages, which apart from defining their own membership test also determine membership of their members. Finally, we introduce transformations on each of these layers: a subject transformation is essentially a pair of subjects, a model transformation is both a pair of models and a model of pairs (namely, subject transformations), and a language transformation is both a pair of languages and a language of model transformations. We argue that our framework has the benefits of formal precision (there can be no doubt about whether something satisfies our criteria for being a model, a language or a transformation) and minimality (it is hard to imagine a case of modelling or transformation not having the characteristics that we propose).
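The minimal criterion, that a model is anything equipped with an unambiguous membership test, can be sketched directly in code. The parity example and the finite test domain below are illustrative assumptions for the sketch, not the paper's constructions:

```python
# A model, in this minimal sense, is just a membership test over subjects.
def model_of_evens(subject) -> bool:
    return isinstance(subject, int) and subject % 2 == 0

def model_of_odds(subject) -> bool:
    return isinstance(subject, int) and subject % 2 == 1

# A language is itself a model whose subjects are models. Here membership is
# decided by extensional agreement on a small finite domain (a sketch assumption).
DOMAIN = range(-10, 11)

def language_of_parity(candidate) -> bool:
    """Accepts any model that agrees with one of the parity models on DOMAIN."""
    return any(
        all(candidate(s) == reference(s) for s in DOMAIN)
        for reference in (model_of_evens, model_of_odds)
    )

print(language_of_parity(model_of_evens))                # True
print(language_of_parity(lambda s: isinstance(s, int)))  # False: models all ints

# A subject transformation is a map on subjects; the pair of models
# (model_of_evens, model_of_evens) relates it: doubling preserves evenness.
def double(subject):
    return subject * 2

assert all(model_of_evens(double(s)) for s in DOMAIN if model_of_evens(s))
```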