BPMN: A Meta Model for the Happy Path
Recently, the OMG has been working on a new standard for a business process management notation (BPMN). This standardisation effort produces documents that contain the newest approved version of a standard, or a standard proposal that can be amended. It is our vision that such a standard document, which also serves as a specification for BPMN modelling tool developers, could benefit from a fact-oriented model in which the same domain knowledge is represented conceptually as a list of concept definitions (including naming conventions), a set of information structure diagrams, and the constraints or business rules that govern the instances of the information structure diagrams. In this paper we show precisely how such a fact-oriented conceptual view on a standard document can be created, and how a fact-oriented approach can improve the completeness of a specification.
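As a hypothetical illustration (the names and the rule below are assumptions, not taken from the paper), the sketch shows the three ingredients of such a fact-oriented model in miniature: a concept definition, an elementary fact type, and a business rule checked against a population of fact instances.

```python
# Hypothetical sketch (names and the rule are assumptions, not from the
# paper): a concept definition, an elementary fact type, and a business
# rule verified against a population of fact instances.
from dataclasses import dataclass

CONCEPT_DEFINITIONS = {
    "Gateway": "A point in a BPMN process where sequence flows diverge or converge.",
}

@dataclass(frozen=True)
class Fact:
    """Instance of the elementary fact type 'Gateway splits SequenceFlow'."""
    gateway: str
    sequence_flow: str

def rule_each_flow_has_one_gateway(population):
    """Business rule: each sequence flow is split by at most one gateway."""
    seen = set()
    for fact in population:
        if fact.sequence_flow in seen:
            return False      # rule violated: flow split by two gateways
        seen.add(fact.sequence_flow)
    return True

population = [Fact("gateway-1", "flow-a"), Fact("gateway-2", "flow-b")]
assert rule_each_flow_has_one_gateway(population)
```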
COMPLIANCE TO QUALITY CRITERIA OF EXISTING REQUIREMENTS ELICITATION METHODS
In this article we define a requirements elicitation method based on natural language modelling. We argue that our method complies with synthesized quality criteria for requirements engineering (RE) methods, and we compare this compliance with that of traditional RE methods (EER, ORM, UML). We present limited empirical evidence to support our theoretical argument.
Conceptual modelling and the quality of ontologies: A comparison between object-role modelling and the object paradigm
Ontologies are key enablers for sharing precise and machine-understandable semantics among different applications and parties. Yet, for ontologies to meet these expectations, their quality must be of a good standard. The quality of an ontology depends strongly on the design method employed. This paper addresses the design problems related to the modelling of ontologies, concentrating on issues related to the quality of the conceptualisations produced. The paper aims to demonstrate the impact of the adopted modelling paradigm on the quality of ontological models and, consequently, the potential impact that such a decision can have on the development of software applications. To this aim, an ontology conceptualised with the Object Role Modelling (ORM) approach is re-engineered into one modelled on the basis of the Object Paradigm (OP). The two ontologies are then analytically compared against specified criteria. The comparison highlights that using the OP for ontology conceptualisation can provide more expressive, reusable, objective and temporal ontologies than those conceptualised with the ORM approach.
A knowledge-based e-tutorial system based on the ORM model
At present, information technology plays an important role in teaching and learning activities. E-learning systems have the potential to reduce operating costs and train more people. Teachers and students do not have to be in the same place at the same time, and students have the opportunity to perform self-study and self-evaluation using e-tutorial systems. E-learning systems can be considered expert systems in the sense that they provide expert advice on particular subjects of study to students. The exploitation of knowledge bases and knowledge representation techniques is therefore vital to the development of e-learning systems. This paper presents the development of a knowledge-based e-tutorial system that uses the Object Role Model (ORM) as its knowledge representation. The system provides Physics tutorials. It was implemented in Prolog, and the knowledge base resides on a relational database server.
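The paper's system is implemented in Prolog over a relational database; purely as a rough Python sketch of the same idea (the table layout and fact names are assumptions, not the paper's actual knowledge base), ORM-style binary facts can be stored relationally and queried to answer a tutorial question:

```python
# Rough sketch (not the paper's Prolog code): ORM-style binary facts
# stored in a relational table and queried to answer a tutorial question.
# Schema and fact names are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact (subject TEXT, role TEXT, object TEXT)")
conn.executemany(
    "INSERT INTO fact VALUES (?, ?, ?)",
    [
        ("force",        "is_defined_as", "mass times acceleration"),
        ("acceleration", "has_unit",      "m/s^2"),
    ],
)

def answer(subject, role):
    """Look up the object that plays `role` with `subject`, ORM fact style."""
    row = conn.execute(
        "SELECT object FROM fact WHERE subject = ? AND role = ?",
        (subject, role),
    ).fetchone()
    return row[0] if row else None

print(answer("force", "is_defined_as"))  # -> mass times acceleration
```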
Conceptual Modelling and The Quality of Ontologies: Endurantism Vs. Perdurantism
Ontologies are key enablers for sharing precise and machine-understandable semantics among different applications and parties. Yet, for ontologies to meet these expectations, their quality must be of a good standard. The quality of an ontology depends strongly on the design method employed. This paper addresses the design problems related to the modelling of ontologies, concentrating on issues related to the quality of the conceptualisations produced. The paper aims to demonstrate the impact of the adopted modelling paradigm on the quality of ontological models and, consequently, the potential impact that such a decision can have on the development of software applications. To this aim, an ontology conceptualised with the Object-Role Modelling (ORM) approach (a representative of endurantism) is re-engineered into one modelled on the basis of the Object Paradigm (OP) (a representative of perdurantism). The two ontologies are then analytically compared against specified criteria. The comparison highlights that using the OP for ontology conceptualisation can provide more expressive, reusable, objective and temporal ontologies than those conceptualised with the ORM approach.
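A hypothetical sketch of the contrast (class and attribute names are illustrative, not taken from the compared ontologies): an endurantist model attaches mutable attributes to one wholly-present object, while a perdurantist model represents the object through time-indexed temporal parts, which is what enables the temporal queries the abstract alludes to.

```python
# Hypothetical contrast between the two paradigms; class and attribute
# names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class EndurantPerson:
    """Endurantism: one wholly-present object whose attributes change in place."""
    name: str
    employer: str          # overwritten on change; history is lost

@dataclass(frozen=True)
class PersonState:
    """Perdurantism: a temporal part of a person, valid over an interval."""
    employer: str
    start: int             # e.g. year the state begins
    end: int | None        # None = still current

@dataclass
class PerdurantPerson:
    name: str
    states: list[PersonState] = field(default_factory=list)

    def employer_at(self, year: int) -> str | None:
        """A temporal query that the endurantist model cannot answer."""
        for s in self.states:
            if s.start <= year and (s.end is None or year < s.end):
                return s.employer
        return None

p = PerdurantPerson("Alice", [PersonState("UniA", 2001, 2005),
                              PersonState("UniB", 2005, None)])
print(p.employer_at(2003))  # -> UniA
```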
Blockchain Enterprise Ontologies: TOVE and DEMO
Enterprise ontologies for blockchain transactions comprise datalogical, infological and essential levels. OntoClean analyzes ontologies on the basis of formal, domain-independent properties (metaproperties); it was the first attempt to formalize the notion of ontological analysis for computer systems, and its notions are drawn from philosophical ontology. In the semantic web, a property is a binary relationship, with a subtle distinction between property and class; a metaproperty is thus a property of a property or of a class. Ontology design can proceed once there is a basic understanding of blockchain analysis.
DOI: 10.13140/RG.2.2.23158.9632
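As a hypothetical illustration of a metaproperty (the tiny taxonomy and class names are assumptions, not from the cited work), the sketch below tags classes with OntoClean-style rigidity and checks the well-known OntoClean rule that an anti-rigid class must not subsume a rigid one:

```python
# Illustrative sketch of OntoClean-style metaproperties (rigidity) and one
# constraint check; the class names and taxonomy are assumptions.
RIGID, ANTI_RIGID = "+R", "~R"

rigidity = {
    "Person":  RIGID,       # every person is necessarily a person
    "Student": ANTI_RIGID,  # being a student is a contingent role
}
subclass_of = {"Student": "Person"}   # Student is a subclass of Person

def violations(rigidity, subclass_of):
    """OntoClean rule: a rigid class must not be subsumed by an anti-rigid one."""
    out = []
    for sub, sup in subclass_of.items():
        if rigidity.get(sub) == RIGID and rigidity.get(sup) == ANTI_RIGID:
            out.append((sub, sup))
    return out

print(violations(rigidity, subclass_of))  # -> [] (this taxonomy is clean)
```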
Towards active conceptual modelling for sudden events
There are a number of issues for information systems that are required to collect data urgently which are not well accommodated by current conceptual modelling methodologies, and as a result the modelling step (and the use of databases) is often omitted. Such issues include the fact that:
• the number of instances for each entity is relatively low, so that data definition takes a disproportionate amount of effort;
• the storage of data and the retrieval of information must take priority over the full definition of a schema describing that data;
• the data undergo regular structural change and are thus subject to information loss as a result of changes to the schema's information capacity;
• the structure of the information is likely to be only partially known, or there may be multiple, perhaps contradictory, competing hypotheses as to the underlying structure.
This paper presents the Low Instance-to-Entity Ratio (LItER) Model, which attempts to circumvent some of the problems encountered by these types of application and to provide a platform and modelling technique for handling rapidly occurring phenomena. The two-part LItER modelling process possesses an overarching architecture which provides hypothesis, knowledge base and ontology support together with a common conceptual schema. This allows data to be stored immediately and a more refined conceptual schema to be developed later. LItER modelling also aims to facilitate later translation to EER, ORM and UML models and the use of (a form of) SQL. Moreover, an additional benefit of the model is that it provides a partial solution to a number of outstanding issues in current conceptual modelling systems.
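A speculative sketch of the "store now, refine later" idea (the generic layout below is an assumption, not the paper's actual common conceptual schema): capture incoming facts immediately in a generic entity-attribute-value table, then project them into a refined schema once the structure has stabilised.

```python
# Speculative sketch of the 'store now, model later' idea behind LItER;
# the entity-attribute-value layout is an assumption, not the paper's
# actual common conceptual schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE observation (
    entity    TEXT,   -- provisional entity identifier
    attribute TEXT,   -- attribute name, possibly not yet agreed
    value     TEXT)""")

# Data from a sudden event is captured immediately, schema-free.
conn.executemany("INSERT INTO observation VALUES (?, ?, ?)", [
    ("shelter-3", "capacity", "120"),
    ("shelter-3", "location", "riverbank"),
])

# Later, once the structure stabilises, project into a refined schema.
conn.execute("""CREATE TABLE shelter AS
    SELECT entity AS id,
           MAX(CASE WHEN attribute = 'capacity' THEN value END) AS capacity,
           MAX(CASE WHEN attribute = 'location' THEN value END) AS location
    FROM observation GROUP BY entity""")
print(conn.execute("SELECT * FROM shelter").fetchall())
```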