What is the Natural Abstraction Level of an Algorithm?
Challenges and Directions in Formalizing the Semantics of Modeling Languages
Developing software from models is a growing practice, and many model-based tools (e.g., editors, interpreters, debuggers, and simulators) exist to support model-driven engineering. Even though these tools facilitate the automation of software engineering tasks and activities, the tools themselves are typically engineered manually. However, many of them share a common semantic foundation centered on an underlying modeling language, which would make it possible to automate their development if the modeling language specification were formalized. Although much work has gone into formalizing programming languages, with many successful tools built on such formalisms, little work has addressed formalizing modeling languages for the purpose of automation. This paper discusses possible semantics-based approaches to the formalization of modeling languages and describes how such a formalism may be used to automate the construction of modeling tools.
Requirements, design and business process reengineering as vital parts of any system development methodology
This thesis analyzes different aspects of the system development life cycle, concentrating on the requirements and design stages. It describes various methodologies, methods, and tools that have been developed over the years, evaluates them, and compares them against each other. Finally, it concludes that a very important stage is missing from the system development life cycle: the Business Process Reengineering stage.
The Semantic Web … Sounds Logical!
The Semantic Web will be an enabling technology for the future because as all of life's components continue to progress and evolve, the demand on us as humans will continue to increase. Work will expect more productivity; family will demand more quality time, and even leisure activities will be technologically advanced. With these variables in mind, I believe humans will demand technologies that help to simplify this treacherous lifestyle. As patterns already indicate, one of the driving forces of technological development is efficiency. Developers are consistently looking for ways to make life's demands less strenuous and more streamlined. The benefits of the Semantic Web are two-fold. Conceptually, it will enable us to be productive at home while at work, and productive at work while at home. The Semantic Web will be a technology that truly changes our lifestyle. The Web has yet to harness its full potential. We have yet to realize that in addition to computers, other machines can actually participate in the decision-making process via the Internet. This will allow virtually all devices the opportunity to be a helpful resource for humans via the Web. It must be taken into consideration that the Semantic Web will not be separate from the World Wide Web, but an extension of it. It will allow information to be given a well-defined meaning, which will allow computers and people to work in cooperation. With this technology, humans will be able to establish connections to machines that are not currently connected to the World Wide Web. For the Semantic Web to function, computers must have access to structured collections of information and sets of inference rules that they can use to conduct automated reasoning (Scientific American: Feature Article: The Semantic Web, 3). Using rules to make inferences, choosing a course of action, and answering questions will add functional logic to the Web.
Currently the Semantic Web community is developing this new Web by using Extensible Markup Language (XML) and Resource Description Framework (RDF) and, ultimately, Ontologies.
Atomic service-based scheduling for web services composition
With the rapid development of Internet technologies and the widespread adoption of Internet applications, Web Services have become an important research topic of the World Wide Web Consortium (W3C). In order to cope with various requirements from service users, services need to be thoroughly and precisely described; thus the current service description model based on OWL-S, an ontology structure consisting of service profiles and operations, needs to be improved by adding more properties. Semantics is widely considered one of the core supplements, as it can provide the metadata of services and so better match requirements with services in the service repository. On the other hand, Web Services have attracted people from various fields to perform experiments on how to cope with users' requirements. Service providers tend to coordinate service implementation by interacting with available resources and reconstructing existing service modules. The integration of self-contained software components becomes a key step in meeting service demands. This thesis contributes to current service description. The introduction of the term "Atomic Service" not only yields a more refined service structure, but also serves as the fundamental component for all service modules. Based on this, the thesis discusses issues including composition and scheduling, with the purpose of building interoperations among composable service units and setting up a mechanism for realising business goals with composite services under the guidance of the service scheduling language. This notion is illustrated in a demonstration system that justifies the manageable interrelationship between service modules and the way of composition.
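The abstract's central idea, atomic services as indivisible units that a scheduler composes into a composite service, can be sketched as follows. This is a minimal illustration under assumed names (`AtomicService`, `compose`), not the thesis's actual API or scheduling language.

```python
from typing import Callable, List

class AtomicService:
    """An indivisible, self-contained service unit (illustrative)."""
    def __init__(self, name: str, op: Callable[[dict], dict]):
        self.name = name
        self.op = op

    def invoke(self, payload: dict) -> dict:
        return self.op(payload)

def compose(services: List[AtomicService]) -> Callable[[dict], dict]:
    """Schedule atomic services in sequence, feeding each one's output onward."""
    def composite(payload: dict) -> dict:
        for s in services:
            payload = s.invoke(payload)
        return payload
    return composite

# Example composite service: normalize an order, then price it.
normalize = AtomicService("normalize", lambda o: {**o, "sku": o["sku"].upper()})
price = AtomicService("price", lambda o: {**o, "total": 10 * o["qty"]})
order_pipeline = compose([normalize, price])
print(order_pipeline({"sku": "ab1", "qty": 3}))  # {'sku': 'AB1', 'qty': 3, 'total': 30}
```

The point of the sketch is that each atomic service is self-contained, so composition reduces to wiring outputs to inputs; a real scheduling language would also express parallel and conditional composition.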
Semantic Model Alignment for Business Process Integration
Business process models describe an enterprise's way of conducting business and thus form the basis for shaping the organization and engineering the appropriate supporting or even enabling IT. A major task in working with models is therefore their analysis and comparison for the purpose of aligning them. As models can differ semantically not only in the modeling languages used, but even more so in the way the natural language for labeling model elements has been applied, correctly identifying the intended meaning of a legacy model is a non-trivial task that thus far has only been solved by humans. In particular at the time of reorganizations, the set-up of B2B collaborations, or mergers and acquisitions, the semantic analysis of models of different origin that need to be consolidated is a manual effort that is not only tedious and error-prone but also time-consuming, costly, and often repetitive. To facilitate the automation of this task by means of IT, this thesis presents the new method of Semantic Model Alignment. Its application makes it possible to extract and formalize the semantics of models, relating them based on the modeling language used and determining similarities based on the natural language used in model element labels. The resulting alignment supports model-based semantic business process integration. The research conducted follows a design-science-oriented approach, and the method has been developed together with all its enabling artifacts. These results were published as the research progressed and are presented in this thesis through a selection of peer-reviewed publications that comprehensively describe the various aspects.
Process modelling for information system description
My previous experiences and some preliminary studies of the relevant technical literature allowed me to identify several reasons why the current state of database theory seemed unsatisfactory and required further research. These reasons included: insufficient formalism of data semantics, misinterpretation of NULL values, inconsistencies in the concept of the universal relation, certain ambiguities in domain definition, and inadequate representation of facts and constraints.
The commonly accepted 'sequentiality' principle in most of the current system design methodologies imposes strong restrictions on the processes that a target system is composed of. They must be algorithmic and must not be interrupted during execution; neither may they have any parallel subprocesses as their own components. This principle can no longer be considered acceptable: in many existing systems, multiple processors perform many concurrent actions that can interact with each other.
The overconcentration on data models is another disadvantage of the majority of system design methods. Many techniques pay little (or no) attention to process definition. They assume that the model of the Real World consists only of data elements and relationships among them. However, the way the processes are related to each other (in terms of precedence relation) may have considerable impact on the data model.
It has been assumed that the Real World is discretisable, i.e. it may be modelled by a structure of objects. The word object is to be interpreted in a wide sense so it can mean anything within the boundaries of this part of the Real World that is to be represented in the target system. An object may then denote a fact or a physical or abstract entity, or relationships between any of these, or relationships between relationships, or even a still more complex structure.
The fundamental hypothesis formulated here states the necessity of considering three aspects of modelling - syntax, semantics and behaviour - and of considering them integrally.
A syntactic representation of an object within a target system is called a construct. A construct which cannot be decomposed further (either syntactically or semantically) is defined to be an atom. Any construct is a result of the following production rules: construct ::= atom | function construct; function ::= atom | construct. This syntax forms a sentential notation.
The sentential notation allows for extensive use of denotational semantics. The meaning of a construct may be defined as a function mapping from a set of syntactic constructs to the appropriate semantic domains; these in turn appear to be sets of functions since a construct may have a meaning in more than one class of objects. Because of its functional form the meaning of a construct may be derived from the meaning of its components.
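The two paragraphs above - the sentential grammar and its denotational semantics - can be sketched together: constructs are either atoms or function applications, and the meaning of a compound construct is derived compositionally from the meanings of its components. The representation below (`Atom`, `Apply`, `meaning`, the example environment) is an illustrative assumption, not the thesis's actual formalism.

```python
from dataclasses import dataclass
from typing import Dict, Union

@dataclass(frozen=True)
class Atom:
    """A construct that cannot be decomposed further."""
    name: str

@dataclass(frozen=True)
class Apply:
    """construct ::= function construct (a function applied to a construct)."""
    function: "Construct"
    argument: "Construct"

Construct = Union[Atom, Apply]

def meaning(c: Construct, env: Dict[str, object]) -> object:
    """Denotational semantics: map a syntactic construct into a semantic
    domain. The meaning of a compound is computed from its components."""
    if isinstance(c, Atom):
        return env[c.name]
    f = meaning(c.function, env)
    x = meaning(c.argument, env)
    return f(x)

# Example: the construct "double three" under one possible environment.
env = {"double": lambda n: 2 * n, "three": 3}
expr = Apply(Atom("double"), Atom("three"))
print(meaning(expr, env))  # 6
```

Because `meaning` recurses only on subconstructs, the semantics is compositional in exactly the sense the abstract describes: the denotation of a construct is a function of the denotations of its parts.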
The issue of system behaviour needed further investigation and a revision of the conventional model of computing. The sequentiality principle has been rejected, concurrency being regarded as a natural property of processes. A postulate has been formulated that any potential parallelism should be constructively used for data/process design and that the process structure would affect the data model. An important distinction has been made between a process declaration - considered as a form of data or an abstraction of knowledge - and a process application that corresponds to a physical action performed by a processor, according to a specific process declaration. In principle, a process may be applied to any construct - including its own representation - and it is a matter of semantics to state whether or not it is sensible to do so. The process application mechanism has been explained in terms of formal systems theory by introducing an abstract machine with two input and two output types of channels.
The system behaviour has been described by defining a process calculus. It is based on the logical and functional properties of a discrete time model and provides a means to handle expressions composed of process variables connected by logical functors. The basic terms of the calculus are constructs and operations (equivalence, approximation, precedence, incidence, free-parallelism, strict-parallelism). Certain properties of these operations (e.g. associativity or transitivity) allow for handling large expressions. Rules for decomposing/integrating process applications, analogous in some sense to those forming the basis for structured programming, have been derived.
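To make the flavour of such a calculus concrete, here is a small sketch in which process expressions are built from a precedence operator and a parallelism operator, and then decomposed into one admissible sequential ordering. The operator set and the decomposition rule are illustrative assumptions; the thesis's calculus has a richer set of operations (equivalence, approximation, incidence, strict-parallelism) not modelled here.

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Proc:
    """A process variable."""
    name: str

@dataclass(frozen=True)
class Seq:
    """Precedence: left must complete before right may start."""
    left: "Expr"
    right: "Expr"

@dataclass(frozen=True)
class Par:
    """Free-parallelism: components may run concurrently."""
    left: "Expr"
    right: "Expr"

Expr = Union[Proc, Seq, Par]

def serialize(e: Expr) -> Tuple[str, ...]:
    """Decompose an expression into one admissible sequential ordering.
    For Par, any interleaving is admissible; left-then-right is chosen here."""
    if isinstance(e, Proc):
        return (e.name,)
    return serialize(e.left) + serialize(e.right)

# Example: 'read' must precede both 'validate' and 'log', which are parallel.
expr = Seq(Proc("read"), Par(Proc("validate"), Proc("log")))
print(serialize(expr))  # ('read', 'validate', 'log')
```

Associativity of the operators, one of the properties the abstract mentions, is what lets large expressions be regrouped freely before decomposition.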