    Reo + mCRL2: A Framework for Model-Checking Dataflow in Service Compositions

    The paradigm of service-oriented computing revolutionized the field of software engineering. According to this paradigm, new systems are composed of existing stand-alone services to support complex cross-organizational business processes. Correct communication among these services is not possible without a proper coordination mechanism. The Reo coordination language is a channel-based modeling language that introduces various types of channels and their composition rules. By composing Reo channels, one can specify Reo connectors that realize arbitrarily complex behavioral protocols. Several formalisms have been introduced to give semantics to Reo. In their most basic form, they reflect the service synchronization and dataflow constraints imposed by connectors. To ensure that the composed system behaves as intended, we need a wide range of automated verification tools to assist service composition designers. In this paper, we present our framework for the verification of Reo using the mCRL2 toolset. We unify our previous work on mapping various semantic models for Reo, namely, constraint automata, timed constraint automata, coloring semantics, and the newly developed action constraint automata, to the process-algebraic specification language of mCRL2, address the correctness of this mapping, discuss tool support, and present a detailed example that illustrates the use of Reo empowered with mCRL2 for the analysis of dataflow in service-based process models.
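    As a rough intuition for the constraint-automata semantics this abstract refers to, the sketch below models a Reo FIFO1 channel as a small automaton in Python; the names are invented for illustration, and the paper itself maps such automata to the mCRL2 process algebra rather than to Python.

    # Illustrative sketch only (not the paper's mCRL2 encoding): a constraint
    # automaton for a Reo FIFO1 channel with source end 'a' and sink end 'b'.
    # States model the one-place buffer; a transition fires a set of channel
    # ends synchronously.
    FIFO1 = {
        "empty": [(frozenset({"a"}), "full")],   # accept a datum on 'a'
        "full":  [(frozenset({"b"}), "empty")],  # emit the datum on 'b'
    }

    def runs(automaton, state, depth):
        """Enumerate all port-firing sequences of the given length."""
        if depth == 0:
            return [[]]
        result = []
        for ports, nxt in automaton.get(state, []):
            for tail in runs(automaton, nxt, depth - 1):
                result.append([sorted(ports)] + tail)
        return result

    print(runs(FIFO1, "empty", 4))  # [[['a'], ['b'], ['a'], ['b']]]

    The strict alternation of 'a' and 'b' in every run is exactly the dataflow constraint a FIFO1 channel imposes; checking such behavior against temporal properties is the role the mCRL2 toolset plays in the framework.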

    Interchanging Discrete Event Simulation Process Interaction Models Using the Web Ontology Language - OWL

    Discrete event simulation development requires significant investments in time and resources. Descriptions of discrete event simulation models are associated with world views, including the process interaction orientation. Historically, these models have been encoded using high-level programming languages or special purpose, typically vendor-specific, simulation languages. These approaches complicate simulation model reuse and interchange. The current document-centric World Wide Web is evolving into a Semantic Web that communicates information using ontologies. The Web Ontology Language (OWL) was used to encode a Process Interaction Modeling Ontology for Discrete Event Simulations (PIMODES). The PIMODES ontology was developed using ontology engineering processes. Software was developed to demonstrate the feasibility of interchanging models from commercial simulation packages using PIMODES as an intermediate representation. The purpose of PIMODES is to provide a vendor-neutral open representation to support model interchange. Model interchange enables reuse and provides an opportunity to improve simulation quality, reduce development costs, and reduce development times.
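    The abstract does not show PIMODES itself; as a hedged sketch of what encoding a process-interaction concept in OWL can look like, the fragment below uses the rdflib library with an invented namespace, and class and property names that are hypothetical rather than the actual PIMODES vocabulary.

    # Hedged sketch: a tiny OWL fragment in the spirit of a process-interaction
    # ontology. Namespace, classes, and properties are invented, not PIMODES.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS
    from rdflib.namespace import OWL

    PIM = Namespace("http://example.org/pimodes#")  # placeholder namespace
    g = Graph()
    g.bind("pim", PIM)

    # An OWL class for a 'create' block and a datatype property on it.
    g.add((PIM.CreateBlock, RDF.type, OWL.Class))
    g.add((PIM.interarrivalTime, RDF.type, OWL.DatatypeProperty))
    g.add((PIM.interarrivalTime, RDFS.domain, PIM.CreateBlock))

    # One model element: a source that creates entities every 5 time units.
    g.add((PIM.Arrivals, RDF.type, PIM.CreateBlock))
    g.add((PIM.Arrivals, PIM.interarrivalTime, Literal(5.0)))

    print(g.serialize(format="turtle"))

    Because the serialized Turtle is vendor-neutral, two simulation packages that both understand the ontology can exchange such descriptions, which is the interchange role PIMODES is intended to play.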

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference, an author-supplied abstract, a number of keywords, and a classification are provided. In some cases our own comments are added. The purpose of these comments is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication), and the language of the document. After a description of the scope of the survey, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.
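    For readers unfamiliar with the technique being surveyed: a decision table simply maps combinations of condition values to actions. The toy Python version below, with rules invented purely for illustration, shows the idea.

    # A minimal decision table: condition combinations map to actions.
    # The rules here are invented purely for illustration.
    discount_table = {
        # (is_member, order_over_100): action
        (True,  True):  "apply 15% discount",
        (True,  False): "apply 5% discount",
        (False, True):  "apply 5% discount",
        (False, False): "no discount",
    }

    def decide(is_member, order_over_100):
        return discount_table[(is_member, order_over_100)]

    print(decide(True, False))  # apply 5% discount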

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    Special Theme of Research in Information Systems Analysis and Design - II. Data Modeling or Functional Modeling - Which Comes First? An Experimental Comparison

    The software analysis process consists of two main activities: data modeling and functional modeling. While traditional development methodologies usually emphasize functional modeling via dataflow diagrams (DFDs), object-oriented (OO) methodologies emphasize data modeling via class diagrams. UML includes techniques for both data and functional modeling, which are used in different methodologies in different ways and orders. This article is concerned with the ordering of modeling activities in the analysis stage. The main issue we address is whether it is better to create a functional model first and then a data model, or vice versa. We conduct a comparative experiment in which the two opposing orders are examined. We use the FOOM methodology as a platform for the experiment as it enables the creation of both a data model (a class diagram) and a functional model (hierarchical OO-DFDs), which are synchronized. The results of the experiment show that an analysis process that begins with data modeling provides better specifications than one that begins with functional modeling.
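    As a concrete miniature of the two orderings compared in the experiment (the domain and names below are invented; this is not the experiment's material or FOOM syntax): starting from the data model means pinning down entity structure first, while starting from the functional model means pinning down the processes first.

    # Invented toy domain, sketched in Python.
    from dataclasses import dataclass

    # Data-model-first: fix the entities and their attributes up front
    # (the rough analogue of drawing the class diagram first).
    @dataclass
    class Order:
        order_id: int
        amount: float
        paid: bool = False

    # Functional-model-first: fix the processes up front (plain functions
    # standing in for DFD processes), settling data shapes afterwards.
    def receive_order(raw):
        return Order(order_id=raw["id"], amount=raw["amount"])

    def settle(order):
        return Order(order.order_id, order.amount, paid=True)

    print(settle(receive_order({"id": 1, "amount": 42.0})))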

    Workshop on the EHCR

    This deliverable provides a summary report of a workshop on Electronic Health Records that was organised and delivered as the main focus of Workpackage 16 of the Semantic Mining project. The workshop was held as day three of a three-day series of events held in Brussels in late November 2004, under the umbrella and with kind support of the EUROREC organisation. This report provides a brief summary of that event, and includes in Annex 1 the complete delegate pack as printed and issued to all persons attending the event. This delegate pack included printed copies of all slides and screenshots used throughout the day. The workshop was well attended, and in particular the organisers are pleased to report that some very productive discussions took place that will act as the stimulus for new threads of research collaboration between various Semantic Mining partners, under the work plan of Workpackage 26. The organisers are grateful for the support of the EUROREC organisation in facilitating the organisation of this workshop and for lending their support to it through their web site and a personal endorsement of the event.

    Challenges of Developing New Classes of NASA Self-Managing Missions

    NASA is proposing increasingly complex missions that will require a high degree of autonomy and autonomicity. These missions pose hitherto unforeseen problems and raise issues that have not been well addressed by the community. Assuring success of such missions will require new software development techniques and tools. This paper discusses some of the challenges that NASA and the rest of the software development community are facing in developing these increasingly complex systems. We give an overview of a proposed NASA mission, as well as techniques and tools that are being developed to address autonomic management and the complexity issues inherent in these missions.
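    The abstract does not spell out the techniques in question; as a hedged sketch of the kind of autonomic behavior such missions require, the generic monitor-analyze-plan-execute loop below uses telemetry names and thresholds that are entirely invented, not NASA's actual design.

    # Generic autonomic control loop (monitor-analyze-plan-execute); every
    # name and threshold here is invented for illustration.
    def monitor(telemetry):
        return telemetry["battery_level"]

    def analyze(battery_level):
        return battery_level < 0.2  # symptom: power running low

    def plan(degraded):
        return ["shed_noncritical_loads", "reorient_solar_panels"] if degraded else []

    def execute(actions):
        for action in actions:
            print("executing:", action)

    telemetry = {"battery_level": 0.15}
    execute(plan(analyze(monitor(telemetry))))

    Closing such a loop on board, without waiting for ground control, is what distinguishes an autonomic mission from a merely automated one.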