
    Comparing Safety Analysis Based on Sequence Diagrams and Textual Use Cases

    Safety is of growing importance for information systems due to increased integration with embedded systems. Discovering potential hazards as early as possible in development is key to avoiding costly redesign later. This implies that hazards should be identified based on the requirements, and it is then useful to compare various specification techniques to find out the strengths and weaknesses of each with respect to finding and documenting hazards. This paper reports on two experiments in hazard identification: one based on textual use cases and one based on system sequence diagrams. The comparison of the experimental results reveals that use cases are better for identifying hazards related to the operation of the system, while system sequence diagrams are better for identifying hazards related to the system itself. The combination of these two techniques is therefore likely to uncover more hazards than either technique alone.

    Do Declarative Process Models Help to Reduce Cognitive Biases Related to Business Rules?

    Declarative process modeling languages, such as Declare, represent processes by means of temporal rules, namely constraints. Those languages typically come endowed with a graphical notation to draw such models diagrammatically. In this paper, we explore the effects of diagrammatic representation on humans' deductive reasoning involved in the analysis and compliance checking of declarative process models. In an experiment, we compared textual descriptions of business rules against textual descriptions that were supplemented with declarative models. Results based on a sample of 75 subjects indicate that the declarative process models did not improve but rather lowered reasoning performance. Thus, for novice users, the graphical notation of Declare may not help readers properly understand business rules: it may confuse them in comparison to textual descriptions. A likely explanation of the negative effect of graphical declarative models on human reasoning is that readers interpret edges wrongly. This has implications for the practical use of business rules on the one hand and the design of declarative process modeling languages on the other.
