6 research outputs found

    Supporting Information Systems Analysis Through Conceptual Model Query – The Diagramed Model Query Language (DMQL)

    Analyzing conceptual models such as process models, data models, or organizational charts is useful for several purposes in information systems engineering (e.g., for business process improvement, compliance management, model-driven software development, and software alignment). To analyze conceptual models structurally and semantically, so-called model query languages have been put forth. Model query languages take a model pattern and conceptual models as input and return all subsections of the models that match this pattern. Existing model query languages typically focus on a single modeling language and/or application area (such as analysis of execution semantics of process models), are restricted in the expressive power with which they represent model structures, and/or abstain from graphical pattern specification. Because these restrictions may hamper query languages from propagating into practice, we close this gap by proposing a modeling-language-spanning structural model query language based on flexible graph search that, hence, provides high structural expressive power. To address ease of use, it allows one to specify model queries using a diagram. In this paper, we present the syntax and semantics of the diagramed model query language (DMQL), a corresponding search algorithm, an implementation as a modeling tool prototype, and a performance evaluation.
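The core idea of a structural model query, as described above, can be sketched as subgraph pattern matching: a model is a graph, a query is a small pattern graph, and the result is every assignment of pattern nodes to model nodes that preserves the pattern's edges. The toy model, pattern, and naive backtracking matcher below are illustrative only; DMQL itself supports far richer patterns (e.g., path constraints and cross-language queries).

```python
# Minimal sketch: a structural model query as naive subgraph pattern matching.
def match_pattern(pattern_edges, model_edges):
    """Return all injective assignments of pattern nodes to model nodes
    such that every pattern edge maps onto a model edge (backtracking)."""
    pattern_nodes = sorted({n for e in pattern_edges for n in e})
    model_nodes = {n for e in model_edges for n in e}
    results = []

    def extend(assignment):
        if len(assignment) == len(pattern_nodes):
            results.append(dict(assignment))
            return
        node = pattern_nodes[len(assignment)]
        for cand in model_nodes - set(assignment.values()):
            assignment[node] = cand
            # Keep the candidate only if all fully assigned pattern edges match.
            if all((assignment[a], assignment[b]) in model_edges
                   for (a, b) in pattern_edges
                   if a in assignment and b in assignment):
                extend(assignment)
            del assignment[node]

    extend({})
    return results

# A tiny process model: start -> check -> {approve, reject} -> end
model = {("start", "check"), ("check", "approve"), ("approve", "end"),
         ("check", "reject"), ("reject", "end")}
# Query pattern: a node with two alternative successors (an exclusive split)
pattern = [("x", "y"), ("x", "z")]
matches = match_pattern(pattern, model)  # binds x to "check" in both matches
```

Both matches bind `x` to `check`, with `y` and `z` bound to `approve` and `reject` in either order, i.e., the pattern finds the split in the model.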

    Live Query - Visualized Process Analysis

    Business process management (BPM) is becoming increasingly challenging due to a steadily growing number of ever more complex processes. To enable effective and efficient control of business processes, (semi-)automatic approaches are necessary as support. However, these approaches are often hardly applicable in practice since they lack broad applicability or acceptable ease of use. This work aims to close this gap by providing an approach that supports a widely applicable, (semi-)automatic analysis of business process models and makes the analysis comprehensible through graphical visualization.

    Context-Aware Querying and Injection of Process Fragments in Process-Aware Information Systems

    Cyber-physical systems (CPS) are often customized to meet customer needs and, hence, exhibit a large number of hardware/software configuration variants. Consequently, the processes deployed on a CPS need to be configured to the respective CPS variant. This includes both configuration at design time (i.e., before deploying the implemented processes on the CPS) and runtime configuration taking the current context of the CPS into account. Such runtime process configuration is far from trivial; e.g., alternative process fragments may have to be selected at certain points during process execution, of which one fragment is then dynamically applied to the process at hand. Contemporary approaches focus on the design-time configuration of processes while neglecting runtime configuration to cope with process variability. In this paper, a generic approach enabling context-aware process configuration at runtime is presented. With the Process Query Language, process fragments can be flexibly selected from a process repository and dynamically injected into running process instances depending on the respective contextual situation. The latter can be automatically derived from context factors, e.g., sensor data or configuration parameters of the given CPS. Altogether, the presented approach allows for flexible configuration and late composition of process instances at runtime, as required in many application domains and scenarios.
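The select-then-inject mechanism described above can be sketched in a few lines: pick the first fragment from a repository whose context condition holds, then splice it into a running instance at a placeholder activity. All fragment names, context factors, and the selection rule below are hypothetical; the paper's Process Query Language is far more expressive than a linear condition scan.

```python
# Illustrative sketch of context-aware fragment selection and injection.
def select_fragment(repository, context):
    """Pick the first fragment whose context condition holds."""
    for fragment, condition in repository:
        if condition(context):
            return fragment
    raise LookupError("no applicable fragment for context")

def inject(process, placeholder, fragment):
    """Replace a placeholder activity in a running instance with a fragment."""
    i = process.index(placeholder)
    return process[:i] + fragment + process[i + 1:]

# Repository of alternative fragments guarded by context conditions
# (fragment and factor names are invented for illustration).
repository = [
    (["cool_down", "recalibrate"], lambda ctx: ctx["temperature"] > 80),
    (["standard_maintenance"],     lambda ctx: True),  # default fallback
]

context = {"temperature": 95}  # e.g., derived from sensor data at runtime
instance = ["start", "<maintenance>", "end"]
configured = inject(instance, "<maintenance>",
                    select_fragment(repository, context))
```

With the given context, the high-temperature fragment is chosen and the configured instance becomes `start, cool_down, recalibrate, end`.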

    Process Mining Supported Process Redesign: Matching Problems with Solutions

    Process mining is a widely used technique to understand and analyze business process executions through event data. It offers insights into process problems but leaves analysts empty-handed when translating these problems into concrete solutions. Research on business process management discusses process mining and improvement patterns only in isolation. In this paper, we address this research gap. More specifically, we identify six categories of process problems that can be identified with process mining and map them to applicable best practices for business processes. We analyze the relevance of our approach using a thematic analysis of reports submitted to the Business Process Intelligence Challenges over recent years, and observe a dire need for better guidance in translating process problems identified by process mining into suitable process designs. Conceptually, we position process mining within the problem and solution space of process redesign and thereby offer a language to describe the potential and limitations of the technique.
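The mapping from mined problem categories to redesign best practices can be pictured as a lookup from problem labels to candidate practices. The category and practice names below are purely illustrative placeholders, not the six categories or the mapping derived in the paper.

```python
# Hypothetical catalog: problem categories -> candidate redesign practices.
REDESIGN_CATALOG = {
    "long_waiting_times": ["parallelism", "resequencing"],
    "rework_loops":       ["knock-out", "triage"],
    "handover_overhead":  ["case manager", "integration"],
}

def suggest_practices(problems):
    """Translate mined process problems into candidate redesign practices."""
    return sorted({practice
                   for problem in problems
                   for practice in REDESIGN_CATALOG.get(problem, [])})

suggestions = suggest_practices(["rework_loops", "long_waiting_times"])
```

Unknown categories simply yield no suggestions, mirroring the paper's observation that not every mined problem has an obvious matching practice.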

    Design principles for ensuring compliance in business processes

    In this thesis, we evaluate the complexity and understandability of compliance languages. First, to calculate the complexity, we apply established software metrics and interpret the results with respect to the languages' expressiveness. Second, to investigate the languages' understandability, we use a cognitive model of the human problem-solving process and analyze how efficiently users perform a compliance modeling task. Our results have theoretical and practical implications that give directions for the development of compliance languages, and rule-based languages in general.

    Agnostic content ontology design patterns for a multi-domain ontology

    This research project aims to solve the semantic heterogeneity problem. Semantic heterogeneity mimics cancer in that it unnecessarily consumes resources from its host, the enterprise, and may even affect lives. A number of authors report that semantic heterogeneity may cost a significant portion of an enterprise's IT budget. Semantic heterogeneity also hinders pharmaceutical and medical research by consuming valuable research funds. The RA-EKI architecture model comprises a multi-domain ontology, a cross-industry agnostic construct composed of rich axioms, notably for data integration. A multi-domain ontology composed of axiomatized agnostic data model patterns would drive a cognitive data integration application system usable in any industry sector. This project's objective is to elicit agnostic data model patterns, here considered as content ontology design patterns. The first research question of this project pertains to the existence of agnostic patterns and their capacity to solve the semantic heterogeneity problem. Due to the theory-building role of this project, a qualitative research approach is the appropriate way to conduct its research. Contrary to theory-testing quantitative methods, which rely on well-established validation techniques to determine the reliability of the outcome of a given study, theory-building qualitative methods do not possess standardized techniques to ascertain the reliability of a study. The second research question inquires into a dual-method theory-building approach that may demonstrate trustworthiness. The first method, a qualitative Systematic Literature Review (SLR), induces the sought knowledge from 69 retained publications using a practical screen. The second method, a phenomenological research protocol, elicits the agnostic concepts from semi-structured interviews involving 22 senior practitioners with an average of 21 years of experience in conceptualization.
The SLR retains a set of 89 agnostic concepts from publications spanning 2009 through 2017. The phenomenological study in turn retains 83 agnostic concepts. During the synthesis stage of both studies, data saturation was calculated for each retained concept at the point where the concept was selected for a second time. This quantification of data saturation constitutes an element of the transferability criterion of trustworthiness. It can be argued that this effort to establish trustworthiness (i.e., credibility, dependability, confirmability, and transferability) is extensive and that this research track is promising. Data saturation for both studies has still not been reached. The assessment performed in the course of establishing the trustworthiness of this project's dual-method qualitative research approach yields very interesting findings. These findings include two sets of agnostic data model patterns, obtained from research protocols using radically different data sources (publications vs. experienced practitioners), yet with striking similarities. Further work is required to apply exactly the same protocols for each method, expand the year range of the SLR, and recruit new co-researchers for the phenomenological protocol. This work will continue until these protocols no longer elicit new theory material. At that point, new protocols for both methods will be designed and executed with the intent to measure theoretical saturation. For both methods, this entails formulating new research questions that may, for example, focus on agnostic themes such as finance, infrastructure, relationships, and classifications. For this exploration project, the road ahead involves the design of new questionnaires for semi-structured interviews. The project will also need to engage in new knowledge elicitation techniques such as focus groups.
The project will also conduct other qualitative research methods, such as action research, to elicit new knowledge and know-how from the actual development and operation of an ontology-based cognitive application. Finally, a mixed-methods qualitative-quantitative approach would prepare the transition toward theory-testing methods using hypothetico-deductive techniques.
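The saturation criterion described above, i.e., a concept counts toward saturation once it has been selected a second time, amounts to counting how many independent sources mention each concept and keeping those mentioned at least twice. The concept and source names below are invented for illustration.

```python
# Sketch of the second-selection saturation rule described in the abstract.
from collections import Counter

def saturated_concepts(elicitations):
    """Return concepts elicited by at least two independent sources."""
    counts = Counter(concept
                     for source in elicitations
                     for concept in set(source))  # count each source once
    return {concept for concept, n in counts.items() if n >= 2}

# Each set stands for the concepts elicited from one publication or interview.
sources = [
    {"party", "agreement", "location"},
    {"party", "asset", "location"},
    {"agreement", "classification"},
]
result = saturated_concepts(sources)  # concepts selected a second time
```

Concepts appearing in only one source (`asset`, `classification`) remain unsaturated, which mirrors the abstract's note that elicitation continues until the protocols no longer yield new material.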