8 research outputs found

    A Systematic Literature Review of Applications of the Physics of Notation


    Designing the Didactic Strategy Modeling Language (DSML) From PoN: An Activity Oriented EML Proposal

This paper presents the design of the Didactic Strategy Modeling Language (DSML) according to the principles of the Physics of Notations (PoN). DSML is a visual, activity-oriented language for learning design, characterized by representing different activities according to the nature of the task. Once the language is designed, a blind interpretation study is conducted to validate the semantic transparency of the learning-activity iconography; the results are used to refine the icons. In addition, an authoring tool for DSML, integrated into an LMS, is presented. As a pre-validation of DSML, a model-driven course was designed.
Ruiz, A.; Panach Navarrete, J. I.; Pastor López, O.; Giraldo-Velásquez, F. D.; Arciniegas, J. L.; Giraldo, W. J. (2018). Designing the Didactic Strategy Modeling Language (DSML) From PoN: An Activity Oriented EML Proposal. IEEE-RITA: Latin-American Learning Technologies Journal, 13(4), 136-143. https://doi.org/10.1109/RITA.2018.2879262
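The semantic-transparency validation described above amounts to measuring how often uninformed participants guess an icon's intended meaning. A minimal sketch in Python, using hypothetical icons, responses, and function names rather than the authors' actual instrument:

```python
def transparency_scores(responses):
    """For each icon, compute the share of blind-interpretation answers
    that match the intended meaning (hit rate)."""
    scores = {}
    for icon, (intended, answers) in responses.items():
        hits = sum(1 for a in answers if a.strip().lower() == intended.lower())
        scores[icon] = hits / len(answers) if answers else 0.0
    return scores

# Hypothetical responses: icon -> (intended meaning, list of free-text guesses).
responses = {
    "group_work": ("collaborative activity",
                   ["collaborative activity", "team task", "collaborative activity"]),
    "assessment": ("evaluation activity",
                   ["exam", "evaluation activity", "evaluation activity"]),
}

for icon, score in transparency_scores(responses).items():
    print(f"{icon}: {score:.0%} correctly interpreted")
```

Icons with low hit rates would be the candidates for refinement.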

    Multi-criteria decision analysis for non-conformance diagnosis: A priority-based strategy combining data and business rules

Business process analytics and verification have become a major challenge for companies, especially when process data is stored across different systems. It is important to ensure business process compliance both from the data-flow perspective and with respect to the business rules that govern the organisation. In verifying data-flow accuracy, the conformance of data to business rules is a key element, since it is essential to fulfil the policies and statements that govern corporate behaviour. Including business rules in an existing, already deployed process, which therefore already has stored data, requires checking the business rules against that data to guarantee compliance. If an inconsistency is detected, the source of the problem should be determined by discerning whether it is due to an erroneous rule or to erroneous data. To automate this, a diagnosis methodology following the incorporation of business rules is proposed, which combines business rules and the data produced during the execution of the company's processes. Because of the high number of possible explanations of faults (data and/or business rules), the likelihood of faults is included to propose an ordered list. To reduce these possibilities, we rely on a ranking calculated by means of the Analytic Hierarchy Process (AHP) and incorporate the experience described by users and/or experts. The proposed methodology is based on the Constraint Programming paradigm and is evaluated using a real example.
Ministerio de Ciencia y Tecnología RTI2018-094283-B-C3
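AHP derives priority weights from a pairwise-comparison matrix; a common approximation normalises each column and averages the rows. The sketch below uses hypothetical criteria and candidate fault explanations (not the paper's actual model) to show how such weights could order the list of explanations:

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority vector: normalise each column, then average the rows."""
    m = np.asarray(pairwise, dtype=float)
    return (m / m.sum(axis=0)).mean(axis=1)

# Hypothetical pairwise comparisons of three criteria used to rank fault explanations,
# e.g. expert confidence in the rule, data-source reliability, frequency of violation.
criteria = [
    [1,   3,   5],
    [1/3, 1,   2],
    [1/5, 1/2, 1],
]
weights = ahp_weights(criteria)

# Hypothetical candidate explanations scored against each criterion (one row per candidate).
scores = np.array([
    [0.2, 0.9, 0.4],   # "rule R1 is wrong"
    [0.8, 0.3, 0.7],   # "record X is erroneous"
])
priorities = scores @ weights
print("priority of each explanation:", priorities)
```

The explanation with the highest priority would be presented first in the ordered list of diagnoses.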

    Repairing Alignments of Process Models

Process mining represents a collection of data-driven techniques that support the analysis, understanding, and improvement of business processes. A core branch of process mining is conformance checking, i.e., assessing to what extent a business process model conforms to observed business process execution data. Alignments are the de facto standard instrument to compute such conformance statistics. However, computing alignments is a combinatorial problem and hence extremely costly. At the same time, many process models share a similar structure and/or a great deal of behavior. For collections of such models, computing alignments from scratch is inefficient, since large parts of the alignments are likely to be the same. This paper presents a technique that exploits process model similarity and repairs existing alignments by updating only those parts that do not fit a given process model. The technique effectively reduces the size of the combinatorial alignment problem, and hence decreases computation time significantly. Moreover, the potential loss of optimality is limited and stays within acceptable bounds.
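At its core, an alignment matches the events of an observed trace to the steps of a model execution, penalising moves that occur only in the log or only in the model. The dynamic-programming sketch below illustrates the cost computation for the simplified case where the model is represented by a single accepted run; this is a strong assumption, since real alignment algorithms search over all runs of the model:

```python
def alignment_cost(trace, model_run, move_cost=1):
    """Alignment cost between an observed trace and one model run:
    synchronous moves are free, log-only and model-only moves each cost 1."""
    n, m = len(trace), len(model_run)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * move_cost          # log-only moves
    for j in range(1, m + 1):
        dp[0][j] = j * move_cost          # model-only moves
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sync = dp[i - 1][j - 1] if trace[i - 1] == model_run[j - 1] else float("inf")
            dp[i][j] = min(sync,
                           dp[i - 1][j] + move_cost,    # move on log only
                           dp[i][j - 1] + move_cost)    # move on model only
    return dp[n][m]

print(alignment_cost(["a", "b", "d"], ["a", "b", "c", "d"]))  # 1: one model-only move for "c"
```

Repairing an alignment, as described above, would reuse the parts of such a computation that still fit the modified model instead of recomputing everything from scratch.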

Explainable predictive monitoring of temporal measures of business processes

Modern enterprise systems collect detailed data about the execution of the business processes they support. The widespread availability of such data in companies, coupled with advances in machine learning, has led to the emergence of data-driven and predictive approaches to monitor the performance of business processes. By using such predictive process monitoring approaches, potential performance issues can be anticipated and proactively mitigated. Various approaches have been proposed to address typical predictive process monitoring questions, such as what the most likely continuation of an ongoing process instance is, or when it will finish. However, most existing approaches prioritize accuracy over explainability. Yet in practice, explainability is a critical property of predictive methods. It is not enough to accurately predict that a running process instance will end up in an undesired outcome. It is also important for users to understand why this prediction is made and what can be done to prevent the undesired outcome. This thesis proposes two methods to build predictive models that monitor business processes in an explainable manner. This is achieved by decomposing a prediction into its elementary components. For example, to explain that the remaining execution time of a process execution is predicted to be 20 hours, we decompose this prediction into the predicted execution time of each activity that has not yet been executed.
We evaluate the proposed methods against each other and against various state-of-the-art baselines using a range of business processes from multiple domains. The evaluation reaffirms a fundamental trade-off between the explainability and the accuracy of predictions. The research contributions of the thesis have been consolidated into an open-source tool for predictive business process monitoring, namely Nirdizati. It can be used to train predictive models using the methods described in this thesis, as well as third-party methods. These models are then used to make predictions for ongoing process instances; thus, the tool can also support users at runtime.
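The decomposition idea can be made concrete: instead of reporting one opaque number, predict a duration per remaining activity and report the sum together with its parts. The sketch below is a simplified illustration with hypothetical per-activity estimates, not the thesis' actual models:

```python
def explainable_remaining_time(remaining_activities, predicted_duration):
    """Predict total remaining time as the sum of per-activity predictions,
    returning both the total and the per-activity breakdown as explanation."""
    breakdown = {a: predicted_duration(a) for a in remaining_activities}
    return sum(breakdown.values()), breakdown

# Hypothetical per-activity duration estimates (in hours); in practice each value
# would come from a regression model trained on historical event logs.
estimates = {"review claim": 6.0, "request documents": 10.0, "approve payment": 4.0}
total, parts = explainable_remaining_time(estimates.keys(), estimates.get)
print(f"predicted remaining time: {total} h")
for activity, hours in parts.items():
    print(f"  {activity}: {hours} h")
```

The breakdown is what makes the prediction actionable: a user can see which remaining activity dominates the predicted delay.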

    Goal-based Workflow Adaptation for Role-based Resources in the Internet of Things

    In recent years, the Internet of Things (IoT) has increasingly received attention from the Business Process Management (BPM) community. The integration of sensors and actuators into Process-Aware Information Systems (PAIS) enables the collection of real-time data about physical properties and the direct manipulation of real-world objects. In a broader sense, IoT-aware workflows provide means for context-aware workflow execution involving virtual and physical entities. However, IoT-aware workflow management imposes new requirements on workflow modeling and execution that are outside the scope of current modeling languages and workflow management systems. Things in the IoT may vanish, appear or stay unknown during workflow execution, which renders their allocation as workflow resources infeasible at design time. Besides, capabilities of Things are often intended to be available only in a particular real-world context at runtime, e.g., a service robot inside a smart home should only operate at full speed, if there are no residents in direct proximity. Such contextual restrictions for the dynamic exposure of resource capabilities are not considered by current approaches in IoT resource management that use services for exposing device functionalities. With this work, we aim at providing the modeling and runtime support for defining such restrictions on workflow resources at design time and enabling the dynamic and context-sensitive runtime allocation of Things as workflow resources. To achieve this goal, we propose contributions to the fields of resource management, i.e., resource perspective, and workflow management in the Internet of Things (IoT), divided into the user perspective representing the workflow modeling phase and the workflow perspective representing the runtime resource allocation phase. In the resource perspective, we propose an ontology for the modeling of Things, Roles, capabilities, physical entities, and their context-sensitive interrelations. The concept of Role is used to define non-exclusive subsets of capabilities of Things. A Thing can play a certain Role only under certain contextual restrictions defined by Semantic Web Rule Language (SWRL) rules. At runtime, the existing relations between the individuals of the ontology represent the current state of interactions between the physical and the cyber world. Through the dynamic activation and deactivation of Roles at runtime, the behavior of a Thing can be adapted to the current physical context. In the user perspective, we allow workflow modelers to define the goal of a workflow activity either by using semantic queries or by specifying high-level goals from a Tropos goal model. The goal-based modeling of workflow activities provides the most flexibility regarding the resource allocation as several leaf goals may fulfill the user specified activity goal. Furthermore, the goal model can include additional Quality of Service (QoS) parameters and the positive or negative contribution of goals towards these parameters. The workflow perspective includes the Semantic Access Layer (SAL) middleware to enable the transformation of activity goals into semantic queries as well as their execution on the ontology for role-based Things. The SAL enables the discovery of fitting Things, their allocation as workflow resources, the invocation of referenced IoT services, and the continuous monitoring of the allocated Things as part of the ontology. 
We show the feasibility and added value of this work in relation to related approaches by evaluating it within several application scenarios in a smart home setting. We compare the fulfillment of quantified criteria for IoT-aware workflow management based on requirements extracted from related research. The evaluation shows that our approach improves the context-aware modeling of Things as workflow resources, the query support for workflow resource allocation, and the modeling support for activities that use Things as workflow resources.
Outline: 1 Introduction · 2 Background for Workflows in the IoT (Resource, User and Workflow Perspectives) · 3 Requirements Analysis and Approach · 4 Concept for Adaptive Workflow Activities in the IoT · 5 Modeling Adaptive Workflow Activities in the IoT · 6 Architecture · 7 Implementation · 8 Evaluation · 9 Discussion · 10 Summary and Future Work · Appendix (Example Semantic Context Model for IoT-Things; T-Box of the Ontology for Role-based Things in the IoT; A-Box for the Example Scenario Model; A-Box for the Extended Example Scenario Model)
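The central mechanism, context-dependent activation of Roles that bundle a Thing's capabilities, can be paraphrased outside of OWL and SWRL. The Python sketch below uses hypothetical names and context rules (the thesis realises this with an ontology plus SWRL rules rather than plain predicates) to show a Thing being allocated to an activity only while its Role's context restriction holds:

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    capabilities: set
    restriction: callable  # context predicate: the Role may only be played while this holds

@dataclass
class Thing:
    name: str
    roles: list = field(default_factory=list)

    def active_capabilities(self, context):
        """Capabilities exposed under the current physical context."""
        caps = set()
        for role in self.roles:
            if role.restriction(context):
                caps |= role.capabilities
        return caps

def allocate(things, required_capability, context):
    """Pick the first Thing whose currently active Roles provide the capability."""
    for thing in things:
        if required_capability in thing.active_capabilities(context):
            return thing
    return None

# Hypothetical example: a service robot may only play the "FastTransport" role
# when no resident is in direct proximity.
fast = Role("FastTransport", {"transport_fast"}, lambda ctx: not ctx["resident_nearby"])
slow = Role("SafeTransport", {"transport_slow"}, lambda ctx: True)
robot = Thing("service_robot", [fast, slow])

print(allocate([robot], "transport_fast", {"resident_nearby": True}))   # None
print(allocate([robot], "transport_fast", {"resident_nearby": False}))  # the robot
```

In the thesis, the equivalent of the `restriction` predicate is expressed as SWRL rules over the ontology, and the equivalent of `allocate` is a semantic query issued by the Semantic Access Layer at runtime.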

Optimising procedures for solving legally relevant knowledge problems in critical infrastructures: findings in the smart grid and recommendations under technology law

The thesis deals with the multi-layered balancing of interests and with technology-law regulation at the interface of energy industry law and data protection law. For the smart grid, a procedure of the Bundesnetzagentur (Federal Network Agency) is proposed for optimising regulatory knowledge. By enriching this procedure with aspects of privacy-compliant technology design, a reduction in complexity can also be achieved through visualisation, modelled on building and planning law.