
    An Approach for the Automated Generation of Engaging Dashboards

    Organizations use Key Performance Indicators (KPIs) to monitor whether they attain their goals. To support organizations in tracking the performance of their business, software vendors offer dashboards. To develop dashboards that engage organizations and enable them to make informed decisions, software vendors leverage dashboard design principles. However, the dashboard design principles available in the literature are expressed as natural-language texts. Therefore, software vendors and organizations either do not use them or spend significant effort to internalize and apply them in every engaging-dashboard development process. We show that engaging dashboards for organizations can be generated automatically by means of automatically visualized KPIs. In this context, we present our novel approach for the automated generation of engaging dashboards. The approach employs a decision model for visualizing KPIs that was developed based on the dashboard design principles in the literature. We implemented our approach and evaluated its quality in a case study.
    Funding: Ministerio de Economía y Competitividad, BELI (TIN2015-70560-R); Ministerio de Ciencia, Innovación y Universidades, OPHELIA (RTI2018-101204-B-C2).
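
    To give a concrete flavour of what such a decision model might look like, here is a minimal Python sketch that maps a few KPI characteristics to chart types. The rules, class and function names are illustrative assumptions for this summary, not the paper's actual decision model.

        # Illustrative sketch (not the paper's actual decision model): map KPI
        # characteristics to chart types via simple rules, as a rule-based
        # decision model distilled from dashboard design principles might do.
        from dataclasses import dataclass

        @dataclass
        class KPI:
            name: str
            has_target: bool          # is there a target value to compare against?
            has_time_dimension: bool  # is the KPI tracked over time?
            n_categories: int         # number of categories to compare (0 if none)

        def choose_visualization(kpi):
            """Return a chart type for the KPI (hypothetical rules)."""
            if kpi.has_target and not kpi.has_time_dimension:
                return "bullet chart"   # single value against a target
            if kpi.has_time_dimension:
                return "line chart"     # development over time
            if kpi.n_categories > 7:
                return "bar chart"      # many categories: compare lengths
            if kpi.n_categories > 0:
                return "column chart"   # few categories
            return "single-value tile" # plain number as fallback

        def generate_dashboard(kpis):
            """Assemble a dashboard specification: one widget per KPI."""
            return [{"kpi": k.name, "widget": choose_visualization(k)} for k in kpis]

        # Example: two KPIs yield a bullet chart and a line chart.
        print(generate_dashboard([
            KPI("On-time delivery", has_target=True, has_time_dimension=False, n_categories=0),
            KPI("Monthly revenue", has_target=False, has_time_dimension=True, n_categories=0),
        ]))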

    Agile data warehousing to support process monitoring

    Today's business processes are constantly growing in complexity, which makes it necessary to adapt and optimize individual subprocesses or the overall process. At the same time, data can be collected across all parts of a process, and analyzing these data enables exactly this adaptation and optimization. To this end, the accumulating data must be stored and made available permanently, which is done in the form of data warehouses. Since creating and maintaining such data warehouse structures is expensive and time-consuming, attempts are being made to minimize or automate the work involved. Building on requirements from practice and on previous approaches to automated process monitoring, this thesis develops a concept for generating an agile data warehouse. This requires the automated creation and setup of the structures a data warehouse needs, as well as their automatic adaptation when environmental conditions change. In addition, means for populating these structures with data must be provided. For this purpose, two components are developed that generate all required data warehouse structures and provision them via a BI server; database schemas, OLAP schemas and ETL processes are generated automatically. The developed concept is integrated into an overall process that enables delivering a business intelligence solution as Software as a Service.
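
    The schema-generation step described above can be illustrated with a small Python sketch that derives star-schema DDL (one fact table plus dimension tables) from a description of a monitored process. The table layout, names and parameters are hypothetical, not taken from the thesis.

        # Hypothetical sketch of the schema-generation step: derive star-schema
        # DDL for process monitoring from a description of the monitored process.
        # Table and column names are illustrative, not taken from the thesis.
        def star_schema_ddl(process, measures, dimensions):
            """Emit CREATE TABLE statements for one fact table and its dimensions."""
            stmts = []
            for dim, attrs in dimensions.items():
                cols = ",\n  ".join(f"{a} VARCHAR(255)" for a in attrs)
                stmts.append(
                    f"CREATE TABLE dim_{dim} (\n  {dim}_id INTEGER PRIMARY KEY,\n  {cols}\n);"
                )
            fks = ",\n  ".join(f"{d}_id INTEGER REFERENCES dim_{d}({d}_id)" for d in dimensions)
            facts = ",\n  ".join(f"{m} DECIMAL(18,4)" for m in measures)
            stmts.append(f"CREATE TABLE fact_{process} (\n  {fks},\n  {facts}\n);")
            return "\n\n".join(stmts)

        # Example: a fact table with two measures and two dimensions.
        print(star_schema_ddl(
            "order_handling",
            measures=["duration_ms", "cost"],
            dimensions={"activity": ["name", "performer"], "date": ["day", "month", "year"]},
        ))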

    Business process performance measurement: a structured literature review of indicators, measures and metrics

    Measuring the performance of business processes has become a central issue in both academia and business, since organizations are challenged to achieve effective and efficient results. Applying performance measurement models to this purpose ensures alignment with a business strategy, which implies that the choice of performance indicators is organization-dependent. Nonetheless, such measurement models generally suffer from a lack of guidance regarding the performance indicators that exist and how they can be concretized in practice. To fill this gap, we conducted a structured literature review to find patterns and trends in the research on business process performance measurement. The study also systematically documents an extended list of 140 process-related performance indicators, categorizing them into 11 performance perspectives in order to provide a holistic view. Managers and scholars can consult the provided list to choose the indicators that are of interest to them, considering each perspective. The structured literature review concludes with avenues for further research.
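
    As a small illustration of how time-perspective indicators from such a list can be concretized in practice, the following Python snippet computes cycle time and processing time (and, by difference, waiting time) for one case of a toy event log. The indicator definitions follow common usage and are not necessarily the exact ones catalogued in the review.

        # Illustrative computation of two common time-perspective indicators
        # (cycle time and processing time) from a minimal event log; the
        # definitions follow common usage, not necessarily the paper's catalogue.
        from datetime import datetime, timedelta

        # One tuple per activity execution: (case_id, activity, start, end)
        events = [
            ("c1", "register", "2024-01-01 09:00", "2024-01-01 09:10"),
            ("c1", "approve",  "2024-01-01 11:00", "2024-01-01 11:05"),
        ]

        def parse(ts):
            return datetime.strptime(ts, "%Y-%m-%d %H:%M")

        def cycle_time(case, log):
            """End-to-end duration of a case: last end minus first start."""
            steps = [e for e in log if e[0] == case]
            return max(parse(e[3]) for e in steps) - min(parse(e[2]) for e in steps)

        def processing_time(case, log):
            """Sum of the durations of the individual activity executions."""
            steps = [e for e in log if e[0] == case]
            return sum((parse(e[3]) - parse(e[2]) for e in steps), timedelta())

        ct = cycle_time("c1", events)       # 2:05:00 end to end
        pt = processing_time("c1", events)  # 0:15:00 of actual work
        print(ct, pt, ct - pt)              # the difference is waiting time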

    Self-managed Workflows for Cyber-physical Systems

    Workflows are a well-established concept for describing business logic and processes in web-based applications and enterprise application integration scenarios on an abstract, implementation-agnostic level. Applying Business Process Management (BPM) technologies to increase autonomy and to automate sequences of activities in Cyber-physical Systems (CPS) promises various advantages, including higher flexibility and simplified programming, more efficient resource usage, and easier integration and orchestration of CPS devices. However, traditional BPM notations and engines have not been designed for the context of CPS, which raises new research questions arising from the close coupling of the virtual and physical worlds. Among these challenges are the interaction with complex compounds of heterogeneous sensors, actuators, things and humans; the detection and handling of errors in the physical world; and the synchronization of the cyber-physical process execution models. Novel factors related to the interaction with the physical world, including real-world obstacles, inconsistencies and inaccuracies, may jeopardize the successful execution of workflows in CPS and may lead to unanticipated situations. This thesis investigates properties and requirements of CPS relevant for the introduction of BPM technologies into cyber-physical domains. We discuss existing BPM systems and related work regarding the integration of sensors and actuators into workflows, the development of a Workflow Management System (WfMS) for CPS, and the synchronization of the virtual and physical process execution as part of self-* capabilities for WfMSes. Based on the identified research gap, we present concepts and prototypes for the development of a CPS WfMS covering all phases of the BPM lifecycle. First, we introduce a CPS workflow notation that supports modelling the interaction of complex sensors, actuators, humans, dynamic services and WfMSes on the business process level. In addition, the effects of the workflow execution can be specified in the form of goals defining success and error criteria for the execution of individual process steps. Along with that, we introduce the notion of Cyber-physical Consistency. Next, we present a system architecture for a corresponding WfMS (PROtEUS) to execute the modelled processes, also in distributed execution settings and with a focus on interactive process management. Subsequently, we discuss the integration of a cyber-physical feedback loop to increase the resilience of the process execution at runtime. Within this MAPE-K loop, sensor and context data are related to the effects of the process execution, deviations from expected behaviour are detected, and compensations are planned and executed (a minimal sketch of such a loop follows the contents listing below). The execution of this feedback loop can be scaled depending on the required level of precision and consistency. Our implementation of the MAPE-K loop proves to be a general framework for adding self-* capabilities to WfMSes. The evaluation of our concepts within a smart home case study shows expected behaviour, reasonable execution times, reduced error rates and high coverage of the identified requirements, which makes our CPS WfMS a suitable system for introducing workflows on top of the systems, devices, things and applications of CPS.

    Contents:
    1. Introduction: Motivation; Research Issues; Scope & Contributions; Structure of the Thesis
    2. Workflows and Cyber-physical Systems: Introduction; Two Motivating Examples; Business Process Management and Workflow Technologies; Cyber-physical Systems; Workflows in CPS; Requirements
    3. Related Work: Introduction; Existing BPM Systems in Industry and Academia; Modelling of CPS Workflows; CPS Workflow Systems; Cyber-physical Synchronization; Self-* for BPM Systems; Retrofitting Frameworks for WfMSes; Conclusion & Deficits
    4. Modelling of Cyber-physical Workflows with Consistency Style Sheets: Introduction; Workflow Metamodel; Knowledge Base; Dynamic Services; CPS-related Workflow Effects; Cyber-physical Consistency; Consistency Style Sheets; Tools for Modelling of CPS Workflows; Compatibility with Existing Business Process Notations
    5. Architecture of a WfMS for Distributed CPS Workflows: Introduction; PROtEUS Process Execution System; Internet of Things Middleware; Dynamic Service Selection via Semantic Access Layer; Process Distribution; Ubiquitous Human Interaction; Towards a CPS WfMS Reference Architecture for Other Domains
    6. Scalable Execution of Self-managed CPS Workflows: Introduction; MAPE-K Control Loops for Autonomous Workflows; Feedback Loop for Cyber-physical Consistency; Feedback Loop for Distributed Workflows; Consistency Levels, Scalability and Scalable Consistency; Self-managed Workflows; Adaptations and Meta-adaptations; Multiple Feedback Loops and Process Instances; Transactions and ACID for CPS Workflows; Runtime View on Cyber-physical Synchronization for Workflows; Applicability of Workflow Feedback Loops to other CPS Domains; A Retrofitting Framework for Self-managed CPS WfMSes
    7. Evaluation: Introduction; Hardware and Software; PROtEUS Base System; PROtEUS with Feedback Service; Feedback Service with Legacy WfMSes; Qualitative Discussion of Requirements and Additional CPS Aspects; Comparison with Related Work; Conclusion
    8. Summary and Future Work: Summary and Conclusion; Advances of this Thesis; Contributions to the Research Area; Relevance; Open Questions; Future Work
    Back matter: Bibliography; Acronyms; List of Figures; List of Tables; List of Listings; Appendices
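
    As referenced above, here is a minimal sketch of a MAPE-K-style feedback loop that checks the physical effect of a process step against its goal and triggers a compensation on violation. All names are hypothetical and do not reflect the actual PROtEUS API.

        # Minimal MAPE-K-style feedback loop, sketched in Python; class and
        # method names are hypothetical and do not reflect the PROtEUS API.
        class FeedbackLoop:
            def __init__(self, sensor, goal, compensate):
                self.sensor = sensor          # Monitor: callable returning a reading
                self.goal = goal              # Analyze: predicate over the reading
                self.compensate = compensate  # Plan/Execute: corrective action
                self.knowledge = []           # K: shared history of observations

            def run_once(self):
                reading = self.sensor()         # Monitor the physical world
                self.knowledge.append(reading)  # update the Knowledge base
                if not self.goal(reading):      # Analyze: effect goal violated?
                    self.compensate(reading)    # Plan + Execute a compensation

        # Example: a step "heat room" whose physical effect (temperature >= 21 C)
        # is checked after execution; re-invoke the actuator if it is not reached.
        loop = FeedbackLoop(
            sensor=lambda: 19.5,  # stub standing in for a temperature sensor
            goal=lambda t: t >= 21.0,
            compensate=lambda t: print(f"goal violated at {t} C, re-invoking actuator"),
        )
        loop.run_once()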

    KPI-related monitoring, analysis, and adaptation of business processes

    In today's companies, business processes are increasingly supported by IT systems. They can be implemented as service orchestrations, for example in WS-BPEL, running on Business Process Management (BPM) systems. A service orchestration implements a business process by orchestrating a set of services. These services can be arbitrary IT functionality, human tasks, or again service orchestrations. Often, these business processes are implemented as part of business-to-business collaborations spanning several participating organizations. Service choreographies focus on modeling how the processes of different participants interact in such collaborations. An important aspect of BPM is performance management. Performance is measured in terms of Key Performance Indicators (KPIs), which reflect the achievement of business goals. KPIs are based on domain-specific metrics, typically reflecting the time, cost, and quality dimensions. Dealing with KPIs involves several phases, namely monitoring, analysis, and adaptation. In a first step, KPIs have to be monitored in order to evaluate the current process performance. In case monitoring shows negative results, the reasons why KPI targets are not reached have to be analyzed and understood. Finally, after identifying the influential factors of KPIs, the processes have to be adapted in order to improve their performance. The goal is to enable these phases in an automated manner. This thesis presents an approach for how KPIs can be monitored, analyzed, and used for the adaptation of processes. The concrete contributions of this thesis are: (i) an approach for monitoring processes and their KPIs in service choreographies; (ii) a KPI dependency analysis approach based on classification learning, which explains how KPIs depend on a set of influential factors; (iii) a runtime adaptation approach that combines monitoring and KPI analysis to enable proactive adaptation of processes for improving KPI performance; (iv) a prototypical implementation and an experiment-based evaluation.
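
    The abstract names decision trees as the classification technique behind contribution (ii), so a minimal sketch could look as follows: train a decision tree that classifies finished process instances into "KPI met" versus "KPI violated" from candidate influential factors, and read the explanation off the tree. Feature names and data are invented for this example, and scikit-learn stands in for whatever learner the thesis actually uses.

        # Sketch of contribution (ii): train a decision tree that classifies
        # finished process instances into "KPI met" / "KPI violated" from
        # candidate influential factors, then read the explanation off the tree.
        # Feature names and data are invented for this example.
        from sklearn.tree import DecisionTreeClassifier, export_text

        # One row per process instance: [supplier_delay_h, order_size, rework_steps]
        X = [[0, 10, 0], [5, 12, 1], [1, 50, 0], [8, 45, 2], [0, 20, 0], [7, 15, 1]]
        y = [1, 0, 1, 0, 1, 0]  # 1 = KPI target met, 0 = violated

        tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

        # The learned tree is itself the explanation: each path to a "violated"
        # leaf names influential factors and the thresholds at which they matter.
        print(export_text(tree, feature_names=["supplier_delay_h", "order_size", "rework_steps"]))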

    Temporal Models for History-Aware Explainability in Self-Adaptive Systems

    The complexity of real-world problems requires modern software systems to be able to autonomously adapt and modify their behaviour at runtime to deal with unforeseen internal and external fluctuations and contexts. Consequently, these self-adaptive systems (SAS) can show unexpected and surprising behaviours that stakeholders may not understand or agree with. This may be exacerbated by the ubiquity and complexity of Artificial Intelligence (AI) techniques, which are often considered “black boxes” and are increasingly used by SAS. This thesis explores how synergies between model-driven engineering and runtime monitoring help to enable explanations based on a SAS's historical behaviour, with the objective of promoting transparency and understandability in these types of systems. Specifically, this PhD work has studied how runtime models extended with long-term memory can provide the abstraction, analysis and reasoning capabilities needed to support explanations when using AI-based SAS. For this purpose, this work argues that a system should i) offer access to and retrieval of historical data about past behaviour, ii) track over time the reasons for its decision making, and iii) be able to convey this knowledge to different stakeholders as part of explanations justifying its behaviour. Runtime models stored in temporal graph databases, resulting in Temporal Models (TMs), are proposed for tracking the decision-making history of SAS to support explanations. The approach enables explainability for interactive diagnosis (i.e. during execution) and forensic analysis (i.e. after the fact) based on the trajectory of the SAS execution. Furthermore, in cases where resources are limited (e.g. storage capacity or response time), the proposed architecture also integrates a runtime monitoring technique, complex event processing (CEP). CEP detects matches to event patterns, so that only the matching events need to be stored instead of the entire history. The proposed architecture helps developers gain insights into SAS while they work on validating and improving their systems.
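
    The idea of storing only pattern matches instead of the entire history can be illustrated with a few lines of plain Python; this is not a real CEP engine, and the event pattern and field names are invented for the sketch.

        # Toy illustration of the CEP idea: instead of persisting the full
        # history, keep only event sequences matching a pattern of interest
        # (here: three consecutive failed adaptations). Plain Python, not a
        # real CEP engine; the pattern and field names are invented.
        from collections import deque

        WINDOW = 3
        recent = deque(maxlen=WINDOW)  # sliding window over the event stream
        stored = []                    # what actually enters the temporal model

        def on_event(event):
            recent.append(event)
            if len(recent) == WINDOW and all(e["outcome"] == "failed" for e in recent):
                stored.append(list(recent))  # persist only the matched sequence

        for i, outcome in enumerate(["ok", "failed", "failed", "failed", "ok"]):
            on_event({"t": i, "outcome": outcome})

        print(stored)  # one stored match instead of the whole stream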