288 research outputs found

    On the use of domain knowledge for process model repair

    Process models are important for supporting organizations in documenting, understanding and monitoring their business. When these process models become outdated, they need to be revised to accurately describe the new status quo of the processes in the organization. Process model repair techniques help to automatically revise an existing model based on the behavior traced in event logs. So far, such techniques have focused on identifying which parts of the model to change and how to change them, but they do not use knowledge from practitioners to inform the revision. As a consequence, fragments of the model may change in a way that violates existing regulations, or may come to represent outdated behavior that was wrongly picked up from the event log. This paper uses concepts from theory revision to provide formal foundations for process model repair that exploits domain knowledge. Specifically, it conceptualizes (1) which fragments of the model are unchangeable and (2) the role that the various traces in the event log should play in model repair. A scenario of use is presented that demonstrates the benefits of this conceptualization, and the current state of existing process model repair techniques is compared against the proposed concepts. The results show that only two existing techniques partially consider the concepts presented in this paper. Peer reviewed.
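    As a rough illustration of these two concepts, the sketch below marks model fragments as changeable or frozen and assigns repair roles to log traces. All names are hypothetical; the paper contributes formal foundations, not this API.

```python
# Hypothetical sketch: fragments of a process model can be frozen against
# repair, and traces in the event log get roles that steer the revision.
from dataclasses import dataclass, field
from enum import Enum

class TraceRole(Enum):
    MUST_FIT = "must_fit"          # behavior the repaired model must accept
    MUST_EXCLUDE = "must_exclude"  # noisy/outdated behavior to keep out
    NEUTRAL = "neutral"            # no constraint on the repair

@dataclass
class RepairInput:
    fragments: dict[str, bool]  # fragment id -> may the repair change it?
    trace_roles: dict[tuple[str, ...], TraceRole] = field(default_factory=dict)

    def changeable(self) -> set[str]:
        """Fragments a repair technique is allowed to modify."""
        return {f for f, ok in self.fragments.items() if ok}

# A fragment fixed by regulation stays untouched, and a trace that leaked
# outdated behavior into the log is marked for exclusion.
inp = RepairInput(
    fragments={"approve_loan": False, "notify_client": True},
    trace_roles={
        ("request", "approve_loan"): TraceRole.MUST_FIT,
        ("request", "skip_check"): TraceRole.MUST_EXCLUDE,
    },
)
print(inp.changeable())  # {'notify_client'}
```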

    A Semantic Framework for Declarative and Procedural Knowledge

    In any scientific domain, the full set of data and programs has reached an "-ome" status, i.e. it has grown massively. The original article on the Semantic Web describes the evolution of a Web of actionable information, i.e. information derived from data through a semantic theory for interpreting the symbols. In a Semantic Web, methodologies are studied for describing, managing and analyzing both resources (domain knowledge) and applications (operational knowledge) - without any restriction on what and where they are respectively suitable and available in the Web - as well as for realizing automatic, semantic-driven workflows of Web applications elaborating Web resources. This thesis attempts to provide a synthesis among Semantic Web technologies, Ontology Research, and Knowledge and Workflow Management. Such a synthesis is represented by Resourceome, a Web-based framework consisting of two components that strictly interact with each other: an ontology-based and domain-independent knowledge management system (Resourceome KMS) - relying on a knowledge model where resource and operational knowledge are contextualized in any domain - and a semantic-driven workflow editor, manager and agent-based execution system (Resourceome WMS). The Resourceome KMS and the Resourceome WMS are exploited to realize semantic-driven formulations of workflows, where activities are semantically linked to any involved resource. On the whole, by combining domain ontologies and workflow techniques, Resourceome provides a flexible organization of domain and operational knowledge, a powerful engine for semantic-driven workflow composition, and a distributed, automatic and transparent environment for workflow execution.
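    As a loose illustration of that idea (not the actual Resourceome interfaces), the sketch below links workflow activities to ontology-identified resources, so composition can be driven by what each activity consumes and produces. URIs and concept names are invented.

```python
# Illustrative only: activities declare, via ontology URIs, which resources
# they elaborate, so a semantic engine can compose and execute the workflow.
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    uri: str      # ontology identifier of the resource
    concept: str  # domain concept it instantiates

@dataclass
class Activity:
    name: str
    consumes: list[Resource]
    produces: list[Resource]

align = Activity("align_sequences",
                 consumes=[Resource("ex:seqs42", "ex:SequenceSet")],
                 produces=[Resource("ex:aln42", "ex:Alignment")])
tree = Activity("build_tree",
                consumes=align.produces,  # semantic link between activities
                produces=[Resource("ex:tree42", "ex:PhyloTree")])
workflow = [align, tree]
```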

    Structural information aided automated test method for Magic 4GL

    Nowadays, properly testing data-intensive, GUI-enhanced applications in an easily maintainable way has become a crucial part of the application life cycle. There are many evolving technologies that support automated GUI testing in various environments. However, there are hardly any methods that support 4GLs, especially Magic. Fortunately, the characteristics of fourth-generation programming languages - such as the explicit definition of the relations between GUI elements and data - eliminate most of the problems that arise when testing the GUI of 3GL applications. By utilizing these advantages we were able to develop a generalized testing method that supports 4GLs, and as a proof of concept a system for testing Magic xpa applications was built. In this paper, a generalized testing method for 4GLs, our path and script generator algorithms, and their implementations for Magic xpa applications are presented. In addition, the cooperation of these components with existing solutions is demonstrated, and a test method that has been completed by the application of our tools (and which is an instantiation of the generalized method) is introduced as a possible use of the results.
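    A minimal sketch of the path-generation idea, with an invented screen graph (the paper's algorithms and the Magic xpa program model are richer than this): because a 4GL program explicitly declares transitions between GUI elements, candidate test paths can be enumerated directly from that structure rather than recorded by hand.

```python
# Enumerate navigation paths through declared screen transitions, up to a
# depth bound so cyclic graphs still terminate. Purely illustrative names.
def generate_paths(screen_graph: dict[str, list[str]], start: str,
                   max_depth: int = 5) -> list[list[str]]:
    paths: list[list[str]] = []

    def walk(node: str, path: list[str]) -> None:
        path = path + [node]
        paths.append(path)
        if len(path) < max_depth:
            for nxt in screen_graph.get(node, []):
                walk(nxt, path)

    walk(start, [])
    return paths

graph = {"MainMenu": ["Orders", "Customers"], "Orders": ["OrderDetail"]}
for p in generate_paths(graph, "MainMenu"):
    print(" -> ".join(p))  # each path becomes the skeleton of a test script
```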

    Comparative process mining: analyzing variability in process data

    Assessing BPM’s role in a digital innovation project

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. The world is changing. In the digitalization era, digital devices are everywhere, enabled by the quick proliferation of smart and connected products. The transformation we are witnessing is not only about new digital artefacts, but also includes the alignment of operations, business processes, strategy, and organizational and IT structures, resulting in the so-called digital maturity. Although it might not be trivial, this increased efficiency is closely connected with the processes and with how to create opportunities for optimizing and redesigning them. However, the combination of digital innovation and business process management, and how one benefits the other, is not well explored in the literature, which constitutes a research gap. Given this, the importance of business process management practices and their relationship with the organisation's remaining dimensions was studied and assessed through a comprehensive and systematic literature review. Hence, insights were gathered to create a framework that allows answering the research question "What is BPM's role in a digital innovation project?". The aim was to understand the challenges associated with digital transformation, which core requirements are the most valuable, and what the role of process management is in all of it. A focus group confirmed the usefulness of the artefact, by showing the correlation between the different elements in scope and allowing an understanding of the capabilities needed in the organisation. Nonetheless, the feedback suggested adapting the framework to include a maturity-assessment pre-stage and a cost evaluation per digital transformation category, so that it can be completely transversal to all types of organisations and all budgets.

    Process-mining-enabled audit of information systems: Methodology and an application

    Current methodologies for Information Systems (IS) audits suffer from limitations that could question the effectiveness of such procedures in detecting deviations, frauds, or abuses. Process Mining (PM), a set of business-process-related diagnostic and improvement techniques, can tackle these weaknesses, but the literature lacks contributions that address this possibility concretely. Thus, by framing PM as an Expert System (ES) engine, this paper presents a five-step PM-based methodology for IS audits and validates it through a case in a freight export port process managed by a Port Community System (PCS), an open electronic platform enabling information exchange among port stakeholders. The validation pointed out some advantages (e.g. depth of analysis, easier automation, less invasiveness) of our PM-enabled methodology over extant ESs and tools for IS audit. The substantive test and the checks on the PCS processing and output controls made it possible to identify four major non-conformances, likely implying both legal and operational risks, as well as two unforeseen process deviations that were not known to the port authority but that could improve the flexibility of the process. These outcomes set the stage for reengineering the export process and for revising the boundaries of the process flow in the PCS.
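    As a hedged illustration of the conformance-checking step at the core of such an audit (one step of the five; the log file name and the choice of token-based replay are assumptions for the example), a few lines using the open-source pm4py library:

```python
import pm4py

# Hypothetical event log exported from the Port Community System.
log = pm4py.read_xes("pcs_export_process.xes")

# Discover a reference model from the log; an auditor could instead load
# the normative model the process is supposed to follow.
net, im, fm = pm4py.discover_petri_net_inductive(log)

# Replay every trace against the model; unfitting traces are candidate
# non-conformances to inspect for legal or operational risk.
diagnostics = pm4py.conformance_diagnostics_token_based_replay(log, net, im, fm)
deviating = [d for d in diagnostics if not d["trace_is_fit"]]
print(f"{len(deviating)} of {len(diagnostics)} traces deviate from the model")
```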

    Semantically defined Analytics for Industrial Equipment Diagnostics

    In this age of digitalization, industries everywhere accumulate massive amounts of data, to the point that data has become the lifeblood of the global economy. These data may come from heterogeneous equipment, components, sensors, systems and applications in many varieties (diversity of sources), velocities (high rate of change) and volumes (sheer data size). Despite significant advances in the ability to collect, store, manage and filter data, the real value lies in the analytics. Raw data is meaningless unless it is properly processed into actionable (business) insights. Those who know how to harness data effectively have a decisive competitive advantage: they raise performance by making faster and smarter decisions, improve short- and long-term strategic planning, offer more user-centric products and services, and foster innovation. Two distinct paradigms can be discerned in the practice of analytics: semantic-driven (deductive) and data-driven (inductive). The former emphasizes logic as a way of representing domain knowledge encoded in rules or ontologies, which are often carefully curated and maintained; however, these models tend to be highly complex and require intensive knowledge-processing capabilities. Data-driven analytics employ machine learning (ML) to learn a model directly from the data with minimal human intervention; however, these models are tuned to their training data and context, which makes them difficult to adapt. Industries that want to create value from data today must master both paradigms in combination. There is thus a great need in data analytics to seamlessly combine semantic-driven and data-driven processing techniques in an efficient and scalable architecture that allows extracting actionable insights from an extreme variety of data. In this thesis, we address these needs by providing:
    • A unified representation of domain-specific and analytical semantics, in the form of ontology models called the TechOnto Ontology Stack. It is a highly expressive, platform-independent formalism for capturing the conceptual semantics of industrial systems, such as technical system hierarchies and component partonomies, together with their analytical functional semantics.
    • A new ontology language, Semantically defined Analytical Language (SAL), on top of the ontology model, which extends DatalogMTL (a Horn fragment of Metric Temporal Logic) with analytical functions as first-class citizens.
    • A method to generate semantic workflows using our SAL language, which helps in authoring, reusing and maintaining complex analytical tasks and workflows in an abstract fashion.
    • A multi-layer architecture that fuses knowledge-driven and data-driven analytics into a federated and distributed solution.
    To our knowledge, the work in this thesis is among the first to introduce and investigate the use of semantically defined analytics in an ontology-based data access setting for industrial analytical applications. We focus our work and evaluation on industrial data because of (i) the adoption of semantic technology by industry in general, and (ii) the common need, in the literature and in practice, to let domain expertise drive data analytics over semantically interoperable sources while still harnessing the power of analytics to enable real-time data insights.
    Given the evaluation results of three use-case studies, our approach surpasses state-of-the-art approaches in most application scenarios.
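    SAL itself is an ontology language, but the flavor of analytical functions as first-class citizens in a metric-temporal rule can be suggested with a hand-rolled toy (this is neither SAL syntax nor DatalogMTL semantics): a rule in the spirit of Overheat(x) <- avg[temp(x), 5min] > 90, evaluated over a sensor stream.

```python
# Toy evaluation of a metric-temporal rule whose body applies an analytical
# function (a windowed average) to a time-stamped signal. Illustrative only.
from datetime import datetime, timedelta

readings = [  # (timestamp, value) stream for one piece of equipment
    (datetime(2024, 1, 1, 12, 0), 88.0),
    (datetime(2024, 1, 1, 12, 2), 93.0),
    (datetime(2024, 1, 1, 12, 4), 95.0),
]

def overheat(now: datetime, window: timedelta = timedelta(minutes=5),
             limit: float = 90.0) -> bool:
    """True iff the average reading over the last `window` exceeds `limit`."""
    recent = [v for t, v in readings if now - window <= t <= now]
    return bool(recent) and sum(recent) / len(recent) > limit

print(overheat(datetime(2024, 1, 1, 12, 5)))  # True: avg(88, 93, 95) = 92
```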

    The 9th Conference of PhD Students in Computer Science
