Automated repair of process models with non-local constraints using state-based region theory
State-of-the-art process discovery methods construct free-choice process models from event logs. Consequently, the constructed models do not take into account indirect dependencies between events. Whenever the input behaviour is not free-choice, these methods fail to provide a precise model. In this paper, we propose a novel approach for enhancing free-choice process models by adding non-free-choice constructs discovered a posteriori via region-based techniques. This allows us to benefit from the performance of existing process discovery methods and the accuracy of the employed fundamental synthesis techniques. We prove that the proposed approach preserves fitness with respect to the event log while improving precision when indirect dependencies exist. The approach has been implemented and tested on both synthetic and real-life datasets. The results show its effectiveness in repairing models discovered from event logs. This work was partly supported by the Australian Research Council Discovery Project DP180102839.
This work was supported by MINECO and FEDER funds under grant TIN2017-86727-C2-1-R.
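To make the notion of an indirect dependency concrete, below is a minimal Python sketch, not the paper's region-based algorithm: it scans a toy event log for pairs of activities whose occurrences are coupled across a trace without ever being adjacent, which is exactly the kind of non-local constraint a free-choice model cannot express. The log and the dependency test are illustrative assumptions.

```python
from itertools import product

# Toy event log: each trace is a tuple of activity labels. The
# choice between a and b at the start determines the choice
# between d and e at the end -- an indirect dependency.
log = [
    ("a", "c", "d"),
    ("b", "c", "e"),
    ("a", "c", "d"),
    ("b", "c", "e"),
]

def long_distance_pairs(log):
    """Find pairs (x, y) where y occurs in a trace exactly when x
    occurred earlier in it, yet x never directly precedes y."""
    activities = {a for trace in log for a in trace}
    pairs = []
    for x, y in product(activities, repeat=2):
        if x == y:
            continue
        # y present iff x occurred before it in the same trace.
        coupled = all(
            (x in t and t.index(x) < t.index(y)) if y in t else (x not in t)
            for t in log
        )
        # Direct successions are already visible to free-choice discovery.
        adjacent = any(
            t[i] == x and t[i + 1] == y
            for t in log for i in range(len(t) - 1)
        )
        if coupled and not adjacent:
            pairs.append((x, y))
    return sorted(pairs)

print(long_distance_pairs(log))  # [('a', 'd'), ('b', 'e')]
```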
Weakly Complete Event Logs in Process Mining
Many information systems can record their execution and thereby generate a trace of events describing the real system behaviour. From such example behaviour recorded in the traces of an event log, the α-algorithm automatically generates a process model belonging to a subclass of Petri nets known as workflow nets. One of the α-algorithm's basic limiting assumptions is that the event log must be complete. To overcome this completeness requirement, we introduce the notion of weakly complete event logs, from which our modified technique and algorithm produce, for parallel processes, the same result the α-algorithm produces from complete logs. Weakly complete logs can be significantly smaller than complete logs in the number of traces they contain. We used weakly complete logs to realise our idea of interactive parallel business process model generation.
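For context, the following sketch derives the basic ordering relations (causality, parallelism, unrelatedness) that the α-algorithm builds from direct successions in a log; it is the standard textbook construction, not the paper's modified algorithm. The example also shows why completeness matters: dropping one of the two interleavings of b and c would turn parallelism into spurious causality.

```python
from itertools import product

def footprint(log):
    """Derive the alpha-algorithm's basic ordering relations.
    x -> y : x directly precedes y, never the reverse (causality)
    x || y : both orders observed (parallelism)
    x #  y : never adjacent (unrelated)"""
    succ = {(t[i], t[i + 1]) for t in log for i in range(len(t) - 1)}
    acts = {a for t in log for a in t}
    rel = {}
    for x, y in product(acts, repeat=2):
        if (x, y) in succ and (y, x) in succ:
            rel[(x, y)] = "||"
        elif (x, y) in succ:
            rel[(x, y)] = "->"
        elif (y, x) in succ:
            rel[(x, y)] = "<-"
        else:
            rel[(x, y)] = "#"
    return rel

# A complete log must contain both interleavings of the parallel
# activities b and c; if ("a", "c", "b", "d") were missing, the
# relation would wrongly become b -> c instead of b || c.
log = [("a", "b", "c", "d"), ("a", "c", "b", "d")]
print(footprint(log)[("b", "c")])  # ||
```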
Analysing business processes to manage and resolve strategic issues in a manufacturing business
This paper demonstrates the value of applying heuristics to knowledge systems of business processes in a manufacturing company to resolve strategic issues and enable the attainment of strategic business goals. The manufacturing company was losing market share because it could not get its new products to market quickly enough. The research illustrates the ‘location’ and use of information systems in a manufacturing context. The researchers collected the specific business process knowledge in the company, developed a knowledge management system, and then applied heuristics to the ‘AS IS’ manufacturing process to determine better models of manufacturing that would enable faster-to-market product development and thus better strategic alignment between company expectations and realisation of market share. The paper highlights the strategic use of information systems as a means of directly solving business problems.
Analysis of Software Binaries for Reengineering-Driven Product Line Architecture: An Industrial Case Study
This paper describes a method for recovering software architectures from a set of similar (but unrelated) software products in binary form. One intention is to drive refactoring into software product lines and to combine architecture recovery with runtime binary analysis and existing clustering methods. Using our runtime binary analysis, we create graphs that capture the dependencies between different software parts. These are clustered into smaller component graphs that group software parts with high interaction into larger entities. The component graphs serve as a basis for further software product line work. In this paper, we concentrate on the analysis part of the method and on the graph clustering. We apply the graph clustering method to a real application in the context of automation/robot configuration software tools.
Comment: In Proceedings FMSPLE 2015, arXiv:1504.0301
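The abstract does not name a specific clustering algorithm, so the sketch below uses greedy modularity clustering from networkx as a plausible stand-in, applied to an invented weighted dependency graph whose edge weights mimic observed runtime call counts.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy dependency graph: nodes are binary parts (functions/modules),
# edge weights count observed runtime calls between them.
calls = [
    ("parse", "lex", 9), ("parse", "ast", 7), ("lex", "ast", 5),
    ("gui", "render", 8), ("gui", "events", 6), ("render", "events", 4),
    ("parse", "gui", 1),  # weak cross-cluster interaction
]
g = nx.Graph()
g.add_weighted_edges_from(calls)

# Group strongly interacting parts into component graphs.
components = greedy_modularity_communities(g, weight="weight")
print(sorted(sorted(c) for c in components))
# [['ast', 'lex', 'parse'], ['events', 'gui', 'render']]
```

Each resulting community would correspond to one of the "component graphs" the abstract describes, serving as a candidate unit for product line refactoring.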
Big continuous data: dealing with velocity by composing event streams
The rate at which we produce data is growing steadily, creating ever larger streams of continuously evolving data. Online news, micro-blogs, and search queries are just a few examples of these continuous streams of user activities. The value of these streams lies in their freshness and their relatedness to ongoing events. Modern applications consuming these streams need to extract behaviour patterns, which can be obtained by aggregating and mining, statically and dynamically, huge event histories. An event is the notification that a happening of interest has occurred. Event streams must be combined or aggregated to produce more meaningful information. By combining and aggregating them, either from multiple producers or from a single one during a given period of time, a limited set of events describing meaningful situations can be notified to consumers. With their volume and continuous production, event streams mainly exhibit two of the characteristics attributed to Big Data by the 5V's model: volume and velocity. Techniques such as complex pattern detection, event correlation, event aggregation, event mining, and stream processing have been used for composing events. Nevertheless, to the best of our knowledge, few approaches integrate different composition techniques (online and post-mortem) to deal with Big Data velocity. This chapter gives an analytical overview of event stream processing and composition approaches: complex event languages, services, and event querying systems on distributed logs. Our analysis underlines the challenges introduced by Big Data velocity and volume and uses them as a reference for identifying the scope and limitations of results stemming from different disciplines: networks, distributed systems, stream databases, event composition services, and data mining on traces.
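As a concrete instance of one composition technique the chapter surveys, here is a minimal sketch of tumbling-window aggregation; the event shape and the 60-second window are assumptions, not anything prescribed by the chapter.

```python
from collections import Counter

# Events as (timestamp_seconds, type); in practice these arrive
# continuously from multiple producers.
stream = [(0, "click"), (2, "query"), (4, "click"),
          (61, "click"), (65, "query"), (66, "query")]

def tumbling_counts(events, window=60):
    """Aggregate a raw event stream into per-window type counts --
    one basic way to turn a high-velocity stream into a small set
    of meaningful notifications for consumers."""
    windows = {}
    for ts, kind in events:
        windows.setdefault(ts // window, Counter())[kind] += 1
    return windows

for w, counts in sorted(tumbling_counts(stream).items()):
    print(f"[{w * 60},{(w + 1) * 60}): {dict(counts)}")
# [0,60): {'click': 2, 'query': 1}
# [60,120): {'click': 1, 'query': 2}
```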
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is examined, covering model-based approaches, 'programmed' AI, and machine-learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, to detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated.
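The hybrid direction the report favours can be sketched in a few lines: a programmed rule base catches known misuse signatures, while a learned profile of normal behaviour flags deviations. The rules, event fields, and threshold below are invented for illustration and do not come from the report.

```python
from collections import Counter

# Programmed component: signatures of known misuse (illustrative).
KNOWN_MISUSE_RULES = [
    lambda e: e["type"] == "login_fail" and e["count"] >= 5,
    lambda e: e["type"] == "call_forward" and e.get("dest") == "premium",
]

class HybridDetector:
    def __init__(self, normal_events):
        # Learned component: model normal behaviour as event-type
        # frequencies observed during training.
        total = len(normal_events)
        counts = Counter(e["type"] for e in normal_events)
        self.profile = {k: v / total for k, v in counts.items()}

    def classify(self, event, rare_threshold=0.05):
        if any(rule(event) for rule in KNOWN_MISUSE_RULES):
            return "known misuse"        # rules: prone to false negatives
        if self.profile.get(event["type"], 0.0) < rare_threshold:
            return "possible unknown misuse"  # anomaly: prone to false positives
        return "normal"

normal = [{"type": "call"}] * 90 + [{"type": "sms"}] * 10
det = HybridDetector(normal)
print(det.classify({"type": "login_fail", "count": 6}))  # known misuse
print(det.classify({"type": "roam_spike"}))              # possible unknown misuse
print(det.classify({"type": "call"}))                    # normal
```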
Process Mining Handbook
This is an open access book. It comprises all the courses given as part of the First Summer School on Process Mining, PMSS 2022, which was held in Aachen, Germany, during July 4-8, 2022. The volume contains 17 chapters organized into the following topical sections: introduction; process discovery; conformance checking; data preprocessing; process enhancement and monitoring; assorted process mining topics; industrial perspective and applications; and closing.