24 research outputs found
Event Log Sampling for Predictive Monitoring
Predictive process monitoring is a subfield of process mining that aims to
estimate case or event features for running process instances. Such predictions
are of significant interest to the process stakeholders. However,
state-of-the-art methods for predictive monitoring require the training of
complex machine learning models, which is often inefficient. This paper
proposes an instance selection procedure that allows sampling training process
instances for prediction models. We show that our sampling method allows for a
significant increase in training speed for next-activity prediction methods
while maintaining reliable levels of prediction accuracy.
Comment: 7 pages, 1 figure, 4 tables, 34 references
Performance-preserving event log sampling for predictive monitoring
Predictive process monitoring is a subfield of process mining that aims to estimate case or event features for running process instances. Such predictions are of significant interest to the process stakeholders. However, most state-of-the-art methods for predictive monitoring require the training of complex machine learning models, which is often inefficient. Moreover, most of these methods require a hyper-parameter optimization that involves several repetitions of the training process, which is not feasible in many real-life applications. In this paper, we propose an instance selection procedure that allows sampling training process instances for prediction models. We show that our instance selection procedure allows for a significant increase in training speed for next-activity and remaining-time prediction methods while maintaining reliable levels of prediction accuracy.
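The sampling idea in the two abstracts above can be illustrated with a short sketch. The sampler below is a hypothetical, variant-stratified selection, not the papers' exact instance selection procedure: it keeps one trace per control-flow variant so behavioural coverage is preserved, then fills the remaining sampling budget at random.

```python
import random


def sample_traces(log, rate=0.2, seed=42):
    """Variant-stratified sampling sketch (assumption, not the paper's
    exact procedure): keep at least one trace per control-flow variant,
    then fill the remaining budget uniformly at random."""
    rng = random.Random(seed)
    by_variant = {}
    for trace in log:
        by_variant.setdefault(tuple(trace), []).append(trace)
    # One representative per variant preserves behavioural coverage.
    sample = [rng.choice(traces) for traces in by_variant.values()]
    budget = max(len(sample), int(rate * len(log)))
    pool = [t for traces in by_variant.values() for t in traces]
    while len(sample) < budget:
        sample.append(rng.choice(pool))
    return sample


# Toy event log: each trace is a list of activity labels.
log = [["a", "b", "c"]] * 8 + [["a", "c", "b"]] * 2
sample = sample_traces(log, rate=0.3)
```

A prediction model (e.g. for next-activity prediction) would then be trained on `sample` instead of the full log, trading a small amount of data for a large reduction in training time.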
THE WHEEL OF RETAILING IS STILL SPINNING: PROMISES, COMPROMISES, AND PITFALLS OF QUICK-COMMERCE
In the fast-paced retailing environment, consumers’ needs still drive purchase decisions. Yet, the increasing diffusion of new technologies and new business models suggests that shopping is on the verge of a quantum leap into a realm where innovations, driven by newer digital-native players, are changing the game of retailing and setting its future developments. Ultrafast deliverers fully belong to this category: with the defiant promise of delivering goods in less than 30 minutes, they are shifting the paradigms of e-commerce into the newborn construct of Quick Commerce (Q-Commerce). Since everything comes at a price, this paper provides a definition of Q-Commerce and discusses the challenges ultrafast deliverers face in keeping the wheel of retailing moving forward, not without a few compromises to the retailing mix.
Business Process Text Sketch Automation Generation Using Large Language Model
Business Process Management (BPM) is gaining increasing attention as it has
the potential to cut costs while boosting output and quality. Business process
document generation is a crucial stage in BPM. However, due to a shortage of
datasets, data-driven deep learning techniques struggle to deliver the expected
results. We propose an approach to transform Conditional Process Trees (CPTs)
into Business Process Text Sketches (BPTSs) using Large Language Models (LLMs).
The traditional prompting approach (few-shot in-context learning) tries to
produce the correct answer in one pass; it can learn the pattern for
transforming simple CPTs into BPTSs, but it performs weakly, with low
correctness, on closed-domain CPTs and CPTs with complex hierarchies. Drawing
inspiration from the divide-and-conquer strategy, we instead break a difficult
CPT down into a number of basic CPTs and solve each one in turn. We chose 100
process trees with depths ranging
from 2 to 5 at random, as well as CPTs with many nodes, many degrees of
selection, and cyclic nesting. Experiments show that our method can achieve a
correctness rate of 93.42%, which is 45.17% better than traditional prompting
methods. Our proposed method provides a solution for business process document
generation in the absence of datasets and, in addition, makes it potentially
possible to provide a large number of datasets for the process model extraction
(PME) domain.
Comment: 10 pages, 7 figures
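The divide-and-conquer idea above can be sketched in a few lines. The tree encoding and the stubbed model call below are hypothetical illustrations (the paper prompts an LLM; here a toy function stands in): each child subtree is described separately, and the results are combined at the parent operator.

```python
def describe(node, llm):
    """Divide-and-conquer sketch (hypothetical encoding, not the paper's
    exact algorithm): recursively turn a conditional process tree into a
    text sketch by describing each child subtree separately, then asking
    the model (stubbed here) to combine the parts at the operator node."""
    op, payload = node  # e.g. ("SEQ", [children]) or a leaf ("task", name)
    if op == "task":
        return payload  # leaf: the activity label is its own description
    parts = [describe(child, llm) for child in payload]
    return llm(op, parts)


def toy_llm(op, parts):
    """Stub standing in for a few-shot LLM call on one basic CPT."""
    joiner = {"SEQ": ", then ", "XOR": " or ", "AND": " and "}[op]
    return "(" + joiner.join(parts) + ")"


tree = ("SEQ", [("task", "receive order"),
                ("XOR", [("task", "approve"), ("task", "reject")]),
                ("task", "archive")])
sketch = describe(tree, toy_llm)
```

Because each call to the model sees only one shallow operator with already-summarized children, the model never has to handle a deeply nested hierarchy in a single prompt, which is the intuition behind the reported accuracy gain.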
Large Language Models for Business Process Management: Opportunities and Challenges
Large language models are deep learning models with a large number of
parameters. These models have made noticeable progress on a large number of
tasks and can consequently serve as valuable and versatile tools for a
diverse range of applications. Their capabilities also offer opportunities for
business process management (BPM); however, these opportunities have not yet
been systematically investigated. In this paper, we address this research
problem by foregrounding various tasks of the BPM lifecycle. We investigate six
research directions highlighting problems that need to be addressed when using
large language models, including usage guidelines for practitioners.
A User Study on Explainable Online Reinforcement Learning for Adaptive Systems
Online reinforcement learning (RL) is increasingly used for realizing
adaptive systems in the presence of design time uncertainty. Online RL
facilitates learning from actual operational data and thereby leverages
feedback only available at runtime. However, Online RL requires the definition
of an effective and correct reward function, which quantifies the feedback to
the RL algorithm and thereby guides learning. With Deep RL gaining interest,
the learned knowledge is no longer explicitly represented but is instead
encoded in a neural network. For a human, it becomes practically impossible to relate
the parametrization of the neural network to concrete RL decisions. Deep RL
thus essentially appears as a black box, which severely limits the debugging of
adaptive systems. We previously introduced the explainable RL technique
XRL-DINE, which provides visual insights into why certain decisions were made
at important time points. Here, we introduce an empirical user study involving
54 software engineers from academia and industry to assess (1) the performance
of software engineers when performing different tasks using XRL-DINE and (2)
the perceived usefulness and ease of use of XRL-DINE.
Comment: arXiv admin note: substantial text overlap with arXiv:2210.0593
Conformance Testing for Stochastic Cyber-Physical Systems
Conformance is defined as a measure of distance between the behaviors of two
dynamical systems. The notion of conformance can accelerate system design when
models of varying fidelities are available on which analysis and control design
can be done more efficiently. Ultimately, conformance can capture distance
between design models and their real implementations and thus aid in robust
system design. In this paper, we are interested in the conformance of
stochastic dynamical systems. We argue that probabilistic reasoning over the
distribution of distances between model trajectories is a good measure for
stochastic conformance. Additionally, we propose the non-conformance risk to
reason about the risk of stochastic systems not being conformant. We show that
both notions have the desirable transference property, meaning that conformant
systems satisfy similar system specifications, i.e., if the first model
satisfies a desirable specification, the second model will satisfy (nearly) the
same specification. Lastly, we propose how stochastic conformance and the
non-conformance risk can be estimated from data using statistical tools such as
conformal prediction. We present empirical evaluations of our method on an F-16
aircraft, an autonomous vehicle, a spacecraft, and a Dubins vehicle.
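The conformal estimation step mentioned above can be sketched with stdlib Python. This is an illustrative split-conformal quantile computation over synthetic distances; the Gaussian data and the `conformance_bound` helper are assumptions for illustration, not the paper's implementation.

```python
import math
import random


def conformance_bound(dists, alpha=0.1):
    """Split-conformal sketch (illustrative): given i.i.d. trajectory
    distances from a calibration set, return a bound d* such that a fresh
    distance is <= d* with probability at least 1 - alpha."""
    n = len(dists)
    k = math.ceil((n + 1) * (1 - alpha))  # conformal rank correction
    if k > n:
        return float("inf")  # too few calibration samples for this alpha
    return sorted(dists)[k - 1]


rng = random.Random(0)
# Hypothetical distances between paired model/system trajectories.
dists = [abs(rng.gauss(0.0, 1.0)) for _ in range(200)]
bound = conformance_bound(dists, alpha=0.1)
```

The `(n + 1)` in the rank correction is what turns an empirical quantile into a finite-sample coverage guarantee, which is the appeal of conformal prediction for certifying stochastic conformance from data.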
The Role of Explainable AI in the Research Field of AI Ethics
Ethics of Artificial Intelligence (AI) is a growing research field that has emerged in response to the challenges related to AI. Transparency poses a key challenge for implementing AI ethics in practice, and one solution to transparency issues is AI systems that can explain their decisions. Explainable AI (XAI) refers to AI systems that are interpretable or understandable to humans. The research fields of AI ethics and XAI lack a common framework and conceptualization, there is no clarity about the field’s depth and versatility, and a systematic approach to understanding the corpus is needed. A systematic review offers an opportunity to detect research gaps and focus points. This paper presents the results of a systematic mapping study (SMS) of the research field of the ethics of AI, with a focus on understanding the role of XAI and how the topic has been studied empirically. An SMS is a tool for performing a repeatable and continuable literature search. This paper contributes to the research field with a systematic map that visualizes what, how, when, and why XAI has been studied empirically in the field of AI ethics. The mapping reveals research gaps in the area, and empirical contributions are drawn from the analysis. The contributions are reflected on with regard to theoretical and practical implications. As the scope of the SMS is the broader research area of AI ethics, the collected dataset opens possibilities to continue the mapping process in other directions.
© 2023 Association for Computing Machinery.