16 research outputs found

    Process Mining for Advanced Service Analytics – From Process Efficiency to Customer Encounter and Experience

    With the ongoing trend of servitization, nurtured by digital technologies, the analysis of services as a starting point for improvement is gaining increasing importance. Service analytics has been defined as a concept for analyzing the data generated during service execution to create value for providers and customers. To derive more useful insights from these data, there is a continuous need for more advanced service analytics solutions. One promising technology is process mining, which has its origins in business process management. Our work provides insights into how process mining is currently used to analyze service processes and how it could be applied along the entire service process. We find that process mining is increasingly applied to analyze providers' internal operations, but that more emphasis should be put on analyzing customer interaction and experience.
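
    To illustrate the kind of analysis the abstract describes, the following is a minimal sketch of process discovery on a service event log using the open-source pm4py library. The paper does not name a specific tool; pm4py, the file name, and the log contents below are assumptions for illustration only.

        # Minimal process-discovery sketch with pm4py (assumed tooling;
        # "service_event_log.xes" is a hypothetical file name).
        import pm4py

        # Load an event log of service executions (XES is a common event-log format).
        log = pm4py.read_xes("service_event_log.xes")

        # Discover a process model summarizing how the service actually runs.
        net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

        # Inspect frequent behavior, e.g., the most common directly-follows relations.
        dfg, start_activities, end_activities = pm4py.discover_dfg(log)
        print(sorted(dfg.items(), key=lambda kv: kv[1], reverse=True)[:10])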

    A next click recommender system for web-based service analytics with context-aware LSTMs

    Software companies that offer web-based services instead of local installations can remotely record users' interactions with the system. This data can be analyzed, and the service subsequently improved or extended. A recommender system that guides users through a business process by suggesting next clicks can help to improve user satisfaction, and hence service quality, and can reduce support costs. We present a technique for a next-click recommender system. Our approach is adapted from the predictive process monitoring domain and is based on long short-term memory (LSTM) neural networks. We compare three configurations of the LSTM technique: LSTM without context, LSTM with context, and LSTM with embedded context. The technique was evaluated on a real-life data set from a financial software provider, with a hidden Markov model (HMM) as the baseline. The configuration LSTM with embedded context achieved significantly higher accuracy and the lowest standard deviation.
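
    A minimal sketch of what the "LSTM with embedded context" configuration could look like in Keras follows. The vocabulary sizes, sequence length, and layer dimensions are illustrative assumptions, not values from the paper.

        # Sketch of a next-click predictor with embedded context (assumed sizes).
        from tensorflow import keras
        from tensorflow.keras import layers

        n_clicks = 200   # number of distinct click events (assumed)
        n_context = 20   # number of distinct context attribute values (assumed)
        seq_len = 10     # length of the click-history window (assumed)

        # Input 1: the sequence of previous clicks.
        click_seq = keras.Input(shape=(seq_len,), name="clicks")
        click_emb = layers.Embedding(n_clicks, 32)(click_seq)

        # Input 2: a categorical context attribute per step, embedded rather
        # than one-hot encoded ("embedded context").
        ctx_seq = keras.Input(shape=(seq_len,), name="context")
        ctx_emb = layers.Embedding(n_context, 8)(ctx_seq)

        # Concatenate click and context embeddings per time step, then run the LSTM.
        x = layers.Concatenate()([click_emb, ctx_emb])
        x = layers.LSTM(64)(x)
        next_click = layers.Dense(n_clicks, activation="softmax")(x)

        model = keras.Model([click_seq, ctx_seq], next_click)
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()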

    GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints

    The number of information systems (IS) studies dealing with explainable artificial intelligence (XAI) is currently exploding, as the field demands more transparency about the internal decision logic of machine learning (ML) models. However, most techniques subsumed under XAI provide post-hoc analytical explanations, which must be considered with caution because they only approximate the underlying ML model. Our paper therefore investigates a series of intrinsically interpretable ML models and discusses their suitability for the IS community. More specifically, we focus on advanced extensions of generalized additive models (GAMs), in which predictors are modeled independently in a non-linear way to generate shape functions that can capture arbitrary patterns while remaining fully interpretable. In our study, we evaluate the prediction quality of five GAMs against six traditional ML models and assess their visual outputs for model interpretability. On this basis, we investigate their merits and limitations and derive design implications for further improvements.
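
    One well-known GAM extension of the kind described is the Explainable Boosting Machine from the open-source interpret package; the sketch below shows how its per-feature shape functions can be read out directly. Whether EBM is among the five GAMs evaluated is not stated here, and the synthetic data is illustrative only.

        # Sketch of an intrinsically interpretable GAM extension (EBM, assumed
        # as a representative example; data is synthetic).
        import numpy as np
        from interpret.glassbox import ExplainableBoostingClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 4))
        # Non-linear ground truth: each feature contributes additively.
        y = ((np.sin(X[:, 0]) + X[:, 1] ** 2 - X[:, 2]) > 0.5).astype(int)

        ebm = ExplainableBoostingClassifier()
        ebm.fit(X, y)

        # Each feature gets its own shape function; the global explanation
        # exposes them directly, so the model remains fully interpretable.
        explanation = ebm.explain_global()
        print(explanation.data(0)["names"][:5])   # grid points of feature 0
        print(explanation.data(0)["scores"][:5])  # additive contributions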

    Best of Both Worlds: Combining Predictive Power with Interpretable and Explainable Results for Patient Pathway Prediction

    Proactively analyzing patient pathways can help healthcare providers to anticipate treatment-related risks, detect undesired outcomes, and allocate resources quickly. For this purpose, modern methods from the field of predictive business process monitoring can be applied to create data-driven models that capture patterns from past behavior and provide predictions about running process instances. Recent methods increasingly rely on deep neural networks (DNNs) due to their superior prediction performance and their independence from process knowledge. However, DNNs generally show black-box characteristics, which hampers their dissemination in critical environments such as healthcare. To address this, we propose HIXPred, a novel artifact combining predictive power with explainable results for patient pathway predictions. We instantiate HIXPred, apply it to a real-life healthcare use case for evaluation and demonstration purposes, and conduct interviews with medical experts. Our results confirm high predictive performance while ensuring sufficient interpretability and explainability to provide comprehensible decision support.
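
    The general pattern of pairing a black-box predictor with post-hoc explanations can be sketched as follows. This is not the authors' HIXPred artifact: the model, the use of SHAP, and the synthetic features are all assumptions chosen to illustrate the combination the abstract describes.

        # Generic sketch: black-box outcome predictor plus SHAP explanations
        # (illustrative only; not HIXPred; data is synthetic).
        import numpy as np
        import shap
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 6))                   # stand-in pathway features
        y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # stand-in outcome flag

        model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        model.fit(X, y)

        # KernelExplainer treats the model as a black box; a small background
        # sample keeps the approximation tractable.
        f = lambda x: model.predict_proba(x)[:, 1]      # probability of the outcome
        explainer = shap.KernelExplainer(f, X[:50])
        shap_values = explainer.shap_values(X[:5])
        print(np.round(shap_values, 3))                 # per-feature contributions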