
    Detecting Production Phases Based on Sensor Values using 1D-CNNs

    In the context of Industry 4.0, knowledge extraction from sensor information plays an important role. Information gathered from sensor values often reveals meaningful insights at the production level, such as anomalies or machine states. In our use case, we identify production phases by inspecting sensor values with the help of convolutional neural networks. The data set stems from a tempering furnace used for metal heat treating. Our supervised learning approach achieves promising accuracy with the chosen neural network for the detection of production phases. We consider solutions such as the one presented in this work to be salient pillars in the field of predictive maintenance.
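
    As a hedged illustration of the approach this abstract describes, the sketch below classifies fixed-length windows of multichannel sensor readings into production phases with a small 1D-CNN. The window length (128 timesteps), the number of sensor channels (8) and phases (4), the PyTorch framework, and the layer sizes are all illustrative assumptions; the abstract does not specify the actual architecture or data layout.

        # Minimal sketch of a 1D-CNN phase classifier; all shapes and layer
        # sizes are assumptions, not the paper's actual architecture.
        import torch
        import torch.nn as nn

        class PhaseClassifier(nn.Module):
            def __init__(self, n_channels=8, n_phases=4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),  # convolve over the time axis
                    nn.ReLU(),
                    nn.MaxPool1d(2),                                      # downsample in time
                    nn.Conv1d(32, 64, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),                              # collapse time to one summary per filter
                )
                self.classifier = nn.Linear(64, n_phases)

            def forward(self, x):  # x: (batch, channels, timesteps)
                return self.classifier(self.features(x).squeeze(-1))

        model = PhaseClassifier()
        logits = model(torch.randn(16, 8, 128))  # one batch of 16 sensor windows
        print(logits.shape)                      # torch.Size([16, 4]): per-phase scores

    Training such a model on labelled windows with a cross-entropy loss would mirror the supervised setup the abstract mentions.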

    XAI in the Context of Predictive Process Monitoring: An Empirical Analysis Framework

    Predictive Process Monitoring (PPM) has been integrated into process mining use cases as a value-adding task. PPM provides useful predictions on the future of running business processes with respect to different perspectives, such as the upcoming activities to be executed next, the final execution outcome, and performance indicators. In the context of PPM, Machine Learning (ML) techniques are widely employed. To gain the trust of stakeholders regarding the reliability of PPM predictions, eXplainable Artificial Intelligence (XAI) methods have been increasingly used to compensate for the lack of transparency of most predictive models. Multiple XAI methods exist, providing explanations for almost all types of ML models. However, for the same data, under the same preprocessing settings, or with the same ML models, the generated explanations often vary significantly. Such variations might jeopardize the consistency and robustness of the explanations and, subsequently, the utility of the corresponding model and pipeline settings. This paper introduces a framework that enables the analysis of the impact that PPM-related settings and ML-model-related choices may have on the characteristics and expressiveness of the generated explanations. Our framework provides a means to examine explanations generated either for the whole reasoning process of an ML model or for the predictions made on the future of a certain business process instance. Using well-defined experiments with different settings, we uncover how choices made throughout a PPM workflow affect, and can be reflected through, explanations. The framework further provides the means to compare how different characteristics of explainability methods can shape the resulting explanations and reflect on the underlying model's reasoning process.
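
    As a hedged sketch of one step such a framework might automate, the snippet below generates local feature attributions for outcome predictions on encoded event-log prefixes. The synthetic data, the aggregate encoding, the gradient-boosting model, and the use of SHAP are all illustrative assumptions; the paper's framework compares explanations across multiple XAI methods and settings rather than prescribing this particular pipeline.

        # Illustrative only: explain outcome predictions for encoded case
        # prefixes; the encoding and model choice are assumptions.
        import numpy as np
        import shap
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(0)
        X = rng.random((200, 5))        # hypothetical aggregate encoding of 200 running cases
        y = rng.integers(0, 2, 200)     # binary outcome label per case

        model = GradientBoostingClassifier().fit(X, y)
        explainer = shap.TreeExplainer(model)         # model-specific explainer
        attributions = explainer.shap_values(X[:10])  # per-feature attributions for 10 cases
        print(attributions.shape)                     # one attribution per case and feature

    Re-running such a step while varying the encoding, the model, or the explanation method is the kind of controlled comparison the abstract's experiments describe.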