    Socioeconomic determinants of eating pattern of adolescent students in Mansoura, Egypt

    Introduction During the last few decades, Egypt experienced rapid socio-cultural changes that were associated with major changes in food choices and eating habits, which progressively became more westernized. The objective of this study was to investigate the meal patterns of secondary school adolescent students in Mansoura, Egypt. Methods This is a cross-sectional study conducted on 891 adolescent students. Thirty clusters were selected to cover both general and vocational public schools of both sexes in urban and rural areas. A self-administered questionnaire was used to collect data about the sociodemographic features of the students and their families, as well as the students' meal habits. Results About 46% of students ate three meals per day. About 72%, 93% and 95% of respondents consumed breakfast, lunch and dinner on a daily basis, respectively. Snacks were eaten daily by 34.1% of students. Always eating with the family was reported by the majority (62.5%) of students, and bringing homemade sandwiches to school was mentioned by 35.8% of students. On logistic regression, socioeconomic status was the only predictor associated with daily intake of breakfast, lunch and dinner, as well as with a higher likelihood of eating with the family and of taking the school meal. Conclusion Students practice many faulty meal patterns. School-, family- and community-based interventions are urgently needed to promote healthy eating habits in adolescents. Pan African Medical Journal 2012; 13:2
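    The logistic regression analysis mentioned in the results can be illustrated with a small, hypothetical sketch: fitting a model that tests whether a socioeconomic score predicts daily breakfast intake. The data, the column names (ses_score, urban), and the model specification below are illustrative assumptions, not the study's actual variables or dataset.

```python
# Minimal illustrative sketch, NOT the study's actual analysis or data:
# does a socioeconomic score predict daily breakfast intake?
import pandas as pd
import statsmodels.api as sm

# Tiny hypothetical survey extract: 1 = eats breakfast daily, 0 = does not.
df = pd.DataFrame({
    "daily_breakfast": [1, 0, 1, 1, 0, 1, 0, 1],
    "ses_score":       [3, 1, 1, 2, 2, 3, 1, 2],  # higher = better socioeconomic status
    "urban":           [1, 0, 1, 1, 0, 0, 1, 1],  # 1 = urban, 0 = rural residence
})

X = sm.add_constant(df[["ses_score", "urban"]])
model = sm.Logit(df["daily_breakfast"], X).fit(disp=0)
print(model.summary())  # coefficients and p-values; exp(coef) gives odds ratios
```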

    CONDA-PM -- A Systematic Review and Framework for Concept Drift Analysis in Process Mining

    Business processes evolve over time to adapt to changing business environments. This requires continuous monitoring of business processes to gain insights into whether they conform to the intended design or deviate from it. The situation in which a business process changes while being analysed is denoted as Concept Drift. Its analysis is concerned with studying how a business process changes, in terms of detecting and localising changes and studying their effects. Concept drift analysis is crucial to enable early detection and management of changes, that is, deciding whether to promote a change so that it becomes part of an improved process, or to reject the change and take decisions to mitigate its effects. Despite its importance, there exists no comprehensive framework for analysing concept drift types, affected process perspectives, and granularity levels of a business process. This article proposes the CONcept Drift Analysis in Process Mining (CONDA-PM) framework, which describes the phases and requirements of a concept drift analysis approach. CONDA-PM was derived from a Systematic Literature Review (SLR) of current approaches analysing concept drift. We apply the CONDA-PM framework to current approaches to concept drift analysis and evaluate their maturity. Applying the CONDA-PM framework highlights areas where research is needed to complement existing efforts. Comment: 45 pages, 11 tables, 13 figures
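    As a rough illustration of one ingredient of concept drift detection (not the CONDA-PM framework itself, which is a review and maturity framework), the sketch below compares activity-frequency distributions of an event stream across adjacent windows with a chi-square test and flags window boundaries where the distributions differ. The event stream, window size, and significance threshold are all hypothetical.

```python
# Sliding-window drift check on a hypothetical activity stream:
# flag positions where the activity-frequency distribution changes.
from collections import Counter
from scipy.stats import chi2_contingency

events = (["register", "check", "approve"] * 40                 # "old" behaviour
          + ["register", "check", "reject", "escalate"] * 40)   # "new" behaviour

def activity_counts(window, activities):
    counts = Counter(window)
    return [counts.get(a, 0) for a in activities]

window_size = 60
drift_points = []
for i in range(0, len(events) - 2 * window_size, window_size):
    w1 = events[i:i + window_size]
    w2 = events[i + window_size:i + 2 * window_size]
    activities = sorted(set(w1) | set(w2))
    table = [activity_counts(w1, activities), activity_counts(w2, activities)]
    _, p_value, _, _ = chi2_contingency(table)
    if p_value < 0.01:  # distributions differ -> candidate drift point
        drift_points.append(i + window_size)

print("candidate drift indices:", drift_points)  # here: the switch to "new" behaviour
```
    The approaches surveyed by CONDA-PM additionally localise what changed (control flow, resources, time) and at which granularity level; this sketch only covers detection on a single perspective.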

    Evaluating XAI methods in the context of predictive process monitoring

    Predictive Process Monitoring (PPM) emerged as a value-adding use case of process mining. Capitalizing on the recent advances and growing adoption of machine learning techniques, PPM takes business process-related data (i.e., event logs) as input and utilizes machine learning techniques to train predictive models. At runtime, the trained models generate predictions about the future of currently executed processes. Examples of such predictions include the next steps that will be executed, the resource that will execute a particular upcoming step, performance-related information (e.g., the remaining time until the end of the execution), and the outcome of an ongoing execution. Performance improvements in machine learning techniques do not usually come for free. Notions of complexity and opaqueness are common labels of machine learning-based models. Having the business process stakeholders at the center of focus in PPM necessitates mitigating the consequences of the opaqueness associated with complex predictive models. Explainability tends to increase trust in the generated predictions and boost human interaction with predictive models as a result of increased understanding and transparency. Furthermore, explanations may be utilized to uncover potential problems resulting from training a predictive model on biased data and to improve the performance of the predictive model. Several eXplainable Artificial Intelligence (XAI) methods have been proposed, but appropriate mechanisms to evaluate their application need to be in place before they can be applied. However, evaluating XAI in the context of PPM is a difficult task, due to the lack of a shared and accepted definition of explainability and of its associated characteristics and evaluation criteria. The contributions of this thesis include an analysis framework designed to systematically investigate the implications of applying different PPM techniques on explainability from different perspectives. As a second contribution, an approach to evaluate global explainability methods is proposed. This approach analyzes the consistency of explanations when compared with data-related facts extracted from business process data. As a final contribution, the thesis introduces an approach to assess the interpretability of explanations produced for specific predictions. In particular, the proposed approach considers rule-based explanations according to different interpretability-related criteria. The thesis further discusses the results and lessons learned from a number of experiments conducted under different settings. As a merit of this research, all contributions were validated from a PPM perspective based on real-life process-related data.
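    The PPM pipeline described in this abstract can be sketched, under assumptions that are not taken from the thesis, as follows: trace prefixes are encoded as aggregate features, an outcome classifier is trained, and a global picture of what the model relies on is produced. The synthetic data, the feature names, and the use of permutation importance as the explanation step are illustrative choices only.

```python
# Hypothetical PPM sketch: prefix features -> outcome classifier -> global importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Assumed prefix encoding: [#events so far, #rework events, elapsed hours].
X = rng.random((500, 3)) * [10, 3, 48]
y = (X[:, 1] > 1.5).astype(int)  # toy "deviant outcome" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Global explanation proxy: how much does shuffling each feature hurt the model?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in zip(["n_events", "n_rework", "elapsed_h"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```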

    Evaluating Explainable Artificial Intelligence Methods Based on Feature Elimination: A Functionality-Grounded Approach

    Although predictions based on machine learning are reaching unprecedented levels of accuracy, understanding the underlying mechanisms of a machine learning model is far from trivial. Therefore, explaining machine learning outcomes is gaining more interest, with an increasing need to understand, trust, justify, and improve both the predictions and the prediction process. This, in turn, necessitates providing mechanisms to evaluate explainability methods as well as to measure their ability to fulfill their designated tasks. In this paper, we introduce a technique to extract the most important features from a data perspective. We propose metrics to quantify the ability of an explainability method to convey and communicate the underlying concepts available in the data. Furthermore, we evaluate the ability of an eXplainable Artificial Intelligence (XAI) method to reason about the reliance of a Machine Learning (ML) model on the extracted features. Through experiments, we further show that our approach enables differentiating explainability methods independently of the underlying experimental settings. The proposed metrics can be used to functionally evaluate the extent to which an explainability method is able to extract the patterns discovered by a machine learning model. Our approach provides a means to quantitatively differentiate global explainability methods in order to deepen user trust not only in the generated predictions but also in their explanations.
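    A functionality-grounded check in this spirit can be sketched as follows, using assumptions that are not taken from the paper: features are ranked by a purely data-driven score (mutual information), that ranking is compared with the ranking an explanation source produces, and the features the explanation calls most important are eliminated to verify that model accuracy drops. The dataset, the models, and the use of impurity-based importances as a stand-in for an XAI method are illustrative only.

```python
# Illustrative functionality-grounded evaluation: data-level ranking vs. explanation
# ranking, plus an accuracy-drop check after eliminating top-ranked features.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=800, n_features=8, n_informative=4, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

data_scores = mutual_info_classif(X_tr, y_tr, random_state=1)    # data perspective
model = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
xai_scores = model.feature_importances_                          # explanation proxy

# Agreement between the data-level and explanation-level rankings.
rho, _ = spearmanr(data_scores, xai_scores)
print(f"rank correlation (data vs. explanation): {rho:.2f}")

# Eliminate the k features the explanation calls most important; accuracy should drop.
k = 3
top = set(np.argsort(xai_scores)[-k:])
keep = [i for i in range(X.shape[1]) if i not in top]
full_acc = model.score(X_te, y_te)
reduced = GradientBoostingClassifier(random_state=1).fit(X_tr[:, keep], y_tr)
print(f"accuracy: full={full_acc:.3f}, after elimination={reduced.score(X_te[:, keep], y_te):.3f}")
```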

    Explainability of Predictive Process Monitoring Results: Can You See My Data Issues?

    Predictive process monitoring (PPM) has been discussed as a use case of process mining for several years. PPM enables foreseeing the future of an ongoing business process by predicting, for example, relevant information on the way in which running processes terminate or on related process performance indicators. A large share of PPM approaches adopt Machine Learning (ML), taking advantage of the accuracy and precision of ML models. Consequently, PPM inherits the challenges of traditional ML approaches. One of these challenges concerns the need to gain user trust in the generated predictions. This issue is addressed by explainable artificial intelligence (XAI). However, in addition to ML characteristics, the choices made and the techniques applied in the context of PPM influence the resulting explanations. This necessitates a study of the effects that different choices made in the context of a PPM task have on the explainability of the generated predictions. In order to address this gap, we systematically investigate the effects of different PPM settings on the data fed into an ML model and subsequently into the employed XAI method. We study how differences between the resulting explanations indicate several issues in the underlying data. Examples of these issues include collinearity and high dimensionality of the input data. We construct a framework for performing a series of experiments to examine different choices of PPM dimensions (i.e., event logs, preprocessing configurations, and ML models), integrating XAI as a fundamental component. In addition to agreements, the experiments highlight several inconsistencies between data characteristics and important predictors used by the ML model on the one hand, and explanations of the predictions of the investigated ML model on the other.
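    One of the data issues named above, collinearity, can be surfaced with a simple check before explanations are interpreted: near-duplicate encoded features allow attribution scores to be shared between them almost arbitrarily, so disagreement between XAI methods on such features is to be expected. The sketch below is illustrative; the feature names and the 0.9 correlation threshold are assumptions, not part of the paper's framework.

```python
# Flag highly correlated (collinear) feature pairs in a hypothetical encoded event log.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 300
elapsed_hours = rng.random(n) * 48
df = pd.DataFrame({
    "elapsed_hours": elapsed_hours,
    "elapsed_days": elapsed_hours / 24 + rng.normal(0, 0.01, n),  # near-duplicate feature
    "n_events": rng.integers(1, 20, n),
})

corr = df.corr().abs()
threshold = 0.9
pairs = [(a, b, round(corr.loc[a, b], 3))
         for i, a in enumerate(corr.columns)
         for b in corr.columns[i + 1:]
         if corr.loc[a, b] > threshold]
print("highly collinear feature pairs:", pairs)  # expect (elapsed_hours, elapsed_days)
```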

    Why should I trust your explanation? An evaluation approach for XAI methods applied to predictive process monitoring results

    As a use case of process mining, predictive process monitoring (PPM) aims to provide information on the future course of running business process instances. A large number of available PPM approaches adopt predictive models based on machine learning (ML). With the improved efficiency and accuracy of ML models usually being coupled with increasing complexity, their understandability becomes compromised. With the user at the center of attention, various eXplainable Artificial Intelligence (XAI) methods have emerged to provide users with explanations of the reasoning process of an ML model. While there is growing interest in applying XAI methods to PPM results, various proposals have been made to evaluate explanations according to different criteria. In this article, we propose an approach to quantitatively evaluate XAI methods concerning their ability to reflect the facts learned from the underlying stores of business-related data, i.e., event logs. Our approach includes procedures to extract features that are crucial for generating predictions. Moreover, it computes ratios that have proven to be useful in differentiating XAI methods. We conduct experiments that produce useful insights into the effects of the various choices made throughout a PPM workflow. We show that underlying data and model issues can be highlighted using the applied XAI methods. Furthermore, we can penalize and reward XAI methods for achieving certain levels of consistency with the facts learned about the underlying data. Our approach has been applied to different real-life event logs using different configurations of the PPM workflow.

    Impact Statement: As ML models are used to generate predictions for running business process instances, the outcomes of these models should be justifiable to users. To achieve this, explanation methods are applied on top of ML models. However, explanations need to be evaluated with respect to the valuable information they convey about the predictive model and their ability to encode underlying data facts. In other words, an explainability method should be evaluated concerning its ability to match model inputs to its outputs. Our approach provides a means to evaluate and compare explainability methods concerned with the global explainability of the entire reasoning process of an ML model. Based on experimental settings in which each step of the PPM workflow is changeable, we could study the ability of our approach to evaluate different combinations of data, preprocessing configurations, modeling, and explanation methods. This approach allows an understanding of which PPM workflow configurations increase the ability of an explanation method to make the prediction process transparent to users.
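    A toy version of the kind of agreement score this abstract alludes to, built on assumptions rather than the paper's actual ratio definitions: the overlap between the top-k features named by an XAI method and the top-k features according to a data-level statistic gives a crude consistency ratio that can be compared across XAI methods. The feature names below are hypothetical.

```python
# Crude consistency ratio between an explanation's top features and data-level top features.
def consistency_ratio(xai_top_features, data_top_features):
    """Fraction of the explanation's top features that the data also ranks highly."""
    xai, data = set(xai_top_features), set(data_top_features)
    return len(xai & data) / len(xai) if xai else 0.0

# Hypothetical top-5 lists for one event log / model / XAI-method combination.
xai_top = ["elapsed_time", "activity_validate", "resource_R12", "amount", "weekday"]
data_top = ["elapsed_time", "amount", "activity_validate", "n_events", "channel"]

print(f"consistency ratio: {consistency_ratio(xai_top, data_top):.2f}")  # 0.60
```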
