
    Incorporating spatial context into remaining-time predictive process monitoring

    Predictive business process monitoring aims to accurately predict a variable of interest (e.g. remaining time) or the future state of the process instance (e.g. outcome or next step). It is an important topic from both a research and a practitioner perspective. For example, existing research suggests that even when problems occur with service provision, providing accurate estimates around process completion time is positively correlated with increasing customer satisfaction. The quest for models with higher predictive power has led to the development of a variety of novel techniques. However, though the location of events is a crucial explanatory variable in many business processes, as yet there have been no studies which have incorporated spatial context into the predictive process monitoring framework. This paper seeks to address this problem by introducing the concept of a spatial event log which records location details at a trace or event level. The predictive utility of spatial contextual features is evaluated vis-à-vis other contextual features. An approach is proposed to predict the remaining time of an in-flight process instance by calculating the buffer distances between the locations of events in a spatial event log to capture spatial proximity and connectedness. These distances are subsequently utilised to construct a regression model which is then used to predict the remaining time for events in the test dataset. The proposed approach is benchmarked against existing approaches using five real-life event logs and demonstrates that spatial features improve the predictive power of business process monitoring models.
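
    A minimal sketch of the general idea (not the paper's exact method): distances from each event location to a set of reference locations are used alongside elapsed case time as regression features for remaining time. The synthetic data, the Euclidean stand-in for buffer distances, and the choice of regressor are all illustrative assumptions.

        # Illustrative sketch only; data and feature choices are assumptions, not the paper's setup.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def spatial_features(event_xy, site_xy):
            # Euclidean distance from each event location to each reference site,
            # a simple stand-in for the buffer distances described in the abstract.
            return np.linalg.norm(event_xy[:, None, :] - site_xy[None, :, :], axis=2)

        rng = np.random.default_rng(0)
        event_xy = rng.uniform(0, 10, size=(500, 2))     # synthetic event coordinates
        site_xy = rng.uniform(0, 10, size=(5, 2))        # synthetic reference locations
        elapsed = rng.uniform(0, 100, size=(500, 1))     # elapsed case time so far
        y = rng.uniform(0, 200, size=500)                # remaining time (synthetic target)

        X = np.hstack([elapsed, spatial_features(event_xy, site_xy)])
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:400], y[:400])
        print(model.predict(X[400:405]))                 # remaining-time estimates for held-out events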

    A Conceptual Framework to Predict Disease Progressions in Patients with Chronic Kidney Disease, Using Machine Learning and Process Mining

    Process Mining is a technique for analysing and mining existing process flows. Machine Learning, on the other hand, is a data science field and a sub-branch of Artificial Intelligence whose main purpose is to replicate human behavior through algorithms. The separate application of Process Mining and Machine Learning for healthcare purposes has been widely explored, with numerous published works discussing their use. However, their combined application is still a growing field with ongoing studies. This paper proposes a feasible framework in which Process Mining and Machine Learning can be used in combination within the healthcare environment.

    A Conceptual Framework to Predict Mental Health Patients' Zoning Classification.

    Zoning classification is a rating mechanism which uses a three-tier color coding to indicate perceived risk from patients' conditions. It is a widely adopted manual system used across mental health settings; however, it is time-consuming and costly. We propose to automate this classification by adopting a hybrid approach which combines Temporal Abstraction to capture the temporal relationship between symptoms and patients' behaviors, Natural Language Processing to quantify statistical information from patient notes, and Supervised Machine Learning models to make a final prediction of zoning classification for mental health patients.
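
    A toy sketch of how such a hybrid pipeline might be wired together, assuming TF-IDF features for the note text, two invented temporal summary counts, and logistic regression as the supervised model; none of these specific choices come from the paper.

        # Toy illustration; the notes, temporal counts, and model choice are assumptions.
        import numpy as np
        from scipy.sparse import csr_matrix, hstack
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        notes = ["patient agitated overnight", "settled, engaging well on the ward", "refusing medication again"]
        temporal = np.array([[3, 1], [0, 5], [2, 2]])    # invented summaries, e.g. incidents in last 24h / calm days in last week
        labels = ["red", "green", "amber"]               # three-tier zoning colours

        text_features = TfidfVectorizer().fit_transform(notes)    # statistical information from notes
        X = hstack([text_features, csr_matrix(temporal)])         # combine text and temporal features
        clf = LogisticRegression(max_iter=1000).fit(X, labels)
        print(clf.predict(X))                                      # predicted zoning classes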

    A decision support tool for health service re-design

    Many outpatient services are currently only available in hospitals; however, there are plans to provide some of these services alongside General Practitioners. Consequently, General Practitioners could soon be based at polyclinics. These changes have raised a number of concerns for Hounslow Primary Care Trust (PCT). For example, which of the outpatient services are to be shifted from the hospital to the polyclinic? What are the current and expected future demands for these services? To tackle some of these concerns, the first phase of this project explores the set of specialties that are frequently visited in a sequence (using sequential association rules). The second phase develops an Excel-based spreadsheet tool to compute the current and expected future demands for the selected specialties. From the sequential association rule algorithm, endocrinology and ophthalmology were found to be highly associated (i.e. frequently visited in a sequence), which means that these two specialties could easily be shifted from the hospital environment to the polyclinic. We illustrate the Excel-based spreadsheet tool for endocrinology and ophthalmology; however, the model is generic enough to cope with other specialties, provided that the data are available.
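
    A short sketch of the intuition behind the sequential association step, assuming a toy list of patient pathways and simple support/confidence counts over directly consecutive specialty visits; the project's actual algorithm and data may differ.

        # Minimal sketch; the pathways are invented and the rule mining is simplified
        # to counting directly consecutive specialty pairs.
        from collections import Counter

        pathways = [
            ["endocrinology", "ophthalmology", "cardiology"],
            ["endocrinology", "ophthalmology"],
            ["cardiology", "endocrinology", "ophthalmology"],
        ]

        pair_counts = Counter()
        first_counts = Counter()
        for path in pathways:
            for a, b in zip(path, path[1:]):
                pair_counts[(a, b)] += 1
                first_counts[a] += 1

        for (a, b), n in pair_counts.items():
            support = n / len(pathways)          # pair frequency relative to the number of pathways (toy support measure)
            confidence = n / first_counts[a]     # how often b follows a, given a occurs as a non-final visit
            print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")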

    Towards a threshold climate for emergency lower respiratory hospital admissions

    Identification of ‘cut-points’ or thresholds of climate factors would play a crucial role in alerting to the risks of climate change and providing guidance to policymakers. This study investigated a ‘Climate Threshold’ for emergency hospital admissions of chronic lower respiratory diseases by using a distributed lag non-linear model (DLNM). We analysed a unique longitudinal dataset (10 years, 2000–2009) on emergency hospital admissions, climate, and pollution factors for Greater London. Our study extends existing work on this topic by considering non-linearity and lag effects between climate factors and disease exposure within the DLNM, using B-splines as the smoothing technique. The final model also considered natural cubic splines of time since exposure and ‘day of the week’ as confounding factors. The results of the DLNM indicated a significant improvement in model fitting compared to a standard GLM. The final model identified thresholds for several climate factors, including: high temperature (≥27 °C), low relative humidity (≤40%), high PM10 level (≥70 µg/m³), low wind speed (≤2 knots) and high rainfall (≥30 mm). Beyond these threshold values, a significantly higher number of emergency admissions due to lower respiratory problems would be expected within the following 2–3 days after the climate shift in Greater London. The approach will be useful to initiate ‘region and disease specific’ climate mitigation plans. It will help identify spatial hot spots and the most sensitive areas and populations affected by climate change, and will eventually lead towards a diversified health warning system tailored to specific climate zones and populations.
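
    A much-simplified illustration of exploring non-linear and lagged climate effects, assuming synthetic data, a B-spline term for temperature, and plain lagged covariates in a Poisson GLM; this is not the paper's full DLNM specification, which was fitted with dedicated distributed-lag methods.

        # Simplified, assumption-laden sketch; not the paper's DLNM.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 1000
        df = pd.DataFrame({
            "temp": rng.normal(15, 8, n),          # synthetic daily mean temperature
            "admissions": rng.poisson(5, n),       # synthetic daily admission counts
        })
        for lag in (1, 2, 3):                      # lagged exposure over the previous 1-3 days
            df[f"temp_lag{lag}"] = df["temp"].shift(lag)
        df = df.dropna()

        # Poisson GLM with a B-spline in temperature plus simple lag terms
        model = smf.glm(
            "admissions ~ bs(temp, df=4) + temp_lag1 + temp_lag2 + temp_lag3",
            data=df,
            family=sm.families.Poisson(),
        ).fit()
        print(model.summary())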

    A structured review of long-term care demand modelling

    Long-term care (LTC) represents a significant and substantial proportion of healthcare spending across the globe. Its main aim is to assist individuals suffering from one or more chronic illnesses, disabilities or cognitive impairments to carry out activities associated with daily living. Shifts in several economic, demographic and social factors have raised concerns surrounding the sustainability of current systems of LTC. Substantial effort has been put into modelling the LTC demand process itself so as to increase understanding of the factors driving demand for LTC and its related services. Furthermore, such modelling efforts have also been used to plan the operation and future composition of the LTC system itself. The main aim of this paper is to provide a structured review of the literature surrounding LTC demand modelling and its industrial applications, whilst highlighting potential directions for future research.

    Development and Optimization of a Machine-Learning Prediction Model for Acute Desquamation After Breast Radiation Therapy in the Multicenter REQUITE Cohort.

    Some patients with breast cancer treated by surgery and radiation therapy experience clinically significant toxicity, which may adversely affect cosmesis and quality of life. There is a paucity of validated clinical prediction models for radiation toxicity. We used machine learning (ML) algorithms to develop and optimise a clinical prediction model for acute breast desquamation after whole breast external beam radiation therapy in the prospective multicenter REQUITE cohort study. Using demographic and treatment-related features (m = 122) from patients (n = 2058) at 26 centers, we trained 8 ML algorithms with 10-fold cross-validation in a 50:50 random-split data set with class stratification to predict acute breast desquamation. Based on performance in the validation data set, the logistic model tree, random forest, and naïve Bayes models were taken forward to cost-sensitive learning optimisation. One hundred and ninety-two patients experienced acute desquamation. Resampling and cost-sensitive learning optimisation facilitated an improvement in classification performance. Based on maximising sensitivity (true positives), the "hero" model was the cost-sensitive random forest algorithm with a false-negative:false-positive misclassification penalty of 90:1 containing m = 114 predictive features. Model sensitivity and specificity were 0.77 and 0.66, respectively, with an area under the curve of 0.77 in the validation cohort. ML algorithms with resampling and cost-sensitive learning generated clinically valid prediction models for acute desquamation using patient demographic and treatment features. Further external validation and inclusion of genomic markers in ML prediction models are worthwhile, to identify patients at increased risk of toxicity who may benefit from supportive intervention or even a change in treatment plan. [Abstract copyright: © 2022 The Authors.]
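
    A hedged sketch of the general cost-sensitive pattern, assuming synthetic data and scikit-learn's class_weight as a simple proxy for the study's cost-sensitive learning; the 90:1 ratio echoes the reported penalty, but nothing else here reproduces the REQUITE model.

        # Sketch with synthetic data; class_weight stands in for a
        # false-negative:false-positive misclassification cost of 90:1.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import recall_score, roc_auc_score
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)

        clf = RandomForestClassifier(
            n_estimators=500,
            class_weight={0: 1, 1: 90},            # penalise missed positives far more heavily
            random_state=0,
        ).fit(X_tr, y_tr)

        print("sensitivity:", recall_score(y_va, clf.predict(X_va)))
        print("AUC:", roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1]))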

    An Exploration of Ethical Decision Making with Intelligence Augmentation

    In recent years, the use of Artificial Intelligence agents to augment and enhance the operational decision making of human agents has increased. This has delivered real benefits in terms of improved service quality, delivery of more personalised services, reduction in processing time, and more efficient allocation of resources, amongst others. However, it has also raised issues which have real-world ethical implications, such as recommending different credit outcomes for individuals who have an identical financial profile but different characteristics (e.g., gender, race). The popular press has highlighted several high-profile cases of algorithmic discrimination, and the issue has gained traction. While both the fields of ethical decision making and Explainable AI (XAI) have been extensively researched, as yet we are not aware of any studies which have examined the process of ethical decision making with Intelligence Augmentation (IA). We aim to address that gap with this study. We amalgamate the literature in both fields of research and propose, but do not attempt to validate empirically, propositions and belief statements based on the synthesis of the existing literature, observation, logic, and empirical analogy. We aim to test these propositions in future studies.