
    A Machine Learning Method for Predicting Liver Transplant Survival Outcomes

    For years, doctors have used the Model for End-stage Liver Disease (MELD) score to aid in allocating organs for liver transplantation (LT). A major issue with using the MELD score for allocation is that it does not accurately predict post-transplant survival. This research project investigates the use of machine learning (ML) methods to predict LT survival using the newer Scientific Registry of Transplant Recipients (SRTR) dataset. For this project, death and nonfatal graft failure were treated equally, as both result in the loss of a donated organ. The ML algorithms used were provided by the Weka and Orange software packages. Initial trials investigated a binary classification of patients based on whether they survived for three years post-transplant and primarily used a random forest algorithm. Later trials moved to a multi-class classification using random forest and other classifier algorithms. Initial results from the three-year binary classification seemed promising, but performance metrics failed to improve with continued work. All multi-class trials performed similarly across the various classifier algorithms. Unexpectedly, the class for 12-year survival showed a promising increase in its area under the receiver operating characteristic curve. The results of this project establish a baseline for future ML studies using the SRTR dataset and will hopefully spur further research into liver transplant survival prediction.
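As a rough illustration of the binary-classification setup described above, the sketch below trains a random forest on synthetic data and scores it with the area under the ROC curve. It uses scikit-learn rather than Weka or Orange, and the features and labels are invented stand-ins, not SRTR variables.

```python
# Hypothetical sketch of 3-year-survival binary classification with a random
# forest, mirroring the workflow described above. Features and labels are
# synthetic stand-ins, NOT the SRTR dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Stand-in recipient/donor features (hypothetical).
X = rng.normal(size=(n, 5))
# Synthetic label: 1 = survived 3 years post-transplant; death and nonfatal
# graft failure are collapsed into the same negative class, as in the study.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```

The same workflow extends to multi-class survival bins by replacing the binary target with interval labels and scoring with a one-vs-rest AUC.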

    Clustering, Prediction and Ordinal Classification for Time Series Using Machine Learning Techniques: Applications

    In recent years, a growing number of fields have improved their standard processes using machine learning (ML) techniques, mainly because the vast amount of data these processes generate is difficult for humans to process. The development of automatic methods to process and extract relevant information from these data is therefore of great necessity, given that such approaches could increase the economic benefit of enterprises or reduce the workload of some current jobs. Concretely, in this Thesis, ML approaches are applied to problems concerning time series data. A time series is a special kind of data in which the data points are collected chronologically. Time series are present in a wide variety of fields, from atmospheric events to engineering applications. Depending on the main objective to be satisfied, different tasks have been applied to time series in the literature. This Thesis mainly focuses on some of them: clustering, classification, prediction and, in general, analysis. Generally, the amount of data to be processed is huge, giving rise to the need for methods able to reduce the dimensionality of time series without losing information. In this sense, applying time series segmentation procedures that divide the time series into different subsequences is a good option, given that each segment defines a specific behaviour. Once the different segments are obtained, characterising them with statistical features is an excellent way to retain the information in the time series while considerably reducing its dimensionality. In time series clustering, the objective is to find groups of similar time series with the idea of discovering interesting patterns in time series datasets. In this Thesis, we have developed a novel time series clustering technique.
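The segment-then-summarise idea described above can be sketched as follows. This is a minimal illustration, assuming fixed-length segments and four simple statistics per segment; the thesis's actual segmentation and feature set may differ.

```python
# Minimal sketch of the pipeline described above: segment each time series,
# summarise every segment with statistical features, then cluster the
# resulting low-dimensional vectors. Segment length and the chosen statistics
# are illustrative choices, not the thesis's exact configuration.
import numpy as np
from sklearn.cluster import KMeans

def segment_features(series, seg_len=50):
    """Split a series into segments and describe each with simple statistics."""
    segs = [series[i:i + seg_len]
            for i in range(0, len(series) - seg_len + 1, seg_len)]
    feats = [(s.mean(), s.std(), s.min(), s.max()) for s in segs]
    return np.ravel(feats)  # one compact feature vector per series

rng = np.random.default_rng(1)
# Two synthetic behaviours: flat noise vs. a rising trend.
flat = [rng.normal(0, 1, 200) for _ in range(10)]
trend = [np.linspace(0, 5, 200) + rng.normal(0, 1, 200) for _ in range(10)]
X = np.array([segment_features(s) for s in flat + trend])

# Each 200-point series is now a 16-dimensional vector, so clustering is cheap.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(X.shape, labels)
```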
    The aim of this proposal is twofold: to reduce the dimensionality as much as possible and to develop a time series clustering approach able to outperform current state-of-the-art techniques. For the first objective, the time series are segmented in order to identify their different behaviours. These segments are then projected into a vector of statistical features, reducing the dimensionality of the time series. Once this preprocessing step is done, the clustering of the time series is carried out with a significantly lower computational load. This novel approach has been tested on all the time series datasets available in the University of East Anglia and University of California Riverside (UEA/UCR) time series classification (TSC) repository. Regarding time series classification, two main paths can be distinguished. The first is nominal TSC, a well-known field involving a wide variety of proposals and transformations applied to time series. One of the most popular transformations is the shapelet transform (ST), which has been widely used in this field. The original method extracts shapelets from the original time series and uses them for classification purposes. Nevertheless, the full enumeration of all possible shapelets is very time-consuming. Therefore, in this Thesis, we have developed a hybrid method that starts with the best shapelets extracted using the original approach under a time constraint and then tunes these shapelets using a convolutional neural network (CNN) model. The second is time series ordinal classification (TSOC), an unexplored field that begins with this Thesis. We have adapted the original ST to the ordinal classification (OC) paradigm by proposing several shapelet quality measures that take advantage of the ordinal information in the time series. This methodology leads to better results than the state-of-the-art TSC techniques on ordinal time series datasets.
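The core shapelet idea behind the ST can be illustrated with a toy distance computation: each series is mapped to its minimum distance from a candidate shapelet, and series containing the pattern get small distances. This sketch does not implement the thesis's hybrid ST + CNN method or its ordinal quality measures.

```python
# Illustrative sketch of the shapelet transform (ST) idea: the distance
# between a series and a candidate shapelet is the minimum distance over all
# alignments, and that distance becomes one transformed feature. A toy
# version only, not the thesis's hybrid ST + CNN method.
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between the shapelet and any subsequence."""
    m = len(shapelet)
    return min(np.linalg.norm(series[i:i + m] - shapelet)
               for i in range(len(series) - m + 1))

rng = np.random.default_rng(2)
spike = np.array([0.0, 3.0, 0.0])            # candidate shapelet: a spike
with_spike = rng.normal(0, 0.1, 30)
with_spike[12:15] = spike                    # embed the pattern
without = rng.normal(0, 0.1, 30)

# Series containing the pattern map to a small distance, others to a large one.
d_pos = shapelet_distance(with_spike, spike)
d_neg = shapelet_distance(without, spike)
print(d_pos, d_neg)
```

A classifier then operates on these distances; a shapelet's quality is judged by how well its distances separate the classes, which is the step the thesis adapts to the ordinal setting.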
    All these proposals have been tested on all the time series datasets available in the UEA/UCR TSC repository. Time series prediction consists of estimating the next value or values of the time series from the previous ones. In this Thesis, several different approaches have been considered depending on the problem to be solved. Firstly, the prediction of low-visibility events produced by fog conditions is carried out by means of hybrid autoregressive models (ARs) that combine fixed-size and dynamic windows, adapting themselves to the dynamics of the time series. Secondly, the prediction of convective cloud formation (a highly imbalanced problem, given that convective cloud events are much rarer than non-convective situations) is performed in two completely different ways: 1) tackling the problem as a multi-objective classification task using multi-objective evolutionary artificial neural networks (MOEANNs), in which the two conflicting objectives are the accuracy of the minority class and the global accuracy, and 2) tackling the problem from the OC point of view, in which an oversampling approach is proposed along with the use of OC techniques in order to reduce the degree of imbalance. Thirdly, the prediction of solar radiation is carried out by means of evolutionary artificial neural networks (EANNs) with different combinations of basis functions in the hidden and output layers. Finally, the last challenging problem is the prediction of energy flux from waves and tides, for which a multitask EANN has been proposed to predict the energy flux at several prediction time horizons (from 6 h to 48 h). All these proposals and techniques have been corroborated and discussed in the light of physical and atmospheric models. The work developed in this Thesis is supported by 11 JCR-indexed papers in international journals (7 Q1, 3 Q2, 1 Q3), 11 papers in international conferences, and 4 papers in national conferences.

    Data Mining-based Survival Analysis and Simulation Modeling for Lung Transplant

    The objective of this research is to develop a decision support methodology for the lung transplant procedure by investigating the nationwide UNOS dataset via data mining-based survival analysis and simulation-based optimization. Traditional statistical techniques have various limitations that hinder the exploration of the information hidden in such voluminous data. The deployment of structural equation modeling integrated with decision trees provides a more effective matching between the donor organ and the recipient. This integration, preceded by powerful data mining models that determine which variables to include for survival analysis, is validated via simulation-based optimization. The suggested data mining-based survival analysis was superior to the conventional statistical methods in predicting lung graft survivability and in determining the critical variables to include in organ matching and allocation. The proposed matching index, derived via structural equation model-based decision trees, was validated as a more effective priority-ranking mechanism than the current lung allocation scoring system. This validation was established by a simulation-based optimization model: with the novel matching index, a substantial improvement was achieved in the survival rate while causing only a short delay in the average waiting time of candidate patients on the list. Furthermore, via response surface methodology-based simulation optimization, the optimal weighting scheme for the components of the novel matching index was determined by jointly optimizing the lung transplant performance measures, namely the justice principle in terms of waiting time and the utility principle in terms of survival rate.
    The study is unique in that it provides a means to integrate data mining modeling and simulation optimization with survival analysis, so that more of the useful information hidden in the large amount of data can be discovered. The developed methodology improves the modeling of the matching and allocation system in terms of both interpretability and predictability, which will be of great benefit to medical professionals.
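As a purely hypothetical illustration of a weighted matching index of the kind optimised here, the sketch below combines a utility component (predicted survival) and a justice component (waiting time) into one priority score. The weights, scaling, and inputs are invented; the study derives its index from structural equation model-based decision trees and tunes the weights via response surface optimisation.

```python
# Hedged illustration of a weighted matching index: a priority score that
# jointly weighs the utility principle (predicted survival) and the justice
# principle (waiting time). All weights, scales, and fields are hypothetical,
# not those derived by the study's response-surface optimisation.
def matching_index(pred_survival, waiting_days, w_utility=0.7, w_justice=0.3):
    """Combine normalised utility and justice components into one score."""
    utility = pred_survival                    # assumed already in [0, 1]
    justice = min(waiting_days / 365.0, 1.0)   # saturate after one year
    return w_utility * utility + w_justice * justice

# Candidates are ranked by the index, highest priority first.
candidates = [("A", 0.80, 30), ("B", 0.60, 400), ("C", 0.75, 200)]
ranked = sorted(candidates,
                key=lambda c: matching_index(c[1], c[2]), reverse=True)
print([name for name, *_ in ranked])
```

Note how candidate B, despite the lowest predicted survival, ranks first under these toy weights because of a long wait: the weighting scheme is exactly what the study optimises to balance the two principles.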

    Clinical Decision Support Systems for Palliative Care Referral: Design and Evaluation of Frailty and Mortality Predictive Models

    Palliative Care (PC) is specialized medical care that aims to improve the quality of life of patients with serious illnesses. Historically, it has been applied to terminally ill patients, especially those with oncologic diagnoses. However, current research suggests that PC positively affects the quality of life of patients with different conditions. The current trend in PC is to include non-oncological patients with conditions such as Chronic Obstructive Pulmonary Disease (COPD), organ function failure or dementia. However, identifying patients with those needs is complex, and alternative tools based on clinical data are therefore required. The growing demand for PC may benefit from a screening tool to identify patients with PC needs during hospital admission. Several tools, such as the Surprise Question (SQ) or the creation of different indexes and scores, have been proposed with varying degrees of success. Recently, the use of artificial intelligence algorithms, specifically Machine Learning (ML), has arisen as a potential solution given their capacity to learn from Electronic Health Records (EHRs) and the expectation that they will provide accurate predictions for admission to PC programs. This thesis focuses on creating ML-based digital tools for identifying patients with palliative care needs at hospital admission. We have used mortality and frailty as the two clinical criteria for decision-making, with short survival and increased frailty as our prediction targets. We have also focused on implementing these tools in clinical settings and studying their usability and acceptance in clinical workflows.
    To accomplish these objectives, first, we studied and compared ML algorithms for one-year survival in adult patients during hospital admission. To do so, we defined a binary variable to predict, equivalent to the SQ, and defined the set of predictive variables based on the literature. We compared models based on Support Vector Machine (SVM), k-Nearest Neighbours (kNN), Random Forest (RF), Gradient Boosting Machine (GBM) and Multilayer Perceptron (MLP), attending to their performance, especially the Area under the ROC curve (AUC ROC). Additionally, we obtained information on the importance of the variables for tree-based models using the GINI criterion. Second, we studied the measurement of frailty and Quality of Life (QoL) in candidates for PC intervention. For this second study, we narrowed the population to elderly patients (≥ 65 years) as the target group. We then created three different models: 1) an adaptation of the one-year mortality model for elderly patients, 2) a regression model estimating the number of days from admission to death, complementing the results of the first model, and 3) a predictive model for frailty status at one year. These models were shared with the academic community through a web application that allows data input and shows the predictions from the three models together with graphs of variable importance. Third, we proposed a version of the one-year mortality model in the form of an online calculator. This version was designed to maximize access for professionals by minimizing data requirements and making the software responsive on current technological platforms. We therefore eliminated the administrative variables specific to the dataset source and worked on a process to minimize the required input variables while maintaining a high AUC ROC. As a result, this model retained most of the predictive power and required only seven bedside inputs.
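A minimal sketch of the comparison step, assuming scikit-learn and synthetic data in place of the EHR variables: several classifiers are scored by AUC ROC on the same split, and Gini importances are read from the tree ensemble.

```python
# Sketch of the model-comparison step described above: train several
# classifiers on one split, compare them by AUC ROC, and inspect Gini-based
# feature importances from the tree ensemble. Data are synthetic, NOT the
# EHR variables used in the thesis.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "kNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(random_state=0),
    "GBM": GradientBoostingClassifier(random_state=0),
}
aucs = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        for name, m in models.items()}
print(aucs)

# Gini importances from the fitted random forest (the tree-based criterion
# mentioned above); they sum to 1 by construction.
importances = models["RF"].feature_importances_
print(importances.round(3))
```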
    Finally, we evaluated the Clinical Decision Support System (CDSS) web tool on PC with an actual set of users. This evaluation comprised three domains: evaluation of participants' predictions against the ML baseline, the usability of the graphical interface, and user experience measurement. A first evaluation was performed, followed by a period of implementation of improvements and corrections to the platform. Blanes Selva, V. (2022). Clinical Decision Support Systems for Palliative Care Referral: Design and Evaluation of Frailty and Mortality Predictive Models [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/19099

    Persistence in complex systems

    Persistence is an important characteristic of many complex systems in nature, related to how long the system remains in a certain state before changing to a different one. The study of persistence in complex systems involves different definitions and techniques, depending on whether short-term or long-term persistence is considered. In this paper we discuss the most important definitions, concepts, methods, literature and latest results on persistence in complex systems. Firstly, the most used definitions of persistence in the short-term and long-term cases are presented. The most relevant methods to characterize persistence are then discussed in both cases. A complete literature review is also carried out. We also present and discuss some relevant results on persistence, and give empirical evidence of performance in different detailed case studies, for both short-term and long-term persistence. A perspective on the future of persistence concludes the work. This research has been partially supported by the project PID2020-115454GB-C21 of the Spanish Ministry of Science and Innovation (MICINN). This research has also been partially supported by Comunidad de Madrid, PROMINT-CM project (grant ref: P2018/EMT-4366). J. Del Ser would like to thank the Basque Government for its funding support through the EMAITEK and ELKARTEK programs (3KIA project, KK-2020/00049), as well as the consolidated research group MATHMODE (ref. T1294-19). GCV work is supported by the European Research Council (ERC) under the ERC-CoG-2014 SEDAL Consolidator grant (grant agreement 647423).
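As a small worked example of one standard short-term persistence measure in this area, the sketch below computes the lag-1 autocorrelation of a white-noise series and of a persistent AR(1) series. This illustrates the general concept only; it is not a method taken from the paper.

```python
# Minimal, hedged example of quantifying short-term persistence: the lag-1
# autocorrelation of a series, which is high when the system tends to stay
# near its current state. Two synthetic series: white noise (no persistence)
# and an AR(1) process (persistent dynamics).
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation: a simple short-term persistence measure."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(3)
white = rng.normal(size=5000)          # no persistence: autocorr near 0
ar1 = np.empty(5000)                   # AR(1) with coefficient 0.9
ar1[0] = 0.0
for t in range(1, 5000):
    ar1[t] = 0.9 * ar1[t - 1] + rng.normal()

print(round(lag1_autocorr(white), 2), round(lag1_autocorr(ar1), 2))
```

Long-term persistence is instead characterised by slowly decaying correlations over many lags, for which scaling-based estimators (e.g. the Hurst exponent) are the usual tools.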

    Vincristine-Induced Peripheral Neuropathy: Assessing Preventable Strategies in Paediatric Acute Lymphoblastic Leukaemia

    Background: Acute Lymphoblastic Leukaemia is the most common cancer in children, with overall survival rates now exceeding 90%. However, most children will experience vincristine-induced peripheral neuropathy (VIPN) during treatment, resulting in sensory-motor abnormalities. To date, there are no approved preventative therapeutics or mitigation strategies for VIPN. This body of work set out to: (1) establish a high-throughput, high-content assay with the capacity to identify neuroprotective compounds, (2) test the feasibility of repurposing olesoxime as a neuroprotectant, and (3) compare traditional statistical methods with machine learning models to identify patients at risk of VIPN. Methods: (1) In vitro neuronal cultures were exposed to vincristine to recapitulate the VIPN phenotype, and olesoxime was assessed as a positive control. The neurotoxicity assay was miniaturised in 384-well microplates with automation steps to reduce manual handling. (2) Olesoxime and vincristine were applied to proliferating malignant cell lines to ensure the efficacy of vincristine was maintained. (3) Machine learning algorithms were developed using data from a local retrospective cohort to predict VIPN. Results: (1) Neurite length was reduced by vincristine in a dose-responsive manner. Assay miniaturisation and automation steps helped facilitate a high-throughput workflow, and an optimised multiplexed dye solution enabled image acquisition and neurite quantification. Further, olesoxime was found to protect neurites and was deemed suitable as a positive control. (2) Cell viability assays confirmed olesoxime did not interfere with vincristine efficacy in leukaemia cells. (3) Machine learning algorithms showed equivalency to traditional univariate analysis. The severe class imbalance observed meant that patients who were least susceptible to VIPN could be identified.
    Conclusions: This body of work demonstrates the successful development of a neurotoxicity assay suitable for neuroprotectant drug discovery. Olesoxime was found suitable as a positive control in the assay. Further, viability studies indicated that vincristine retains its efficacy in the presence of olesoxime, opening the possibility of its use as an adjunctive therapy. Finally, this work developed machine learning models with the capacity to identify patients with VIPN-free survival. This model may therefore be used to stratify patients prospectively in the clinic based on favourable clinical features.
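The class-imbalance point above can be illustrated with a hedged sketch: on synthetic imbalanced data, minority-class recall is compared for a plain classifier and one trained with balanced class weights. The data and model are stand-ins for the retrospective cohort and the thesis's actual algorithms.

```python
# Hedged sketch of the class-imbalance issue noted above: with a rare class,
# a naive model can score high accuracy while missing the minority cases.
# Balanced class weights are one standard mitigation. Synthetic stand-in
# data, NOT the retrospective VIPN cohort.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],  # 10% minority
                           n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
balanced = LogisticRegression(max_iter=1000,
                              class_weight="balanced").fit(X_tr, y_tr)

# Recall on the rare (positive) class is the number that matters here.
r_plain = recall_score(y_te, plain.predict(X_te))
r_bal = recall_score(y_te, balanced.predict(X_te))
print(f"minority recall: plain={r_plain:.2f}, balanced={r_bal:.2f}")
```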