
    Artificial Intelligence and Liver Transplant: Predicting Survival of Individual Grafts

    The demand for liver transplantation far outstrips the supply of deceased donor organs, so listing and allocation decisions aim to maximize utility. Most existing methods for predicting transplant outcomes use basic methods, such as regression modeling, but newer artificial intelligence (AI) techniques have the potential to improve predictive accuracy. The aim was to perform a systematic review of studies predicting graft outcomes following deceased donor liver transplantation using AI techniques and to compare these findings to linear regression and standard predictive models: the donor risk index (DRI), the Model for End-Stage Liver Disease (MELD), and the Survival Outcome Following Liver Transplantation (SOFT) score. After reviewing the available article databases, a total of 52 articles were assessed for inclusion. Of these, 9 met the inclusion criteria, together reporting outcomes from 18,771 liver transplants. Artificial neural networks (ANNs) were the most commonly used methodology, reported in 7 studies. Only 2 studies directly compared machine learning (ML) techniques to liver scoring modalities (i.e., DRI, SOFT, and balance of risk [BAR]). Both studies showed better prediction of individual organ survival with the optimal ANN model: one reported an area under the receiver operating characteristic curve (AUROC) of 0.82, compared with BAR (0.62) and SOFT (0.57), and the other gave an AUROC of 0.84, compared with DRI (0.68) and SOFT (0.64). AI techniques can provide high accuracy in predicting graft survival based on donor and recipient variables. Compared with the standard techniques, AI methods are dynamic and can be trained and validated within every population. However, the high accuracy of AI may come at the cost of losing explainability (to patients and clinicians) of how the technology works.
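
    Since both comparisons turn on AUROC, a minimal sketch of how such a head-to-head is computed may be useful. Everything below is synthetic and hypothetical: the features, outcomes, and "clinical score" are stand-ins for the real donor/recipient variables and DRI/SOFT/BAR scores used in the reviewed studies.

```python
# Hedged sketch: comparing an ANN's graft-survival predictions against a
# clinical risk score via AUROC. All data is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))                      # hypothetical donor/recipient features
risk = X[:, 0] - 0.5 * X[:, 1]                       # hidden "true" risk
y = (risk + rng.normal(size=1000) > 0).astype(int)   # 1 = graft failure
clinical_score = risk + rng.normal(scale=2.0, size=1000)  # noisier baseline score

X_tr, X_te, y_tr, y_te, _, score_te = train_test_split(
    X, y, clinical_score, test_size=0.3, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)

# Higher AUROC means better ranking of failures above survivals.
print("ANN AUROC:  ", roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1]))
print("Score AUROC:", roc_auc_score(y_te, score_te))
```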

    Synthetic semi-supervised learning in imbalanced domains: Constructing a model for donor-recipient matching in liver transplantation

    Liver transplantation is a promising and widely accepted treatment for patients with terminal liver disease. However, transplantation is restricted by the lack of suitable donors, resulting in significant waiting-list deaths. This paper proposes a novel donor-recipient allocation system that uses machine learning to predict graft survival after transplantation, using a dataset of donor-recipient pairs from King's College Hospital (United Kingdom). The main novelty of the system is that it tackles the imbalanced nature of the dataset with semi-supervised learning, analysing its potential for obtaining more robust and equitable models in liver transplantation. We propose two sources of unsupervised data for this specific problem (recent transplants and virtual donor-recipient pairs) and two methods for using these data during model construction (a semi-supervised algorithm and a label propagation scheme). The virtual pairs and the label propagation method are shown to alleviate the imbalanced distribution. Our experiments show that the use of synthetic and real unsupervised information helps to improve and stabilise the performance of the model and leads to fairer decisions than the use of supervised data alone. Moreover, the best model is combined with the Model for End-Stage Liver Disease (MELD) score, currently the most popular allocation methodology worldwide. In this way, our decision-support system considers both the compatibility of the donor and the recipient (via our prediction system) and the severity of the recipient (via the MELD score), thereby supporting the principles of fairness and benefit.
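
    To make the virtual-pair and label-propagation ideas concrete, here is a minimal sketch under stated assumptions: the features, the donor/recipient recombination scheme, and scikit-learn's LabelPropagation are illustrative stand-ins, not the paper's actual pipeline.

```python
# Hedged sketch: augment a small labelled set of donor-recipient pairs with
# unlabelled "virtual" pairs and let label propagation pseudo-label them.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(1)
X_lab = rng.normal(size=(200, 8))              # real, labelled D-R pairs (synthetic here)
y_lab = (X_lab[:, 0] > 0.8).astype(int)        # imbalanced: few graft losses

# "Virtual" pairs: recombine donor halves and recipient halves of real pairs.
donors = X_lab[rng.permutation(200), :4]
recipients = X_lab[rng.permutation(200), 4:]
X_virtual = np.hstack([donors, recipients])

X = np.vstack([X_lab, X_virtual])
y = np.concatenate([y_lab, -np.ones(len(X_virtual), dtype=int)])  # -1 = unlabelled

lp = LabelPropagation(kernel="knn", n_neighbors=7)
lp.fit(X, y)
pseudo = lp.transduction_[len(X_lab):]         # pseudo-labels for the virtual pairs
print("virtual pairs labelled as graft loss:", int(pseudo.sum()))
```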

    Dynamically weighted evolutionary ordinal neural network for solving an imbalanced liver transplantation problem

    Objective: Create an efficient decision-support model to assist medical experts in the process of organ allocation in liver transplantation. The mathematical model proposed here uses different sources of information to predict the probability of organ survival at different thresholds for each donor-recipient pair considered. Currently, this decision is mainly based on the Model for End-stage Liver Disease, which depends only on the severity of the recipient and ignores donor-recipient compatibility. We therefore propose to use information concerning the donor, the recipient and the surgery, with the objective of allocating the organ correctly.

    Methods and materials: The database consists of information concerning transplants conducted in 7 different Spanish hospitals and King's College Hospital (United Kingdom). Patients are followed up for 12 months. We propose to treat the problem as an ordinal classification one, predicting organ survival at different thresholds: less than 15 days, between 15 and 90 days, between 90 and 365 days, and more than 365 days. This discretization is intended to produce finer-grained survival information (compared with the common binary approach). However, it results in a highly imbalanced dataset in which more than 85% of cases belong to the last class. To solve this, we combine two approaches: a cost-sensitive evolutionary ordinal artificial neural network (ANN), in which we propose to incorporate dynamic weights that place more emphasis on the worst-classified classes, and an ordinal over-sampling technique, which adds virtual patterns to the minority classes and thus alleviates the imbalanced nature of the dataset.

    Results: The results obtained by our proposal are promising and satisfactory, considering the overall accuracy, the ordering of the classes and the sensitivity of the minority classes. Both the dynamic costs and the over-sampling technique improve the base results of the ANN-based method considered. Our model is also competitive with other state-of-the-art techniques in ordinal classification: we were able to correctly predict more than 73% of the transplantation results, with a geometric mean of the sensitivities of 31.46%, which is much higher than that obtained by other models.

    Conclusions: The combination of the proposed cost-sensitive evolutionary algorithm with an over-sampling technique significantly improves the predictive capability of our model (especially for minority classes), which can help surgeons make more informed decisions about the most appropriate recipient for a specific donor organ, in order to maximize the probability of survival after transplantation and thereby support the fairness principle.
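
    The exact dynamic-weighting rule belongs to the paper's evolutionary algorithm; the sketch below is only an assumed, simplified version of the idea, weighting each class inversely to its current sensitivity so that the fitness function emphasises the worst-classified (minority) classes.

```python
# Hedged sketch of dynamic class weighting for a cost-sensitive fitness function.
import numpy as np

def dynamic_class_weights(y_true, y_pred, n_classes, eps=1e-6):
    """Weight each class inversely to its current sensitivity (recall)."""
    sens = np.array([
        np.mean(y_pred[y_true == c] == c) if np.any(y_true == c) else 0.0
        for c in range(n_classes)
    ])
    w = 1.0 / (sens + eps)
    return w / w.sum() * n_classes   # normalise so weights average to 1

def weighted_fitness(y_true, y_pred, weights):
    """Cost-sensitive accuracy used as an evolutionary fitness value."""
    return np.mean(weights[y_true] * (y_true == y_pred))

# Toy usage: the last class ("survived > 365 days") dominates, as in the paper.
y_true = np.array([3] * 85 + [0] * 5 + [1] * 5 + [2] * 5)
y_pred = np.full_like(y_true, 3)       # degenerate model predicting the majority
w = dynamic_class_weights(y_true, y_pred, 4)
print(w)                               # minority classes receive large weights
print(weighted_fitness(y_true, y_pred, w))
```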

    Validation of artificial neural networks as a methodology for donor-recipient matching in liver transplantation

    1. Introduction and motivation of the thesis. Liver transplantation is the best therapeutic option for a large number of end-stage liver diseases. Unfortunately, there is an imbalance between the number of candidates and the number of available donors, which leads to deaths and removals on the waiting list. In recent years, numerous efforts have been made to increase the donor pool and to optimize the prioritization of candidate recipients on the list, most notably the use of so-called extended criteria donors (ECD) and the adoption of a prioritization system based on a candidate-severity score (MELD, Mayo Model for End Stage Liver Disease). Donor-recipient matching is a determining factor in liver transplant outcomes, and multiple scores have been proposed in the literature for this purpose. However, none of them is considered optimal for performing this matching. In 2014, our group published the utility of artificial neural networks (ANNs) as an optimal tool for donor-recipient matching in liver transplantation, based on a national multicenter study that demonstrated the superiority of this model for predicting post-transplant survival. The objective of our study is to analyze whether neural networks behave as they did in Spain when applied in a different health system, and whether they are superior to the models currently used for donor-recipient matching.

    2. Content of the research. We collected 822 donor-recipient (D-R) pairs from consecutive liver transplants performed at King's College Hospital, London, between 2002 and 2010, taking into account donor, recipient and transplant variables. For each pair, two probabilities were calculated: the probability of survival (CCR) and the probability of graft loss (MS) at 3 months after transplantation. To this end, two different, non-complementary artificial neural network models were built: the acceptance model and the rejection model. Several models were constructed: 1) training and generalization with the D-R pairs from the British hospital (at 3 and 12 months post-transplant), 2) training with Spanish D-R pairs and generalization with the British ones, and 3) a combined model that trains and generalizes with both Spanish and British pairs. In addition, a rule-based system was built to support decision making based on the results obtained by the neural network. The models designed for King's College Hospital showed excellent predictive capability at both 3 months (CCR-AUC = 0.9375; MS-AUC = 0.9374) and 12 months (CCR-AUC = 0.7833; MS-AUC = 0.8153), almost 15% higher than the best predictive capability obtained by other scores such as MELD or BAR (Balance of Risk). Moreover, these results improve on those previously published for the Spanish multicenter model. However, this predictive capability is not as good when the model trains and generalizes with D-R pairs from different health systems, nor in the combined model.

    3. Conclusions. 1. The use of artificial neural networks for donor-recipient matching in liver transplantation has demonstrated excellent capabilities for predicting graft survival and non-survival when validated in a different health system in another country; the artificial intelligence methodology is therefore clearly validated as an optimal tool for D-R matching. 2. Our results support liver transplant teams considering artificial neural networks as the most exhaustive and objective method described to date for managing the liver transplant waiting list, avoiding subjective and arbitrary criteria and maximizing the principles of equity, utility and efficiency. 3. Our validation model, i.e., the ANN generated with D-R pairs from King's College Hospital, London, achieved the highest predictive capability, outperforming the other models and supporting the fact that each ANN must be trained, tested and optimized for a specific purpose in a single population. Thus, each liver transplant program should have its own model, built with its own data, to support the D-R matching decision. 4. The D-R matching model generated by the ANNs combines the best of the MELD system with the overall survival benefit by means of a rule-based system, maximizing the utility of the available grafts. This makes them complementary systems serving the same goal, rather than competing ones.
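
    The abstract describes a rule-based layer on top of the two ANN outputs without giving the rules themselves; the following is a hypothetical illustration of how the survival probability (CCR), the graft-loss probability (MS) and MELD severity could be combined to rank candidate recipients.

```python
# Hedged, hypothetical sketch: the thesis's actual rules are not published in
# the abstract. Rank candidates by net predicted benefit, breaking ties by MELD.
def rank_candidates(candidates):
    """candidates: list of dicts with keys 'id', 'ccr', 'ms', 'meld'."""
    # Higher survival probability and lower loss probability rank first;
    # among equivalent pairs, the sicker patient (higher MELD) is prioritised.
    return sorted(candidates, key=lambda c: (-(c["ccr"] - c["ms"]), -c["meld"]))

candidates = [
    {"id": "R1", "ccr": 0.91, "ms": 0.12, "meld": 18},
    {"id": "R2", "ccr": 0.88, "ms": 0.05, "meld": 31},
    {"id": "R3", "ccr": 0.91, "ms": 0.12, "meld": 25},
]
for c in rank_candidates(candidates):
    print(c["id"], round(c["ccr"] - c["ms"], 2), c["meld"])
```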

    Clustering, prediction and ordinal classification for time series using machine learning techniques: applications

    In recent years, a growing number of fields have been improving their standard processes by using machine learning (ML) techniques, mainly because the vast amount of data generated by these processes is difficult for humans to process. The development of automatic methods to process these data and extract relevant information is therefore of great necessity, given that such approaches could increase the economic benefit of enterprises or reduce the workload of some current jobs. Concretely, in this Thesis, ML approaches are applied to problems concerning time series data. A time series is a special kind of data in which the data points are collected chronologically. Time series are present in a wide variety of fields, from atmospheric events to engineering applications. Depending on the main objective to be satisfied, different tasks are applied to time series in the literature; this Thesis focuses mainly on clustering, classification, prediction and, in general, analysis. Generally, the amount of data to be processed is huge, giving rise to the need for methods able to reduce the dimensionality of time series without losing information. In this sense, applying time series segmentation procedures that divide the time series into different subsequences is a good option, given that each segment defines a specific behaviour. Once the different segments are obtained, characterising them with statistical features is an excellent way to preserve the information of the time series while considerably reducing its dimensionality.

    In time series clustering, the objective is to find groups of similar time series, with the idea of discovering interesting patterns in time series datasets. In this Thesis, we have developed a novel time series clustering technique with a twofold aim: to reduce the dimensionality as much as possible and to outperform current state-of-the-art techniques. For the first objective, the time series are segmented in order to identify different behaviours, and these segments are then projected into a vector of statistical features, reducing the dimensionality of the time series. Once this preprocessing step is done, the clustering of the time series is carried out with a significantly lower computational load. This novel approach has been tested on all the time series datasets available in the University of East Anglia and University of California Riverside (UEA/UCR) time series classification (TSC) repository.

    Regarding time series classification, two main paths can be differentiated. First, nominal TSC is a well-known field involving a wide variety of proposals and transformations applied to time series. One of the most popular transformations is the shapelet transform (ST), which has been widely used in this field. The original method extracts shapelets from the original time series and uses them for classification purposes. Nevertheless, the full enumeration of all possible shapelets is very time-consuming. Therefore, in this Thesis, we have developed a hybrid method that starts with the best shapelets extracted by the original approach under a time constraint and then tunes these shapelets with a convolutional neural network (CNN) model. Second, time series ordinal classification (TSOC) is an unexplored field beginning with this Thesis. We have adapted the original ST to the ordinal classification (OC) paradigm by proposing several shapelet quality measures that take advantage of the ordinal information of the time series. This methodology leads to better results than state-of-the-art TSC techniques on ordinal time series datasets. All these proposals have been tested on all the time series datasets available in the UEA/UCR TSC repository.

    Time series prediction consists of estimating the next value or values of a time series from the previous ones. In this Thesis, several approaches have been considered, depending on the problem to be solved. First, the prediction of low-visibility events produced by fog conditions is carried out by means of hybrid autoregressive models (ARs) combining fixed-size and dynamic windows, adapting to the dynamics of the time series. Second, the prediction of convective cloud formation (a highly imbalanced problem, given that convective cloud events are much rarer than non-convective situations) is performed in two completely different ways: 1) tackling the problem as a multi-objective classification task using multi-objective evolutionary artificial neural networks (MOEANNs), in which the two conflicting objectives are the accuracy of the minority class and the global accuracy, and 2) tackling the problem from the OC point of view, in which an over-sampling approach is proposed to reduce the degree of imbalance, along with the use of OC techniques. Third, the prediction of solar radiation is carried out by means of evolutionary artificial neural networks (EANNs) with different combinations of basis functions in the hidden and output layers. Finally, the last challenging problem is the prediction of energy flux from waves and tides, for which a multitask EANN is proposed to predict the energy flux at several prediction time horizons (from 6 h to 48 h). All these proposals and techniques have been corroborated and discussed in the light of physical and atmospheric models. The work developed in this Thesis is supported by 11 JCR-indexed papers in international journals (7 Q1, 3 Q2, 1 Q3), 11 papers in international conferences, and 4 papers in national conferences.
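
    As a concrete illustration of the segment-then-summarise idea used for clustering, here is a minimal sketch; the fixed-length windows and this particular feature set are assumptions made for brevity (the Thesis uses a more elaborate segmentation).

```python
# Hedged sketch: split each series into segments, describe each segment with a
# few statistics, and cluster the resulting low-dimensional feature vectors.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.cluster import KMeans

def series_to_features(ts, n_segments=4):
    feats = []
    for seg in np.array_split(ts, n_segments):
        feats += [seg.mean(), seg.std(), skew(seg), kurtosis(seg)]
    return np.array(feats)

rng = np.random.default_rng(2)
series = [rng.normal(loc=i % 2, size=300) for i in range(60)]  # two behaviours
X = np.vstack([series_to_features(ts) for ts in series])       # 60 x 16 features

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])   # clustering now runs on 16 features instead of 300 points
```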

    Ordinal prediction using machine learning methodologies: Applications

    Artificial intelligence is part of our everyday life, not only as consumers but also in most productive areas, since companies can optimize many of their processes with the tools it provides. One topic that has been especially useful in implementing artificial intelligence is machine learning, as it can be applied to most practical problems that appear in real life. Machine learning is the part of artificial intelligence that focuses on developing models able to learn a function that transforms input data into a desired output. One of the most important parts of machine learning is the model, and one of the most successful models in the state of the art is the artificial neural network (ANN). For its first challenge, this thesis therefore studies how to improve ANNs so that they can learn more complex problems without requiring computationally costly training algorithms. The next important step in improving a model's performance is to optimize the algorithms used to make it learn to transform the inputs into the desired outputs; the second challenge of this thesis is to reduce the computational cost of evolutionary algorithms, which are among the best options for optimizing ANNs due to their flexibility during training. Ordinal classification (also known as ordinal regression) is an area of machine learning applicable to many real-life problems, since it takes into account the order of the classes, an important fact in many such problems. In the area of social sciences, we study which countries are helping the poorer ones the most, and then perform a deeper study to classify the level of globalisation of a country. These studies apply the models and algorithms developed in the first stage of the thesis. Continuing with ordinal classification approaches, we then focus on the area of medicine, where there are many applications of these techniques; e.g., any disease that may progress is usually classified into different stages depending on its severity, from low to high. In our case, this thesis studies how a treatment (liver transplantation) can affect different patients (survival time of the graft), in order to decide which patient is the most appropriate for a specific graft. The last chapter of the thesis delves into ordinal classification to achieve ordinal prediction of time series. Time series have usually been processed with classical statistical techniques, since machine learning models focused on time series were too costly. However, with the arrival of powerful computing machines together with the evolution of models such as recurrent neural networks, classical statistical techniques can hardly compete with machine learning. In areas such as economics, social sciences, meteorology and medicine, time series are the main source of information, and they need to be correctly processed to be useful. The most common approach when dealing with time series is to learn from past values to predict future ones, and the works in this last chapter focus on ordinal prediction of wind power ramp events (WPREs) in wind farms, creating novel models and methodologies. The thesis concludes with a work that implements a deep neural network to predict WPREs in multiple wind farms at the same time; this model therefore allows predicting WPREs over a whole area rather than at a single geographical point.
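
    A standard device in ordinal classification, and a plausible building block for such models (the thesis's own architectures are more sophisticated), is the cumulative binary decomposition sketched below: an ordered target with classes 0 < 1 < 2 is decomposed into the binary problems "y > 0" and "y > 1", and class probabilities are recovered from the two binary classifiers.

```python
# Hedged sketch of cumulative binary decomposition for ordinal classification.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 5))
y = np.digitize(X[:, 0] + 0.3 * rng.normal(size=600), [-0.5, 0.8])  # classes 0,1,2

clfs = [LogisticRegression().fit(X, (y > k).astype(int)) for k in range(2)]
p_gt = np.column_stack([c.predict_proba(X)[:, 1] for c in clfs])    # P(y>0), P(y>1)

proba = np.column_stack([1 - p_gt[:, 0],           # P(y = 0)
                         p_gt[:, 0] - p_gt[:, 1],  # P(y = 1); clip at 0 if the
                                                   # two classifiers disagree
                         p_gt[:, 1]])              # P(y = 2)
print(proba[:3].round(2))
```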

    Multi objective genetic algorithm for training three term backpropagation network

    Multi-objective evolutionary algorithms have been applied to learning problems in artificial neural networks to improve generalization from the training data to unseen test data. This paper proposes a simultaneous optimization method for training a Three Term Back Propagation Network (TTBPN) using a multi-objective genetic algorithm. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is applied to optimize the TTBPN structure by simultaneously reducing the network error and the complexity (in terms of the number of hidden nodes), for better accuracy in classification problems. The methodology is applied to two multiclass datasets obtained from the University of California at Irvine repository. The results obtained for training and testing on these datasets show lower network error and better classification accuracy, as well as a simpler TTBPN architecture.
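
    The error-versus-complexity trade-off that NSGA-II explores can be illustrated without the full genetic machinery; the sketch below (an illustration, not the paper's method) trains networks of increasing width and keeps only the non-dominated (error, hidden nodes) pairs.

```python
# Hedged sketch of the bi-objective trade-off (error vs. hidden nodes), shown
# with a plain Pareto-front filter instead of the full NSGA-II algorithm.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

candidates = []
for h in (2, 4, 8, 16, 32):                    # network-complexity axis
    err = 1 - MLPClassifier(hidden_layer_sizes=(h,), max_iter=3000,
                            random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
    candidates.append((err, h))

def pareto_front(points):
    """Keep points not dominated on (error, complexity), both minimised."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

print(pareto_front(candidates))                # the non-dominated trade-offs
```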

    Heart Transplantation
