46 research outputs found

    Dynamically weighted evolutionary ordinal neural network for solving an imbalanced liver transplantation problem

    Objective Create an efficient decision-support model to assist medical experts in the process of organ allocation in liver transplantation. The mathematical model proposed here uses different sources of information to predict the probability of organ survival at different thresholds for each donor–recipient pair considered. Currently, this decision is mainly based on the Model for End-stage Liver Disease, which depends only on the severity of the recipient's condition and ignores donor–recipient compatibility. We therefore propose to use information concerning the donor, the recipient and the surgery, with the objective of allocating the organ correctly. Methods and materials The database consists of information concerning transplants conducted in 7 different Spanish hospitals and King's College Hospital (United Kingdom). The state of the patients is followed up for 12 months. We propose to treat the problem as an ordinal classification one, where we predict organ survival at different thresholds: less than 15 days, between 15 and 90 days, between 90 and 365 days, and more than 365 days. This discretization is intended to produce finer-grained survival information (compared with the common binary approach). However, it results in a highly imbalanced dataset in which more than 85% of cases belong to the last class. To address this, we combine two approaches: a cost-sensitive evolutionary ordinal artificial neural network (ANN), in which we propose to incorporate dynamic weights to place greater emphasis on the worst-classified classes, and an ordinal over-sampling technique, which adds virtual patterns to the minority classes and thus alleviates the imbalanced nature of the dataset. Results The results obtained by our proposal are promising and satisfactory, considering the overall accuracy, the ordering of the classes and the sensitivity of minority classes.
    In this sense, both the dynamic costs and the over-sampling technique improve the base results of the considered ANN-based method. Our model also achieves competitive results when compared with other state-of-the-art techniques in ordinal classification: we were able to correctly predict more than 73% of the transplantation results, with a geometric mean of the sensitivities of 31.46%, which is much higher than the one obtained by other models. Conclusions The combination of the proposed cost-sensitive evolutionary algorithm together with the application of an over-sampling technique improves the predictive capability of our model in a significant way (especially for minority classes), which can help surgeons make more informed decisions about the most appropriate recipient for a specific donor organ, in order to maximize the probability of survival after transplantation and thereby uphold the fairness principle.
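
    The four-class discretization and the geometric mean of the sensitivities described above can be made concrete; a minimal sketch (function names are ours, not the paper's):

```python
import numpy as np

def survival_class(days):
    """Map graft survival in days to the four ordinal classes described above."""
    if days < 15:
        return 0   # less than 15 days
    if days < 90:
        return 1   # between 15 and 90 days
    if days < 365:
        return 2   # between 90 and 365 days
    return 3       # more than 365 days

def gm_sensitivity(y_true, y_pred):
    """Geometric mean of per-class sensitivities (recalls). It collapses to 0
    when any class is entirely misclassified, so a model cannot score well by
    ignoring the minority classes."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.prod(recalls) ** (1.0 / len(recalls)))
```

    A degenerate model that always predicts the majority class would reach high accuracy on this dataset (over 85% of cases fall in the last class) but a geometric mean of exactly 0, which is why the metric is emphasized above.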

    Ordinal prediction using machine learning methodologies: Applications

    Artificial Intelligence is part of our everyday life, not only as consumers but also in most productive areas, since companies can optimize most of their processes with the tools it provides. Machine learning has been especially useful in this implementation process, as it can be applied to most of the practical problems that arise in real life. Machine learning is the part of artificial intelligence that focuses on developing models able to learn a function that transforms input data into a desired output. One of the most important parts of machine learning is the model, and one of the most successful models in the state of the art is the artificial neural network (ANN). This is why the first challenge of this thesis is to study how to improve ANNs so that they can learn more complex problems without requiring computationally costly training algorithms. The next important step in improving a model's performance is to optimize the algorithms used to make it learn how to transform the inputs into the desired outputs; thus, the second challenge of this thesis is to optimize the computational cost of evolutionary algorithms, which are one of the best options for optimizing ANNs owing to their flexibility during training. Ordinal classification (also known as ordinal regression) is an area of machine learning that can be applied to many real-life problems, since it takes into account the order of the classes, an important fact in many such problems. In the area of social sciences, we will study which countries are helping the poorer ones the most, and we will then perform a deeper study to classify the level of globalisation of a country. These studies will be performed by applying the models and algorithms developed in the first stage of the thesis.
    After these first works, and continuing with the ordinal classification approaches, we focused on the area of medicine, where there are many examples of applications of these techniques; e.g., any disease that may progress is usually classified in different stages depending on its severity, from low to high. In our case, this thesis will study how a treatment (liver transplantation) can affect different patients (the survival time of the graft), and therefore decide which patient is the most appropriate for that specific treatment. The last chapter of the thesis will delve into ordinal classification to achieve ordinal prediction of time series. Time series have usually been processed with classical statistical techniques, since machine learning models focused on time series were too costly. However, with the arrival of powerful computation machines together with the evolution of models such as recurrent neural networks, classical statistical techniques can hardly compete with machine learning. In areas such as economics, social sciences, meteorology or medicine, time series are the main source of information, and they need to be correctly processed to be useful. The most common approach when dealing with time series is to learn from past values to predict future ones, and the works in this last chapter will focus on performing ordinal predictions of wind power ramp events (WPREs) in wind farms, creating novel models and methodologies. The thesis will conclude with a work that implements a deep neural network to predict WPREs in multiple wind farms at the same time; this model would therefore allow predicting WPREs over a broad area instead of at a specific geographical point.

    Artificial Intelligence and Liver Transplant: Predicting Survival of Individual Grafts

    The demand for liver transplantation far outstrips the supply of deceased donor organs, so listing and allocation decisions aim to maximize utility. Most existing approaches for predicting transplant outcomes use basic methods, such as regression modeling, but newer artificial intelligence (AI) techniques have the potential to improve predictive accuracy. The aim was to perform a systematic review of studies predicting graft outcomes following deceased donor liver transplantation using AI techniques and to compare these findings to linear regression and standard predictive modeling: donor risk index (DRI), Model for End-Stage Liver Disease (MELD), and Survival Outcome Following Liver Transplantation (SOFT). After reviewing the available article databases, a total of 52 articles were reviewed for inclusion. Of these, 9 met the inclusion criteria, reporting outcomes from 18,771 liver transplants. Artificial neural networks (ANNs) were the most commonly used methodology, reported in 7 studies. Only 2 studies directly compared machine learning (ML) techniques to liver scoring modalities (i.e., DRI, SOFT, and balance of risk [BAR]). Both studies showed better prediction of individual organ survival with the optimal ANN model: one reported an area under the receiver operating characteristic curve (AUROC) of 0.82, compared with BAR (0.62) and SOFT (0.57), and the other reported an AUROC of 0.84, compared with DRI (0.68) and SOFT (0.64). AI techniques can provide high accuracy in predicting graft survival based on donor and recipient variables. Compared with the standard techniques, AI methods are dynamic and can be trained and validated within each population. However, the high accuracy of AI may come at the cost of losing explainability (to patients and clinicians) on how the technology works.
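
    The AUROC figures quoted above summarize ranking quality: the probability that a randomly chosen positive case (e.g., a failed graft) receives a higher risk score than a randomly chosen negative one. A minimal sketch of that rank-based formulation (our own illustration, not code from the reviewed studies):

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, counting ties as half."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)
```

    An AUROC of 0.5 corresponds to random ranking, which puts scores such as SOFT (0.57) in the cited comparisons barely above chance.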

    An ordinal CNN approach for the assessment of neurological damage in Parkinson’s disease patients

    3D image scans are an assessment tool for neurological damage in Parkinson’s disease (PD) patients. This diagnosis process can be automated to help medical staff through Decision Support Systems (DSSs), and Convolutional Neural Networks (CNNs) are good candidates because they are effective when applied to spatial data. This paper proposes a 3D CNN ordinal model for assessing the level of neurological damage in PD patients. Given that CNNs need large datasets to achieve acceptable performance, a data augmentation method is adapted to work with spatial data. We consider the Ordinal Graph-based Oversampling via Shortest Paths (OGO-SP) method, which applies a gamma probability distribution for inter-class data generation. A modification of OGO-SP is proposed, the OGO-SP-β algorithm, which applies the beta distribution for generating synthetic samples in the inter-class region, a distribution better suited to this task than the gamma. The evaluation of the different methods is based on a novel 3D image dataset provided by the Hospital Universitario ‘Reina Sofía’ (Córdoba, Spain). We show how the ordinal methodology improves performance with respect to the nominal one, and how OGO-SP-β yields better performance than OGO-SP.
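
    The core idea of drawing the interpolation factor from a beta distribution can be sketched as follows (parameter values and function names are illustrative, not the paper's):

```python
import numpy as np

def interclass_sample(x_a, x_b, n=1, a=2.0, b=2.0, rng=None):
    """Generate n synthetic samples along the segment joining x_a (a sample of
    one class) and x_b (a neighbouring sample of the adjacent class). The
    interpolation factor is drawn from Beta(a, b), whose bounded [0, 1] support
    keeps every virtual pattern inside the inter-class segment."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(a, b, size=(n, 1))
    return (1.0 - lam) * np.asarray(x_a) + lam * np.asarray(x_b)
```

    One appeal of the beta distribution here is its bounded support: a beta draw always lies in [0, 1], so the synthetic pattern always stays between the two parent samples, whereas a gamma draw is unbounded above.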

    Validation of artificial neural networks as a methodology for donor-recipient matching in liver transplantation

    1. Introduction and motivation of the thesis. Liver transplantation is the best therapeutic option for a large number of end-stage liver diseases. Unfortunately, there is an imbalance between the number of candidates and the number of available donors, which leads to deaths and exclusions on the waiting list. In recent years, numerous efforts have been made to increase the donor pool and to optimize the prioritization of potential recipients on the list. Notable among these are the use of so-called extended criteria donors (ECD) and the adoption of a prioritization system based on a score reflecting the candidate's severity (MELD, Mayo Model for End Stage Liver Disease). Donor-recipient matching is a determining factor in the outcomes of liver transplantation, and multiple scores have been proposed in the literature for this purpose. However, none of them is considered optimal for making this pairing. In 2014, our group published on the utility of artificial neural networks (ANNs) as an optimal tool for donor-recipient matching in liver transplantation, in a national multicenter study that demonstrated the superiority of this model in predicting post-transplant survival. The objective of our study is to analyze whether neural networks behave, in a different health system, as they did in Spain, and whether they are superior to the models currently used for donor-recipient matching. 2. Content of the research. We collected 822 donor-recipient (D-R) pairs from liver transplants performed consecutively at King’s College Hospital, London, between 2002 and 2010, taking into account donor, recipient and transplant variables.
    For each pair, two probabilities were computed: the probability of survival (CCR) and the probability of graft loss (MS) at 3 months after transplantation. To this end, two different, non-complementary artificial neural network models were built: the acceptance model and the rejection model. Several models were constructed: 1) training and generalization with the D-R pairs from the British hospital (at 3 and 12 months post-transplant); 2) training with Spanish D-R pairs and generalization with the British ones; and 3) a combined model, trained and generalized with both Spanish and British pairs. In addition, a rule-based system was built to support decision making based on the results produced by the neural network. The models designed for King’s College Hospital showed excellent predictive capability at both 3 months (CCR-AUC = 0.9375; MS-AUC = 0.9374) and 12 months (CCR-AUC = 0.7833; MS-AUC = 0.8153), almost 15% higher than the best predictive capability obtained by other scores such as MELD or BAR (Balance of Risk). Moreover, these results improve on those previously published for the Spanish multicenter model. However, the predictive capability is not as good when the model is trained and generalized with D-R pairs from different health systems, nor in the combined model. 3. Conclusions. 1. The use of artificial neural networks for donor-recipient matching in liver transplantation has shown excellent capability to predict graft survival and non-survival when validated in the health system of another country; the artificial intelligence methodology is therefore clearly validated as an optimal tool for D-R matching. 2.
    Our results support liver transplantation teams in considering artificial neural networks as the most exhaustive and objective method described to date for managing the liver transplantation waiting list, avoiding subjective and arbitrary criteria and maximizing the principles of equity, utility and efficiency. 3. Our validation model, i.e., the ANN generated with D-R pairs from King’s College Hospital, London, achieved the highest predictive capability, outperforming the remaining models and supporting the view that each ANN must be trained, tested and optimized for a specific purpose in a single population. Thus, each liver transplantation program should have its own model, built with its own data, to support the D-R matching decision. 4. The D-R allocation model generated by the ANNs combines the best of the MELD system with the overall survival benefit, using a rule-based system and maximizing the utility of the available grafts. This makes them complementary systems serving the same goal, rather than competitors.
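
    The two per-pair outputs (CCR and MS) feed a rule-based decision step. The published rule system is not reproduced here; the following is only an illustrative sketch of how two non-complementary probabilities might be combined to order candidate pairs:

```python
def rank_pairs(pairs):
    """Illustrative only -- NOT the published rule system. Order donor-recipient
    pairs by the margin between the acceptance model's 3-month survival
    probability (CCR) and the rejection model's graft-loss probability (MS)."""
    return sorted(pairs, key=lambda p: p["CCR"] - p["MS"], reverse=True)
```

    Because the two models are trained independently, a rule layer of this kind is what reconciles their outputs into a single allocation decision.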

    Error-Correcting Output Codes in the Framework of Deep Ordinal Classification

    Automatic classification tasks on structured data have been revolutionized by Convolutional Neural Networks (CNNs), but the focus has been on binary and nominal classification tasks. Only recently has ordinal classification (where class labels present a natural ordering) been tackled through the framework of CNNs. Moreover, ordinal classification datasets commonly present a high imbalance in the number of samples of each class, making the problem even harder. Focus should be shifted from classic classification metrics towards per-class metrics (like AUC or sensitivity) and rank agreement metrics (like Cohen’s Kappa or Spearman’s rank correlation coefficient). We present a new CNN architecture based on the Ordinal Binary Decomposition (OBD) technique using Error-Correcting Output Codes (ECOC). We aim to show experimentally, using four different CNN architectures and two ordinal classification datasets, that the OBD+ECOC methodology significantly improves the mean results on the relevant ordinal and class-balancing metrics. The proposed method is able to outperform a nominal approach as well as existing ordinal approaches, achieving a mean performance of RMSE = 1.0797 for the Retinopathy dataset and RMSE = 1.1237 for the Adience dataset, averaged over the four architectures.
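
    The OBD+ECOC idea can be sketched independently of any CNN: each ordinal label is encoded as a cumulative binary code word, one network output is trained per threshold, and predictions are decoded to the class with the nearest ideal code word. A minimal sketch (our own, independent of the paper's architectures):

```python
import numpy as np

def obd_encode(y, n_classes):
    """Ordinal Binary Decomposition: class k maps to the cumulative code
    [1(k > 0), 1(k > 1), ..., 1(k > K-2)]; e.g. for K=4, class 2 -> [1, 1, 0]."""
    thresholds = np.arange(n_classes - 1)
    return (np.asarray(y)[:, None] > thresholds[None, :]).astype(float)

def ecoc_decode(outputs, n_classes):
    """Error-correcting decoding: pick the class whose ideal code word is
    closest (Euclidean distance) to each vector of network outputs, so an
    inconsistent output pattern still yields a valid ordinal class."""
    codes = obd_encode(np.arange(n_classes), n_classes)           # (K, K-1)
    dists = ((outputs[:, None, :] - codes[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)
```

    The error-correcting step matters precisely for inconsistent outputs such as [0.9, 0.1, 0.8], which match no cumulative code exactly but still decode to the nearest valid class.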

    Predicting graft failure in pediatric liver transplantation based on early biomarkers using machine learning models

    The early detection of graft failure in pediatric liver transplantation is crucial for appropriate intervention. Graft failure is associated with numerous perioperative risk factors. This study aimed to develop an individualized predictive model for 90-day graft failure in pediatric liver transplantation using machine learning methods. We conducted a single-center retrospective cohort study. A total of 87 liver transplantation cases performed in patients aged < 12 years at the Severance Hospital between January 2010 and September 2020 were included as data samples. Preoperative conditions of recipients and donors, intraoperative care, postoperative serial laboratory parameters, and events observed within seven days of surgery were collected as features. A least absolute shrinkage and selection operator (LASSO)-based method was used for feature selection to overcome the high dimensionality and collinearity of the variables. Among 146 features, four variables were selected: preoperative hepatic encephalopathy, sodium level at the end of surgery, hepatic artery thrombosis, and total bilirubin level on postoperative day 7. These features were selected from different time points and represent distinct clinical aspects. The model with logistic regression demonstrated the best prediction performance among the machine learning methods tested (area under the receiver operating characteristic curve (AUROC) = 0.898 and area under the precision-recall curve (AUPR) = 0.882). The risk scoring system developed on the basis of the logistic regression model showed an AUROC of 0.910 and an AUPR of 0.830. Overall, the prediction of graft failure in pediatric liver transplantation using the proposed machine learning model exhibited superior discrimination power and can therefore provide valuable information to clinicians for decision making during the postoperative management of these patients.
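
    LASSO-based selection works because the L1 penalty drives the coefficients of uninformative features to exactly zero. A minimal proximal-gradient (ISTA) sketch of that mechanism (illustrative, not the study's implementation; data and hyperparameters are arbitrary):

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, lr=0.01, iters=2000):
    """LASSO via ISTA: alternate a least-squares gradient step with
    soft-thresholding, the proximal operator of the L1 penalty. Features whose
    coefficients end up exactly zero are dropped from the model."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = w - lr * X.T @ (X @ w - y) / len(y)                   # gradient of 0.5*MSE
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)    # soft-threshold
    return w
```

    This sparsity is what can reduce a panel of 146 collinear candidate features to a handful of predictors; in practice the penalty strength would be chosen by cross-validation rather than fixed as here.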

    “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocations

    Funding for open access publishing: Universidad de Granada/CBUA. This research is funded by the project “Detección y eliminación de sesgos en algoritmos de triaje y localización para la COVID-19” of the call Ayudas Fundación BBVA a Equipos de Investigación Científica SARS-CoV-2 y COVID-19, en el área de Humanidades. JR also thanks a La Caixa Foundation INPhINIT Retaining Fellowship (LCF/BQ/DR20/11790005). DR-A thanks the funding of the Spanish Research Agency (codes FFI2017-88913-P and PID2020-118729RB-I00). IPJ also thanks the funding of the Spanish Research Agency (code PID2019-105422GB-I00). The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended the view that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice, because it helps to maximize patients’ benefits and optimizes limited resources. However, we claim that the opaqueness of the algorithmic black box and its absence of explainability threaten core commitments of procedural fairness such as accountability, avoidance of bias, and transparency. To illustrate this, we discuss liver transplantation as a case of critical medical resources in which the lack of explainability in AI-based allocation algorithms is procedurally unfair.
    Finally, we provide a number of ethical recommendations to consider when using unexplainable algorithms in the distribution of health-related resources.