233 research outputs found

    Artificial Intelligence and Liver Transplant: Predicting Survival of Individual Grafts

    The demand for liver transplantation far outstrips the supply of deceased donor organs, so listing and allocation decisions aim to maximize utility. Most existing methods for predicting transplant outcomes use basic approaches, such as regression modeling, but newer artificial intelligence (AI) techniques have the potential to improve predictive accuracy. The aim was to perform a systematic review of studies predicting graft outcomes following deceased donor liver transplantation using AI techniques and to compare these findings with linear regression and standard predictive models: donor risk index (DRI), Model for End-Stage Liver Disease (MELD), and Survival Outcome Following Liver Transplantation (SOFT). After reviewing the available article databases, a total of 52 articles were assessed for inclusion. Of these, 9 met the inclusion criteria, together reporting outcomes from 18,771 liver transplants. Artificial neural networks (ANNs) were the most commonly used methodology, reported in 7 studies. Only 2 studies directly compared machine learning (ML) techniques with liver scoring modalities (i.e., DRI, SOFT, and balance of risk [BAR]). Both showed better prediction of individual organ survival with the optimal ANN model: one reported an area under the receiver operating characteristic curve (AUROC) of 0.82, compared with BAR (0.62) and SOFT (0.57); the other ANN model gave an AUROC of 0.84, compared with DRI (0.68) and SOFT (0.64). AI techniques can provide high accuracy in predicting graft survival based on donor and recipient variables. Compared with the standard techniques, AI methods are dynamic and can be trained and validated within every population. However, the high accuracy of AI may come at the cost of losing explainability (to patients and clinicians) of how the technology works.
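
    As a rough illustration of the AUROC comparison described above, the sketch below fits a small neural network to synthetic donor/recipient data and compares its discrimination against a crude single-variable score standing in for DRI/SOFT. All data, coefficients and variable names are invented for illustration, not taken from the reviewed studies.

```python
# Sketch: AUROC of an ANN vs. a simple one-variable risk score (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 8))                      # donor/recipient variables
logit = X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2]  # hidden risk signal
graft_failed = (logit + rng.normal(size=n) > 0).astype(int)
soft_like = X[:, 0]                              # crude score seeing part of the signal

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, graft_failed, soft_like, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
ann.fit(X_tr, y_tr)

print("ANN AUROC:   %.2f" % roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1]))
print("score AUROC: %.2f" % roc_auc_score(y_te, s_te))
```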

    Using machine learning methods to determine a typology of patients with HIV-HCV infection to be treated with antivirals

    Several European countries have established criteria for prioritising the initiation of treatment in patients infected with the hepatitis C virus (HCV) by grouping patients according to clinical characteristics. Based on neural network techniques, our objective was to identify those factors for HIV/HCV co-infected patients (to which clinicians have given careful consideration before treatment uptake) that have not been included among the prioritisation criteria. This study was based on the Spanish HERACLES cohort (NCT02511496) (April-September 2015, 2940 patients) and involved applying different neural network models with different basis functions (product-unit, sigmoid-unit and radial basis function neural networks) for automatic classification of patients for treatment. An evolutionary algorithm was used to determine the architecture and estimate the coefficients of the model. This machine learning methodology found that radial basis function neural networks provided a very simple model in terms of the number of patient characteristics to be considered by the classifier (in this case, six), returning a good overall classification accuracy of 0.767 and a minimum sensitivity (for the classification of the minority class, untreated patients) of 0.550. Finally, the area under the ROC curve was 0.802, which is exceptional. The parsimony of the model makes it especially attractive, as it uses just eight connections. The independent variable "recent PWID" (people who inject drugs) is always retained owing to its importance. The simplicity of the model makes it possible to analyse the relationship between patient characteristics and the probability of belonging to the treated group.
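
    The abstract reports both an overall accuracy and a "minimum sensitivity" for the minority (untreated) class. A minimal sketch of that metric, the worst per-class recall, computed on placeholder labels:

```python
# Sketch: minimum sensitivity = worst per-class recall, so a classifier
# cannot score well while ignoring the minority class. Labels are synthetic.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

def minimum_sensitivity(y_true, y_pred):
    """Smallest per-class recall over all classes."""
    cm = confusion_matrix(y_true, y_pred)
    return (cm.diagonal() / cm.sum(axis=1)).min()

y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 1])  # 1 = treated, 0 = untreated
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 1, 0, 1])

print("accuracy:            %.3f" % accuracy_score(y_true, y_pred))      # 0.800
print("minimum sensitivity: %.3f" % minimum_sensitivity(y_true, y_pred)) # 0.667
```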

    Clustering, prediction and ordinal classification for time series using machine learning techniques: applications

    In recent years, a growing number of fields have been improving their standard processes using machine learning (ML) techniques. The main reason is that the vast amount of data generated by these processes is difficult for humans to process, so the development of automatic methods to process these data and extract relevant information is a great necessity: such approaches can increase the economic benefit of enterprises or reduce the workload of some current jobs. Specifically, this Thesis applies ML approaches to problems involving time series data. A time series is a special kind of data in which the data points are collected chronologically. Time series appear in a wide variety of fields, from atmospheric events to engineering applications. Depending on the main objective to be satisfied, different tasks in the literature are applied to time series; this Thesis focuses mainly on clustering, classification, prediction and, in general, analysis. The amount of data to be processed is generally huge, giving rise to the need for methods able to reduce the dimensionality of time series without losing information. In this sense, applying time series segmentation procedures that divide the time series into different subsequences is a good option, given that each segment defines a specific behaviour. Once the different segments are obtained, characterising them by statistical features is an excellent way to retain the information of the time series while considerably reducing its dimensionality.
    In the case of time series clustering, the objective is to find groups of similar time series with the idea of discovering interesting patterns in time series datasets. In this Thesis, we have developed a novel time series clustering technique. The aim of this proposal is twofold: to reduce the dimensionality as much as possible, and to develop a time series clustering approach able to outperform current state-of-the-art techniques. For the first objective, the time series are segmented in order to identify different behaviours, and the segments are then projected into a vector of statistical features, reducing the dimensionality of the time series. Once this preprocessing step is done, the clustering of the time series is carried out with a significantly lower computational load. This novel approach has been tested on all the time series datasets available in the University of East Anglia and University of California Riverside (UEA/UCR) time series classification (TSC) repository.
    Regarding time series classification, two main paths can be distinguished. The first is nominal TSC, a well-known field involving a wide variety of proposals and transformations applied to time series. One of the most popular transformations is the shapelet transform (ST), which has been widely used in this field. The original method extracts shapelets from the original time series and uses them for classification purposes; nevertheless, the full enumeration of all possible shapelets is very time consuming. In this Thesis, we have therefore developed a hybrid method that starts with the best shapelets extracted with the original approach under a time constraint and then tunes these shapelets using a convolutional neural network (CNN) model. The second path, time series ordinal classification (TSOC), is an unexplored field that begins with this Thesis. Here we have adapted the original ST to the ordinal classification (OC) paradigm by proposing several shapelet quality measures that take advantage of the ordinal information of the time series. This methodology leads to better results than the state-of-the-art TSC techniques on ordinal time series datasets. All these proposals have been tested on all the time series datasets available in the UEA/UCR TSC repository.
    Time series prediction consists of estimating the next value or values of a time series from the previous ones. In this Thesis, several different approaches have been considered, depending on the problem to be solved. Firstly, the prediction of low-visibility events produced by fog conditions is carried out by means of hybrid autoregressive (AR) models combining fixed-size and dynamic windows, which adapt themselves to the dynamics of the time series. Secondly, the prediction of convective cloud formation (a highly imbalanced problem, given that convective cloud events are much rarer than non-convective situations) is performed in two completely different ways: 1) tackling the problem as a multi-objective classification task using multi-objective evolutionary artificial neural networks (MOEANNs), in which the two conflicting objectives are the accuracy of the minority class and the global accuracy; and 2) tackling the problem from the OC point of view, in which an oversampling approach is proposed to reduce the degree of imbalance, along with the use of OC techniques. Thirdly, the prediction of solar radiation is carried out by means of evolutionary artificial neural networks (EANNs) with different combinations of basis functions in the hidden and output layers. Finally, the last challenging problem is the prediction of energy flux from waves and tides, for which a multitask EANN has been proposed to predict the energy flux at several prediction time horizons (from 6h to 48h). All these proposals and techniques have been corroborated and discussed in light of physical and atmospheric models. The work developed in this Thesis is supported by 11 JCR-indexed papers in international journals (7 Q1, 3 Q2, 1 Q3), 11 papers in international conferences, and 4 papers in national conferences.
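
    A minimal sketch of the segment-then-summarise idea used for clustering above, assuming equal-width segments and four simple statistics per segment; the thesis uses learned segmentations and richer features, so this is only the shape of the pipeline:

```python
# Sketch: cut each series into segments, replace each segment by statistics,
# then cluster the resulting short feature vectors. Data are synthetic.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.cluster import KMeans

def segment_features(series, n_segments=5):
    """Map a 1-D series to n_segments x 4 statistics (mean, std, skew, kurtosis)."""
    feats = []
    for seg in np.array_split(series, n_segments):
        feats += [seg.mean(), seg.std(), skew(seg), kurtosis(seg)]
    return np.array(feats)

rng = np.random.default_rng(1)
# Two synthetic behaviours, 100 points each: noisy sine vs. noisy upward trend.
sines = [np.sin(np.linspace(0, 6, 100)) + rng.normal(0, 0.2, 100)
         for _ in range(20)]
trends = [np.linspace(0, 2, 100) + rng.normal(0, 0.2, 100) for _ in range(20)]

# 40 series of length 100 become 40 feature vectors of length 20.
X = np.array([segment_features(s) for s in sines + trends])
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))
```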

    Data Mining-based Survival Analysis and Simulation Modeling for Lung Transplant

    The objective of this research is to develop a decision support methodology for the lung transplant procedure by investigating the UNOS nation-wide dataset via data mining-based survival analysis and simulation-based optimization. Traditional statistical techniques have various limitations that hinder the exploration of the information hidden in voluminous data. The deployment of structural equation modeling integrated with decision trees provides a more effective matching between the donor organ and the recipient. This integration, preceded by powerful data mining models that determine which variables to include in the survival analysis, is validated via simulation-based optimization. The suggested data mining-based survival analysis was superior to conventional statistical methods in predicting lung graft survivability and in determining the critical variables to include in organ matching and allocation. The proposed matching index, derived via structural equation model-based decision trees, was validated as a more effective priority-ranking mechanism than the current lung allocation scoring system. This validation was established by a simulation-based optimization model: with the novel matching index, a substantial improvement was achieved in the survival rate while causing only a short delay in the average waiting time of candidate patients on the list. Furthermore, via response surface methodology-based simulation optimization, the optimal weighting scheme for the components of the novel matching index was determined by jointly optimizing the lung transplant performance measures, namely the justice principle in terms of waiting time and the utility principle in terms of survival rate. The study is unique in that it provides a means to integrate data mining modeling and simulation optimization with survival analysis, so that more of the useful information hidden in the large amount of data can be discovered. The developed methodology improves the modeling of the matching and allocation system in terms of both interpretability and predictability, which will be of great benefit to medical professionals.
    Industrial Engineering & Management
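
    A toy sketch of a weighted matching index trading off the utility principle (predicted survival) against the justice principle (waiting time). The weights, variable names and candidate data below are assumptions for illustration; the study's actual index comes from SEM-based decision trees, and its response-surface simulation optimization is not reproduced here.

```python
# Sketch: ranking candidates by a convex combination of utility and justice,
# and observing how the weight shifts the trade-off. All data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 500
pred_survival = rng.uniform(0, 1, n)      # model-predicted graft survival
waiting_days = rng.exponential(200, n)    # time already spent on the list

def matching_index(w_utility, survival, waiting):
    """Weighted sum of a utility term and a normalized justice term."""
    justice = waiting / waiting.max()
    return w_utility * survival + (1 - w_utility) * justice

for w in (0.0, 0.5, 1.0):
    top = np.argsort(-matching_index(w, pred_survival, waiting_days))[:50]
    print("w=%.1f  mean survival of top 50: %.2f   mean wait: %6.0f days"
          % (w, pred_survival[top].mean(), waiting_days[top].mean()))
```

    Sweeping the weight as above is the simplest stand-in for jointly optimizing the two performance measures the abstract names.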

    Utility of artificial neural networks for donor-recipient matching in liver transplantation

    The existing imbalance between the number of donors and recipients in liver transplantation led to the use of expanded-criteria donors. By definition, such donors carry a higher risk of initial poor function (IPF) and primary non-function (PNF) of the graft, and may even lead to late graft loss. It is known that donor factors, recipient factors, and factors related to histology, procurement, preservation, and even transport and implantation are all implicated in the probability of organ dysfunction. The objective of the present study is therefore to establish the probability of graft dysfunction by creating donor-recipient pairs according to the risk of graft loss and to the post-transplant course, which is influenced by the multiple variables arising throughout the donation-implantation-follow-up period. To this end, artificial neural networks will be used; these are simply computational algorithms that emulate the learning capacity of the nervous system, so that the neural network identifies a pattern linking the input variables to the output variables, with graft and recipient survival as the final variable. Algorithms currently exist that establish the risk of graft failure after liver transplantation; however, few studies have attempted to develop models capable of predicting post-transplant survival (1). Those studies are based on single-centre experience, lack a sufficiently large number of cases to build models with detailed variables, and yield results that are difficult to validate outside the centres where they were carried out. Moreover, some were performed on relatively old series, making extrapolation of the proposed model to current transplantation practice difficult. Recently, four risk-prediction models based on donor or donor-and-recipient variables have produced more realistic models that do not suffer from these drawbacks (2,3,4,5). As for neural networks, from a technical point of view they consist of a set of processing units (nodes) that resemble neurons in being interconnected through a web of relationships (weights) analogous to synaptic connections in the nervous system. From the input nodes, the signal propagates through the network until it produces a response in the form of activation levels of the output nodes. In a medical context, training would consist of iteratively presenting the network with the values of various clinical variables for each patient (as input-layer values) and getting the network to predict the final state observed in each patient (indicated by the state of the network's output layer) as accurately as possible (6,7). The ability to solve difficult problems stems from the principles of neural networks; the five most important are adaptive learning, self-organisation, fault tolerance, real-time operation, and easy integration with existing technology. These characteristics make them well suited to solving problems in fields as varied as agriculture, bibliometrics, economics and medicine (8,9,10).
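
    A minimal sketch of the training scheme just described, iteratively presenting clinical input variables and adjusting weights to predict the observed graft outcome. The data, variable names and network size are assumptions for illustration, not the study's model.

```python
# Sketch: a small feed-forward network trained on synthetic donor/recipient
# variables to predict graft survival, mirroring the training loop described.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 1000
clinical = rng.normal(size=(n, 12))        # donor + recipient input variables
risk = clinical[:, :3].sum(axis=1)         # hidden ground-truth risk signal
graft_survived = (risk + rng.normal(size=n) < 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(clinical, graft_survived,
                                          random_state=0)
net = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                  random_state=0))
net.fit(X_tr, y_tr)                        # iterative weight adjustment
print("held-out accuracy: %.2f" % net.score(X_te, y_te))
```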

    Transjugular intrahepatic portosystemic stent-shunt

    Predictors of Death in the Liver Transplantation Adult Candidates: An Artificial Neural Networks and Support Vector Machine Hybrid-Based Cohort Study

    Background: The Model for End-Stage Liver Disease (MELD) is currently used for liver transplantation (LT) allocation; however, it is not a sufficient criterion. Objective: The current study aims to perform a hybrid neural network analysis of different data, build a decision tree and finally design a decision support system for improving LT prioritization. Material and Methods: In this cohort follow-up-based study, the baseline characteristics of 1947 adult patients who were candidates for LT at the Shiraz Organ Transplant Center, Iran, were assessed. Patients were followed for two years; those who died before LT due to end-stage liver disease were considered dead cases, while the others were considered alive cases. A well-organized checklist was completed for each patient. The data were analysed using artificial neural networks (ANN) and support vector machines (SVM). Finally, a decision tree was constructed and a user-friendly decision support system was designed to assist physicians with LT prioritization. Results: Among all MELD variants, MELD-Na was the strongest determinant of LT candidates' survival. Both ANN and SVM showed that, besides MELD-Na, age and ALP (alkaline phosphatase) are the most important factors resulting in death among LT candidates. MELD-Na <23, age <53 and ALP <257 IU/L were the best predictors of survival in LT candidates. An applicable decision support system was designed in this study using these three factors. Conclusion: MELD-Na, age and ALP should be used for LT allocation. The decision support system presented in this study will be helpful for LT prioritization by LT allocators.
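
    The reported cut-offs lend themselves to a transparent rule. The sketch below encodes them directly; how the study's decision tree actually combines the three thresholds is not stated in the abstract, so the all-three conjunction is an assumption for illustration.

```python
# Sketch: the abstract's reported survival thresholds as a simple triage rule.
def favourable_survival_profile(meld_na: float, age: float, alp: float) -> bool:
    """True when all three reported thresholds for survival are met.

    meld_na: MELD-Na score; age in years; alp: alkaline phosphatase in IU/L.
    """
    return meld_na < 23 and age < 53 and alp < 257

print(favourable_survival_profile(meld_na=18, age=45, alp=140))  # True
print(favourable_survival_profile(meld_na=30, age=45, alp=140))  # False
```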

    A non-invasive diagnostic system for early assessment of acute renal transplant rejection.

    Early diagnosis of acute renal transplant rejection (ARTR) is of immense importance for administering appropriate therapeutic treatment. Although the current diagnostic technique is based on renal biopsy, it is not preferred due to its invasiveness, recovery time (1-2 weeks), and potential for complications, e.g., bleeding and/or infection. In this thesis, a computer-aided diagnostic (CAD) system for early detection of ARTR from 4D (3D + b-value) diffusion-weighted (DW) MRI data is developed. The CAD process starts with a 3D B-spline-based data alignment (to handle local deviations due to breathing and heartbeat) and kidney tissue segmentation with an evolving geometric (level-set-based) deformable model. The latter is guided by a voxel-wise stochastic speed function, which follows from a joint kidney-background Markov-Gibbs random field model accounting for an adaptive kidney shape prior and for ongoing visual kidney-background appearances. A cumulative empirical distribution of the apparent diffusion coefficient (ADC) at different b-values of the segmented DW-MRI is used as a discriminatory transplant status feature. Finally, a classifier based on deep learning of a non-negativity-constrained stacked auto-encoder is employed to distinguish between rejected and non-rejected renal transplants. In "leave-one-subject-out" experiments on 53 subjects, 98% of the subjects were correctly classified (namely, 36 out of 37 rejected transplants and 16 out of 16 non-rejected ones). Additionally, a four-fold cross-validation experiment was performed, yielding an average accuracy of 96%. These experimental results hold promise for the proposed CAD system as a reliable non-invasive diagnostic tool.
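
    A minimal sketch of the "leave-one-subject-out" protocol used for evaluation, with a generic MLP standing in for the thesis's non-negativity-constrained stacked auto-encoder and synthetic stand-ins for the ADC-based features.

```python
# Sketch: leave-one-subject-out evaluation -- train on all subjects but one,
# test on the held-out subject, repeat for every subject. Data are synthetic.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n_subjects = 53
features = rng.normal(size=(n_subjects, 10))   # stand-in for ADC-CDF features
rejected = (features[:, 0] + 0.3 * rng.normal(size=n_subjects) > 0).astype(int)

# One fold per subject: each fold trains on 52 subjects and tests on 1.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, features, rejected, cv=LeaveOneOut())
print("LOSO accuracy: %.2f over %d subjects" % (scores.mean(), n_subjects))
```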

    Ordinal prediction using machine learning methodologies: Applications

    Artificial intelligence is part of our everyday life, not only as consumers but also in most productive areas, since companies can optimize most of their processes with the different tools it provides. One topic that has been especially useful in implementing artificial intelligence is machine learning, as it can be applied to most of the practical applications that appear in real-life problems. Machine learning is the part of artificial intelligence that focuses on developing models able to learn a function that transforms input data into a desired output. One of the most important parts of machine learning is the model, and one of the most successful models in state-of-the-art approaches is the artificial neural network (ANN). This is why the first challenge of this thesis is to study how to improve ANNs so that they can learn more complex problems without requiring computationally costly training algorithms. The next important step in improving a model's performance is to optimize the algorithms used to let it learn how to transform the inputs into the desired outputs; the second challenge of this thesis is therefore to optimize the computational cost of evolutionary algorithms, which are one of the best options for optimizing ANNs due to their flexibility during training.
    Ordinal classification (also known as ordinal regression) is an area of machine learning applicable to many real-life problems, since it takes into account the order of the classes, an important fact in many such problems. In the area of social sciences, we study which countries are helping the poorer ones the most, and then perform a deeper study to classify the level of globalisation of a country. These studies are performed by applying the models and algorithms developed in the first stage of the thesis. Continuing with the ordinal classification approaches, we then focus on the area of medicine, where there are many examples of applications of these techniques: e.g., any disease that may progress is usually classified into different stages depending on its severity, from low to high. In our case, this thesis studies how a treatment (liver transplantation) can affect different patients (survival time of the graft), in order to decide which patient is the most appropriate for that specific treatment.
    The last chapter of the thesis delves into ordinal classification to achieve ordinal prediction of time series. Time series have usually been processed with classical statistical techniques, since machine learning models focused on time series were too costly. Currently, however, with the arrival of powerful computing machines together with the evolution of models such as recurrent neural networks, classical statistical techniques can hardly compete with machine learning. In areas such as economics, social sciences, meteorology and medicine, time series are the main source of information, and they need to be correctly processed to be useful. The most common approach when dealing with time series is to learn from past values to predict future ones, and the works in this last chapter focus on performing ordinal predictions of wind power ramp events (WPREs) in wind farms, creating novel models and methodologies.
    The thesis concludes with a work that implements a deep neural network to predict WPREs in multiple wind farms at the same time; this model would therefore allow predicting WPREs over a broad area instead of at a specific geographical point.
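
    A minimal sketch of one standard way to exploit class order, the cumulative binary decomposition (Frank-and-Hall style), assuming three ordered ramp-severity classes. This is illustrative of the ordinal-classification idea and is not the thesis's specific WPRE model; all data and thresholds below are invented.

```python
# Sketch: ordinal classification via cumulative decomposition -- an ordered
# target 0 < 1 < 2 becomes two binary problems, "y > 0" and "y > 1".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 600
X = rng.normal(size=(n, 5))                  # e.g., recent wind/power inputs
latent = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n)
y = np.digitize(latent, [-0.5, 1.0])         # classes 0 < 1 < 2 (ramp severity)

# One binary model per ordered threshold: P(y > 0) and P(y > 1).
p_gt = np.column_stack([
    LogisticRegression().fit(X, (y > k).astype(int)).predict_proba(X)[:, 1]
    for k in (0, 1)])

# Recover per-class probabilities from the cumulative ones, then predict.
probs = np.column_stack([1 - p_gt[:, 0], p_gt[:, 0] - p_gt[:, 1], p_gt[:, 1]])
print("training accuracy: %.2f" % (probs.argmax(axis=1) == y).mean())
```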