43 research outputs found

    A synthetic data generation system for myalgic encephalomyelitis/chronic fatigue syndrome questionnaires

    Get PDF
    Artificial intelligence and machine-learning models have proven useful for better understanding diseases across the health sciences. Myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) lacks objective diagnostic tests; diagnosis and assessment of disease progression rely on validated questionnaires. A sufficiently large database of questionnaire responses facilitates research into models that predict patient profiles and help explain the etiology of the disease. A synthetic data generator provides the scientific community with databases that preserve the statistical properties of the original while remaining free of legal restrictions, for use in research and education. The initial databases came from the Vall d'Hebron Hospital Specialized Unit in Barcelona, Spain, where 2,522 patients diagnosed with ME/CFS were analyzed. Their answers to questionnaires covering the symptoms of this complex disease were used as training datasets and fed to deep learning algorithms, which yielded models with accuracies in the range 0.69–0.81. The final model takes SF-36 responses as input and returns responses to the HAD, SCL-90R, FIS8, FIS40, and PSQI questionnaires. The result is a reliable, easy-to-use synthetic data generator for research and educational use in a disease for which there is currently no approved treatment.
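    A minimal sketch of what such a generator reduces to, assuming a multi-output regressor from the 36 SF-36 items to a concatenated vector of target-questionnaire responses. The paper does not publish its architecture, so a small scikit-learn MLP and random placeholder data stand in for the real deep learning model and patient records:

```python
# Illustrative sketch only: the paper's architecture and data are not public,
# so an MLP and random arrays stand in for the deep learning model and records.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 36 SF-36 item responses per patient (inputs) and
# a concatenated vector of HAD/SCL-90R/FIS8/FIS40/PSQI responses (outputs);
# the target size of 160 items is assumed for illustration.
n_patients, n_sf36_items, n_target_items = 2522, 36, 160
X = rng.integers(1, 6, size=(n_patients, n_sf36_items)).astype(float)
y = rng.integers(0, 5, size=(n_patients, n_target_items)).astype(float)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# At heart, such a generator is a multi-output regressor: given SF-36 answers,
# predict plausible answers to the other questionnaires.
model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```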

    Unsupervised cluster analysis reveals distinct subtypes of ME/CFS patients based on peak oxygen consumption and SF-36 scores

    Get PDF
    PURPOSE: Myalgic encephalomyelitis, commonly referred to as chronic fatigue syndrome (ME/CFS), is a severe, disabling chronic disease, and an objective assessment of prognosis is crucial for evaluating the efficacy of future drugs. Attempts are ongoing to find a biomarker that objectively assesses the health status of ME/CFS patients. This study therefore aims to demonstrate that oxygen consumption is a biomarker of ME/CFS and to provide a method that classifies patients diagnosed with ME/CFS based on their responses to the Short Form-36 (SF-36) questionnaire and predicts oxygen consumption measured by cardiopulmonary exercise testing (CPET). METHODS: Two datasets were used in the study. The first contained SF-36 responses from 2,347 validated records of participants diagnosed with ME/CFS, and an unsupervised machine learning model was developed to cluster the data. The second dataset served as a validation set and included the CPET results of 239 participants diagnosed with ME/CFS, grouped by peak oxygen consumption according to Weber's classification. Only 92 of these patients had correctly completed the SF-36 questionnaire; they were clustered using the machine learning model. Two categorical variables were then entered into a contingency table: cluster membership {0, 1} and Weber class {A, B, C, D}. Finally, the chi-square test of independence was used to assess the statistical significance of the relationship between the two variables. FINDINGS: The results indicate that the Weber classification is directly linked to the score on the SF-36 questionnaire. Furthermore, the 36-response matrix was shown to give more reliable results than the subscale matrix (p < 0.05) for classifying patients with ME/CFS. IMPLICATIONS: Low oxygen consumption on CPET can be considered a biomarker in patients with ME/CFS. Our analysis showed a close relationship between the cluster assignment based on SF-36 responses and the Weber classification based on peak oxygen consumption during CPET. The training dataset comprised raw responses to the SF-36 questionnaire, which better preserve the original information and thus improve the quality of the model.
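    A hedged sketch of the validation step described above, assuming KMeans as the unsupervised clustering model (the abstract does not name the algorithm) and invented stand-in data for the 92 patients with complete questionnaires:

```python
# Illustrative only: KMeans and random data stand in for the paper's model and
# patient records; the contingency-table test mirrors the described analysis.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Stand-in for the 36 raw SF-36 responses of the 92 patients with complete data.
responses = rng.integers(1, 6, size=(92, 36)).astype(float)
clusters = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(responses)

# Stand-in for each patient's Weber class (A-D) from the CPET dataset.
weber = rng.choice(list("ABCD"), size=92)

# Cross-tabulate cluster {0,1} against Weber class {A,B,C,D}, then test
# independence of the two categorical variables with a chi-square test.
table = np.array([[np.sum((clusters == c) & (weber == w)) for w in "ABCD"]
                  for c in (0, 1)])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```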

    Applying multilayer analysis to morphological, structural, and functional brain networks to identify relevant dysfunction patterns

    Get PDF
    In recent years, research on network analysis applied to MRI data has advanced significantly. However, most studies are limited to single networks obtained from resting-state fMRI, diffusion MRI, or gray matter probability maps derived from T1 images. Although a few previous studies have combined two of these networks, none have introduced a framework that combines morphological, structural, and functional brain connectivity networks. The aim of this study was to combine morphological, structural, and functional information, thus defining a new multilayer network perspective. This is advantageous when jointly analyzing multiple types of relational data from the same objects using graph-mining techniques. The main contribution of this research is the design, development, and validation of a framework that merges these three layers of information into one multilayer network linking the integrity of white matter connections with gray matter probability maps and resting-state fMRI. To validate the framework, several metrics from graph theory are extended and adapted to the characteristics of this domain. This proof of concept was applied to a cohort of people with multiple sclerosis, and the results show that several brain regions with synchronized connectivity deterioration could be identified.
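    A toy illustration of the multilayer idea, assuming three weighted networkx layers over the same brain regions merged into a single aggregate graph; the paper's actual framework, metrics, and data are not reproduced here:

```python
# Illustrative sketch: region names, edges, and weights are invented.
import networkx as nx

regions = [f"region_{i}" for i in range(5)]  # stand-in ROI labels

def layer(edges):
    g = nx.Graph()
    g.add_nodes_from(regions)
    g.add_weighted_edges_from(edges)
    return g

morphological = layer([("region_0", "region_1", 0.8), ("region_1", "region_2", 0.5)])
structural    = layer([("region_0", "region_2", 0.7), ("region_2", "region_3", 0.9)])
functional    = layer([("region_1", "region_3", 0.6), ("region_3", "region_4", 0.4)])

# One simple way to couple the layers: sum edge weights into an aggregate graph,
# so a region weakly connected in every layer stands out in joint metrics.
combined = nx.Graph()
combined.add_nodes_from(regions)
for g in (morphological, structural, functional):
    for u, v, w in g.edges(data="weight"):
        if combined.has_edge(u, v):
            combined[u][v]["weight"] += w
        else:
            combined.add_edge(u, v, weight=w)

print(nx.degree_centrality(combined))
```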

    Anonymizing graphs: measuring quality for clustering

    Get PDF
    Anonymization of graph-based data is a problem that has been widely studied in recent years, and several anonymization methods have been developed. Information loss measures are used to evaluate the noise introduced into the anonymized data. Generic information loss measures ignore the intended use of the anonymized data; when data must be released to third parties and there is no control over what kinds of analyses users may perform, these measures are the standard choice. In this paper we study different generic information loss measures for graphs, comparing them to cluster-specific ones. We want to evaluate whether the generic information loss measures are indicative of the usefulness of the data for subsequent data mining processes.
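    An illustrative sketch of computing a few of the generic information loss measures of the kind studied here (edge intersection, clustering coefficient, transitivity) on a toy graph, with naive random rewiring standing in for a real anonymization method:

```python
# Hedged sketch: random rewiring is a stand-in, not the paper's anonymization.
import random
import networkx as nx

random.seed(42)
original = nx.karate_club_graph()

# Naive anonymization stand-in: rewire roughly 10% of the edges at random.
anonymized = original.copy()
edges = list(anonymized.edges())
for u, v in random.sample(edges, k=len(edges) // 10):
    anonymized.remove_edge(u, v)
    a, b = random.sample(list(anonymized.nodes()), 2)
    anonymized.add_edge(a, b)

def generic_loss(g1, g2):
    """Compare a few generic structural measures before/after anonymization."""
    e1 = set(map(frozenset, g1.edges()))
    e2 = set(map(frozenset, g2.edges()))
    return {
        "edge_intersection": len(e1 & e2) / len(e1 | e2),
        "clustering_delta": abs(nx.average_clustering(g1) - nx.average_clustering(g2)),
        "transitivity_delta": abs(nx.transitivity(g1) - nx.transitivity(g2)),
    }

print(generic_loss(original, anonymized))
```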

    Privacy-preserving and data utility in graph mining

    No full text
    In recent years, a large amount of graph-formatted data has been made publicly available. Embedded within this data is private information about the users who appear in it. Therefore, data owners must protect the privacy of users before releasing datasets to third parties. In this scenario, anonymization processes become an important concern. However, anonymization usually introduces some kind of noise into the anonymous data, altering the data and also the results of graph mining processes. Generally, the higher the privacy, the larger the noise. Thus, data utility is an important factor to consider in anonymization processes. The necessary trade-off between data privacy and data utility can be reached by using measures and metrics to lead the anonymization process to minimize the information loss and, therefore, maximize the data utility. In this thesis we have covered the fields of preserving user privacy in social networks and the utility and quality of the released data. A trade-off between both fields is critical to achieving good anonymization methods for subsequent graph mining processes. Part of this thesis has focused on data utility and information loss. First, we studied the relation between generic information loss measures and clustering-specific ones, in order to evaluate whether the generic measures are indicative of the usefulness of the data for subsequent data mining processes. We found a strong correlation between some generic information loss measures (average distance, betweenness centrality, closeness centrality, edge intersection, clustering coefficient, and transitivity) and the precision index over the results of several clustering algorithms, demonstrating that these measures can predict the perturbation introduced into anonymous data. Second, two measures to reduce the information loss in graph modification processes have been presented. The first, edge neighbourhood centrality, is based on the information flow through the 1-neighbourhood of a specific edge in the graph. The second is based on the core number sequence and better preserves the underlying graph structure, retaining more data utility. Through an extensive experimental setup, we demonstrated that both methods preserve the most important edges in the network, keeping the basic structural and spectral properties close to the original ones. The other important topic of this thesis has been privacy-preserving methods. We presented a random-based algorithm that uses the concept of edge neighbourhood centrality to drive the edge modification process to better preserve the most important edges in the graph, achieving lower information loss and higher data utility in the released data. Our method obtains a better trade-off between data utility and data privacy than other methods. Finally, two different approaches to k-degree anonymity on graphs have been developed. First, an algorithm based on evolutionary computing was presented and tested on several small and medium-sized real networks. Although this method allows us to fulfil the desired privacy level, it presents two main drawbacks: the information loss is quite large for some structural properties of the graph, and it is not fast enough to work with large networks. Therefore, a second algorithm was presented, which uses univariate micro-aggregation to anonymize the degree sequence while minimizing its distance from the original one. This method is quasi-optimal and results in lower information loss and better data utility.
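    A minimal sketch of the degree-sequence step in the second approach, assuming a simple greedy grouping of the sorted degrees; the thesis computes a quasi-optimal partition, and the graph-modification step that realizes the anonymized sequence on the graph itself is omitted:

```python
# Illustrative sketch of univariate micro-aggregation for k-degree anonymity:
# sort the degree sequence, average groups of at least k values, and assign
# each node its group average, so every degree value occurs at least k times.
import networkx as nx

def microaggregate_degrees(degrees, k):
    """Greedy size-k grouping (not the thesis's quasi-optimal partition)."""
    ordered = sorted(range(len(degrees)), key=lambda i: degrees[i])
    anon = list(degrees)
    start = 0
    while start < len(ordered):
        end = start + k
        # If fewer than k indices would remain, absorb them into this group.
        if len(ordered) - end < k:
            end = len(ordered)
        group = ordered[start:end]
        avg = round(sum(degrees[i] for i in group) / len(group))
        for i in group:
            anon[i] = avg
        start = end
    return anon

g = nx.karate_club_graph()
degrees = [d for _, d in g.degree()]
print(microaggregate_degrees(degrees, k=3)[:10])
```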

    Predicting Energy Generation Using Forecasting Techniques in Catalan Reservoirs

    Get PDF
    Reservoirs are natural or artificial lakes used as a source of water supply for society's daily applications. In addition, hydroelectric power plants produce electricity as water flows through the reservoir. However, reservoirs are limited natural resources: water levels vary with annual rainfall and other natural events, and so does energy generation. Forecasting techniques are therefore helpful for predicting water levels and, in turn, electricity production. This paper examines state-of-the-art methods for predicting the water level in Catalan reservoirs, comparing two approaches: a uni-variant approach that uses the water level alone, and a multi-variant approach that adds meteorological data. Compared with related work, our contribution provides longer time-series predictions while maintaining high precision. The results show that combining Support Vector Machines with the multi-variant approach provides the highest precision, with an R² value of 0.99.
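    An illustrative sketch of the multi-variant approach, assuming an RBF-kernel SVM regressor on water level plus meteorological features, scored with R²; the feature set and data below are invented stand-ins, not the paper's datasets:

```python
# Hedged sketch: synthetic data stands in for the Catalan reservoir records.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n = 1000

# Multi-variant features: current water level, rainfall, and temperature.
level = rng.uniform(50, 100, n)
rain = rng.exponential(2.0, n)
temp = rng.uniform(0, 30, n)
X = np.column_stack([level, rain, temp])
# Toy target: next-period water level as a noisy function of the inputs.
y = level + 0.5 * rain - 0.05 * temp + rng.normal(0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```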

    Analyzing the Impact of Edge Modifications on Networks

    No full text
    Most recent anonymization algorithms for networks are based on edge modification, i.e., adding and/or deleting edges in a network. However, none of them considers the relevance of each edge when deciding which edges may be removed and which must be preserved. Considering edge relevance can help us improve data utility and reduce information loss. In this paper we analyse different measures for quantifying edge relevance. We also present a new, simple metric for edge relevance in medium and large networks.
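    As one concrete example of the kind of measure analysed here, the sketch below ranks edges by edge betweenness centrality, a standard relevance measure; the paper's own lightweight metric is not reproduced:

```python
# Illustrative only: edge betweenness stands in for the paper's new metric.
import networkx as nx

g = nx.karate_club_graph()
relevance = nx.edge_betweenness_centrality(g)

# Sort edges from most to least relevant; a relevance-aware anonymizer would
# prefer to modify edges drawn from the bottom of this ranking.
ranked = sorted(relevance.items(), key=lambda kv: kv[1], reverse=True)
for edge, score in ranked[:5]:
    print(edge, f"{score:.4f}")
```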