
    A screening method for mild cognitive impairment in elderly individuals combining bioimpedance and MMSE

    We investigated a screening method for mild cognitive impairment (MCI) that combined bioimpedance features and the Korean Mini-Mental State Examination (K-MMSE) score. Data were collected from 539 subjects aged 60 years or older at the Gwangju Alzheimer’s & Related Dementias (GARD) Cohort Research Center. A total of 470 participants were used for the analysis, including 318 normal controls and 152 MCI participants. We measured bioimpedance, the K-MMSE, and the Seoul Neuropsychological Screening Battery (SNSB-II). We developed a multiple linear regression model to predict MCI by combining bioimpedance variables and the K-MMSE total score, and compared the model's accuracy with SNSB-II domain scores by the area under the receiver operating characteristic curve (AUROC). We additionally compared the model's performance with several machine learning models, such as extreme gradient boosting, random forest, support vector machine, and elastic net. To test model performance, the dataset was divided into a training set (70%) and a test set (30%). The AUROC values of SNSB-II scores were 0.803 for both sexes combined, 0.840 for males, and 0.770 for females. In the combined model, the AUROC values were 0.790 for males and 0.773 for females, which were significantly higher than those from the model including MMSE scores alone (0.723 for males and 0.622 for females) or bioimpedance variables alone (0.640 for males and 0.615 for females). Furthermore, the accuracies of the combined model were comparable to those of the machine learning models. The bioimpedance-MMSE combined model effectively distinguished MCI participants and suggests a technique for rapid and improved screening of elderly populations at risk of cognitive impairment.
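    As a rough illustration of the evaluation protocol described above (a 70/30 train/test split and a multiple linear regression score assessed by AUROC), here is a minimal Python sketch using scikit-learn. The feature set, toy labels, and variable names are placeholder assumptions, not the study's actual data or code.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 470
bio = rng.normal(size=(n, 5))              # stand-in bioimpedance features
mmse = rng.integers(20, 31, size=n)        # stand-in K-MMSE total scores
y = rng.integers(0, 2, size=n)             # 0 = control, 1 = MCI (random toy labels)

X = np.column_stack([bio, mmse])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)      # regression output used as an MCI index
auc = roc_auc_score(y_te, model.predict(X_te))  # AUROC of the combined model
print(f"AUROC: {auc:.3f}")
```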

    Design of new algorithms for gene network reconstruction applied to in silico modeling of biomedical data

    Doctoral Program in Biotechnology, Engineering and Chemical Technology. Research line: Engineering, Data Science and Bioinformatics. Program code: DBI. Line code: 111. The root causes of disease are still poorly understood. The success of current therapies is limited because persistent diseases are frequently treated based on their symptoms rather than on the underlying cause of the disease. Therefore, biomedical research is experiencing a technology-driven shift to data-driven, holistic approaches that better characterize the molecular mechanisms causing disease. Using omics data as input, emerging disciplines like network biology attempt to model the relationships between biomolecules. To this effect, gene co-expression networks arise as a promising tool for deciphering the relationships between genes in large transcriptomic datasets. However, because of their low specificity and high false positive rate, they demonstrate a limited capacity to retrieve the disrupted mechanisms that lead to disease onset, progression, and maintenance. Within the context of statistical modeling, we dove deeper into the reconstruction of gene co-expression networks with the specific goal of discovering disease-specific features directly from expression data. Using ensemble techniques, which combine the results of various metrics, we were able to capture biologically significant relationships between genes more precisely. With the help of prior biological knowledge and the development of new network inference techniques, we were able to find de novo potential disease-specific features. Through our different approaches, we analyzed large gene sets across multiple samples and used gene expression as a surrogate marker for the inherent biological processes, reconstructing robust gene co-expression networks that are simple to explore. By mining disease-specific gene co-expression networks, we arrive at a useful framework for identifying new omics-phenotype associations from conditional expression datasets. In this sense, understanding diseases from the perspective of biological network perturbations will improve personalized medicine, impacting rational biomarker discovery, patient stratification, and drug design, and ultimately leading to more targeted therapies. Universidad Pablo de Olavide de Sevilla. Departamento de Deporte e Informática.
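    As a hedged sketch of the ensemble idea described above (combining several association metrics into one co-expression network), the following Python snippet averages rank-normalized absolute Pearson and Spearman correlations and thresholds the result. The choice of metrics, the threshold, and the toy data are illustrative assumptions, not the thesis's actual inference pipeline.

```python
import numpy as np

def ensemble_coexpression(expr, quantile=0.99):
    """expr: genes x samples array. Returns a boolean adjacency matrix
    from the average of rank-normalized |Pearson| and |Spearman| scores."""
    pearson = np.abs(np.corrcoef(expr))
    ranks = expr.argsort(axis=1).argsort(axis=1)    # Spearman = Pearson on ranks
    spearman = np.abs(np.corrcoef(ranks))

    def rank_norm(m):                               # map scores to [0, 1] by rank
        r = m.ravel().argsort().argsort() / (m.size - 1)
        return r.reshape(m.shape)

    score = (rank_norm(pearson) + rank_norm(spearman)) / 2
    score = (score + score.T) / 2                   # enforce symmetry
    np.fill_diagonal(score, 0)
    return score >= np.quantile(score, quantile)    # keep only the top-scoring pairs

expr = np.random.default_rng(1).normal(size=(50, 30))  # toy expression matrix
adj = ensemble_coexpression(expr)
print(adj.sum() // 2, "edges")
```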

    Optimization of neural networks for deep learning and applications to CT image segmentation

    During the last few years, AI development in deep learning has been moving so fast that even prominent researchers, politicians, and entrepreneurs are signing petitions to try to slow it down. The newest methods for natural language processing and image generation are achieving results so unbelievable that people are seriously starting to think they could be dangerous for society. In reality, they are not dangerous (at the moment), even if we have to admit we have reached a point where we no longer have control over the flux of data inside deep networks. It is impossible to open a modern deep neural network and interpret how it processes information and, in many cases, explain how or why it returns a particular result. One of the goals of this doctoral work has been to study the behavior of weights in convolutional neural networks and in transformers. We present a work that demonstrates how to invert 3x3 convolutions after training a neural network able to learn how to classify images, with the future aim of having precisely invertible convolutional neural networks. We demonstrate that a simple network can learn to classify images on an open-source dataset without loss in accuracy with respect to a non-invertible one, while retaining the ability to reconstruct the original image without detectable error (on 8-bit images) in up to 20 convolutions stacked in a row. We present a thorough comparison between our method and the standard one. We tested the performance of the five most used transformers for image classification on an open-source dataset. Studying the embedded matrices, we have been able to provide two criteria that can help transformers learn with a training time reduction of up to 30% and with no impact on classification accuracy. The evolution of deep learning techniques is also touching the field of digital health. With tens of thousands of new start-ups and more than $1B of investment in the last year alone, this field is growing rapidly and promising to revolutionize healthcare. In this thesis, we present several neural networks for the segmentation of lungs, lung nodules, and areas affected by pneumonia induced by COVID-19 in chest CT scans. The architectures we used are all residual convolutional neural networks inspired by UNet and Inception. We customized them with novel loss functions and layers designed to achieve high performance in these particular applications. The errors on the surface of nodule segmentation masks do not exceed 1 mm in more than 99% of cases. Our algorithm for COVID-19 lesion detection has a specificity of 100% and an overall accuracy of 97.1%. In general, it surpasses the state of the art in all the considered statistics, using UNet as a benchmark. Combined with other algorithms able to detect and predict lung cancer, the whole work was presented in a European innovation program and judged to be of high interest by worldwide experts. With this work, we set the basis for the future development of better AI tools in healthcare and for scientific investigation into the fundamentals of deep learning.
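    The segmentation networks above are described as residual UNet/Inception-style CNNs with custom loss functions. As a generic illustration of that family of losses (not the thesis's novel ones, which are not given here), this is a standard soft Dice loss in PyTorch of the kind commonly used for such masks:

```python
import torch
import torch.nn as nn

class DiceLoss(nn.Module):
    """Soft Dice loss for binary segmentation masks."""
    def __init__(self, eps=1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, logits, target):
        probs = torch.sigmoid(logits)                       # (N, 1, H, W)
        inter = (probs * target).sum(dim=(2, 3))
        union = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
        dice = (2 * inter + self.eps) / (union + self.eps)
        return 1 - dice.mean()

# toy usage on a batch of 1-channel 64x64 predictions and masks
logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(DiceLoss()(logits, target).item())
```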

    Model-based deep autoencoders for clustering single-cell RNA sequencing data with side information

    Clustering analysis has been conducted extensively in single-cell RNA sequencing (scRNA-seq) studies. scRNA-seq can profile the activities of tens of thousands of genes within a single cell, and thousands or tens of thousands of cells can be captured simultaneously in a typical scRNA-seq experiment. Biologists would like to cluster these cells to explore and elucidate cell types or subtypes. Numerous methods have been designed for clustering scRNA-seq data. Yet single-cell technologies have developed so fast in the past few years that existing methods have not caught up with these rapid changes and fail to fully fulfil their potential. For instance, besides profiling the transcription expression levels of genes, recent single-cell technologies can capture other auxiliary information at the single-cell level, such as protein expression (multi-omics scRNA-seq) and cells' spatial location information (spatially resolved scRNA-seq). Most existing clustering methods for scRNA-seq are performed in an unsupervised manner and fail to exploit available side information for optimizing clustering performance. This dissertation focuses on developing novel computational methods for clustering scRNA-seq data. The basic models are built on a deep autoencoder (AE) framework coupled with a ZINB (zero-inflated negative binomial) loss to characterize the zero-inflated and over-dispersed scRNA-seq count data. To integrate multi-omics scRNA-seq data, a multimodal autoencoder (MAE) is employed: it applies one encoder to the multimodal inputs and two decoders for reconstructing each omics layer of the data. This model is named scMDC (Single-Cell Multi-omics Deep Clustering). In addition, cells in spatial proximity are expected to be of the same cell type. To exploit the cellular spatial information available in spatially resolved scRNA-seq (sp-scRNA-seq) data, a novel model, DSSC (Deep Spatial-constrained Single-cell Clustering), is developed. DSSC integrates the spatial information of cells into the clustering process in two steps: 1) the spatial information is encoded using a graph neural network model; 2) cell-to-cell constraints are built based on the spatial expression patterns of the marker genes and added to the model to guide the clustering process. DSSC is the first model that can utilize information from both the spatial coordinates and the marker genes to guide cell/spot clustering. For both scMDC and DSSC, a clustering loss is optimized on the bottleneck layer of the autoencoder along with the learning of feature representations. Extensive experiments on both simulated and real datasets demonstrate that scMDC and DSSC boost clustering performance significantly while costing no extra time or space during the training process. These models hold great promise as valuable tools for harnessing the full potential of state-of-the-art single-cell data.
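    For readers unfamiliar with the ZINB loss mentioned above, here is a minimal PyTorch sketch of the zero-inflated negative binomial negative log-likelihood. The parameter names (mu, theta, pi) and the toy inputs are illustrative; the actual scMDC/DSSC implementations may differ.

```python
import torch

def zinb_nll(x, mu, theta, pi, eps=1e-8):
    """Negative log-likelihood of a zero-inflated negative binomial.
    x: observed counts; mu: NB mean; theta: NB dispersion; pi: dropout probability."""
    log_theta_mu = torch.log(theta + mu + eps)
    # log NB(x; mu, theta)
    log_nb = (torch.lgamma(x + theta) - torch.lgamma(theta) - torch.lgamma(x + 1)
              + theta * (torch.log(theta + eps) - log_theta_mu)
              + x * (torch.log(mu + eps) - log_theta_mu))
    # zero case: P(x = 0) = pi + (1 - pi) * (theta / (theta + mu)) ** theta
    nb_zero = theta * (torch.log(theta + eps) - log_theta_mu)
    log_p_zero = torch.log(pi + (1 - pi) * torch.exp(nb_zero) + eps)
    log_p_pos = torch.log(1 - pi + eps) + log_nb
    return -torch.where(x < 0.5, log_p_zero, log_p_pos).mean()

# toy usage: counts drawn from a Poisson stand-in
x = torch.poisson(torch.full((4, 10), 2.0))
mu = torch.full_like(x, 2.0)
theta = torch.ones_like(x)
pi = torch.full_like(x, 0.1)
print(zinb_nll(x, mu, theta, pi).item())
```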

    30th European Congress on Obesity (ECO 2023)

    This is the abstract book of the 30th European Congress on Obesity (ECO 2023).

    Cerebrovascular dysfunction in cerebral small vessel disease

    INTRODUCTION: Cerebral small vessel disease (SVD) is the cause of a quarter of all ischaemic strokes and is postulated to have a role in up to half of all dementias. SVD pathophysiology remains unclear, but cerebrovascular dysfunction may be important. If this is confirmed, many licensed medications have mechanisms of action targeting vascular function, potentially enabling new treatments via drug repurposing. Knowledge is limited, however, as most studies assessing cerebrovascular dysfunction are small, single-centre, single-imaging-modality studies, owing to the complexities of measuring cerebrovascular dysfunction in humans. This thesis describes the development and application of imaging techniques measuring several cerebrovascular dysfunctions, to investigate SVD pathophysiology and to trial medications that may improve small blood vessel function in SVD. METHODS: Participants with minor ischaemic strokes were recruited to a series of studies utilising advanced MRI techniques to measure cerebrovascular dysfunction. Specifically, MRI scans measured the ability of different tissues in the brain to change blood flow in response to breathing carbon dioxide (cerebrovascular reactivity; CVR), and the flow and pulsatility through the cerebral arteries, venous sinuses, and CSF spaces. A single-centre observational study optimised and established the feasibility of the techniques and tested associations of cerebrovascular dysfunction with clinical and imaging phenotypes. A randomised pilot clinical trial then tested the ability of two medications (cilostazol and isosorbide mononitrate) to improve CVR and pulsatility over a period of eight weeks. The techniques were then expanded to include imaging of blood-brain barrier permeability and utilised in multi-centre studies investigating cerebrovascular dysfunction in both sporadic and monogenic SVDs. RESULTS: Imaging protocols were feasible, consistently being completed with usable data in over 85% of participants. After correcting for the effects of age, sex, and systolic blood pressure, lower CVR was associated with higher white matter hyperintensity volume, Fazekas score, and perivascular space counts. Lower CVR was also associated with higher pulsatility of blood flow in the superior sagittal sinus and lower CSF flow stroke volume at the foramen magnum. Cilostazol and isosorbide mononitrate increased CVR in white matter. The CVR, intracranial flow, and pulsatility techniques, alongside blood-brain barrier permeability and microstructural integrity imaging, were successfully employed in a multi-centre observational study. A clinical trial assessing the effects of drugs targeting blood pressure variability is nearing completion. DISCUSSION: Cerebrovascular dysfunction in SVD has been confirmed and may play a more direct role in disease pathogenesis than previously established risk factors. Advanced imaging measures assessing cerebrovascular dysfunction are feasible in multi-centre studies and trials. Identifying drugs that improve cerebrovascular dysfunction using these techniques may be useful in selecting candidates for definitive clinical trials, which require large sample sizes and long follow-up periods to show improvement against outcomes of stroke and dementia incidence and cognitive function.
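    As an illustration of the kind of covariate-adjusted association analysis described in the results (CVR versus white matter hyperintensity volume, correcting for age, sex, and systolic blood pressure), here is a hedged Python sketch using simulated data; the variable names and effect sizes are invented for the example and do not reflect the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "cvr": rng.normal(0.2, 0.05, n),        # toy CVR values
    "age": rng.integers(55, 85, n),
    "sex": rng.choice(["M", "F"], n),
    "sbp": rng.normal(140, 15, n),          # systolic blood pressure, mmHg
})
# toy outcome: WMH volume decreasing with CVR, increasing with age
df["wmh_volume"] = 10 - 20 * df["cvr"] + 0.1 * df["age"] + rng.normal(0, 1, n)

model = smf.ols("wmh_volume ~ cvr + age + sex + sbp", data=df).fit()
print(model.summary().tables[1])            # coefficient table with adjusted CVR effect
```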

    Riemannian statistical techniques with applications in fMRI

    Over the past 30 years, functional magnetic resonance imaging (fMRI) has become a fundamental tool in cognitive neuroimaging studies. In particular, the emergence of resting-state fMRI has gained popularity in determining biomarkers of mental health disorders (Woodward & Cascio, 2015). Resting-state fMRI can be analysed using the functional connectivity matrix, an object that encodes the temporal correlation of blood activity within the brain. Functional connectivity matrices are symmetric positive definite (SPD) matrices, but common analysis methods either reduce the functional connectivity matrices to summary statistics or fail to account for the positive definite criterion. Through the lens of Riemannian geometry, however, functional connectivity matrices have an intrinsic non-linear shape that respects the positive definite criterion (the affine-invariant geometry (Pennec, Fillard, & Ayache, 2006)). With methods from Riemannian geometric statistics, we can begin to explore the shape of the functional brain to understand this non-linear structure and reduce data loss in our analyses. This thesis offers two novel methodological developments to the field of Riemannian geometric statistics inspired by methods used in fMRI research. First, we propose geometric-MDMR, a generalisation of multivariate distance matrix regression (MDMR) (McArdle & Anderson, 2001) to Riemannian manifolds. Our second development is Riemannian partial least squares (R-PLS), the generalisation of the predictive modelling technique partial least squares (PLS) (H. Wold, 1975) to Riemannian manifolds. R-PLS extends geodesic regression (Fletcher, 2013) to manifold-valued response and predictor variables, similar to how PLS extends multiple linear regression. We also generalise the NIPALS algorithm to Riemannian manifolds and suggest a tangent space approximation as a proposed method to fit R-PLS. In addition to our methodological developments, this thesis offers three more contributions to the literature. Firstly, we develop a novel simulation procedure to simulate realistic functional connectivity matrices through a combination of bootstrapping and the Wishart distribution. Second, we propose the R2S statistic for measuring subspace similarity using the theory of principal angles between subspaces. Finally, we propose an extension of the VIP statistic from PLS (S. Wold, Johansson, & Cocchi, 1993) to describe the relationship between individual predictors and response variables when predicting a multivariate response with PLS. All methods in this thesis are applied to two fMRI datasets: the COBRE dataset relating to schizophrenia, and the ABIDE dataset relating to Autism Spectrum Disorder (ASD). We show that geometric-MDMR can detect group-based differences between ASD and neurotypical controls (NTC), unlike its Euclidean counterparts. We also demonstrate the efficacy of R-PLS through the detection of functional connections related to schizophrenia and ASD. These results are encouraging for the role of Riemannian geometric statistics in the future of neuroscientific research. Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 202
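    For context, the affine-invariant geometry referenced above (Pennec, Fillard, & Ayache, 2006) endows SPD matrices with the geodesic distance d(A, B) = ||log(A^(-1/2) B A^(-1/2))||_F. A minimal NumPy/SciPy sketch follows, with toy covariance matrices standing in for functional connectivity matrices:

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def affine_invariant_distance(A, B):
    """Geodesic distance between SPD matrices:
    d(A, B) = ||logm(A^(-1/2) B A^(-1/2))||_F."""
    A_inv_sqrt = np.linalg.inv(sqrtm(A).real)
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(logm(M).real, "fro")

# toy SPD matrices built as sample covariances plus a ridge
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(2, 20, 5))
A = X.T @ X / 20 + 0.1 * np.eye(5)
B = Y.T @ Y / 20 + 0.1 * np.eye(5)
print(affine_invariant_distance(A, B))
```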

    Using XAI in the Clock Drawing Test to reveal the cognitive impairment pattern.

    The prevalence of dementia is currently increasing worldwide. This syndrome produces a deterioration in cognitive function that cannot be reverted. However, an early diagnosis can be crucial for slowing its progress. The Clock Drawing Test (CDT) is a widely used paper-and-pencil test for cognitive assessment in which an individual has to manually draw a clock on a paper. There are many scoring systems for this test, and most of them depend on the subjective assessment of the expert. This study proposes a computer-aided diagnosis (CAD) system based on artificial intelligence (AI) methods to analyze the CDT and obtain an automatic diagnosis of cognitive impairment (CI). This system employs a preprocessing pipeline in which the clock is detected, centered, and binarized to decrease the computational burden. The resulting image is then fed into a Convolutional Neural Network (CNN) to identify the informative patterns within the CDT drawings that are relevant for the assessment of the patient's cognitive status. Performance is evaluated in a real context where patients with CI and controls have been classified by clinical experts in a balanced sample of 3282 drawings. The proposed method provides an accuracy of 75.65% in the binary case-control classification task, with an AUC of 0.83. These results are indeed relevant considering the use of the classic version of the CDT. The large size of the sample suggests that the proposed method is reliable enough to be used in clinical contexts and demonstrates the suitability of CAD systems in the CDT assessment process. Explainable artificial intelligence (XAI) methods are applied to identify the most relevant regions during classification. Finding these patterns is extremely helpful for understanding the brain damage caused by CI. A validation method using resubstitution with upper bound correction in a machine learning approach is also discussed. This work was supported by the MCIN/AEI/10.13039/501100011033/ and FEDER "Una manera de hacer Europa" under the RTI2018-098913-B100 project, by the Consejería de Economía, Innovación, Ciencia y Empleo (Junta de Andalucía) and FEDER under the CV20-45250, A-TIC080-UGR18, B-TIC-586-UGR20 and P20-00525 projects, and by the Ministerio de Universidades under the FPU18/04902 grant given to C. Jimenez-Mesa and the Margarita Salas grant given to J.E. Arco.
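    As a hedged sketch of the preprocessing pipeline described above (detect, center, and binarize the clock before feeding it to the CNN), the following OpenCV snippet illustrates one plausible implementation; the thresholding choices, contour-based detection, and output size are assumptions, not the paper's exact procedure.

```python
import cv2
import numpy as np

def preprocess_clock(path, size=128):
    """Detect, center, and binarize a scanned clock drawing (illustrative)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Otsu threshold; drawings are dark strokes on a light page
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(np.vstack(contours))  # box around all strokes
    crop = binary[y:y + h, x:x + w]                     # crop centers the clock
    return cv2.resize(crop, (size, size), interpolation=cv2.INTER_AREA)

# usage: img = preprocess_clock("drawing.png"), then scale to [0, 1] for the CNN
```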