
    Exploring variability in medical imaging

    Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, applying these methods in medical imaging pipelines remains a very challenging task. One of the main reasons for this is the amount of variability that is encountered and encapsulated in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages of modern medical imaging processing pipelines. The variability of human anatomy makes it virtually impossible to build large datasets for each disease with labels and annotations for fully supervised machine learning. An efficient way to cope with this is to learn only from normal samples, since such data are much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work. We present a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models trained only on normal/healthy subjects. However, despite the significant improvement in automatic abnormality detection systems, clinical routine continues to rely exclusively on overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails uncertainty, which is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it remains an open issue to what extent this kind of variability and the resulting uncertainty are introduced during the training of a model and how they affect the final performance of the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task.
A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability, and segmentation task performance in lung CT scan images. Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey includes both conventional pattern recognition methods and deep learning based methods, and it is one of the first literature surveys attempted in this specific research area.
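The core idea of normative learning can be sketched in a few lines: fit a model only on normal samples, then score test samples by how poorly the model explains them. The toy below uses per-feature mean and standard deviation as a stand-in for the generative models the abstract describes; all data and the scoring rule are illustrative assumptions, not the thesis's method.

```python
# Sketch of normative anomaly detection: the "model" is fit only on normal
# samples; anomalies are flagged by their deviation from that model.
# Here the model is just per-feature mean/std (a toy stand-in for a
# generative model), and all sample values are hypothetical.

def fit_normative(normal_samples):
    """Fit per-feature mean and std on normal data only."""
    n = len(normal_samples)
    dim = len(normal_samples[0])
    means = [sum(s[i] for s in normal_samples) / n for i in range(dim)]
    stds = []
    for i in range(dim):
        var = sum((s[i] - means[i]) ** 2 for s in normal_samples) / n
        stds.append(var ** 0.5 or 1.0)  # guard against zero variance
    return means, stds

def anomaly_score(sample, model):
    """Mean absolute z-score: high when the sample deviates from normal."""
    means, stds = model
    return sum(abs((x - m) / s) for x, m, s in zip(sample, means, stds)) / len(sample)

normal = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1], [1.0, 2.05]]
model = fit_normative(normal)
print(anomaly_score([1.0, 2.0], model) < anomaly_score([5.0, -3.0], model))  # True
```

A deployed system would replace the mean/std model with a learned generative model (e.g. an autoencoder scored by reconstruction error), but the train-on-normal-only structure is the same.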

    Artificial intelligence techniques for studying neural functions in coma and sleep disorders

    The use of artificial intelligence in computational neuroscience has increased in recent years. In the field of electroencephalography (EEG) research, machine and deep learning models show huge potential: EEG data are high-dimensional, and complex models are well suited for their analysis. However, the use of artificial intelligence in EEG research and clinical applications is not yet established, and multiple challenges remain to be addressed. This thesis focuses on analyzing neurological EEG signals for clinical applications with artificial intelligence and is split into three sub-projects. The first project is a methodological contribution, presenting a proof of concept that deep learning on EEG signals can be used as a multivariate pattern analysis technique for research. Even though the field of deep learning for EEG has produced many publications, the use of these algorithms for the analysis of EEG signals in research is not established. Therefore, for my first project, I developed an analysis pipeline based on a deep learning architecture, data augmentation techniques, and a feature-extraction method that is class- and trial-specific. In summary, I present a novel multivariate pattern analysis pipeline for EEG data based on deep learning that can extract trial-by-trial discriminant activity in a data-driven way. In the second part of this thesis, I present a clinical application: predicting the outcome of comatose patients after cardiac arrest. Outcome prediction for patients in a coma is still an open challenge that depends on subjective clinical evaluations. Importantly, current clinical markers can leave up to a third of patients without a clear prognosis. To address this challenge, I trained a convolutional neural network on EEG signals of coma patients who were exposed to standardized auditory stimulations.
This work showed a high predictive power of the trained deep learning model, also on patients who were without an established prognosis based on existing clinical criteria. These results emphasize the potential of deep learning models for predicting the outcome of coma and assisting clinicians. In the last part of my thesis, I focused on sleep-wake disorders and studied whether unsupervised machine learning techniques could improve diagnosis. The field of sleep-wake disorders is convoluted, as disorders can co-occur within patients, and only a few have clear diagnostic biomarkers. Thus, I developed a pipeline based on an unsupervised clustering algorithm to disentangle the full landscape of sleep-wake disorders. First, I reproduced previous results in a sub-cohort of patients with central disorders of hypersomnolence. The verified pipeline was then applied to the full landscape of sleep-wake disorders, where I identified distinct clusters of disorders with clear diagnostic biomarkers. My results call for new biomarkers to improve patient phenotyping.
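The unsupervised clustering step described above can be illustrated with a toy k-means over made-up patient feature vectors. The abstract does not specify the algorithm or features used, so the two-dimensional features (think: sleep latency, REM fraction) and the plain Lloyd iterations below are purely illustrative assumptions.

```python
# Toy k-means to illustrate unsupervised grouping of patients by sleep
# features. Points, dimensionality, and init scheme are all hypothetical;
# the thesis's actual clustering algorithm is not specified here.

def kmeans(points, k, iters=10):
    centroids = points[:k]  # naive init: first k points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Update step: move each centroid to its cluster mean
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated hypothetical patient groups
pts = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15], [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]]
cents, groups = kmeans(pts, 2)
print(sorted(len(g) for g in groups))  # [3, 3]
```

In practice a library implementation with multiple restarts and a principled choice of k (e.g. silhouette analysis) would replace this sketch.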

    Machine Learning Methods with Noisy, Incomplete or Small Datasets

    In many machine learning applications, available datasets are sometimes incomplete, noisy, or affected by artifacts. In supervised scenarios, label information may be of low quality, with problems such as unbalanced training sets and noisy labels. Moreover, in practice, it is very common that the available data samples are not enough to derive useful supervised or unsupervised classifiers. All these issues are commonly referred to as the low-quality data problem. This book collects novel contributions on machine learning methods for low-quality datasets, to contribute to the dissemination of new ideas to solve this challenging problem and to provide clear examples of application in real scenarios.
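One standard remedy for the unbalanced-training-set problem the book discusses is inverse-frequency class weighting, so that the rare class contributes as much to the loss as the common one. The labels below are hypothetical; this is a generic illustration, not a method from the book.

```python
# Inverse-frequency class weights: a common fix for unbalanced labels.
# Weight for class c = n_samples / (n_classes * count(c)), so rarer
# classes get proportionally larger weights. Labels are made up.
from collections import Counter

def class_weights(labels):
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

w = class_weights(["healthy"] * 8 + ["disease"] * 2)
print(w["disease"] > w["healthy"])  # True
```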

    Automatic identification of ischemia using lightweight attention network in PET cardiac perfusion imaging

    Ischemic disease, caused by inadequate blood supply to organs or tissues, poses a significant global health challenge. Early detection of ischemia is crucial for timely intervention and improved patient outcomes. Myocardial perfusion imaging with positron emission tomography (PET) is a non-invasive technique used to identify ischemia. However, accurately interpreting PET images can be challenging, necessitating the development of reliable classification methods. In this study, we propose a novel approach using MS-DenseNet, a lightweight attention network, for the detection and classification of ischemia from myocardial polar maps. Our model incorporates squeeze and excitation modules to emphasize relevant feature channels and suppress unnecessary ones. By effectively exploiting channel interdependencies, we achieve optimal reuse of inter-channel interactions, enhancing the model's performance. To evaluate the efficacy and accuracy of our proposed model, we compare it with transfer learning models commonly used in medical image analysis. We conducted experiments using a dataset of 138 polar maps (JPEG) obtained from 15O_H2O stress perfusion studies, comprising patients with ischemic and non-ischemic conditions. Our results demonstrate that MS-DenseNet outperforms the transfer learning models, highlighting its potential for accurate ischemia detection and classification. This research contributes to the field of ischemia diagnosis by introducing a lightweight attention network that effectively captures the relevant features from myocardial polar maps. The integration of the squeeze and excitation modules further enhances the model's discriminative capabilities. The proposed MS-DenseNet offers a promising solution for accurate and efficient ischemia detection, potentially improving the speed and accuracy of diagnosis and leading to better patient outcomes.
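The squeeze-and-excitation idea the abstract relies on can be sketched without a deep learning framework: "squeeze" each channel to a scalar by global average pooling, map that scalar to a gate in (0, 1), and rescale the channel by its gate. The real SE module learns two small fully connected layers across channels; the per-channel sigmoid gate below is a simplified stand-in, and the feature maps are invented.

```python
# Simplified squeeze-and-excitation (SE) channel reweighting.
# Squeeze: global average pool per channel. Excitation: here a bare
# sigmoid gate per channel (the learned cross-channel FC layers of a
# real SE block are omitted). Feature-map values are hypothetical.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_reweight(feature_maps):
    """feature_maps: list of channels, each a 2-D list (H x W)."""
    out = []
    for ch in feature_maps:
        # Squeeze: average over the spatial dimensions
        avg = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        # Excitation: per-channel gate in (0, 1)
        w = sigmoid(avg)
        out.append([[v * w for v in row] for row in ch])
    return out

fmap = [[[1.0, 1.0], [1.0, 1.0]],      # active channel: gate near 0.73
        [[-2.0, -2.0], [-2.0, -2.0]]]  # weak channel: gate near 0.12
scaled = se_reweight(fmap)
```

The effect is that channels with strong average activation are largely preserved while weak ones are attenuated, which is the "emphasize relevant, suppress unnecessary" behavior the abstract describes.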

    Non-communicable Diseases, Big Data and Artificial Intelligence

    This reprint includes 15 articles in the field of non-communicable diseases, big data, and artificial intelligence, overviewing the most recent advances in the field of AI and their application potential in 3P medicine.

    Pacific Symposium on Biocomputing 2023

    The Pacific Symposium on Biocomputing (PSB) 2023 is an international, multidisciplinary conference for the presentation and discussion of current research in the theory and application of computational methods in problems of biological significance. Presentations are rigorously peer reviewed and are published in an archival proceedings volume. PSB 2023 will be held on January 3-7, 2023 in Kohala Coast, Hawaii. Tutorials and workshops will be offered prior to the start of the conference. PSB 2023 will bring together top researchers from the US, the Asian Pacific nations, and around the world to exchange research results and address open issues in all aspects of computational biology. It is a forum for the presentation of work in databases, algorithms, interfaces, visualization, modeling, and other computational methods, as applied to biological problems, with emphasis on applications in data-rich areas of molecular biology. The PSB has been designed to be responsive to the need for critical mass in sub-disciplines within biocomputing. For that reason, it is the only meeting whose sessions are defined dynamically each year in response to specific proposals. PSB sessions are organized by leaders of research in biocomputing's 'hot topics.' In this way, the meeting provides an early forum for serious examination of emerging methods and approaches in this rapidly changing field.

    Heterogeneidad tumoral en imágenes PET-CT

    Unpublished thesis of the Universidad Complutense de Madrid, Facultad de Ciencias Físicas, Departamento de Estructura de la Materia, Física Térmica y Electrónica, defended on 28/01/2021. Cancer is a leading cause of morbidity and mortality [1]. The most frequent cancers worldwide are non-small cell lung carcinoma (NSCLC) and breast cancer [2], and their management is a challenging task [3]. Tumor diagnosis is usually made through biopsy [4]. However, medical imaging also plays an important role in diagnosis, staging, response to treatment, and recurrence assessment [5]. Tumor heterogeneity is recognized to be involved in cancer treatment failure, with worse clinical outcomes for highly heterogeneous tumors [6,7]. It leads to the existence of tumor sub-regions with different biological behavior (some more aggressive and treatment-resistant than others) [8-10], which are characterized by different patterns of vascularization, vessel permeability, metabolism, cell proliferation, cell death, and other features that can be measured by modern medical imaging techniques, including positron emission tomography/computed tomography (PET/CT) [10-12]. Thus, the assessment of tumor heterogeneity through medical images could allow the prediction of therapy response and long-term outcomes of patients with cancer [13]. PET/CT has become essential in oncology [14,15] and is usually evaluated through semiquantitative metabolic parameters, such as the maximum/mean standardized uptake value (SUVmax, SUVmean) or the metabolic tumor volume (MTV), which are valuable as prognostic image-based biomarkers in several tumors [16,17], but these do not assess tumor heterogeneity. Likewise, fluorodeoxyglucose (18F-FDG) PET/CT is important to differentiate malignant from benign solitary pulmonary nodules (SPN), thereby reducing the number of patients who undergo unnecessary surgical biopsies.
Several publications have shown that some quantitative image features, extracted from medical images, are suitable for diagnosis, tumor staging, prognosis of treatment response, and long-term evolution of cancer patients [18-20]. The process of extracting and relating image features to clinical or biological variables is called "radiomics" [9,20-24]. Radiomic parameters such as textural features have been related directly to tumor heterogeneity [25]. This thesis investigated the relationships of tumor heterogeneity, assessed by 18F-FDG-PET/CT texture analysis, with metabolic parameters and pathologic staging in patients with NSCLC, and explored the diagnostic performance of different metabolic, morphologic, and clinical criteria for classifying solitary pulmonary nodules (SPN) as malignant or benign. Furthermore, 18F-FDG-PET/CT radiomic features of patients with recurrent/metastatic breast cancer were used to construct predictive models of response to chemotherapy, based on an optimal combination of several feature selection and machine learning (ML) methods...
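The semiquantitative PET metrics named above (SUVmax, SUVmean, MTV) are simple to compute once a tumor region is segmented. The sketch below uses a common illustrative convention, a 40%-of-SUVmax threshold to delineate the tumor, with an invented voxel volume and voxel values; it is not the thesis's exact protocol.

```python
# Hedged sketch of semiquantitative PET metrics: SUVmax, SUVmean over the
# segmented tumor, and metabolic tumor volume (MTV). The 40%-of-SUVmax
# segmentation threshold, voxel volume, and SUV values are illustrative.

def pet_metrics(suv_voxels, voxel_volume_ml=0.064, threshold_frac=0.40):
    suv_max = max(suv_voxels)
    # Tumor voxels: those above a fraction of SUVmax (a common rule)
    tumor = [v for v in suv_voxels if v >= threshold_frac * suv_max]
    suv_mean = sum(tumor) / len(tumor)
    mtv_ml = len(tumor) * voxel_volume_ml  # metabolic tumor volume in mL
    return suv_max, suv_mean, mtv_ml

voxels = [0.5, 1.0, 8.0, 6.0, 4.0, 0.8, 3.5, 7.2]
smax, smean, mtv = pet_metrics(voxels)
print(smax)  # 8.0
```

Texture (radiomic) features go beyond these summaries by quantifying the spatial arrangement of voxel intensities, which is what links them to tumor heterogeneity.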

    Computational Image Analysis For Axonal Transport, Phenotypic Profiling, And Digital Pathology

    Recent advances in fluorescent probes, microscopy, and imaging platforms have revolutionized biology and medicine, generating multi-dimensional image datasets at unprecedented scales. Traditional, low-throughput methods of image analysis are inadequate to handle the increased “volume, velocity, and variety” that characterize the realm of big data. Thus, biomedical imaging requires a new set of tools, including advanced computer vision and machine learning algorithms. In this work, we develop computational image analysis solutions to biological questions at the level of single molecules, cells, and tissues. At the molecular level, we dissect the regulation of dynein-dynactin transport initiation using in vitro reconstitution, single-particle tracking, super-resolution microscopy, live-cell imaging in neurons, and computational modeling. We show that at least two mechanisms regulate dynein transport initiation in neurons: (1) cytoplasmic linker proteins, which are regulated by phosphorylation, increase the capture radius around the microtubule, thus reducing the time cargo spends in a diffusive search; and (2) a spatial gradient of tyrosinated alpha-tubulin enriched in the distal axon increases the affinity of dynein-dynactin for microtubules. Together, these mechanisms support a multi-modal recruitment model in which interacting layers of regulation provide efficient, robust, and spatiotemporal control of transport initiation. At the cellular level, we develop and train deep residual convolutional neural networks on a large and diverse set of cellular microscopy images. Then, we apply networks trained for one task as deep feature extractors for unsupervised phenotypic profiling in a different task.
We show that neural networks trained on one dataset encode robust image phenotypes that are sufficient to cluster subcellular structures by type and to separate drug compounds by mechanism of action, without additional training, supporting the strength and flexibility of this approach. Future applications include phenotypic profiling in image-based screens, where clustering genetic or drug treatments by image phenotype may reveal novel relationships among genetic or pharmacologic pathways. Finally, at the tissue level, we apply deep learning pipelines in digital pathology to segment cardiac tissue and classify clinical heart failure using whole-slide images of cardiac histopathology. Together, these results demonstrate the power and promise of computational image analysis, computer vision, and deep learning in biological image analysis.
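The "trained network as feature extractor" idea reduces to comparing embedding vectors: images whose deep features point in similar directions get grouped together, with no retraining. The hand-written three-dimensional "features" below stand in for real network embeddings, which would have hundreds of dimensions; everything here is a hypothetical illustration.

```python
# Unsupervised phenotypic grouping by cosine similarity of feature
# vectors. The vectors stand in for deep-network embeddings of
# microscopy images; their values are invented for illustration.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# Hypothetical embeddings for two subcellular structure types
nucleus_a, nucleus_b = [1.0, 0.1, 0.0], [0.9, 0.2, 0.1]
mito_a = [0.0, 0.2, 1.0]

same = cosine(nucleus_a, nucleus_b)   # two images of the same structure
diff = cosine(nucleus_a, mito_a)      # images of different structures
print(same > diff)  # True
```

Clustering these similarities (e.g. hierarchical clustering on 1 - cosine) is what lets phenotypes group by structure type or drug mechanism without any labels.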