
    Spectral Data Augmentation Techniques to quantify Lung Pathology from CT-images

    Data augmentation is of paramount importance in biomedical image processing tasks, which are characterized by inadequate amounts of labelled data, in order to make the best use of all the data that is present. In-use techniques range from intensity transformations and elastic deformations to linearly combining existing data points to make new ones. In this work, we propose the use of spectral techniques for data augmentation, using the discrete cosine and wavelet transforms. We empirically evaluate our approaches on a CT texture analysis task to detect abnormal lung tissue in patients with cystic fibrosis. Experiments show that the proposed spectral methods perform favourably compared to existing methods. When used in combination with existing methods, our proposed approach can increase minority-class segmentation performance by a relative 44.1% over a simple replication baseline.
    Comment: 5 pages including references, accepted as Oral presentation at IEEE ISBI 202
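    As a rough illustration of the spectral idea (not the paper's exact implementation), the following Python sketch perturbs a CT patch in the DCT domain and transforms it back; the function name dct_augment, the multiplicative-jitter scheme, and the noise_scale parameter are hypothetical choices.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_augment(patch, noise_scale=0.05, rng=None):
            """Create an augmented copy of a 2D patch by jittering its DCT coefficients."""
            rng = rng or np.random.default_rng()
            coeffs = dctn(patch, norm="ortho")          # forward 2D DCT
            # Multiplicative noise on the spectral coefficients (assumed scheme).
            coeffs *= 1.0 + noise_scale * rng.standard_normal(coeffs.shape)
            return idctn(coeffs, norm="ortho")          # back to image space

        # Example on a toy 64x64 patch.
        augmented = dct_augment(np.random.rand(64, 64).astype(np.float32))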

    Longitudinal Quantitative Assessment of COVID-19 Infection Progression from Chest CTs

    Chest computed tomography (CT) has played an essential diagnostic role in assessing patients with COVID-19 by showing disease-specific image features such as ground-glass opacity and consolidation. Image segmentation methods have proven to help quantify the disease burden and even help predict the outcome. The availability of longitudinal CT series may also enable an efficient and effective method to reliably assess the progression of COVID-19, monitor the healing process, and evaluate the response to different therapeutic strategies. In this paper, we propose a new framework to identify infection at the voxel level (identification of healthy lung, consolidation, and ground-glass opacity) and visualize the progression of COVID-19 using sequential low-dose non-contrast CT scans. In particular, we devise a longitudinal segmentation network that utilizes the reference scan information to improve the performance of disease identification. Experimental results on a clinical longitudinal dataset collected at our institution show the effectiveness of the proposed method compared to static deep neural networks for disease quantification.
    Comment: MICCAI 202
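    A minimal sketch of the general idea, assuming PyTorch and a toy architecture of my own choosing (not the authors' network): the reference and follow-up volumes are concatenated as two input channels and mapped to per-voxel class logits.

        import torch
        import torch.nn as nn

        class LongitudinalSegNet(nn.Module):
            """Toy longitudinal segmentation model; 4 classes: background, healthy lung, GGO, consolidation."""
            def __init__(self, num_classes=4):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv3d(32, num_classes, 1),
                )

            def forward(self, current, reference):
                x = torch.cat([current, reference], dim=1)   # (B, 2, D, H, W)
                return self.net(x)                           # (B, num_classes, D, H, W) logits

        model = LongitudinalSegNet()
        logits = model(torch.randn(1, 1, 32, 64, 64), torch.randn(1, 1, 32, 64, 64))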

    A Bayesian Nonparametric model for textural pattern heterogeneity

    Cancer radiomics is an emerging discipline promising to elucidate lesion phenotypes and tumor heterogeneity through patterns of enhancement, texture, morphology, and shape. The prevailing technique for image texture analysis relies on the construction and synthesis of Gray-Level Co-occurrence Matrices (GLCM). Current practice reduces the structured count data of a GLCM to reductive and redundant summary statistics, for which analysis requires variable selection and multiple comparisons for each application, thus limiting reproducibility. In this article, we develop a Bayesian multivariate probabilistic framework for the analysis and unsupervised clustering of a sample of GLCM objects. By appropriately accounting for skewness and zero-inflation of the observed counts and simultaneously adjusting for existing spatial autocorrelation at nearby cells, the methodology facilitates estimation of texture pattern distributions within the GLCM lattice itself. The techniques are applied to cluster images of adrenal lesions obtained from CT scans with and without administration of contrast. We further assess whether the resultant subtypes are clinically relevant by investigating their correspondence with pathological diagnoses. Additionally, we use simulation studies to compare performance with a class of machine-learning approaches currently used in cancer radiomics.
    Comment: 45 pages, 7 figures, 1 Table
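    For readers unfamiliar with GLCMs, the snippet below shows how the structured count data referenced above can be computed with scikit-image; the toy patch, the 8-level gray-scale quantization, and the chosen distances and angles are illustrative assumptions.

        import numpy as np
        from skimage.feature import graycomatrix   # named greycomatrix in older scikit-image

        # Toy patch quantized to 8 gray levels (levels must cover all pixel values).
        patch = (np.random.rand(64, 64) * 8).astype(np.uint8)
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=8, symmetric=True, normed=False)
        # Shape: (levels, levels, n_distances, n_angles) -> the count lattice
        # that the Bayesian model clusters directly, without summary statistics.
        print(glcm.shape)   # (8, 8, 1, 2)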

    Exploring variability in medical imaging

    Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, the application of these methods in the medical imaging pipeline remains a very challenging task. One of the main reasons for this is the amount of variability that is encountered and encapsulated in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages of modern medical image processing pipelines. The variability of human anatomy makes it virtually impossible to build large datasets for each disease with labels and annotations for fully supervised machine learning. An efficient way to cope with this is to learn only from normal samples, since such data is much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work. We present a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models that are trained only on normal/healthy subjects. However, despite the significant improvement in automatic abnormality detection systems, clinical routine continues to rely exclusively on the contribution of overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails uncertainty which is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it remains an open issue to what extent this kind of variability and the resulting uncertainty are introduced during the training of a model and how they affect the final performance of the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task. A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability, and segmentation task performance in lung CT scan images. Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey includes both conventional pattern recognition methods and deep learning based methods, and is one of the first literature surveys attempted in this specific research area.
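    A conceptual sketch of normative learning for anomaly detection, assuming PyTorch and a deliberately tiny autoencoder rather than the generative models used in the thesis: the model is trained on healthy images only, and the per-image reconstruction error then serves as an anomaly score.

        import torch
        import torch.nn as nn

        class TinyAutoencoder(nn.Module):
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(inplace=True),
                    nn.ConvTranspose2d(8, 1, 2, stride=2),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        def anomaly_score(model, images):
            """Mean squared reconstruction error per image; higher means more anomalous."""
            with torch.no_grad():
                recon = model(images)
            return ((images - recon) ** 2).mean(dim=(1, 2, 3))

        model = TinyAutoencoder()                        # would be trained on healthy scans only
        scores = anomaly_score(model, torch.randn(4, 1, 64, 64))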

    Machine Learning/Deep Learning in Medical Image Processing

    Many recent studies on medical image processing have involved the use of machine learning (ML) and deep learning (DL). This special issue, “Machine Learning/Deep Learning in Medical Image Processing”, was launched to provide an opportunity for researchers in the area of medical image processing to highlight recent developments made in their fields with ML/DL. Seven excellent papers that cover a wide variety of medical/clinical aspects have been selected for this special issue.

    Leveraging Supervoxels for Medical Image Volume Segmentation With Limited Supervision

    The majority of existing methods for machine learning-based medical image segmentation are supervised models that require large amounts of fully annotated images. Such datasets are typically not available in the medical domain and are difficult and expensive to generate. Widespread use of machine learning-based models for medical image segmentation therefore requires the development of data-efficient algorithms that need only limited supervision. To address these challenges, this thesis presents new machine learning methodology for unsupervised lung tumor segmentation and few-shot learning-based organ segmentation. When working in the limited-supervision paradigm, exploiting the available information in the data is key. The methodology developed in this thesis leverages automatically generated supervoxels in various ways to exploit the structural information in the images. The work on unsupervised tumor segmentation explores the opportunity of performing clustering at a population level in order to provide the algorithm with as much information as possible. To facilitate this population-level, across-patient clustering, supervoxel representations are exploited to reduce the number of samples, and thereby the computational cost. In the work on few-shot learning-based organ segmentation, supervoxels are used to generate pseudo-labels for self-supervised training. Further, to obtain a model that is robust to the typically large and inhomogeneous background class, a novel anomaly detection-inspired classifier is proposed to ease the modelling of the background. To encourage the resulting segmentation maps to respect edges defined in the input space, a supervoxel-informed feature refinement module is proposed to refine the embedded feature vectors during inference. Finally, to improve trustworthiness, an architecture-agnostic mechanism to estimate model uncertainty in few-shot segmentation is developed. Results demonstrate that supervoxels are versatile tools for leveraging structural information in medical data when training segmentation models with limited supervision.
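    As a small illustration of how supervoxels can be generated and turned into pseudo-labels (an assumed pipeline using scikit-image's SLIC, not necessarily the thesis's exact tooling):

        import numpy as np
        from skimage.segmentation import slic

        volume = np.random.rand(32, 64, 64).astype(np.float32)      # toy single-channel volume

        # channel_axis=None makes SLIC treat the array as a 3D grayscale volume.
        supervoxels = slic(volume, n_segments=200, compactness=0.1, channel_axis=None)

        # Each supervoxel can serve as a pseudo-label region for self-supervised training,
        # e.g. sampling one supervoxel and using its mask as a binary foreground target.
        chosen = np.random.default_rng().integers(1, supervoxels.max() + 1)
        pseudo_mask = (supervoxels == chosen).astype(np.uint8)
        print(supervoxels.max(), pseudo_mask.sum())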

    Implementation of a 3D CNN for COPD classification

    According to predictions by the World Health Organization (WHO), by around 2030 Chronic Obstructive Pulmonary Disease (COPD) will become the third leading cause of death worldwide. COPD is a condition that affects the respiratory tract and lungs. Currently it is considered chronic and incurable, but it is a treatable and preventable disease. Up to now, diagnostic tests used to detect COPD have been based on spirometry. Despite indicating the degree of airflow obstruction in the lungs, this test is often not very reliable. For this reason, techniques based on Deep Learning algorithms are increasingly being used for more accurate classification of this pathology, based on tomographic images of COPD patients. Three-dimensional Convolutional Neural Networks (3D-CNN) are an example of such techniques. Based on the data and images obtained in the observational ECLIPSE study, provided by the research team at BRGE of ISGlobal, a 3D-CNN is implemented for the classification of patients at risk of COPD. This work aims to review the current state of research in this field and proposes improvements for optimizing and reducing the computational cost of a 3D-CNN for this specific case study.
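    A minimal 3D-CNN classifier sketch in PyTorch, offered as an assumed baseline rather than the architecture implemented in this work; the input size, channel counts, and binary COPD-risk output are illustrative.

        import torch
        import torch.nn as nn

        class Simple3DCNN(nn.Module):
            def __init__(self, num_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
                    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
                    nn.AdaptiveAvgPool3d(1),          # global pooling keeps the classifier head small
                )
                self.classifier = nn.Linear(16, num_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        # Example: one downsampled CT volume of size 64x128x128.
        model = Simple3DCNN()
        logits = model(torch.randn(1, 1, 64, 128, 128))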