25 research outputs found

    Deep Learning Approaches Applied to Image Classification of Renal Tumors: A Systematic Review

    Get PDF
    Renal cancer is among the ten most common cancers, affecting 65,000 new patients a year. Deep learning (DL) methods are now effective at predicting pathologies and classifying tumors, as well as at extracting high-performance features and handling segmentation tasks. This review focuses on studies applying DL techniques to the detection or segmentation of renal tumors in patients. The bibliographic search identified a total of 33 records in Scopus, PubMed and Web of Science. The systematic review gives a detailed description of the research objectives, the types of images used for analysis, the data sets used, whether each database is public or private, and the number of patients involved in the studies. The first paper applying DL to renal tumors appeared in 2019, which is relatively recent compared to other tumor types. Public collection and sharing of data sets are of utmost importance for advancing research in this field, as many studies rely on private databases. Future research promises benefits such as sparing patients unnecessary incisions and providing more accurate diagnoses. As research in this field grows, the amount of open data is expected to increase.
    Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This article is based upon work from COST Action HARMONISATION (CA20122). This research has been partially funded by the Spanish Government through project PID2021-127275OB-I00, FEDER "Una manera de hacer Europa".

    Automatic left ventricle volume calculation with explainability through a deep learning weak-supervision methodology

    Full text link
    [EN] Background and objective: Magnetic resonance imaging is the most reliable imaging technique to assess the heart. More specifically, there is great value in the analysis of the left ventricle, as the main pathologies directly affect this region. To characterize the left ventricle, it is necessary to extract its volume. In this work we present a neural network architecture that is capable of directly estimating the left ventricle volume in short-axis cine magnetic resonance imaging at the end-diastolic frame, and of providing a segmentation of the region on which the volume calculation is based, thus offering explainability for the estimated value. Methods: The network was designed to directly target the volumes to estimate, requiring no labeled segmentation of the images. It was based on a 3D U-Net with extra layers defined in a scanning module that learned features such as the circularity of the objects and the volumes to estimate in a weakly supervised manner. The only targets defined were the left ventricle volumes and the circularity of the detected object, measured through an estimate of the value of pi derived from its shape. We had access to 397 cases corresponding to 397 different subjects and randomly selected 98 cases as the test set. Results: The results show a good match between the real and estimated volumes in the test set, with a mean relative error of 8%, a mean absolute error of 9.12 ml, and a Pearson correlation coefficient of 0.95. The segmentations derived by the network achieved Dice coefficients with a mean value of 0.79. Conclusions: The proposed method obtains the left ventricle volume biomarker at end-diastole and offers an explanation of how it obtains the result in the form of a segmentation mask, without needing segmentation labels to train the algorithm. This makes it a potentially more trustworthy method for clinicians and an easier way to train neural networks when segmentation labels are not readily available.
    The authors acknowledge financial support from the Conselleria d'Educació, Investigació, Cultura i Esport, Generalitat Valenciana (grants AEST/2019/037 and AEST/2020/029), from the Agència Valenciana de la Innovació, Generalitat Valenciana (ref. INNCAD00/19/085), and from the Centro para el Desarrollo Tecnológico Industrial (Programa Eurostars2, actuación Interempresas Internacional), Spanish Ministerio de Ciencia, Innovación y Universidades (ref. CIIP-20192020).
    Pérez-Pelegrí, M.; Monmeneu, J.V.; López-Lereu, M.P.; Pérez-Pelegrí, L.; Maceira, A.M.; Bodi, V.; Moratal, D. (2021). Automatic left ventricle volume calculation with explainability through a deep learning weak-supervision methodology. Computer Methods and Programs in Biomedicine, 208:1-8. https://doi.org/10.1016/j.cmpb.2021.106275
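    A minimal sketch of the weak-supervision idea described in this abstract, assuming a network with a sigmoid output over a 3D volume: the only supervision signals are the target volume and a circularity penalty built from a per-slice estimate of pi (for a circle, area A and perimeter P satisfy A = P^2 / (4*pi)). The loss weighting, the soft perimeter proxy, and the slice-validity threshold below are illustrative assumptions, not the authors' exact scanning module.

```python
import torch
import torch.nn as nn


def weak_supervision_loss(soft_mask, target_volume_ml, voxel_volume_ml):
    """soft_mask: (B, 1, D, H, W) sigmoid output of a 3D U-Net-like network.
    target_volume_ml: (B,) ground-truth end-diastolic LV volumes in ml.
    voxel_volume_ml: physical volume of a single voxel in ml (scalar).
    """
    # Volume estimate: integrate the soft mask over all voxels.
    est_volume = soft_mask.sum(dim=(1, 2, 3, 4)) * voxel_volume_ml
    volume_loss = nn.functional.l1_loss(est_volume, target_volume_ml)

    # Circularity: a circle satisfies area = perimeter^2 / (4 * pi), so a
    # per-slice estimate perimeter^2 / (4 * area) should approach pi.
    area = soft_mask.sum(dim=(3, 4))                               # (B, 1, D)
    grad_h = (soft_mask[..., 1:, :] - soft_mask[..., :-1, :]).abs()
    grad_w = (soft_mask[..., :, 1:] - soft_mask[..., :, :-1]).abs()
    perimeter = grad_h.sum(dim=(3, 4)) + grad_w.sum(dim=(3, 4))    # soft proxy
    pi_est = perimeter.pow(2) / (4.0 * area.clamp(min=1e-6))

    valid = area > 1.0            # only slices where the object is present
    circularity_loss = ((pi_est - torch.pi).abs() * valid).sum() \
        / valid.sum().clamp(min=1)

    return volume_loss + 0.1 * circularity_loss   # 0.1 weight is a guess
```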

    Semi-Weakly Supervised Learning for Label-efficient Semantic Segmentation in Expert-driven Domains

    Get PDF
    With the help of deep learning, semantic segmentation systems have achieved impressive results, but they rest on supervised learning, which is limited by the availability of costly, pixel-wise annotated images. When the performance of these segmentation systems is examined in contexts where annotations are scarce, they fall short of the high expectations raised by their performance in annotation-rich scenarios. This dilemma weighs particularly heavily when the annotations must be produced by extensively trained personnel, e.g. physicians, process experts, or scientists. To bring well-performing segmentation models into these annotation-scarce, expert-driven domains, new solutions are needed. To this end, we first investigate how poorly current segmentation models cope with extremely annotation-scarce scenarios in expert-driven imaging domains. This leads directly to the question of whether the costly pixel-wise annotation with which segmentation models are usually trained can be bypassed entirely, or whether, conversely, it can serve as a cost-effective kick-start for segmentation when used sparingly. We then address the question of whether different types of annotations, weak and pixel-wise annotations with differing costs, can be used jointly to make the annotation process more flexible. Expert-driven domains often suffer not only from an annotation shortage but also exhibit entirely different image characteristics, for example volumetric image data. The transition from 2D to 3D semantic segmentation leads to voxel-wise annotation processes, multiplying the time required for annotation by the additional dimension. To arrive at more manageable annotation, we investigate training strategies for segmentation models that require only cheaper, partial annotations or raw, unannotated volumes. This change in the type of supervision during training makes the application of volume segmentation in expert-driven domains more realistic, as annotation costs are reduced drastically and annotators are freed from annotating whole volumes, which by their nature would also contain many visually redundant regions. Finally, we ask whether it is possible to free the annotation experts from the strict requirement of delivering a single, specific annotation type, and to develop a training strategy that works with a broad variety of semantic information. Such a method was developed, and our extensive experimental evaluation reveals interesting properties of different annotation-type mixes with respect to their segmentation performance. Our investigations led to new research directions in semi-weakly supervised segmentation, to novel, more annotation-efficient methods and training strategies, and to experimental insights for improving annotation processes by making them annotation-efficient, expert-centered, and flexible.
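    The thesis abstract does not spell out a concrete objective, but the kind of mixed supervision it discusses admits a simple illustration: dense cross-entropy on the few pixel-annotated images combined with an image-level ("weak") classification loss on every image. The max-pooling aggregation and equal loss weighting below are assumptions for illustration only, not the thesis's actual method.

```python
import torch
import torch.nn.functional as F


def semi_weak_loss(logits, pixel_labels, image_labels, has_dense):
    """logits: (B, C, H, W) segmentation logits.
    pixel_labels: (B, H, W) per-pixel class indices (only meaningful
        where has_dense is True).
    image_labels: (B, C) multi-hot image-level ("weak") labels, float.
    has_dense: (B,) bool, True for the few pixel-annotated samples.
    """
    # Dense term: ordinary cross-entropy, but only on the samples that
    # actually carry pixel-wise annotations.
    if has_dense.any():
        dense = F.cross_entropy(logits[has_dense], pixel_labels[has_dense])
    else:
        dense = logits.new_zeros(())

    # Weak term: global max-pooling turns the segmenter into an image
    # classifier, which every weakly labeled sample can supervise.
    class_scores = logits.amax(dim=(2, 3))                 # (B, C)
    weak = F.binary_cross_entropy_with_logits(class_scores, image_labels)

    return dense + weak    # equal weighting is an illustrative choice
```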

    Label Efficient Deep Learning in Medical Imaging

    Get PDF
    Recent state-of-the-art deep learning frameworks require large, fully annotated training datasets that are, depending on the objective, time-consuming to generate. While in most fields these labelling tasks can be massively parallelized or even outsourced, this is not the case for medical images. Usually, only a highly trained expert is able to generate these datasets. However, since additional manual annotation, especially for the purpose of segmentation or tracking, is typically not part of a radiologist's workflow, large and fully annotated datasets are rare and scarce. In this context, a variety of frameworks are proposed in this work to solve the problems that arise from the lack of annotated training data across different medical imaging tasks and modalities. The first contribution of this thesis was to investigate weakly supervised learning on PET/CT data for the task of lesion segmentation. Using only class labels (tumor vs. no tumor), a classifier was first trained and subsequently used to generate Class Activation Maps highlighting regions with lesions. Based on these region proposals, final tumor segmentation could be performed with high accuracy in clinically relevant metrics. This drastically simplifies the process of training data generation, as only class labels have to be assigned to each slice of a scan instead of a full pixel-wise segmentation. To further reduce the time required to prepare training data, two self-supervised methods were investigated for the tasks of anatomical tissue segmentation and landmark detection. To this end, as a second contribution, a state-of-the-art tracking framework based on contrastive random walks was transferred, adapted and extended to the medical imaging domain. As contrastive learning often lacks real-time capability, a self-supervised template matching network was developed to address the task of real-time anatomical tissue tracking, yielding the third contribution of this work. Both methods have in common that the object or region of interest is defined only at inference time, reducing the number of required labels to as few as one and allowing adaptation to different tasks without re-training or access to the original training data. Despite the limited amount of labelled data, good results could be achieved both for tracking of organs across subjects and for tissue tracking within time series. State-of-the-art self-supervised learning in medical imaging is usually performed on 2D slices due to the lack of training data and limited computational resources. To exploit the three-dimensional structure of this type of data, self-supervised contrastive learning was performed on entire volumes using over 40,000 whole-body MRI scans, forming the fourth contribution. Thanks to this pre-training, a large number of downstream tasks could be successfully addressed using only limited labelled data. Furthermore, the learned representations allow the entire dataset to be visualized in a two-dimensional view. To encourage research in the field of automated lesion segmentation in PET/CT image data, the autoPET challenge was organized, which represents the fifth contribution.
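    The Class Activation Map step described above follows the standard recipe (Zhou et al., 2016): with a global-average-pooling classifier, the map for a class is the channel-wise weighted sum of the last convolutional feature maps. A minimal sketch, with the upsampling and thresholding details left as assumptions:

```python
import torch


def class_activation_map(features, fc_weight, class_idx):
    """features: (B, K, H, W) last conv feature maps, i.e. the tensor that
        global average pooling reduces before the final linear layer.
    fc_weight: (num_classes, K) weight matrix of that final linear layer.
    class_idx: index of the class of interest (e.g. "tumor").
    """
    w = fc_weight[class_idx].view(1, -1, 1, 1)        # (1, K, 1, 1)
    cam = (features * w).sum(dim=1)                   # (B, H, W)
    cam = torch.relu(cam)                             # keep positive evidence
    cam = cam / cam.amax(dim=(1, 2), keepdim=True).clamp(min=1e-6)
    return cam  # upsample to input size and threshold for region proposals
```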

    Reliability-based cleaning of noisy training labels with inductive conformal prediction in multi-modal biomedical data mining

    Full text link
    Accurately labeling biomedical data presents a challenge, and traditional semi-supervised learning methods often under-utilize the available unlabeled data. To address this, we propose a novel reliability-based training-data cleaning method employing inductive conformal prediction (ICP). The method capitalizes on a small set of accurately labeled training data and leverages ICP-calculated reliability metrics to rectify mislabeled data and outliers within vast quantities of noisy training data. Its efficacy is validated across three classification tasks in distinct modalities: filtering drug-induced liver injury (DILI) literature using titles and abstracts, predicting ICU admission of COVID-19 patients from CT radiomics and electronic health records, and subtyping breast cancer using RNA-sequencing data. Varying levels of noise were introduced into the training labels through label permutation. Results show significant enhancements in classification performance: accuracy improvements in 86 out of 96 DILI experiments (up to 11.4%), AUROC and AUPRC improvements in all 48 COVID-19 experiments (up to 23.8% and 69.8%), and accuracy and macro-average F1-score improvements in 47 out of 48 RNA-sequencing experiments (up to 74.6% and 89.0%). Our method can substantially boost classification performance in multi-modal biomedical machine learning tasks, and it accomplishes this without requiring an excessive volume of meticulously curated training data.
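    A minimal sketch of ICP-based label cleaning, assuming a scikit-learn-style classifier fitted on a separate proper training set and using one minus the predicted probability of a label as the nonconformity score. The significance level `eps` and the relabel-or-drop rule are illustrative assumptions, not necessarily the paper's exact reliability metric.

```python
import numpy as np


def icp_clean_labels(clf, X_calib, y_calib, X_noisy, y_noisy, eps=0.05):
    """clf: classifier already fitted on a separate proper training set.
    (X_calib, y_calib): small, accurately labeled calibration set.
    (X_noisy, y_noisy): large training set with potentially noisy labels.
    Returns cleaned labels and a boolean keep-mask (False = likely outlier).
    """
    # Nonconformity score: 1 - predicted probability of the assigned class.
    calib_proba = clf.predict_proba(X_calib)
    calib_scores = 1.0 - calib_proba[np.arange(len(y_calib)), y_calib]

    noisy_proba = clf.predict_proba(X_noisy)
    cleaned = np.array(y_noisy, copy=True)
    keep = np.ones(len(y_noisy), dtype=bool)
    n_cal = len(calib_scores)

    for i, label in enumerate(y_noisy):
        scores = 1.0 - noisy_proba[i]               # one score per class
        # Conformal p-value per candidate label: how typical this sample
        # would be among the calibration set if it carried that label.
        pvals = (np.sum(calib_scores[None, :] >= scores[:, None], axis=1)
                 + 1) / (n_cal + 1)
        if pvals[label] < eps:                      # given label unreliable
            if pvals.max() >= eps:
                cleaned[i] = int(pvals.argmax())    # rectify the label
            else:
                keep[i] = False                     # likely outlier: drop
    return cleaned, keep
```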

    Vascular Implications of COVID-19: Role of Radiological Imaging, Artificial Intelligence, and Tissue Characterization: A Special Report

    Get PDF
    The SARS-CoV-2 virus has caused a pandemic, infecting nearly 80 million people worldwide, with mortality exceeding six million. The average survival span is just 14 days from the time the symptoms become aggressive. The present study delineates the deep-driven vascular damage in the pulmonary, renal, coronary, and carotid vessels due to SARS-CoV-2. This special report addresses an important gap in the literature by (i) examining the pathophysiology of vascular damage and the role of medical imaging in visualizing the damage caused by SARS-CoV-2, and (ii) furthering the understanding of COVID-19 severity using artificial intelligence (AI)-based tissue characterization (TC). PRISMA was used to select 296 studies for AI-based TC. Radiological imaging techniques such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound were selected for imaging the vasculature affected by COVID-19. Four kinds of hypotheses are presented for showing vascular damage in radiological images due to COVID-19. Three kinds of AI models, namely machine learning, deep learning, and transfer learning, are used for TC. Further, the study presents recommendations for improving AI-based architectures for vascular studies. We conclude that the process of vascular damage due to COVID-19 is similar across vessel types, even though it results in multi-organ dysfunction. Although the mortality rate is ~2% of those infected, the long-term effects of COVID-19 need monitoring to avoid further deaths. AI is penetrating the health care industry at warp speed, and we expect it to play an emerging role in patient care, reducing mortality and morbidity.

    3D shape instantiation for intra-operative navigation from a single 2D projection

    Get PDF
    Unlike traditional open surgery, where surgeons can see the operation area clearly, in robot-assisted Minimally Invasive Surgery (MIS) a surgeon's view of the region of interest is usually limited. Currently, 2D images from fluoroscopy, Magnetic Resonance Imaging (MRI), endoscopy or ultrasound are used for intra-operative guidance, as real-time 3D volumetric acquisition is not always possible due to acquisition speed or exposure constraints. 3D reconstruction, however, is key to navigation in complex in vivo geometries and can help resolve this issue. Novel 3D shape instantiation schemes are developed in this thesis, which can reconstruct the high-resolution 3D shape of a target from limited 2D views, especially a single 2D projection or slice. To achieve a complete and automatic 3D shape instantiation pipeline, segmentation schemes based on deep learning are also investigated. These include normalization schemes for training U-Nets and the network architecture design of Atrous Convolutional Neural Networks (ACNNs). For U-Net normalization, four popular normalization methods are reviewed, then Instance-Layer Normalization (ILN) is proposed: a sigmoid function linearly weights the feature map after instance normalization and layer normalization, and group normalization is cascaded after the weighted feature map (a sketch follows at the end of this abstract). Detailed validation results demonstrate the potential practical advantages of the proposed ILN for effective and robust segmentation of different anatomies. For network architecture design in training Deep Convolutional Neural Networks (DCNNs), the newly proposed ACNN is compared to the traditional U-Net, where max-pooling and deconvolutional layers are essential. Only convolutional layers are used in the proposed ACNN, with different atrous rates, and the method has been shown to provide a fully-covered receptive field with a minimum number of atrous convolutional layers. ACNN enhances the robustness and generalizability of the analysis scheme by cascading multiple atrous blocks. Validation results show that the proposed method achieves results comparable to the U-Net in medical image segmentation while reducing the number of trainable parameters, thus improving convergence and real-time instantiation speed. For 3D shape instantiation of soft, deforming organs during MIS, Sparse Principal Component Analysis (SPCA) has been used to analyse a 3D Statistical Shape Model (SSM) and to determine the most informative scan plane. Synchronized 2D images are then scanned at the most informative scan plane and expressed in a 2D SSM. Kernel Partial Least Squares Regression (KPLSR) has been applied to learn the relationship between the 2D and 3D SSMs. The KPLSR-learned model developed in this thesis is able to predict the intra-operative 3D target shape from a single 2D projection or slice, thus permitting real-time 3D navigation. Validation results show the intrinsic accuracy achieved and the potential clinical value of the technique. The proposed 3D shape instantiation scheme is further applied to intra-operative stent-graft deployment for the robot-assisted treatment of aortic aneurysms. Mathematical modelling is first used to simulate the stent-graft characteristics. This is followed by the Robust Perspective-n-Point (RPnP) method to instantiate the 3D pose of the graft's fiducial markers. Here, an Equally-weighted Focal U-Net is proposed, combining a cross-entropy loss with an additional focal loss. Detailed validation has been performed on patient-specific stent grafts with an accuracy of 1-3 mm. Finally, the relative merits and potential pitfalls of all the methods developed in this thesis are discussed, followed by potential future research directions and additional challenges that need to be tackled.
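    The Instance-Layer Normalization idea admits a compact sketch: instance- and layer-normalized feature maps are blended with a learnable sigmoid weight, and group normalization is cascaded on the result. The parameterization below (a single blend logit, 8 groups, and num_channels assumed divisible by the group count) follows only the textual description above and may differ from the thesis's exact layer.

```python
import torch
import torch.nn as nn


class InstanceLayerNorm2d(nn.Module):
    """Sigmoid-weighted blend of instance and layer normalization,
    followed by cascaded group normalization (sketch of ILN)."""

    def __init__(self, num_channels, num_groups=8, eps=1e-5):
        super().__init__()
        self.inst = nn.InstanceNorm2d(num_channels, eps=eps, affine=False)
        self.group = nn.GroupNorm(num_groups, num_channels, eps=eps)
        self.rho = nn.Parameter(torch.zeros(1))   # learnable blend logit
        self.eps = eps

    def forward(self, x):                          # x: (B, C, H, W)
        # Layer normalization: statistics over (C, H, W) per sample.
        mean = x.mean(dim=(1, 2, 3), keepdim=True)
        var = x.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        x_ln = (x - mean) / torch.sqrt(var + self.eps)
        x_in = self.inst(x)                        # instance normalization
        w = torch.sigmoid(self.rho)                # sigmoid weighting
        return self.group(w * x_in + (1 - w) * x_ln)
```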