    Few Shot Learning in Histopathological Images: Reducing the Need of Labeled Data on Biological Datasets

    Although deep learning pathology diagnostic algorithms are achieving results comparable to those of human experts in a wide variety of tasks, they still require huge amounts of well-annotated data for training. Generating such extensive and well-labelled datasets is time-consuming and not feasible for certain tasks, so most of the available medical datasets contain too few images for training. In this work we validate that few-shot learning techniques can transfer knowledge from a well-defined source domain of colon tissue into a more generic domain composed of colon, lung and breast tissue using very few training images. Our results show that our few-shot approach obtains a balanced accuracy (BAC) of 90% with just 60 training images, even for the lung and breast tissues that were not present in the training set. This outperforms the fine-tuning transfer learning approach, which obtains 73% BAC with 60 images and requires 600 images to reach 81% BAC. This study has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 732111 (PICCOLO project).
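    The abstract does not detail the implementation, but the few-shot recipe it describes (an embedding learned on the colon source domain, then classification of new tissue types from a handful of labelled examples) can be sketched as a nearest-prototype classifier over precomputed embeddings. Everything below, including the function names and array shapes, is an illustrative assumption rather than the authors' code.

    ```python
    # Minimal sketch: few-shot classification with a nearest-centroid
    # ("prototype") rule over embeddings from a source-domain network.
    import numpy as np

    def build_prototypes(support_embeddings, support_labels):
        """Average the few labelled support embeddings per class."""
        classes = np.unique(support_labels)
        protos = np.stack([
            support_embeddings[support_labels == c].mean(axis=0)
            for c in classes
        ])
        return classes, protos

    def classify(query_embeddings, classes, protos):
        """Assign each query image to the class of its nearest prototype."""
        dists = np.linalg.norm(
            query_embeddings[:, None, :] - protos[None, :, :], axis=-1
        )
        return classes[dists.argmin(axis=1)]
    ```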

    Autofluorescence image reconstruction and virtual staining for in-vivo optical biopsying

    Modern photonic technologies are emerging that allow the acquisition of in-vivo endoscopic tissue imaging at a microscopic scale, with characteristics comparable to traditional histological slides and with a label-free modality. This raises the possibility of an ‘optical biopsy’ to aid clinical decision making. The approach faces barriers to incorporation into clinical practice, including the lack of existing images for training, clinicians’ unfamiliarity with the novel image domains, and the uncertainty of trusting ‘black-box’ machine-learned image analysis, where the decision making remains inscrutable. In this paper, we propose a new method to transform images from novel photonics techniques (e.g. autofluorescence microscopy) into already established domains such as Hematoxylin-Eosin (H-E) microscopy through virtual reconstruction and staining. We introduce three main innovations: 1) a transformation method based on a Siamese structure that simultaneously learns the direct and inverse transformations, ensuring back-transformation quality of the transformed data; 2) an embedding loss term that ensures similarity not only at the pixel level but also at the level of the image embedding description, which drastically reduces the perception-distortion trade-off problem of common domain transfer based on generative adversarial networks (these virtually stained images can serve as reference standard images for comparison with the already known H-E images); and 3) an uncertainty margin concept that allows the network to measure its own confidence. We demonstrate that the reconstructed and virtually stained images can be used with previously studied classification models of H-E images that have been computationally degraded and de-stained. The three proposed methods can be seamlessly incorporated into any existing architecture. We obtained balanced accuracies of 0.95 and negative predictive values of 1.00 over the reconstructed and virtually stained image set on the detection of colorectal tumoral tissue. This is of great importance as it reduces the need for the extensive labeled training datasets that are normally not available in the early studies of a new imaging technology. The authors would like to thank all pathologists that generated the BIOPOOL dataset (FP7-ICT-296162) that has been used for this work, and especially M. Saiz, A. Gaafar, S. Fernandez, A. Saiz, E. de Miguel, B. Catón, J. J. Aguirre, R. Ruiz, Ma A. Viguri, and R. Rezola.
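    A minimal sketch of the first two innovations, assuming paired training samples and PyTorch-style modules: `G` (autofluorescence to virtual H-E), its inverse `F_inv` and the fixed embedding network `phi` are hypothetical stand-ins, and the uncertainty margin term is omitted.

    ```python
    # Sketch of the Siamese direct/inverse transformation and the
    # embedding loss term; adversarial and uncertainty terms omitted.
    import torch.nn.functional as F

    def virtual_staining_loss(G, F_inv, phi, x_auto, y_he):
        fake_he = G(x_auto)                        # direct transformation
        cycle = F.l1_loss(F_inv(fake_he), x_auto)  # back-transformation quality
        pixel = F.l1_loss(fake_he, y_he)           # pixel-level similarity
        emb = F.mse_loss(phi(fake_he), phi(y_he))  # embedding-level similarity
        return pixel + cycle + emb
    ```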

    Facial nerve palsy following parotid gland surgery: A machine learning prediction outcome approach

    Machine learning (ML)-based facial nerve injury (FNI) forecasting grounded on multicentric data has not been published to date. Three distinct ML models, random forest (RF), K-nearest neighbor (KNN), and artificial neural network (ANN), were evaluated for the prediction of FNI. A retrospective, longitudinal, multicentric study was performed, including patients who underwent parotid gland surgery for benign tumors at three different university hospitals. Seven hundred and thirty-six patients were included. The factors most strongly associated with an increased risk of FNI were as follows: (1) location, in the mid-portion of the gland, near to or above the main trunk of the facial nerve, and at the top part, over the frontal or the orbital branch of the facial nerve; (2) tumor volume in the anteroposterior axis; (3) the need to simultaneously dissect more than one level; and (4) the requirement of an extended resection compared to a less extended resection. By contrast, according to the ML analysis, the size of the tumor (>3 cm), as well as gender and age, were not determining factors for the risk of FNI. The findings of this research show that ML models such as RF and ANN can provide evidence-based predictions from multicentric data regarding the risk of FNI. With the advent of ML technology, the information given to patients about the potential risks of FNI before each procedure may be improved by incorporating clinical, radiological, histological, and/or cytological data.
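    As a sketch of the kind of pipeline the study describes, the snippet below trains one of the three evaluated models (a random forest) on tabular clinical features with scikit-learn; the feature matrix and outcome vector are toy stand-ins, not the study's variables.

    ```python
    # Illustrative random-forest pipeline for FNI prediction on
    # tabular clinical data (toy stand-in features and labels).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(736, 8))      # rows = patients, cols = encoded features
    y = rng.integers(0, 2, size=736)   # 1 = facial nerve injury

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    print(cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy").mean())
    ```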

    Constellation loss: Improving the efficiency of deep metric learning loss functions for the optimal embedding of histopathological images

    Deep learning diagnostic algorithms are achieving results comparable to those of human experts in a wide variety of tasks, but they still require huge amounts of well-annotated data for training, which is often not affordable. Metric learning techniques reduce the amount of annotated data required, enabling few-shot learning over deep learning architectures. Aims and Objectives: In this work, we analyze state-of-the-art loss functions such as triplet loss, contrastive loss, and multi-class N-pair loss for the visual embedding extraction of hematoxylin and eosin (H&E) microscopy images, and we propose a novel constellation loss function that takes advantage of the visual distances of the embeddings of the negative samples, performing a regularization that increases the quality of the extracted embeddings. Materials and Methods: To this end, we employed the public H&E imaging dataset from the University Medical Center Mannheim (Germany), which contains tissue samples from low-grade and high-grade primary tumors of digitalized colorectal cancer tissue slides. These samples are divided into eight different textures (1. tumour epithelium, 2. simple stroma, 3. complex stroma, 4. immune cells, 5. debris and mucus, 6. mucosal glands, 7. adipose tissue and 8. background). The dataset was divided randomly into train and test splits, and the training split was used to train a classifier to distinguish among the different textures with just 20 training images. The process was repeated 10 times for each loss function. Performance was compared both for cluster compactness and for classification accuracy on separating the aforementioned textures. Results: Our results show that the proposed loss function outperforms the other methods by obtaining more compact clusters (Davies-Bouldin: 1.41 ± 0.08, Silhouette: 0.37 ± 0.02) and better classification capabilities (accuracy: 85.0 ± 0.6) over H&E microscopy images. We demonstrate that the proposed constellation loss can be successfully used in the medical domain in situations of data scarcity. This work was partially supported by the PICCOLO project, which has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No. 732111.
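    For illustration, the snippet below contrasts a standard triplet loss with a constellation-style loss that aggregates several negatives per anchor, which is the mechanism the abstract describes. The exact formulation is defined in the paper, so this PyTorch sketch should be read as an assumed N-pair-like variant.

    ```python
    # Triplet loss vs. a loss that pools K negatives per anchor.
    import torch

    def triplet_loss(a, p, n, margin=0.2):
        # a, p, n: (B, D) anchor/positive/negative embeddings
        d_ap = (a - p).pow(2).sum(dim=1)
        d_an = (a - n).pow(2).sum(dim=1)
        return torch.relu(d_ap - d_an + margin).mean()

    def constellation_style_loss(a, p, negs):
        # negs: (B, K, D) -- K negatives per anchor instead of one
        s_ap = (a * p).sum(dim=1, keepdim=True)     # (B, 1) positive similarity
        s_an = torch.einsum("bd,bkd->bk", a, negs)  # (B, K) negative similarities
        return torch.log1p(torch.exp(s_an - s_ap).sum(dim=1)).mean()
    ```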

    Few-Shot Learning approach for plant disease classification using images taken in the field

    Prompt plant disease detection is critical to prevent outbreaks and to mitigate their effects on crops. The most accurate automatic algorithms for plant disease identification using plant field images are based on deep learning. These methods require the acquisition and annotation of large image datasets, which is frequently technically or economically unfeasible. This study introduces Few-Shot Learning (FSL) algorithms for plant leaf classification using deep learning with small datasets. For the study, 54,303 labeled images from the PlantVillage dataset were used, comprising 38 plant leaf and/or disease types (classes). The data was split into a source (32 classes) and a target (6 classes) domain. The Inception V3 network was fine-tuned in the source domain to learn general plant leaf characteristics. This knowledge was transferred to the target domain to learn new leaf types from few images. FSL using Siamese networks and Triplet loss was used and compared to classical fine-tuning transfer learning. The source and target domain sets were split into a training set (80%) to develop the methods and a test set (20%) to obtain the results. Algorithm performance was evaluated using the total accuracy, and the precision and recall per class. For the FSL experiments, the algorithms were trained with different numbers of images per class, and the experiments were repeated 20 times to statistically characterize the results. The accuracy in the source domain was 91.4% (32 classes), with a median precision/recall per class of 93.8%/92.6%. The accuracy in the target domain was 94.0% (6 classes) learning from all the training data, and the median accuracy (90% confidence interval) learning from 1 image per class was 55.5 (46.0–61.7)%. Median accuracies of 80.0 (76.4–86.5)% and 90.0 (86.1–94.2)% were reached for 15 and 80 images per class, yielding a reduction of 89.1% (80 images/class) in the training dataset with only a 4-point loss in accuracy. The FSL method outperformed classical fine-tuning transfer learning, which had accuracies of 18.0 (16.0–24.0)% and 72.0 (68.0–77.3)% for 1 and 80 images per class, respectively. It is possible to learn new plant leaf and disease types with very small datasets using deep learning Siamese networks with Triplet loss, achieving almost a 90% reduction in training data needs and outperforming classical learning techniques for small training sets. This research was funded by the ELKARTEK Research Programme of the Basque Government, Project #KK-2019/00068, and through grant IT-1229-1.
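    A minimal Keras sketch of the source-domain step described above: fine-tuning Inception V3 on the 32 source classes and reusing its pooled features as the embedding for the Siamese/triplet stage. The input size, head and training details are assumptions, not the paper's exact configuration.

    ```python
    # Fine-tune Inception V3 on the source domain, then expose the
    # pooled features as an embedding network for few-shot learning.
    import tensorflow as tf

    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet",
        input_shape=(299, 299, 3), pooling="avg",
    )
    head = tf.keras.layers.Dense(32, activation="softmax")(base.output)
    clf = tf.keras.Model(base.input, head)
    clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # ... clf.fit(...) on the 32 source classes, then reuse the backbone:
    embedding_model = tf.keras.Model(base.input, base.output)
    ```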

    Automatic Urticaria Activity Score: Deep Learning–Based Automatic Hive Counting for Urticaria Severity Assessment

    Chronic urticaria is a chronic skin disease that affects up to 1% of the general population worldwide, with chronic spontaneous urticaria accounting for more than two-thirds of all chronic urticaria cases. The Urticaria Activity Score (UAS) is a dynamic severity assessment tool that can be incorporated into daily clinical practice, as well as into clinical trials of treatments. The UAS helps in measuring disease severity and guiding the therapeutic strategy. However, UAS assessment is a time-consuming, manual process with high interobserver variability and high dependence on the observer. To tackle this issue, we introduce Automatic UAS, an automatic equivalent of the UAS that deploys a deep learning, lesion-detecting model called Legit.Health-UAS-HiveNet. Our results show that our model assesses the severity of chronic urticaria cases with a performance comparable to that of expert physicians. Furthermore, the model can be implemented in CADx systems to support doctors in their clinical practice and can act as a new end point in clinical trials. This demonstrates the usefulness of artificial intelligence in the practice of evidence-based medicine: models trained on the consensus of large clinical boards have the potential to empower clinicians in their daily practice and to replace current standard clinical end points in clinical trials.
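    The abstract does not spell out the scoring logic, but the standard UAS wheal sub-score bands (0: no hives; 1: fewer than 20; 2: 20 to 50; 3: more than 50 hives per 24 h) suggest how a detector's hive count could be mapped to a score. The function below is a sketch under that assumption, not the published Legit.Health-UAS-HiveNet pipeline.

    ```python
    # Map an automatic hive count to the UAS wheal sub-score,
    # assuming the standard 0 / <20 / 20-50 / >50 banding.
    def uas_wheal_score(hive_count: int) -> int:
        if hive_count == 0:
            return 0
        if hive_count < 20:
            return 1
        if hive_count <= 50:
            return 2
        return 3
    ```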

    Artificial Neural Network as a Tool to Predict Facial Nerve Palsy in Parotid Gland Surgery for Benign Tumors

    (1) Background: Despite the increasing use of intraoperative facial nerve monitoring during parotid gland surgery and improvements in preoperative radiological assessment, facial nerve injury (FNI) continues to be the most feared complication. (2) Methods: Patients who underwent parotid gland surgery for benign tumors between June 2010 and June 2019 were included in this study, which aimed to provide a proof of concept of the reliability of an artificial neural network (ANN) algorithm for the prediction of FNI, compared with multivariate linear regression (MLR). (3) Results: Concerning prediction accuracy and performance, the ANN achieved higher sensitivity (86.53% vs 46.23%), specificity (95.67% vs 92.59%), PPV (87.28% vs 66.94%), NPV (95.68% vs 83.37%), ROC-AUC (0.960 vs 0.769) and accuracy (93.42% vs 80.42%) than the MLR. (4) Conclusions: ANN prediction models can be useful for otolaryngologists, head and neck surgeons, and patients by providing evidence-based predictions about the risk of FNI. As an advantage, the possibility to develop a calculator using clinical, radiological, and histological or cytological information can improve our ability to counsel patients before surgery.
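    For reference, the reported discrimination metrics can be computed from a model's predictions as sketched below with scikit-learn; the variable names are illustrative and the inputs are not the study's data.

    ```python
    # Compute sensitivity, specificity, PPV, NPV, ROC-AUC and accuracy
    # from binary predictions and scores.
    from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

    def report(y_true, y_pred, y_score):
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn),
            "ROC-AUC": roc_auc_score(y_true, y_score),
            "accuracy": accuracy_score(y_true, y_pred),
        }
    ```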