
    Dental pathology detection in 3D cone-beam CT

    Cone-beam computed tomography (CBCT) is a valuable imaging method in dental diagnostics that provides information not available in traditional 2D imaging. However, interpretation of CBCT images is a time-consuming process that requires a physician to work with complicated software. In this work we propose an automated pipeline composed of several deep convolutional neural networks and algorithmic heuristics. Our task is two-fold: (a) find the location of each present tooth inside a 3D image volume, and (b) detect several common tooth conditions in each tooth. The proposed system achieves 96.3% accuracy in tooth localization and an average AUROC of 0.94 across 6 common tooth conditions.
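
    The pipeline is only described at a high level. The following is a minimal sketch, assuming hypothetical localizer_net and condition_net models (not the authors' architectures), of how such a two-stage "localize teeth, then classify conditions per tooth" system could be wired together:

        # Hedged sketch: a 3D localizer proposes per-tooth boxes, then a second
        # network scores each condition independently for every cropped tooth.
        # localizer_net and condition_net are hypothetical stand-ins.
        import numpy as np
        import torch

        def analyze_cbct(volume: np.ndarray, localizer_net, condition_net, conditions):
            """volume: 3D CBCT scan as a (D, H, W) float array."""
            x = torch.from_numpy(volume).float()[None, None]       # add batch and channel dims
            with torch.no_grad():
                boxes = localizer_net(x)                           # [(z0, y0, x0, z1, y1, x1), ...] per tooth
                results = []
                for z0, y0, x0, z1, y1, x1 in boxes:
                    crop = x[..., z0:z1, y0:y1, x0:x1]             # per-tooth sub-volume
                    probs = torch.sigmoid(condition_net(crop))[0]  # independent probability per condition
                    results.append({c: float(p) for c, p in zip(conditions, probs)})
            return boxes, results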

    Automated Diagnosis of Lymphoma with Digital Pathology Images Using Deep Learning

    Recent studies have shown promising results in using deep learning to detect malignancy in whole slide imaging. However, they were limited to predicting only a positive or negative finding for a specific neoplasm. We attempted to use deep learning with a convolutional neural network to build a lymphoma diagnostic model for four diagnostic categories: benign lymph node, diffuse large B cell lymphoma, Burkitt lymphoma, and small lymphocytic lymphoma. Our software was written in Python. We obtained digital whole slide images of hematoxylin and eosin stained slides of 128 cases, 32 cases for each diagnostic category. Four sets of 5 representative images, 40x40 pixels in dimension, were taken for each case. A total of 2,560 images were obtained, of which 1,856 were used for training, 464 for validation and 240 for testing. For each test set of 5 images, the predicted diagnosis was obtained by combining the predictions of the 5 images. The test results showed excellent diagnostic accuracy: 95% for image-by-image prediction and 100% for set-by-set prediction. This preliminary study provided a proof of concept for incorporating an automated lymphoma diagnostic screen into future pathology workflows to augment pathologists' productivity.
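
    The abstract does not specify how the five per-image predictions are combined into a set-level diagnosis; averaging the per-image class probabilities is one plausible reading, sketched below (class names abbreviated, not taken from the paper's code):

        # Minimal sketch: average the softmax outputs of the 5 images in a set
        # and report the most probable of the four diagnostic categories.
        import numpy as np

        CLASSES = ["benign lymph node", "DLBCL", "Burkitt lymphoma", "SLL"]

        def set_level_diagnosis(image_probs: np.ndarray) -> str:
            """image_probs: (5, 4) array of per-image class probabilities for one set."""
            mean_probs = image_probs.mean(axis=0)     # combine the five per-image distributions
            return CLASSES[int(mean_probs.argmax())]  # pick the most probable category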

    Exclusive Independent Probability Estimation using Deep 3D Fully Convolutional DenseNets: Application to IsoIntense Infant Brain MRI Segmentation

    The most recent fast and accurate image segmentation methods are built upon fully convolutional deep neural networks. In this paper, we propose new deep learning strategies for DenseNets to improve the segmentation of images with subtle differences in intensity values and features. We aim to segment brain tissue on infant brain MRI at about 6 months of age, where white matter and gray matter of the developing brain show similar T1 and T2 relaxation times and thus appear to have similar intensity values on both T1- and T2-weighted MRI scans. Brain tissue segmentation at this age is therefore very challenging. To this end, we propose an exclusive multi-label training strategy to segment the mutually exclusive brain tissues, with similarity loss functions that automatically balance the training based on class prevalence. Using our proposed training strategy based on similarity loss functions and patch prediction fusion, we decrease the number of parameters in the network and reduce the complexity of the training process by focusing attention on fewer tasks, while mitigating the effects of data imbalance between labels and inaccuracies near patch borders. By taking advantage of these strategies we were able to perform fast image segmentation (90 seconds per 3D volume) using a network with fewer parameters than many state-of-the-art networks, overcoming issues such as 3D vs. 2D training and large vs. small patch size selection, while achieving the top performance in segmenting brain tissue among all methods tested in the first and second round submissions of the isointense infant brain MRI segmentation (iSeg) challenge, according to the official challenge test results. Our proposed strategy improves the training process through balanced training and reduced complexity, while providing a trained model that works for any size of input image and is faster and more accurate than many state-of-the-art methods.
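
    The exact similarity loss is not given in the abstract; the sketch below shows one plausible form, a Dice-style similarity loss whose per-class weights are inversely proportional to class prevalence, which captures the general idea of prevalence-balanced training but is not necessarily the authors' exact formulation:

        # Hedged sketch of a prevalence-balanced, Dice-style similarity loss.
        import torch

        def balanced_dice_loss(probs, target_onehot, eps=1e-6):
            """probs, target_onehot: (B, C, D, H, W) tensors over C mutually exclusive tissue classes."""
            dims = (0, 2, 3, 4)
            intersection = (probs * target_onehot).sum(dims)
            cardinality = probs.sum(dims) + target_onehot.sum(dims)
            dice_per_class = (2 * intersection + eps) / (cardinality + eps)
            prevalence = target_onehot.sum(dims)           # voxels per class in the batch
            weights = 1.0 / (prevalence + eps)             # rare tissues get larger weights
            weights = weights / weights.sum()
            return 1.0 - (weights * dice_per_class).sum()  # lower is better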

    NSML: Meet the MLaaS platform with a real-world case study

    The boom of deep learning has induced many industries and academic groups to competitively introduce machine learning based approaches into their work. However, existing machine learning frameworks are insufficient to fully support collaboration and management of both data and models. We propose NSML, a machine learning as a service (MLaaS) platform, to meet these demands. NSML allows machine learning work to be easily launched on an NSML cluster and provides a collaborative environment that can support development at enterprise scale. Finally, NSML users can deploy their own commercial services with an NSML cluster. In addition, NSML furnishes convenient visualization tools that assist users in analyzing their work. To verify the usefulness and accessibility of NSML, we performed experiments with common examples. Furthermore, we examined the collaborative advantages of NSML through three competitions with real-world use cases.

    Computer-aided diagnosis in histopathological images of the endometrium using a convolutional neural network and attention mechanisms

    Uterine cancer, also known as endometrial cancer, can seriously affect the female reproductive organs, and histopathological image analysis is the gold standard for diagnosing endometrial cancer. However, due to their limited capability of modeling the complicated relationships between histopathological images and their interpretations, computer-aided diagnosis (CADx) approaches based on traditional machine learning algorithms often fail to achieve satisfactory results. In this study, we developed a CADx approach using a convolutional neural network (CNN) and attention mechanisms, called HIENet. Because HIENet uses attention mechanisms and feature map visualization techniques, it can provide pathologists with better interpretability of its diagnoses by highlighting the histopathological correlations between local (pixel-level) image features and morphological characteristics of endometrial tissue. In the ten-fold cross-validation process, HIENet achieved a 76.91 ± 1.17% (mean ± s.d.) classification accuracy for four classes of endometrial tissue, namely normal endometrium, endometrial polyp, endometrial hyperplasia, and endometrial adenocarcinoma. HIENet also achieved an area under the curve (AUC) of 0.9579 ± 0.0103 with 81.04 ± 3.87% sensitivity and 94.78 ± 0.87% specificity in a binary classification task that detected endometrioid adenocarcinoma (malignant). In the external validation process, HIENet achieved an 84.50% accuracy in the four-class classification task and an AUC of 0.9829 with 77.97% (95% CI, 65.27%-87.71%) sensitivity and 100% (95% CI, 97.42%-100.00%) specificity. In summary, the proposed CADx approach, HIENet, outperformed three human experts and four end-to-end CNN-based classifiers in overall classification performance on this small-scale dataset of 3,500 hematoxylin and eosin (H&E) stained images.
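
    As a rough illustration of how attention can provide pixel-level interpretability, the sketch below shows a generic spatial attention module over CNN feature maps whose attention map can be upsampled and overlaid on the H&E image; it is not HIENet's actual architecture:

        # Generic spatial attention: one score per location reweights the features
        # and doubles as a visualizable heat map.
        import torch
        import torch.nn as nn

        class SpatialAttention(nn.Module):
            def __init__(self, channels):
                super().__init__()
                self.score = nn.Conv2d(channels, 1, kernel_size=1)

            def forward(self, feats):                    # feats: (B, C, H, W)
                attn = torch.sigmoid(self.score(feats))  # (B, 1, H, W), values in [0, 1]
                attended = feats * attn                  # emphasize informative regions
                return attended, attn                    # attn can be overlaid on the input image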

    Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology

    Digital pathology is not only one of the most promising fields of diagnostic medicine, but at the same time a hot topic for fundamental research. Digital pathology is not just the transfer of histopathological slides into digital representations. The combination of different data sources (images, patient records, and *omics data) together with current advances in artificial intelligence/machine learning makes it possible to present novel information, which is not yet available or exploited in current medical settings, in a form that is accessible and quantifiable to a human expert. The grand goal is to reach a level of usable intelligence that understands the data in the context of an application task, thereby making machine decisions transparent, interpretable and explainable. The foundation of such an "augmented pathologist" needs an integrated approach: while machine learning algorithms require many thousands of training examples, a human expert is often confronted with only a few data points. Interestingly, humans can learn from such few examples and are able to instantly interpret complex patterns. Consequently, the grand goal is to combine the possibilities of artificial intelligence with human intelligence and to find a well-suited balance between them, enabling what neither could do on their own. This can raise the quality of education, diagnosis, prognosis and prediction of cancer and other diseases. In this paper we describe some (incomplete) research issues which we believe should be addressed in an integrated and concerted effort for paving the way towards the augmented pathologist.

    Automatic catheter detection in pediatric X-ray images using a scale-recurrent network and synthetic data

    Catheters are commonly inserted life-supporting devices. X-ray images are used to assess the position of a catheter immediately after placement, as serious complications can arise from malpositioned catheters. Previous computer vision approaches to detecting catheters on X-ray images either relied on low-level cues that are not sufficiently robust or were only capable of processing a limited number or type of catheters. With the resurgence of deep learning, supervised training approaches are beginning to show promising results. However, dense annotation maps are required, and the work of a human annotator is hard to scale. In this work, we propose a simple way of synthesizing catheters on X-ray images and a scale-recurrent network for catheter detection. By training on adult chest X-rays, the proposed network exhibits promising detection results on pediatric chest/abdomen X-rays in terms of both precision and recall.
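
    The synthesis procedure is only outlined in the abstract. One simple way to generate such training data is sketched below, under the assumption that a thin bright polyline through random control points is an acceptable catheter proxy; the authors' rendering details (curve model, width, intensity profile) may differ:

        # Hedged sketch: composite a thin bright curve onto an X-ray and return the
        # image together with a dense catheter mask usable as a training target.
        import numpy as np
        import cv2

        def add_synthetic_catheter(xray, n_ctrl=5, thickness=3, intensity=60):
            """xray: 2D uint8 X-ray; returns (augmented image, binary catheter mask)."""
            h, w = xray.shape
            ctrl_x = np.random.randint(0, w, n_ctrl).astype(np.float32)
            ctrl_y = np.random.randint(0, h, n_ctrl).astype(np.float32)
            t = np.linspace(0, 1, 200)
            knots = np.linspace(0, 1, n_ctrl)
            path = np.stack([np.interp(t, knots, ctrl_x), np.interp(t, knots, ctrl_y)], axis=1)
            mask = np.zeros_like(xray)
            cv2.polylines(mask, [path.astype(np.int32).reshape(-1, 1, 2)],
                          isClosed=False, color=255, thickness=thickness)
            out = np.clip(xray.astype(np.int16) + (mask > 0) * intensity, 0, 255).astype(np.uint8)
            return out, (mask > 0).astype(np.uint8)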

    Counting the uncountable: deep semantic density estimation from Space

    We propose a new method to count objects of specific categories that are significantly smaller than the ground sampling distance of a satellite image. This task is hard due to the cluttered nature of scenes where different object categories occur. Target objects can be partially occluded, vary in appearance within the same class, and may look similar to objects of other categories. Since traditional object detection is infeasible due to the small size of the objects with respect to the pixel size, we cast object counting as a density estimation problem. To distinguish objects of different classes, our approach combines density estimation with semantic segmentation in an end-to-end learnable convolutional neural network (CNN). Experiments show that deep semantic density estimation can robustly count objects of various classes in cluttered scenes. Experiments also suggest that we need specific CNN architectures for remote sensing instead of blindly applying existing ones from computer vision.
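
    The density-estimation view of counting can be made concrete: place a small Gaussian at each annotated object centre so that the map integrates to the object count, train the CNN to regress this map, and obtain a count by summing the prediction. The sketch below illustrates the target construction (the kernel width is illustrative, not the paper's choice):

        # Build a density target from point annotations; its sum equals the number
        # of annotated objects (up to small boundary effects).
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def density_target(points, shape, sigma=2.0):
            """points: (N, 2) array of (row, col) object centres; shape: (H, W)."""
            dmap = np.zeros(shape, dtype=np.float32)
            for r, c in points.astype(int):
                if 0 <= r < shape[0] and 0 <= c < shape[1]:
                    dmap[r, c] += 1.0
            return gaussian_filter(dmap, sigma)   # blurred map; its sum ~ object count

        # At test time: predicted_count = predicted_density_map.sum()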

    A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop

    The goal of machine learning is to automatically learn from data, extract knowledge and make decisions without any human intervention. Such automatic machine learning (aML) approaches show impressive success. Recent results even demonstrate, intriguingly, that deep learning applied to the automatic classification of skin lesions is on par with the performance of dermatologists and even outperforms the average. As human perception is inherently limited, such approaches can discover patterns, e.g. that two objects are similar, in arbitrarily high-dimensional spaces, which no human is able to do. Humans can deal with only limited amounts of data, whilst big data is beneficial for aML; however, in health informatics we are often confronted with a small number of data sets, where aML suffers from insufficient training samples and many problems are computationally hard. Here, interactive machine learning (iML) may be of help, where a human-in-the-loop contributes to reducing the complexity of NP-hard problems. A further motivation for iML is that standard black-box approaches lack transparency and hence do not foster trust and acceptance of ML among end-users. Rising legal and privacy concerns, e.g. with the new European General Data Protection Regulation, make black-box approaches difficult to use, because they are often unable to explain why a decision has been made. In this paper, we present some experiments to demonstrate the effectiveness of the human-in-the-loop approach, particularly in opening the black box into a glass box and thus enabling a human to interact directly with a learning algorithm. We selected the Ant Colony Optimization framework and applied it to the Traveling Salesman Problem, which is a good example due to its relevance for health informatics, e.g. for the study of protein folding. Fundamental ML research may also benefit from studies of how humans extract so much from so little data.
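
    As a concrete picture of the interaction, the sketch below shows a bare-bones Ant Colony Optimization tour construction for the TSP plus a "glass-box" hook through which a user can strengthen or weaken the pheromone on chosen edges between iterations. Pheromone evaporation and deposition, and the actual user interface, are omitted; parameters are illustrative only:

        # Minimal ACO tour construction with a human-in-the-loop pheromone hook.
        import numpy as np

        def ant_tour(dist, pher, alpha=1.0, beta=2.0, rng=np.random):
            """dist, pher: (n, n) symmetric matrices of distances and pheromone levels."""
            n = len(dist)
            tour = [rng.randint(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                weights = (pher[i, cand] ** alpha) * ((1.0 / dist[i, cand]) ** beta)
                tour.append(int(rng.choice(cand, p=weights / weights.sum())))
                unvisited.remove(tour[-1])
            return tour

        def human_adjust(pher, edge, factor):
            """The glass-box step: a user boosts (factor > 1) or suppresses an edge."""
            i, j = edge
            pher[i, j] *= factor
            pher[j, i] *= factor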

    Data Augmentation for Histopathological Images Based on Gaussian-Laplacian Pyramid Blending

    Data imbalance is a major problem that affects several machine learning (ML) algorithms. Such a problem is troublesome because most ML algorithms attempt to optimize a loss function that does not take the data imbalance into account. Accordingly, the ML algorithm simply generates a trivial model that is biased toward predicting the most frequent class in the training data. In the case of histopathologic images (HIs), both low-level and high-level data augmentation (DA) techniques still present performance issues when applied in the presence of inter-patient variability, whereby the model tends to learn color representations related to the staining process. In this paper, we propose a novel approach capable of not only augmenting HI datasets but also distributing the inter-patient variability by means of image blending using the Gaussian-Laplacian pyramid. The proposed approach consists of computing the Gaussian pyramids of two images from different patients and deriving the corresponding Laplacian pyramids. Afterwards, the left half and the right half of the two HIs are joined at each level of the Laplacian pyramid, and a full-resolution image is reconstructed from the joint pyramid. This composition combines the stain variation of two patients and prevents color differences from misleading the learning process. Experimental results on the BreakHis dataset show promising gains vis-a-vis the majority of DA techniques presented in the literature.
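
    The blending step can be made concrete with standard pyramid operations. The sketch below assumes two same-sized H&E images from different patients and an illustrative pyramid depth; it follows the description above but is not the authors' exact implementation:

        # Hedged sketch: join the left half of patient A with the right half of
        # patient B at every Laplacian level, then collapse the joint pyramid.
        import cv2
        import numpy as np

        def laplacian_pyramid(img, levels):
            gauss = [img.astype(np.float32)]
            for _ in range(levels):
                gauss.append(cv2.pyrDown(gauss[-1]))
            lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
                   for i in range(levels)]
            return lap + [gauss[-1]]                    # keep the low-pass residual as the last level

        def blend_patients(img_a, img_b, levels=4):
            lap_a, lap_b = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
            joined = [np.hstack([la[:, : la.shape[1] // 2], lb[:, lb.shape[1] // 2:]])
                      for la, lb in zip(lap_a, lap_b)]  # left half of A, right half of B, per level
            out = joined[-1]
            for lvl in joined[-2::-1]:                  # collapse from coarse to fine
                out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
            return np.clip(out, 0, 255).astype(np.uint8)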