
    Multispectral Spatial Characterization: Application to Mitosis Detection in Breast Cancer Histopathology

    Accurate detection of mitoses plays a critical role in breast cancer histopathology. Manual detection and counting of mitoses is tedious and subject to considerable inter- and intra-reader variation. Multispectral imaging is a recent medical imaging technology that has proven successful in increasing segmentation accuracy in other fields. This study aims to improve the accuracy of mitosis detection by developing a dedicated solution based on multispectral and multifocal imaging of breast cancer histopathological data, with the goal of discriminating mitoses from other objects at a quality compatible with clinical routine. The proposed framework includes a comprehensive analysis of spectral bands and z-stack focus planes, detection of candidate mitotic regions in selected focus planes and spectral bands, computation of multispectral spatial features for each candidate, selection of those features, and a study of different state-of-the-art classification methods for labelling candidates as mitotic or non-mitotic figures. The framework was evaluated on the MITOS multispectral dataset and achieved a 60% detection rate and a 57% F-measure. Our results indicate that multispectral spatial features carry more information for mitosis classification than features from the white spectral band alone, making them a promising direction for improving the quality of diagnostic assistance in histopathology.
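To make the candidate-then-classify idea above concrete, here is a minimal sketch in Python, assuming a multispectral stack laid out as (bands, H, W); the Otsu-based candidate detector, the per-band intensity statistics, the hypothetical `label_fn` ground-truth matcher, and the SVM are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.svm import SVC

def detect_candidates(band_img, min_area=30):
    """Find dark, nucleus-like blobs in one spectral band (candidate mitoses)."""
    mask = band_img < threshold_otsu(band_img)   # mitotic figures appear dark in H&E-like stains
    return [r for r in regionprops(label(mask)) if r.area >= min_area]

def multispectral_features(stack, region):
    """Per-band intensity statistics inside one candidate region (simple spatial features)."""
    rr, cc = region.coords[:, 0], region.coords[:, 1]
    feats = []
    for band in stack:                           # stack shape: (n_bands, H, W)
        vals = band[rr, cc]
        feats += [vals.mean(), vals.std(), vals.min(), vals.max()]
    return np.array(feats)

def build_dataset(stack_list, label_fn, detection_band=0):
    """label_fn(stack, region) -> 1 if the region matches a ground-truth mitosis (assumed helper)."""
    X, y = [], []
    for stack in stack_list:
        for region in detect_candidates(stack[detection_band]):
            X.append(multispectral_features(stack, region))
            y.append(label_fn(stack, region))
    return np.array(X), np.array(y)

# clf = SVC(kernel="rbf").fit(X_train, y_train)  # one of several classifiers one might compare
```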

    Tuning for Tissue Image Segmentation Workflows for Accuracy and Performance

    We propose a software platform that integrates methods and tools for multi-objective parameter auto-tuning in tissue image segmentation workflows. The goal of our work is to provide an approach for improving the accuracy of nucleus/cell segmentation pipelines by tuning their input parameters. The shape, size, and texture features of nuclei in tissue are important biomarkers for disease prognosis, and accurate computation of these features depends on accurate delineation of nucleus boundaries. Input parameters in many nucleus segmentation workflows affect segmentation accuracy and have to be tuned for optimal performance. This is a time-consuming and computationally expensive process; automating this step facilitates more robust image segmentation workflows and enables more efficient application of image analysis in large image datasets. Our software platform adjusts the parameters of a nuclear segmentation algorithm to maximize the quality of image segmentation results while minimizing the execution time. It implements several optimization methods to search the parameter space efficiently. In addition, the methodology is designed to execute on high-performance computing systems to reduce the execution time of the parameter tuning phase. Our results using three real-world image segmentation workflows demonstrate that the proposed solution is able to (1) search a small fraction (about 100 points) of the parameter space, which contains billions to trillions of points, and improve the quality of segmentation output by 1.20x, 1.29x, and 1.29x on average; (2) decrease the execution time of a segmentation workflow by up to 11.79x while improving output quality; and (3) effectively use parallel systems to accelerate the parameter tuning and segmentation phases. Comment: 29 pages, 5 figures.
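As a rough illustration of the tuning loop, the sketch below scalarizes the two objectives (segmentation quality versus execution time) and explores the parameter space with random search; the `segment` callable, the Dice-based quality measure, and the weighting are assumptions, whereas the paper's platform uses dedicated multi-objective optimization methods and HPC execution.

```python
import time
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-9)

def evaluate(segment, image, reference, params):
    """Run one segmentation and measure both quality and runtime."""
    start = time.perf_counter()
    mask = segment(image, params)
    return dice(mask, reference), time.perf_counter() - start

def random_search(segment, image, reference, space, n_trials=100, time_weight=0.1, seed=0):
    """space: dict of name -> (low, high). Returns the best parameter set found."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_trials):                      # ~100 evaluated points, as in the paper's budget
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        quality, runtime = evaluate(segment, image, reference, params)
        score = quality - time_weight * runtime    # simple scalarization of the two objectives
        if score > best_score:
            best, best_score = params, score
    return best
```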

    Image Processing on IOPA Radiographs: A comprehensive case study on Apical Periodontitis

    With recent advancements in image processing techniques and the development of new, robust computer vision algorithms, new areas of research within medical diagnosis and biomedical engineering are picking up pace. This paper provides a comprehensive, in-depth case study of image processing, feature extraction, and analysis of Apical Periodontitis diagnostic cases in IOPA (Intra Oral Peri-Apical) radiographs, a common case in the oral diagnostic pipeline. The paper presents a detailed analytical approach towards improving the diagnostic procedure, delivering faster results with higher accuracy and aiming to eliminate True Negative and False Positive cases. Comment: 15 pages, 42 figures; submitted to ICIAP 2019: 21st International Conference on Image Analysis and Processing.
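As a generic illustration of the kind of preprocessing such a pipeline might start with, the sketch below applies local contrast enhancement and thresholding to a grayscale IOPA radiograph; the specific filters and parameters are assumptions and are not taken from the paper.

```python
import numpy as np
from skimage import exposure, filters

def preprocess_iopa(radiograph):
    """radiograph: 2-D float array scaled to [0, 1]."""
    enhanced = exposure.equalize_adapthist(radiograph, clip_limit=0.03)   # CLAHE-style local contrast
    smoothed = filters.gaussian(enhanced, sigma=1.0)                       # suppress sensor noise
    radiolucent = smoothed < filters.threshold_otsu(smoothed)              # darker (radiolucent) regions
    return enhanced, radiolucent
```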

    Detection and classification of masses in mammographic images in a multi-kernel approach

    According to the World Health Organization, breast cancer is the main cause of cancer death among adult women in the world. Although breast cancer occurs in countries at all levels of social and economic development, mortality rates remain high in developing and underdeveloped countries due to the low availability of early detection technologies. From the clinical point of view, mammography is still the most effective diagnostic technology, given how widely these images are used and interpreted. In this work we propose a method to detect and classify mammographic lesions using regions of interest of the images. Our proposal consists of decomposing each image using multi-resolution wavelets and extracting Zernike moments from each wavelet component. This approach combines texture and shape features, which can be applied to both the detection and the classification of mammary lesions. We used 355 images of fatty breast tissue from the IRMA database, with 233 normal instances (no lesion), 72 benign cases, and 83 malignant cases. Classification was performed using SVM and ELM networks with modified kernels in order to optimize accuracy, reaching 94.11%. Considering both accuracy and training time, we defined the ratio between average percentage accuracy and average training time; our method's ratio was 50 times higher than that obtained with the best state-of-the-art method. Because our model combines a high accuracy rate with a low learning time, it can save hours of training whenever new data arrive, compared with the best state-of-the-art method.
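A minimal sketch of the described feature pipeline: a 2-D wavelet decomposition of a region of interest, Zernike moments computed on each sub-band, and an SVM classifier. The wavelet family, decomposition level, Zernike degree, and kernel are assumptions; the paper's modified kernels and ELM networks are not reproduced here.

```python
import numpy as np
import pywt
from mahotas.features import zernike_moments
from sklearn.svm import SVC

def wavelet_zernike_features(roi, wavelet="db4", level=2, degree=8):
    """roi: 2-D grayscale region of interest cropped from the mammogram."""
    coeffs = pywt.wavedec2(roi, wavelet, level=level)
    # Approximation band plus all detail bands (horizontal, vertical, diagonal) per level.
    subbands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    feats = []
    for sb in subbands:
        radius = min(sb.shape) // 2                      # Zernike moments need a disk radius
        feats.extend(zernike_moments(sb, radius, degree=degree))
    return np.array(feats)

# X = np.stack([wavelet_zernike_features(roi) for roi in rois])
# clf = SVC(kernel="rbf").fit(X, labels)                 # labels: normal / benign / malignant
```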

    Fine-Grained Classification of Cervical Cells Using Morphological and Appearance Based Convolutional Neural Networks

    Fine-grained classification of cervical cells into different abnormality levels is of great clinical importance but remains very challenging. In contrast to traditional classification methods that rely on hand-crafted or engineered features, convolutional neural networks (CNNs) can classify cervical cells based on automatically learned deep features. However, CNNs in previous studies did not incorporate cell morphological information, and it is unknown whether morphological features can be directly modeled by a CNN to classify cervical cells. This paper presents a CNN-based method that combines cell image appearance with cell morphology for classification of cervical cells in Pap smears. The training cervical cell dataset consists of adaptively re-sampled image patches coarsely centered on the nuclei. Several CNN models (AlexNet, GoogleNet, ResNet, and DenseNet) pre-trained on the ImageNet dataset were fine-tuned on the cervical dataset for comparison. The proposed method is evaluated on the Herlev cervical dataset by five-fold cross-validation with patient-level splitting. Results show that adding cytoplasm and nucleus masks as raw morphological information to appearance-based CNN learning generally yields higher classification accuracy. Among the four CNN models, GoogleNet fed with both morphological and appearance information obtains the highest classification accuracies of 94.5% for the 2-class task and 64.5% for the 7-class task. Our method demonstrates that combining cervical cell morphology with appearance information can improve classification performance, which is clinically important for the early diagnosis of cervical dysplastic changes. Comment: 7 pages, 4 figures.
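One plausible way to feed the cytoplasm and nucleus masks into an appearance-based CNN is to widen the first convolution of a pretrained backbone so it accepts the masks as extra input channels, as sketched below; the backbone (ResNet-18) and the initialization of the extra channels are assumptions, not necessarily the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

def appearance_morphology_resnet(num_classes=7, extra_channels=2):
    """ResNet-18 accepting RGB appearance plus nucleus/cytoplasm mask channels."""
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    old = model.conv1
    model.conv1 = nn.Conv2d(3 + extra_channels, old.out_channels,
                            kernel_size=old.kernel_size, stride=old.stride,
                            padding=old.padding, bias=False)
    with torch.no_grad():
        model.conv1.weight[:, :3] = old.weight                            # keep pretrained RGB filters
        model.conv1.weight[:, 3:] = old.weight.mean(1, keepdim=True)      # initialize mask channels
    model.fc = nn.Linear(model.fc.in_features, num_classes)               # 2- or 7-class head
    return model

# x = torch.cat([rgb_patch, nucleus_mask, cytoplasm_mask], dim=1)         # shape (N, 5, H, W)
# logits = appearance_morphology_resnet()(x)
```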

    Survey of Computer Vision and Machine Learning in Gastrointestinal Endoscopy

    This paper attempts to give the reader a place to begin studying the application of computer vision and machine learning to gastrointestinal (GI) endoscopy. The surveyed works have been classified into 18 categories. It should be noted that this is a review from the pre-deep-learning era; many deep-learning-based applications are not covered in this thesis.

    Unsupervised Learning for Cell-level Visual Representation in Histopathology Images with Generative Adversarial Networks

    The visual attributes of cells, such as nuclear morphology and chromatin openness, are critical for histopathology image analysis. By learning cell-level visual representations, we can obtain a rich mix of features that are highly reusable for various tasks, such as cell-level classification, nuclei segmentation, and cell counting. In this paper, we propose a unified generative adversarial network architecture with a new loss formulation to perform robust cell-level visual representation learning in an unsupervised setting. Our model is not only label-free and easily trained but also capable of cell-level unsupervised classification with interpretable visualization, achieving promising results in the unsupervised classification of bone marrow cellular components. Based on the proposed cell-level visual representation learning, we further develop a pipeline that exploits the variety of cellular elements to perform histopathology image classification, the advantages of which are demonstrated on bone marrow datasets. Comment: Accepted for publication in the IEEE Journal of Biomedical and Health Informatics.
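The sketch below shows a compact DCGAN-style generator and discriminator for small cell patches, plus a helper that reads off discriminator activations as a cell-level representation for unsupervised classification; the paper's unified architecture and its new loss formulation are not reproduced, so all layer sizes and the plain GAN setup here are assumptions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),       # 16x16
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())                                 # 32x32 patch

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),    # 16x16
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),   # 8x8
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True))  # 4x4
        self.head = nn.Conv2d(128, 1, 4, 1, 0)                      # real/fake logit

    def forward(self, x):
        return self.head(self.features(x)).view(-1)

def cell_representation(disc, patches):
    """Flattened discriminator features, usable for clustering / unsupervised classification."""
    with torch.no_grad():
        return disc.features(patches).flatten(1)
```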

    A Complete System for Candidate Polyps Detection in Virtual Colonoscopy

    Computed tomographic colonography, combined with computer-aided detection, is a promising emerging technique for colonic polyp analysis. We present a complete pipeline for polyp detection, starting with a simple colon segmentation technique that enhances polyps, followed by adaptive-scale candidate polyp delineation and classification based on new texture and geometric features that consider both the information at the candidate polyp location and in its immediate surrounding area. The proposed system is tested with ground truth data, including flat and small polyps which are hard to detect even with optical colonoscopy. For polyps larger than 6 mm in size we achieve 100% sensitivity with just 0.9 false positives per case, and for polyps larger than 3 mm in size we achieve 93% sensitivity with 2.8 false positives per case.
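A minimal 2-D, per-slice sketch of the "candidate plus immediate surroundings" feature idea: geometric properties of a candidate region combined with intensity statistics inside the region and in a dilated shell around it, fed to a classifier. The specific features, shell radius, and random-forest classifier are assumptions rather than the paper's actual descriptors.

```python
import numpy as np
from skimage.morphology import binary_dilation, disk
from skimage.measure import regionprops
from sklearn.ensemble import RandomForestClassifier

def candidate_features(intensity, candidate_mask, shell_radius=5):
    """intensity: 2-D slice; candidate_mask: boolean mask of a single connected candidate."""
    region = regionprops(candidate_mask.astype(int), intensity_image=intensity)[0]
    shell = binary_dilation(candidate_mask, disk(shell_radius)) & ~candidate_mask
    inside, around = intensity[candidate_mask], intensity[shell]
    return np.array([
        region.area, region.eccentricity, region.solidity,          # geometric descriptors
        inside.mean(), inside.std(), around.mean(), around.std(),   # intensity / texture statistics
        inside.mean() - around.mean(),                               # contrast with the surroundings
    ])

# X = np.stack([candidate_features(img, m) for img, m in candidates])
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)     # polyp vs. non-polyp
```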

    Coarse-to-Fine Classification via Parametric and Nonparametric Models for Computer-Aided Diagnosis

    Classification is one of the core problems in Computer-Aided Diagnosis (CAD), targeting early cancer detection through 3D medical image interpretation. High detection sensitivity with a desirably low false positive (FP) rate is critical for a CAD system to be accepted as a valuable or even indispensable tool in radiologists' workflow. Given the various spurious image noise sources that cause observation uncertainty, this remains a very challenging task. In this paper, we propose a novel, two-tiered coarse-to-fine (CTF) classification cascade framework to tackle this problem. We first obtain classification-critical data samples (e.g., samples on the decision boundary) extracted from the holistic data distribution using a robust parametric model (e.g., \cite{Raykar08}); then we build a graph-embedding based nonparametric classifier on the sampled data, which can more accurately preserve or formulate the complex classification boundary. These two steps can also be considered as effective "sample pruning" and "feature pursuing + kNN/template matching", respectively. Our approach is validated comprehensively on CAD systems for colorectal polyp detection and lung nodule detection, two of the deadliest cancers, using hospital-scale, multi-site clinical datasets. The results show that our method achieves overall better classification/detection performance than existing state-of-the-art algorithms using single-layer classifiers, such as support vector machine variants \cite{Wang08}, boosting \cite{Slabaugh10}, logistic regression \cite{Ravesteijn10}, the relevance vector machine \cite{Raykar08}, k-nearest neighbor \cite{Murphy09}, or spectral projections on graph \cite{Cai08}.
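A minimal sketch of the two-tier idea: a parametric model prunes easy samples and keeps those near the decision boundary, then a nonparametric kNN classifier is trained on the retained samples. Here a supervised embedding (NCA) stands in for the paper's graph-embedding step, and the probability band and model choices are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline

class CoarseToFineClassifier:
    """Two-tier cascade for binary (0/1) labels."""
    def __init__(self, band=0.3, k=15):
        self.band = band                              # |p - 0.5| window treated as "boundary"
        self.coarse = LogisticRegression(max_iter=1000)
        self.fine = make_pipeline(NeighborhoodComponentsAnalysis(),
                                  KNeighborsClassifier(n_neighbors=k))

    def fit(self, X, y):
        self.coarse.fit(X, y)
        p = self.coarse.predict_proba(X)[:, 1]
        near = np.abs(p - 0.5) < self.band            # sample pruning: keep boundary cases
        self.fine.fit(X[near], y[near])
        return self

    def predict(self, X):
        p = self.coarse.predict_proba(X)[:, 1]
        pred = (p >= 0.5).astype(int)                 # confident cases decided by the coarse tier
        near = np.abs(p - 0.5) < self.band
        if near.any():
            pred[near] = self.fine.predict(X[near])   # hard cases handed to the kNN tier
        return pred
```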

    3D Contouring for Breast Tumor in Sonography

    Malignant and benign breast tumors present differently in shape and size on sonography, and the morphological information provided by tumor contours is important in clinical diagnosis. However, ultrasound images contain noise and tissue texture, so clinical diagnosis depends heavily on the experience of physicians. Manually sketching three-dimensional (3D) contours of a breast tumor is a time-consuming and complicated task; automatic contouring that yields a precise breast tumor contour could assist physicians in making an accurate diagnosis. This study presents an efficient method for automatically contouring breast tumors in 3D sonography. The proposed method utilizes an efficient segmentation procedure, the level-set method (LSM), to automatically detect contours of breast tumors. This study evaluates 20 cases comprising ten benign and ten malignant tumors. The results of computer simulation reveal that the proposed 3D segmentation method provides robust contouring for breast tumors on ultrasound images. This approach consistently obtains contours similar to those obtained by manual contouring and can save much of the time required to sketch precise contours. Comment: 18 pages, 1 table, and 5 figures.
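A minimal sketch of level-set-style contouring on a 3-D ultrasound volume, using the morphological Chan-Vese implementation in scikit-image as a stand-in for the paper's LSM; the Gaussian denoising step, iteration count, and smoothing weight are assumptions.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import morphological_chan_vese
from skimage.measure import marching_cubes

def contour_tumor(volume, iterations=200, smoothing=2):
    """volume: 3-D ultrasound volume (z, y, x), float in [0, 1]."""
    smoothed = gaussian(volume, sigma=1.0)                         # suppress speckle noise
    mask = morphological_chan_vese(smoothed, iterations,           # evolve the level set
                                   init_level_set="checkerboard",
                                   smoothing=smoothing)
    verts, faces, _, _ = marching_cubes(mask.astype(float), 0.5)   # extract the 3-D contour surface
    return mask, verts, faces
```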