7 research outputs found

    Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    Identification of prostatic calculi is an important basis for determining their tissue origin. Computation-assisted diagnosis of prostatic calculi has promising potential but remains little studied. We investigated the extraction of prostatic lumina and the automated recognition of calculus images. Lumina were extracted from prostate histology images using local entropy and Otsu thresholding, and calculi were then recognized with PCA-SVM applied to the texture features of the prostatic calculus. The SVM classifier achieved an average run time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, readily recognizes the concentric structure and visual features of calculi. This method is therefore effective for the automated recognition of prostatic calculi.
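
    As a rough illustration of the classification stage described above (not the authors' implementation), a PCA-plus-SVM pipeline over precomputed texture feature vectors might look like the sketch below; the feature matrix, labels, and number of retained components are placeholders chosen for the example.

```python
# Minimal sketch of a PCA + SVM texture classifier (illustrative only;
# texture feature extraction from lumina regions is assumed to be done already).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))      # placeholder texture feature vectors
y = rng.integers(0, 2, size=200)    # placeholder labels: calculus vs. non-calculus

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Reduce feature dimensionality with PCA, then classify with an RBF-kernel SVM.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```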

    Multimodal microscopy for automated histologic analysis of prostate cancer

    Background: Prostate cancer is the single most prevalent cancer in US men, and the gold standard of diagnosis is histologic assessment of biopsies. Manual assessment of stained tissue from all biopsies limits the speed and accuracy of prostate cancer diagnosis in clinical practice and research. We sought to develop a fully automated multimodal microscopy method to distinguish cancerous from non-cancerous tissue samples. Methods: We recorded chemical data from an unstained tissue microarray (TMA) using Fourier transform infrared (FT-IR) spectroscopic imaging. Using pattern recognition, we identified epithelial cells without user input. We fused the cell-type information with the corresponding stained images commonly used in clinical practice. Extracted morphological features, optimized by a two-stage feature selection method using a minimum-redundancy-maximal-relevance (mRMR) criterion and sequential floating forward selection (SFFS), were used to classify tissue samples as cancer or non-cancer. Results: We achieved high accuracy (area under the ROC curve (AUC) > 0.97) in cross-validation on each of two data sets stained under different conditions. When the classifier was trained on one data set and tested on the other, an AUC of approximately 0.95 was observed. In the absence of IR data, the performance of the same classification system dropped both within and between data sets. Conclusions: We achieved very effective fusion of information from two different images that provide very different types of data with different characteristics. The method is entirely transparent to the user and does not involve any adjustment or decision-making based on spectral data. By combining the IR and optical data, we achieved highly accurate classification.
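
    The two-stage feature selection described in the abstract (mRMR ranking followed by SFFS) could be approximated along the lines of the sketch below. It substitutes a univariate mutual-information ranking for mRMR and scikit-learn's plain sequential forward selection for the floating variant, and runs on placeholder morphological features, so it illustrates the shape of the pipeline rather than the authors' exact method.

```python
# Illustrative two-stage feature selection + classification sketch
# (mutual-information ranking stands in for mRMR; plain forward selection
# stands in for SFFS -- both are simplifications of the paper's method).
import numpy as np
from sklearn.feature_selection import mutual_info_classif, SequentialFeatureSelector
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))      # placeholder morphological feature matrix
y = rng.integers(0, 2, size=300)    # placeholder cancer / non-cancer labels

# Stage 1: rank features by relevance and keep the top 20 (mRMR-like pre-filter).
relevance = mutual_info_classif(X, y, random_state=0)
top = np.argsort(relevance)[-20:]
X_stage1 = X[:, top]

# Stage 2: sequential forward selection wrapped around the final classifier.
clf = SVC(kernel="rbf")
sfs = SequentialFeatureSelector(clf, n_features_to_select=8, direction="forward", cv=5)
X_stage2 = sfs.fit_transform(X_stage1, y)

print("cross-validated accuracy:",
      cross_val_score(clf, X_stage2, y, cv=5).mean())
```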

    Computer-Aided Cancer Diagnosis and Grading via Sparse Directional Image Representations

    Prostate cancer and breast cancer are the second leading causes of cancer death among men and women, respectively. If not diagnosed early, prostate and breast cancers can spread and metastasize to other organs and bones, making treatment far more difficult. Early diagnosis is therefore vital for patient survival. Cancer diagnosis relies on histopathological evaluation of tissue taken during biopsies and stained with hematoxylin and eosin (H&E); a pathologist then looks for abnormal changes in the tissue to diagnose and grade the cancer. This process is time-consuming and subjective. A reliable, repeatable automatic diagnosis method can greatly reduce the time required while producing more consistent results. The scope of this dissertation is the development of computer vision and machine learning algorithms for automatic cancer diagnosis and grading with accuracy acceptable to expert pathologists. Automatic image classification relies on feature representation methods. In this dissertation we developed methods that use sparse directional multiscale transforms, specifically the shearlet transform, for medical image analysis, and designed these computer vision-based algorithms to work with H&E and MRI images. Traditional signal processing methods (e.g., the Fourier and wavelet transforms) are not well suited to detecting carcinoma cells because they lack directional sensitivity, whereas the shearlet transform has inherent directional sensitivity and a multiscale framework that enables it to detect the different edges present in tissue images. We developed techniques for extracting holistic and local texture features from histological and MRI images using the histogram and the co-occurrence matrix of shearlet coefficients, respectively. We then combined these features with color and morphological features using a multiple kernel learning (MKL) algorithm and employed support vector machines (SVM) with MKL to classify the medical images.
    We further investigated deep neural networks for representing medical images for cancer detection. The engineered features above have several limitations: they lack generalizability because they are tailored to the specific texture and structure of the tissues, they are time-consuming and expensive to compute, they require preprocessing, and it is sometimes difficult to extract discriminative features from the images. Feature learning techniques, by contrast, use multiple processing layers to learn feature representations directly from the data. To address these issues, we developed a deep neural network containing multiple convolution, max-pooling, and fully connected layers, trained on the red, green, and blue (RGB) images along with the magnitude and phase of the shearlet coefficients. We then developed a weighted decision fusion deep neural network that assigns weights to the output probabilities and updates those weights via backpropagation; the final decision is a weighted sum of the decisions from the RGB, shearlet-magnitude, and shearlet-phase networks. We used the trained networks for classification of benign versus malignant H&E images and for Gleason grading.
    Our experimental results show that the proposed methods based on feature engineering and feature learning outperform the state of the art, and for some databases are near perfect (100%) in terms of classification accuracy, sensitivity, specificity, F1 score, and area under the curve (AUC). They are therefore promising computer-based methods for cancer diagnosis and grading from images.
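
    As a rough, self-contained illustration of the weighted decision fusion idea (not the dissertation's network), the sketch below fuses the class probabilities of three hypothetical branch classifiers, standing in for the RGB, shearlet-magnitude, and shearlet-phase networks, with fusion weights learned by gradient descent on a cross-entropy loss; all inputs are placeholders.

```python
# Minimal sketch of weighted decision fusion over three branch classifiers
# (placeholder probabilities stand in for the RGB / shearlet-magnitude /
# shearlet-phase network outputs described in the abstract).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes, n_branches = 500, 2, 3

# Placeholder per-branch class probabilities and ground-truth labels.
branch_probs = rng.dirichlet(np.ones(n_classes), size=(n_branches, n_samples))
labels = rng.integers(0, n_classes, size=n_samples)
onehot = np.eye(n_classes)[labels]

logits = np.zeros(n_branches)           # unnormalized fusion weights
lr = 0.5

for step in range(200):
    w = np.exp(logits) / np.exp(logits).sum()          # softmax-normalized weights
    fused = np.einsum("b,bnc->nc", w, branch_probs)    # weighted sum of decisions
    fused = np.clip(fused, 1e-9, 1.0)

    # Gradient of the mean cross-entropy loss w.r.t. the softmax logits.
    dL_dfused = -onehot / fused / n_samples
    dL_dw = np.einsum("nc,bnc->b", dL_dfused, branch_probs)
    dw_dlogits = np.diag(w) - np.outer(w, w)
    logits -= lr * dw_dlogits @ dL_dw

print("learned fusion weights:", np.round(np.exp(logits) / np.exp(logits).sum(), 3))
```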

    Computational phase imaging for biomedical applications

    When a sample is illuminated by an imaging field, its fingerprints are left on both the amplitude and the phase of the emerging wave. Capturing the information in the wavefront grants a deeper understanding of the optical properties of the sample and of the light-matter interaction. While amplitude information has been studied intensively, phase information has been used less often. Because all detectors are sensitive to intensity rather than phase, wavefront measurements are significantly more challenging. By deploying optical interferometry to measure phase through phase-intensity conversion, quantitative phase imaging (QPI) has recently gained tremendous success in the material and life sciences. The first topic of this dissertation describes our effort to develop a new QPI setup, named transmission Spatial Light Interference Microscopy (tSLIM), that uses twisted nematic liquid-crystal (TNLC) modulators. Compared to the established SLIM technique, which uses parallel-aligned liquid-crystal (PANLC) modulators, tSLIM is much less expensive to build while maintaining comparable performance; it has a slightly lower signal-to-noise ratio (SNR) and a more complicated image formation model, but this complexity is well addressed computationally. Most importantly, tSLIM uses TNLC modulators, which are common in display LCDs, so the total cost of the system is significantly reduced.
    Alongside developing new imaging modalities, we also improved current QPI systems. In practice, the field incident on the sample is rarely perfectly spatially coherent, i.e., a plane wave. It is generally partially coherent, comprising many incoherent plane waves arriving from multiple directions. This illumination yields artifacts in the phase measurement, e.g., halo and phase underestimation. One solution is to use a very bright source, such as a laser, which can be spatially filtered very well, but lasers come at the expense of speckle, which degrades image quality. Solutions based purely on physical modeling and computation that remove these artifacts under white-light illumination are therefore highly desirable. Here, using physical optics, we develop a theoretical model that accurately explains the effects of partial coherence on image and phase information. The model is further combined with numerical processing to suppress the artifacts and recover the correct phase information.
    The third topic is devoted to applying QPI to clinical applications. Traditionally, stained tissues are used in prostate cancer diagnosis because unstained tissue samples are nearly transparent under bright-field inspection. Contrast-enhancing microscopy techniques, e.g., phase contrast (PC) and differential interference contrast (DIC) microscopy, can render untagged samples visible with high throughput. However, since these methods are intensity-based, the contrast of the acquired images varies significantly from one imaging facility to another, preventing their use in diagnosis. Inheriting the merits of PC, SLIM produces phase maps, which measure the refractive index of label-free samples. Because these maps are not affected by variations in imaging conditions, e.g., illumination and magnification, SLIM gives consistent imaging results across different clinical institutions. Here, we combine SLIM images with machine learning for automatic prostate cancer diagnosis, focusing on two problems: automatic Gleason grading and cancer vs. non-cancer diagnosis.
    Finally, we introduce a new imaging modality, named Gradient Light Interference Microscopy (GLIM), which can image through optically thick samples using low-spatial-coherence illumination. The key benefit of GLIM comes from the large numerical aperture of the condenser, 0.55 NA, about five times higher than that in SLIM. GLIM provides excellent depth sectioning when recording three-dimensional information about the susceptibility of the sample. We also introduce a model for the image formation in GLIM, with the implication that a simple filtering step in the transverse dimension can dramatically improve sectioning in the axial dimension. With GLIM, one can accurately measure the surface area, volume, and dry mass of a variety of biological samples, ranging from cells that are tens of microns thick to bovine embryos that are hundreds of microns thick.
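
    To make the last point concrete, the minimal numpy sketch below applies a transverse (x-y) high-pass filter slice by slice to a 3D stack. It is only meant to illustrate the kind of transverse filtering step mentioned above; the cutoff, the filter shape, and the data are placeholder assumptions, not the dissertation's actual GLIM reconstruction.

```python
# Illustrative sketch: a transverse (x-y) high-pass filter applied slice by slice
# to a 3D stack, standing in for a transverse filtering step that improves axial
# sectioning; cutoff and data are placeholders.
import numpy as np

def transverse_highpass(stack, cutoff=0.05):
    """Suppress low transverse spatial frequencies in each z-slice.

    stack  : 3D array of shape (nz, ny, nx)
    cutoff : normalized transverse frequency below which components are removed
    """
    nz, ny, nx = stack.shape
    fy = np.fft.fftfreq(ny)[:, None]          # transverse frequency grids
    fx = np.fft.fftfreq(nx)[None, :]
    mask = np.hypot(fy, fx) >= cutoff         # keep only higher transverse frequencies
    out = np.empty_like(stack, dtype=float)
    for z in range(nz):
        F = np.fft.fft2(stack[z])
        out[z] = np.real(np.fft.ifft2(F * mask))
    return out

# Placeholder stack standing in for a measured 3D volume.
rng = np.random.default_rng(0)
demo = rng.normal(size=(16, 128, 128))
filtered = transverse_highpass(demo, cutoff=0.05)
print(filtered.shape)
```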

    The Artificial Intelligence in Digital Pathology and Digital Radiology: Where Are We?

    This book is a reprint of the Special Issue entitled "The Artificial Intelligence in Digital Pathology and Digital Radiology: Where Are We?". Artificial intelligence is extending into the worlds of both digital radiology and digital pathology, and involves many scholars in the areas of biomedicine, technology, and bioethics. There is a particular need for scholars to focus both on the innovations in this field and on the problems hampering integration into a robust and effective process within stable health care models. Many professionals involved in these fields of digital health were encouraged to contribute their experiences, and this book contains contributions from experts across different fields. Aspects of integration into the health domain are addressed, with particular space dedicated to overviews of the challenges, opportunities, and problems in both radiology and pathology. Clinical in-depth studies cover cardiology, the histopathology of breast cancer, and colonoscopy. Dedicated studies are based on surveys investigating students' and insiders' opinions, attitudes, and self-perceptions of the integration of artificial intelligence in this field.

    Tree-structured grading of pathological images of prostate
