
    Deep learning for quantitative motion tracking based on optical coherence tomography

    Optical coherence tomography (OCT) is a cross-sectional imaging modality based on low-coherence light interferometry. OCT has been widely used in diagnostic ophthalmology and has found applications in other biomedical fields such as cancer detection and surgical guidance. In the Laboratory of Biophotonics Imaging and Sensing at the New Jersey Institute of Technology, we developed a unique needle OCT imager based on a single-fiber probe for breast cancer imaging. The needle OCT imager, with a sub-millimeter diameter, can be inserted into tissue for minimally invasive in situ breast imaging. OCT imaging provides spatial resolution similar to histology and has the potential to become a virtual biopsy device for fast and accurate breast cancer diagnosis, because abnormal and normal breast tissue have different characteristics in OCT images. The morphological features of an OCT image are related to the microscopic structure of the tissue, and the speckle pattern is related to the cellular/subcellular optical properties of the tissue. In addition, the depth attenuation of the OCT signal depends on the scattering and absorption properties of the tissue. However, these image features occur at different spatial scales, and it is challenging for a human observer to recognize them effectively for tissue classification. In particular, our needle OCT imager, given its simplicity and small form factor, does not have a mechanical scanner for beam steering and relies on manual scanning to generate 2D images. The nonconstant translation speed of the probe in manual scanning inevitably introduces distortion artifacts into OCT images, which further complicates the tissue characterization task. OCT images of tissue samples provide comprehensive information about the morphology of normal and unhealthy tissue, and image analysis of tissue morphology can help cancer researchers develop a better understanding of cancer biology. Classification of tissue images and recovery of distorted OCT images are two common tasks in tissue image analysis. In this master's thesis project, a novel deep learning approach is investigated to extract the beam scanning speed from different samples, and a novel technique is investigated and tested to recover distorted OCT images. The long-term goal of this study is to achieve robust tissue classification for breast cancer diagnosis based on a simple single-fiber OCT instrument. The deep learning approach used in this study is based on a Convolutional Neural Network (CNN) and a Naïve Bayes classifier. For image retrieval, we used algorithms that extract, represent, and match common features between images. The CNN achieved 97% accuracy in tissue type and scanning speed classification, while the image retrieval algorithms recovered images of very high quality compared to the reference images.
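    To make the patch classification step more concrete, the following is a minimal sketch (in PyTorch) of a small CNN that assigns grayscale OCT patches to tissue-type / scanning-speed classes. The architecture, patch size, and number of classes are illustrative assumptions and do not reproduce the network used in the thesis.

```python
# Minimal sketch: a small CNN that classifies OCT image patches.
# Layer sizes, patch dimensions, and class count are illustrative assumptions.
import torch
import torch.nn as nn

class OCTPatchCNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),          # logits for tissue/speed classes
        )

    def forward(self, x):                         # x: (batch, 1, 64, 64) grayscale patches
        return self.classifier(self.features(x))

# Example forward pass on a dummy batch of grayscale patches.
logits = OCTPatchCNN()(torch.randn(8, 1, 64, 64))
print(logits.shape)                               # torch.Size([8, 4])
```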

    Segmentation of Tumor Regions in Microscopic Images of Breast Cancer Tissue

    Advances in digital pathology have made it possible to replace conventional optical microscopes with Whole Slide Imaging (WSI) scanners, which enable pathologists to view conventional tissue slides on a computer monitor. Several applications that aim to analyze human tissue are currently evolving remarkably. Segmentation of tumor regions in microscopic images of breast cancer tissue is one of the applications that researchers are investigating extensively: not only is breast cancer one of the most pervasive cancers in humans, but segmentation is also one of the basic and frequent tasks that pathologists must carry out for tissue analysis. In this thesis, we addressed the segmentation of tumor regions in microscopic images of breast cancer tissue as a machine learning task and developed several supervised and unsupervised learning frameworks. Our proposed frameworks encompass five steps: (1) pre-processing, (2) feature extraction, (3) feature reduction, (4) supervised or unsupervised learning, and (5) post-processing. We focused on the extraction of textural features and the use of supervised learning techniques. We investigated the MR8Fast, Gabor, and Phase Gradient features individually, as well as a combination of all of them, and compared the Naive Bayes, Artificial Neural Network, and Support Vector Machine classifiers, as well as a combination of their classification results. We conducted several experiments to compare the proposed frameworks and drew the following conclusions. The MR8Fast features are the most discriminating, with the Gabor and Phase Gradient features coming in second and third place, respectively. Furthermore, the Naive Bayes classifier and the combination of classification results, which have been overlooked for the segmentation of tumor regions in microscopic images of breast cancer tissue, achieved better results than the Support Vector Machine classifier, which has been extensively employed for this task. These promising conclusions motivate further work to investigate other textural features as well as other classifiers.
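    As a rough illustration of one path through the pipeline described above, the sketch below extracts per-pixel Gabor texture features with scikit-image and feeds them to a Gaussian Naive Bayes classifier from scikit-learn. The filter parameters, the synthetic image, and the stand-in labels are assumptions for demonstration; the MR8Fast and Phase Gradient features, feature reduction, and post-processing steps are omitted.

```python
# Minimal sketch: Gabor texture features per pixel + Gaussian Naive Bayes.
# Filter parameters and the synthetic data are illustrative assumptions.
import numpy as np
from skimage.filters import gabor
from sklearn.naive_bayes import GaussianNB

def gabor_feature_stack(image, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """Return an (H, W, n_features) stack of Gabor filter magnitude responses."""
    responses = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            real, imag = gabor(image, frequency=f, theta=theta)
            responses.append(np.hypot(real, imag))
    return np.stack(responses, axis=-1)

# Synthetic grayscale "tissue" image and pixel labels, just to show shapes.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
labels = (image > 0.5).astype(int)             # stand-in for a tumor / non-tumor mask

features = gabor_feature_stack(image)          # (64, 64, 12)
X = features.reshape(-1, features.shape[-1])   # one feature row per pixel
y = labels.ravel()

clf = GaussianNB().fit(X, y)                   # supervised learning step
segmentation = clf.predict(X).reshape(image.shape)  # post-processing would refine this map
```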

    Breast Cancer Classification Using Deep Convolutional Neural Networks

    Breast cancer remains one of the primary causes of death among women, and much effort has been devoted to screening programs for prevention. Given the exponential growth in the number of mammograms collected, computer-assisted diagnosis has become a necessity. Histopathological imaging is one of the methods for cancer diagnosis, in which pathologists examine tissue cells under different microscopy standards but may disagree on the final decision. In this context, automatic image processing techniques based on deep learning offer a promising avenue for assisting in the diagnosis of breast cancer. In this paper, an Android application for breast cancer classification using a deep learning approach based on a Convolutional Neural Network (CNN) was developed. The software aims to classify breast tumors as benign or malignant. Experimental results on histopathological images from the BreakHis dataset show that the DenseNet CNN model achieved a high performance of 96% accuracy in the breast cancer classification task when compared with state-of-the-art models.
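    A minimal sketch of how a DenseNet can be adapted for the binary benign/malignant task is shown below, using torchvision. The weight initialization, input size, and single training step are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch: adapting a DenseNet classifier head to benign vs. malignant.
# Training details here are illustrative, not the paper's configuration.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)        # pretrained weights could be used instead
model.classifier = nn.Linear(model.classifier.in_features, 2)  # benign vs. malignant

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One dummy training step on random RGB patches standing in for BreakHis images.
images = torch.randn(4, 3, 224, 224)
targets = torch.tensor([0, 1, 1, 0])

optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(float(loss))
```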

    Histopathological image analysis : a review

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole-slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging, which complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.

    Context-aware stacked convolutional neural networks for classification of breast carcinomas in whole-slide histopathology images

    Automated classification of histopathological whole-slide images (WSI) of breast tissue requires analysis at very high resolutions with a large contextual area. In this paper, we present context-aware stacked convolutional neural networks (CNN) for the classification of breast WSIs into normal/benign, ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (IDC). We first train a CNN using high-pixel-resolution patches to capture cellular-level information. The feature responses generated by this model are then fed as input to a second CNN, stacked on top of the first. Training this stacked architecture with large input patches enables learning of fine-grained (cellular) details as well as the global interdependence of tissue structures. Our system is trained and evaluated on a dataset containing 221 WSIs of H&E-stained breast tissue specimens. The system achieves an AUC of 0.962 for the binary classification of non-malignant and malignant slides and a three-class accuracy of 81.3% for the classification of WSIs into normal/benign, DCIS, and IDC, demonstrating its potential for routine diagnostics.
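    The stacking idea can be sketched as two networks in which the feature responses of a patch-level CNN become the input of a context-level CNN, as below. Layer sizes, input dimensions, and training details are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a "stacked" CNN: a patch-level network feeds its feature
# responses into a context-level network. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """First network: captures cellular-level detail on high-resolution patches."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
        )

    def forward(self, x):                  # x: (B, 3, 256, 256) -> (B, 64, 16, 16)
        return self.net(x)

class ContextCNN(nn.Module):
    """Second network, stacked on top: learns tissue-level structure from feature maps."""
    def __init__(self, num_classes=3):     # normal/benign, DCIS, IDC
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, features):
        return self.net(features)

patch_cnn, context_cnn = PatchCNN(), ContextCNN()
with torch.no_grad():                       # in practice the first CNN is trained beforehand
    feats = patch_cnn(torch.randn(2, 3, 256, 256))
logits = context_cnn(feats)
print(logits.shape)                         # torch.Size([2, 3])
```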