
    Iris Recognition Using Scattering Transform and Textural Features

    Iris recognition has drawn a lot of attention since the mid-twentieth century. Among all biometric traits, the iris is known to possess a rich set of features, and many different features have been used for iris recognition in the past. In this paper, two powerful sets of features are introduced for iris recognition: scattering transform-based features and textural features. PCA is then applied to the extracted features to reduce the dimensionality of the feature vector while preserving most of its information. A minimum distance classifier is used to perform template matching for each new test sample. The proposed scheme is tested on a well-known iris database and shows promising results, with a best accuracy of 99.2%.
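    A minimal sketch of the matching stage described in this abstract, assuming features have already been extracted: `extract_features` below is a hypothetical placeholder standing in for the scattering-transform and textural feature extraction, and the PCA dimensionality is an illustrative choice, not the paper's.

```python
# Sketch: feature vectors -> PCA -> minimum-distance (nearest-centroid) matching.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestCentroid

def extract_features(images):
    # Hypothetical placeholder: flatten each image. In the paper this would be
    # scattering-transform and textural features instead.
    return np.array([np.asarray(img, dtype=float).ravel() for img in images])

def fit_matcher(train_images, train_labels, n_components=50):
    X = extract_features(train_images)
    pca = PCA(n_components=n_components)       # reduce feature dimensionality
    X_red = pca.fit_transform(X)
    matcher = NearestCentroid()                # minimum-distance template matching
    matcher.fit(X_red, train_labels)
    return pca, matcher

def match(pca, matcher, test_images):
    X_red = pca.transform(extract_features(test_images))
    return matcher.predict(X_red)              # label of the closest class template
```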

    Magnification-independent Histopathological Image Classification with Similarity-based Multi-scale Embeddings

    The classification of histopathological images is of great value in both cancer diagnosis and pathological studies. However, multiple factors, such as variations caused by magnification and class imbalance, make it a challenging task on which conventional methods that learn from image-label datasets perform unsatisfactorily in many cases. We observe that tumours of the same class often share common morphological patterns. To exploit this fact, we propose an approach that learns similarity-based multi-scale embeddings (SMSE) for magnification-independent histopathological image classification. In particular, a pair loss and a triplet loss are leveraged to learn similarity-based embeddings from image pairs or image triplets. The learned embeddings provide accurate measurements of the similarity between images, which is a more effective representation of histopathological morphology than ordinary image features. Furthermore, to ensure the resulting models are magnification-independent, images acquired at different magnification factors are fed to the networks simultaneously during training to learn multi-scale embeddings. In addition, to eliminate the impact of class imbalance, instead of the hard sample mining strategy that intuitively discards some easy samples, we introduce a new reinforced focal loss that penalizes hard misclassified samples while suppressing easy well-classified samples. Experimental results show that the SMSE improves performance on histopathological image classification tasks for both breast and liver cancers by a large margin compared to previous methods. In particular, the SMSE achieves the best performance on the BreakHis benchmark, with an improvement ranging from 5% to 18% over previous methods using traditional features.
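    The pair/triplet losses mentioned above are standard metric-learning components; the sketch below shows a generic PyTorch triplet loss and a generic focal loss for orientation only. The paper's reinforced focal loss is a modified variant and is not reproduced here.

```python
# Generic building blocks (PyTorch): triplet loss on embeddings and focal loss.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull same-class embeddings together, push different-class embeddings apart.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

def focal_loss(logits, targets, gamma=2.0):
    # Down-weights easy, well-classified samples so hard samples dominate the loss.
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                        # estimated probability of the true class
    return ((1.0 - pt) ** gamma * ce).mean()
```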

    Local object patterns for representation and classification of colon tissue images

    This paper presents a new approach for the effective representation and classification of images of histopathological colon tissues stained with hematoxylin and eosin. In this approach, we propose to decompose a tissue image into its histological components and introduce a set of new texture descriptors, which we call local object patterns, on these components to model their composition within a tissue. We define these descriptors using the idea of local binary patterns, which quantify a pixel by constructing a binary string based on the relative intensities of its neighbors. However, as opposed to pixel-level local binary patterns, we define our local object pattern descriptors at the component level to quantify a component. To this end, we specify neighborhoods with different locality ranges and encode the spatial arrangements of the components within the specified local neighborhoods by generating strings. We then extract our texture descriptors from these strings to characterize the histological components and construct a bag-of-words representation of an image from the characterized components. Working on microscopic images of colon tissues, our experiments reveal that the use of these component-level texture descriptors results in higher classification accuracies than previous textural approaches. © 2013 IEEE.
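    The descriptors above build on standard local binary patterns; the sketch below shows the conventional pixel-level LBP histogram with scikit-image, for reference only. The paper's contribution, applying an analogous encoding at the histological-component level, is not reproduced here.

```python
# Reference: pixel-level LBP histogram (the paper's local object patterns apply
# a similar encoding to histological components rather than pixels).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, n_points=8, radius=1):
    codes = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    n_bins = n_points + 2                      # uniform codes plus one non-uniform bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist                                # normalized texture descriptor
```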

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered efficient enough to achieve strong performance. For a broad audience, the basic evaluation criteria are also discussed, including the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because the previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed, and artificial intelligence is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be used successfully for intelligent image analysis. This study outlines the basic framework of how such machine learning is applied to medical imaging: pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAE), convolutional autoencoders (CAE), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles deep learning models that have been applied successfully to different types of cancer. Considering the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with a from-scratch overview of state-of-the-art achievements.
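    As a point of reference for the architectures listed above, the snippet below is a generic minimal CNN classifier in PyTorch (for example, for benign-versus-malignant image patches). It is an illustrative sketch, not code from the review.

```python
# Generic minimal CNN image classifier (illustrative only).
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):                      # x: (batch, 3, H, W) RGB patches
        return self.head(self.features(x))
```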

    Deep Learning for Classification of Brain Tumor Histopathological Images

    Histopathological image classification has been at the forefront of medical research. We evaluated several deep and non-deep learning models for brain tumor histopathological image classification. The main challenges were an insufficient amount of training data and nearly identical glioma features. We employed transfer learning to tackle these challenges, and we also applied several state-of-the-art non-deep learning classifiers to histogram of oriented gradients (HOG) features extracted from our images, as well as to features extracted from CNN activations. Data augmentation was also utilized in our study. We obtained 82% accuracy with DenseNet-201, the best of the deep learning models, and 83.8% accuracy with an ANN, the best of the non-deep learning classifiers. Each model's accuracy was calculated as the average of the diagonal of its confusion matrix. The performance criteria in this study are each model's precision in classifying each class and its average classification accuracy. Our results emphasize the significance of deep learning as an invaluable tool for histopathological image studies.
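    A hedged sketch of the transfer-learning setup described above: a torchvision DenseNet-201 pretrained on ImageNet with its classification head replaced. The number of classes and the decision to freeze the backbone are illustrative assumptions, not details from the paper.

```python
# DenseNet-201 transfer-learning sketch (torchvision); assumptions noted above.
import torch.nn as nn
from torchvision import models

def build_densenet201(n_classes=3, freeze_backbone=True):
    model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.features.parameters():  # keep the pretrained features fixed
            p.requires_grad = False
    # Replace the 1000-way ImageNet head with one sized for the tumor classes.
    model.classifier = nn.Linear(model.classifier.in_features, n_classes)
    return model
```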

    Advancing Content-Based Histopathological Image Retrieval Pre-Processing: A Comparative Analysis of the Effects of Color Normalization Techniques

    Content-Based Histopathological Image Retrieval (CBHIR) is a search technique based on the visual content and histopathological features of whole-slide images (WSIs). CBHIR tools assist pathologists in obtaining a faster and more accurate cancer diagnosis. Stain variation between hospitals hampers the performance of CBHIR tools. This paper explores the effects of color normalization (CN) in a recently proposed CBHIR approach to tackle this issue. Three different CN techniques were applied to the CAMELYON17 (CAM17) data set, a breast cancer data set consisting of images acquired with different staining protocols and scanners in five hospitals. Our experiments reveal that a proper CN technique, one that transfers the colors toward the most similar median values, has a positive impact on the retrieval performance of the proposed CBHIR framework. According to the obtained results, using CN as a pre-processing step can improve the accuracy of the proposed CBHIR framework to 97% (a 14% increase) compared to working with the original images.
    Keywords: Color normalization; Computer-aided diagnosis (CAD); Content-based image retrieval (CBIR); Histopathological images; Whole-slide images (WSIs)
    Tabatabaei, Z.; Pérez Bueno, F.; Colomer, A.; Oliver Moll, J.; Molina, R.; Naranjo Ornedo, V. (2024). Advancing Content-Based Histopathological Image Retrieval Pre-Processing: A Comparative Analysis of the Effects of Color Normalization Techniques. Applied Sciences, 14(5). https://doi.org/10.3390/app14052063
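    For orientation, the sketch below implements one widely used color normalization technique, Reinhard-style mean/standard-deviation matching in LAB space, using scikit-image. It is a generic illustration and not necessarily one of the three CN techniques compared in the paper.

```python
# Reinhard-style color normalization: match the LAB statistics of a source image
# to those of a reference (target) image.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def reinhard_normalize(source_rgb, target_rgb):
    src, tgt = rgb2lab(source_rgb), rgb2lab(target_rgb)
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    mapped = (src - src_mean) / (src_std + 1e-8) * tgt_std + tgt_mean
    return np.clip(lab2rgb(mapped), 0.0, 1.0)  # RGB image in [0, 1]
```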