
    Automatic Classification of Human Epithelial Type 2 Cell Indirect Immunofluorescence Images using Cell Pyramid Matching

    This paper describes a novel system for automatic classification of images obtained from Anti-Nuclear Antibody (ANA) pathology tests on Human Epithelial type 2 (HEp-2) cells using the Indirect Immunofluorescence (IIF) protocol. The IIF protocol on HEp-2 cells has been the hallmark method for identifying the presence of ANAs, due to its high sensitivity and the large range of antigens that can be detected. However, it suffers from numerous shortcomings, such as being subjective as well as time and labour intensive. Computer Aided Diagnostic (CAD) systems have been developed to address these problems by automatically classifying a HEp-2 cell image into one of its known patterns (e.g. speckled, homogeneous). Most existing CAD systems use handpicked features to represent a HEp-2 cell image, which may only work in limited scenarios. We propose a novel automatic cell image classification method termed Cell Pyramid Matching (CPM), which combines regional histograms of visual words with the Multiple Kernel Learning framework. We present a study of several variations of generating histograms and show the efficacy of the system on two publicly available datasets: the ICPR HEp-2 cell classification contest dataset and the SNPHEp-2 dataset. (arXiv admin note: substantial text overlap with arXiv:1304.126)
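    The regional histograms of visual words that CPM is built on can be sketched as a spatial pyramid: each local descriptor is assigned to its nearest codeword, and per-region histograms are concatenated across pyramid levels. The following is a minimal illustration, not the authors' implementation; the function names and the nearest-centroid codeword assignment are assumptions.

    ```python
    import numpy as np

    def visual_word_histogram(descriptors, codebook):
        """Assign each local descriptor to its nearest codeword and
        return a normalised histogram of codeword counts."""
        # squared distances: (n_descriptors, n_codewords)
        d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        words = d.argmin(axis=1)
        hist = np.bincount(words, minlength=len(codebook)).astype(float)
        return hist / max(hist.sum(), 1.0)

    def cell_pyramid_histogram(desc, xy, codebook, size, levels=2):
        """Concatenate visual-word histograms over a spatial pyramid:
        level 0 covers the whole cell image, level l splits it into
        2^l x 2^l regions based on descriptor locations `xy`."""
        h, w = size
        hists = []
        for level in range(levels):
            n = 2 ** level
            for i in range(n):
                for j in range(n):
                    mask = (xy[:, 0] * n // h == i) & (xy[:, 1] * n // w == j)
                    hists.append(visual_word_histogram(desc[mask], codebook))
        return np.concatenate(hists)
    ```

    In the full CPM system each region's histogram would feed a separate kernel, with Multiple Kernel Learning weighting the regions; here the histograms are simply concatenated.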

    Local and deep texture features for classification of natural and biomedical images

    Developing efficient feature descriptors is very important in many computer vision applications, including biomedical image analysis. In the two decades before deep learning approaches became popular for image classification, texture features proved very effective at capturing gradient variation in images. Following the success of the Local Binary Pattern (LBP) descriptor, many variations were introduced to further improve classification results. However, the problem of image classification becomes more complicated as the numbers of images and classes increase, and more robust approaches are needed. In this thesis, we address the problem of analyzing biomedical images using a combination of local and deep features. First, we propose a novel descriptor based on the motif Peano scan concept, called Joint Motif Labels (JML). We then combine the features extracted by the JML descriptor with two other descriptors: Rotation Invariant Co-occurrence among Local Binary Patterns (RIC-LBP) and Joint Adaptive Median Binary Patterns (JAMBP). In addition, we construct another descriptor, Motif Patterns encoded by RIC-LBP, and use it in our classification framework. We further enrich the framework by combining these local descriptors with features extracted from a pre-trained deep network, VGG-19: the 4096 features of the fully connected 'fc7' layer are extracted and combined with the proposed local descriptors. Finally, we show that the Random Forests (RF) classifier can obtain superior performance in the field of biomedical image analysis. Testing was performed on two standard biomedical datasets and three standard texture datasets. Results show that our framework surpasses state-of-the-art accuracy on the biomedical datasets, and the combination of local features produces promising results on the standard texture datasets.
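    The fusion of a local texture descriptor with pre-extracted deep features can be sketched as follows. This is a minimal illustration using the basic 8-neighbour LBP rather than the thesis's JML/RIC-LBP/JAMBP descriptors; the function names are assumptions, and the deep feature vector stands in for VGG-19 'fc7' activations extracted elsewhere.

    ```python
    import numpy as np

    def lbp_histogram(img):
        """Basic 8-neighbour Local Binary Pattern histogram (256 bins).
        Each interior pixel is encoded by thresholding its 8 neighbours
        against the centre value and packing the results into a byte."""
        c = img[1:-1, 1:-1]
        code = np.zeros_like(c, dtype=np.uint8)
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
        for bit, (dy, dx) in enumerate(shifts):
            n = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
            code |= (n >= c).astype(np.uint8) << bit
        hist = np.bincount(code.ravel(), minlength=256).astype(float)
        return hist / hist.sum()

    def fused_feature(img, deep_feat):
        """Concatenate the local LBP histogram with a pre-extracted deep
        feature vector (e.g. the 4096-dim VGG-19 'fc7' activations)."""
        return np.concatenate([lbp_histogram(img), deep_feat])
    ```

    The fused vector would then be fed to a Random Forests classifier, as the thesis does with its richer descriptor set.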

    Studying the Applicability of Generative Adversarial Networks on HEp-2 Cell Image Augmentation

    Anti-Nuclear Antibody (ANA) testing is the primary serological screening test for autoimmune diseases. ANA testing is conducted mainly with the Indirect Immunofluorescence (IIF) on Human Epithelial cell-substrate (HEp-2) protocol. However, due to its high variability, human subjectivity, and low throughput, there is a pressing need to develop an efficient Computer-Aided Diagnosis (CAD) system to automate this protocol. Many recently proposed Convolutional Neural Networks (CNNs) have demonstrated promising results in HEp-2 cell image classification, the main task of the HEp-2 IIF protocol. However, the lack of large labeled datasets remains the main challenge in this field. This work provides a detailed study of the applicability of generative adversarial networks (GANs) as an augmentation method. Different types of GANs were employed to synthesize HEp-2 cell images to address the data scarcity problem. For systematic comparison, empirical quantitative metrics were implemented to evaluate how well different GAN models learn the real data representations. The results show that, despite the high visual similarity to the real images, the capacity of GANs to generate diverse data is still limited. This deficiency in the diversity of the generated data has a crucial impact when GAN output is used as a standalone augmentation method. However, combining a limited amount of GAN-generated data with classic augmentation improves classification accuracy across different CNN variants. Our results demonstrate competitive performance for both the overall classification accuracy and the mean class accuracy on the HEp-2 cell image classification task.
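    The paper's central finding, that GAN-generated samples help most when mixed with classic augmentation rather than used alone, can be sketched as a training-set construction step. This is a minimal illustration under assumed names; the GAN images are taken as pre-generated arrays, and the flip/rotation transforms stand in for whatever classic augmentation pipeline is actually used.

    ```python
    import numpy as np

    def classic_augment(img, rng):
        """Classic label-preserving augmentation: random flips and
        90-degree rotations, which suit rotation-invariant cell patterns."""
        if rng.random() < 0.5:
            img = img[:, ::-1]
        if rng.random() < 0.5:
            img = img[::-1, :]
        return np.rot90(img, k=rng.integers(0, 4))

    def build_training_set(real, gan_generated, gan_fraction=0.25, seed=0):
        """Mix classically augmented real images with a limited share of
        GAN-generated images (gan_fraction of the real-set size),
        rather than relying on GAN samples alone."""
        rng = np.random.default_rng(seed)
        n_gan = int(gan_fraction * len(real))
        idx = rng.permutation(len(gan_generated))[:n_gan]
        augmented = [classic_augment(x, rng) for x in real]
        return augmented + [gan_generated[i] for i in idx]
    ```

    Capping the GAN share reflects the reported limited diversity of GAN output: beyond a point, adding more synthetic samples mostly duplicates modes the generator already covers.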

    Biological cells classification using bio-inspired descriptor in a boosting k-NN framework

    High-content imaging is an emerging technology for the analysis and quantification of biological phenomena. Classifying a huge number of cells or quantifying markers from large sets of images by hand is, however, a very time-consuming and poorly reproducible task for experts. In order to overcome such limitations, we propose a supervised method for automatic cell classification. Our approach consists of two steps: the first is an indexing stage based on specific bio-inspired features relying on the distribution of contrast information over segmented cells; the second is a supervised learning stage that selects the prototypical samples best representing the cell categories. These prototypes are used in a leveraged k-NN framework to predict the class of unlabeled cells. In this paper we test our new learning algorithm on cellular images acquired for the analysis of pathologies. To evaluate the automatic classification performance, we tested our algorithm on the HEp-2 Cells dataset of Foggia et al. (CBMS 2010). Results are very promising, showing classification precision greater than 96% on average, suggesting our method as a valuable decision-support tool in such cellular imaging applications.
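    The prediction step of a leveraged k-NN can be sketched as a weighted vote: the k nearest prototypes vote for their class with learned leveraging coefficients. The sketch below is a simplified stand-in for the boosting-trained version described in the abstract; in the actual method the prototypes and their weights come from the boosting stage, whereas here they are simply given as inputs.

    ```python
    import numpy as np

    def predict_leveraged_knn(x, prototypes, labels, weights, k=3):
        """Weighted k-NN over selected prototypes: the k nearest
        prototypes vote with their leveraging coefficients, and the
        class with the largest total weight wins."""
        d = ((prototypes - x) ** 2).sum(axis=1)      # squared distances
        nearest = np.argsort(d)[:k]                  # indices of k nearest
        scores = {}
        for i in nearest:
            scores[labels[i]] = scores.get(labels[i], 0.0) + weights[i]
        return max(scores, key=scores.get)
    ```

    With all weights equal this reduces to plain majority-vote k-NN; the leveraging coefficients let boosting emphasise prototypes that best separate the cell categories.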

    Playing Tag with ANN: Boosted Top Identification with Pattern Recognition

    Many searches for physics beyond the Standard Model at the Large Hadron Collider (LHC) rely on top tagging algorithms, which discriminate between boosted hadronic top quarks and the much more common jets initiated by light quarks and gluons. We note that the hadronic calorimeter (HCAL) effectively takes a "digital image" of each jet, with pixel intensities given by energy deposits in individual HCAL cells. Viewed in this way, top tagging becomes a canonical pattern recognition problem. With this motivation, we present a novel top tagging algorithm based on an Artificial Neural Network (ANN), one of the most popular approaches to pattern recognition. The ANN is trained on a large sample of boosted tops and light quark/gluon jets, and is then applied to independent test samples. The ANN tagger demonstrated excellent performance in a Monte Carlo study: for example, for jets with p_T in the 1100-1200 GeV range, 60% top-tag efficiency can be achieved with a 4% mis-tag rate. We discuss the physical features of the jets identified by the ANN tagger as the most important for classification, as well as correlations between the ANN tagger and some of the familiar top-tagging observables and algorithms. (20 pages, 9 figures)
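    Two pieces of the pipeline described above lend themselves to a short sketch: forming the jet "digital image" as a 2D histogram of HCAL energy deposits, and scanning a score threshold to read off the mis-tag rate at a fixed top-tag efficiency (the 60%/4% working point quoted in the abstract). This is an illustrative sketch, not the paper's code; the binning, the (eta, phi) extent, and the function names are assumptions.

    ```python
    import numpy as np

    def jet_image(eta, phi, energy, bins=8, extent=1.0):
        """Form the 'digital image' of a jet: a 2D histogram of HCAL
        energy deposits over (eta, phi) cells, normalised to unit
        total energy so the ANN sees the energy pattern, not the scale."""
        img, _, _ = np.histogram2d(
            eta, phi, bins=bins,
            range=[[-extent, extent], [-extent, extent]],
            weights=energy)
        return img / max(img.sum(), 1e-12)

    def mistag_at_efficiency(scores_sig, scores_bkg, target_eff=0.6):
        """Working-point scan: pick the tagger-score threshold that keeps
        the target fraction of top jets, then report the fraction of
        light quark/gluon jets passing that same threshold."""
        threshold = np.quantile(scores_sig, 1.0 - target_eff)
        return float((scores_bkg >= threshold).mean())
    ```

    The normalised image would be flattened into the ANN input layer; the working-point scan traces out the efficiency-vs-mistag curve by varying `target_eff`.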