
    HIERARCHICAL LEARNING OF DISCRIMINATIVE FEATURES AND CLASSIFIERS FOR LARGE-SCALE VISUAL RECOGNITION

    Enabling computers to recognize objects present in images has been a long-standing but tremendously challenging problem in computer vision. Beyond the difficulties resulting from huge appearance variations, large-scale visual recognition poses unprecedented challenges when the number of visual categories reaches thousands and the number of images grows to millions. This dissertation contributes to addressing several of these challenges. First, we develop an automatic image-text alignment method to collect massive amounts of labeled images from the Web for training visual concept classifiers. Specifically, we first crawl a large number of cross-media Web pages containing Web images and their auxiliary texts, and then segment them into a collection of image-text pairs. We then show that near-duplicate image clustering according to visual similarity can significantly reduce the uncertainty about how Web images' semantics relate to their auxiliary text terms or phrases. Finally, we empirically demonstrate that a random walk over a newly proposed phrase correlation network helps achieve more precise image-text alignment by refining the relevance scores between Web images and their auxiliary text terms. Second, we propose a visual tree model that reduces the computational complexity of a large-scale visual recognition system by hierarchically organizing and learning the classifiers for a large number of visual categories in a tree structure. Compared to previous tree models, such as the label tree, our visual tree model does not require training a huge number of classifiers in advance, which is computationally expensive; nevertheless, we experimentally show that the proposed visual tree achieves recognition accuracy and efficiency comparable to or even better than other tree models. Third, we present a joint dictionary learning (JDL) algorithm that exploits inter-category visual correlations to learn more discriminative dictionaries for image content representation. Given a group of visually correlated categories, JDL simultaneously learns one common dictionary and multiple category-specific dictionaries to explicitly separate the shared visual atoms from the category-specific ones. We accordingly develop three classification schemes to make full use of the dictionaries learned by JDL for visual content representation in the task of image categorization. Experiments on two image data sets, which respectively contain 17 and 1,000 categories, demonstrate the effectiveness of the proposed algorithm. In the last part of the dissertation, we develop a novel data-driven algorithm to quantitatively characterize the semantic gaps of different visual concepts for learning-complexity estimation and inference-model selection. The semantic gaps are estimated directly in the visual feature space, since this is the common space for concept classifier training and automatic concept detection. We show that the quantitative characterization of the semantic gaps helps to automatically select more effective inference models for classifier training, which further improves recognition accuracy.
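
    As an illustration of the score-refinement idea only, the snippet below sketches a PageRank-style random walk over a phrase correlation network; the affinity matrix, damping factor, and iteration count are assumptions and not the dissertation's exact formulation.

```python
# Minimal sketch (assumed formulation) of refining image-text relevance scores by a
# random walk over a phrase correlation network. `affinity` is a hypothetical
# phrase-phrase correlation matrix; `scores` holds the initial relevance of each
# phrase to a given Web image.
import numpy as np

def random_walk_refine(affinity: np.ndarray, scores: np.ndarray,
                       alpha: float = 0.85, n_iter: int = 50) -> np.ndarray:
    """Propagate relevance scores along phrase correlations (PageRank-style)."""
    # Row-normalize the affinity matrix into a transition matrix.
    row_sums = affinity.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    transition = affinity / row_sums
    refined = scores.copy()
    for _ in range(n_iter):
        refined = alpha * transition.T @ refined + (1 - alpha) * scores
    return refined

# Toy usage: three phrases, the first two strongly correlated.
affinity = np.array([[0.0, 0.9, 0.1],
                     [0.9, 0.0, 0.1],
                     [0.1, 0.1, 0.0]])
initial = np.array([1.0, 0.0, 0.2])   # raw image-phrase relevance
print(random_walk_refine(affinity, initial))
```

    The correlated phrase inherits part of the first phrase's relevance, which is the intended smoothing effect of the walk.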

    HEp-2 Cell Classification with heterogeneous classes-processes based on K-Nearest Neighbours

    We present a scheme for the feature extraction and classification of the fluorescence staining patterns of HEp-2 cells in IIF images. We propose a set of complementary processes specific to each class of patterns to be searched, consisting of preprocessing, feature extraction, and classification. The choice of methods, features, and parameters was performed automatically, using the Mean Class Accuracy (MCA) as a figure of merit. We extract a large number of features (108) able to fully characterize the staining pattern of HEp-2 cells. We propose a classification approach based on two steps: the first step follows the one-against-all (OAA) scheme, while the second step follows the one-against-one (OAO) scheme; see the sketch below. To do this, we implemented 21 KNN classifiers: 6 OAA and 15 OAO. The leave-one-out image cross-validation method was used to evaluate the results.
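
    The two-step decision scheme can be sketched as follows; the neighbourhood size, shortlist length, and tie-breaking rule are assumptions, not the authors' exact pipeline.

```python
# Minimal sketch (assumed setup) of the two-step scheme: one-against-all KNN
# classifiers shortlist candidate patterns, then a one-against-one KNN classifier
# decides between the two best candidates.
import numpy as np
from itertools import combinations
from sklearn.neighbors import KNeighborsClassifier

def fit_oaa(X, y, classes, k=5):
    """One binary KNN per class (class vs. rest); assumes every class appears in y."""
    return {c: KNeighborsClassifier(n_neighbors=k).fit(X, (y == c).astype(int))
            for c in classes}

def fit_oao(X, y, classes, k=5):
    """One binary KNN per unordered class pair."""
    models = {}
    for a, b in combinations(sorted(classes), 2):
        mask = np.isin(y, [a, b])
        models[(a, b)] = KNeighborsClassifier(n_neighbors=k).fit(X[mask], y[mask])
    return models

def predict_two_step(x, oaa, oao, classes):
    x = x.reshape(1, -1)
    # Step 1 (OAA): keep the two classes with the highest positive-class probability.
    scores = {c: oaa[c].predict_proba(x)[0, -1] for c in classes}
    top = sorted(scores, key=scores.get, reverse=True)[:2]
    # Step 2 (OAO): the classifier trained on that pair breaks the tie.
    pair = tuple(sorted(top))
    return oao[pair].predict(x)[0] if pair in oao else top[0]
```

    With six staining-pattern classes this construction yields exactly the 6 OAA and 15 OAO classifiers mentioned in the abstract.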

    Target recognition for synthetic aperture radar imagery based on convolutional neural network feature fusion

    Driven by the great success of the deep convolutional neural networks (CNNs) that currently power many computer vision applications, we extend the usability of visual-domain CNNs to the synthetic aperture radar (SAR) data domain without employing transfer learning. Our SAR automatic target recognition (ATR) architecture efficiently extends the pretrained Visual Geometry Group (VGG) CNN from the visual domain to the X-band SAR data domain by clustering its neuron layers, bridging the visual-SAR modality gap by fusing the features extracted from the hidden layers, and employing a local feature matching scheme. Trials on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset under various setups and nuisances demonstrate highly appealing ATR performance, achieving 100% and 99.79% accuracy on the 3-class and 10-class ATR problems, respectively. We also confirm the validity, robustness, and conceptual coherence of the proposed method by extending it to several state-of-the-art CNNs and commonly used local feature similarity/matching metrics.
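
    A rough illustration of hidden-layer feature fusion is given below; the tapped layers, global-average pooling, and cosine matching are assumptions and stand in for the paper's layer clustering and local feature matching scheme.

```python
# Minimal sketch (assumptions: torchvision VGG-16 weights, pooled activations) of
# fusing features from several hidden convolutional layers and comparing two SAR
# chips by cosine similarity.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
TAP_LAYERS = {4, 9, 16, 23, 30}   # assumed taps: after each max-pooling stage

@torch.no_grad()
def fused_descriptor(img: torch.Tensor) -> torch.Tensor:
    """img: (1, 3, 224, 224) tensor; returns one L2-normalized fused feature vector."""
    feats, x = [], img
    for idx, layer in enumerate(vgg):
        x = layer(x)
        if idx in TAP_LAYERS:
            # Global average pooling turns each tapped feature map into a vector.
            feats.append(x.mean(dim=(2, 3)))
    fused = torch.cat(feats, dim=1)
    return F.normalize(fused, dim=1)

# Toy matching of two (random) chips replicated to the 3 channels VGG expects.
a, b = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
similarity = (fused_descriptor(a) * fused_descriptor(b)).sum().item()
print(f"cosine similarity: {similarity:.3f}")
```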

    APPLICATION OF DEEP CONVOLUTIONAL NETWORK FOR THE CLASSIFICATION OF AUTO IMMUNE DISEASE

    Indirect immunofluorescence (IIF) analysis is in the limelight because of its great importance in medical health; it is mainly used for the analysis of auto-immune diseases. These diseases occur when the body's natural defense system cannot distinguish between normal body cells and foreign cells, and more than 80 auto-immune diseases exist in humans, affecting different parts of the body. IIF can be carried out manually as well as with Computer-Aided Diagnosis (CAD). The aim of this research is to propose an advanced methodology for the analysis of auto-immune diseases using a well-known transfer-learning model, with data augmentation and data normalization used to address overfitting. First, the freely available MIVIA dataset of HEp-2 cells was selected; it contains a total of 1,457 images and six classes of staining patterns: centromere, homogeneous, nucleolar, coarse speckled, fine speckled, and cytoplasmatic. The well-known transfer-learning model VGG-16 is then trained on the MIVIA HEp-2 dataset, with data augmentation and normalization applied to avoid overfitting, since datasets of medical images are not very large. The performance of the model is then evaluated using a confusion matrix: VGG-16 achieves 84.375% accuracy, making it suitable for the analysis of auto-immune diseases. In addition to accuracy, three further parameters, precision, F1 measure, and recall, are computed from the confusion matrix to assess the model. Tools and languages are also important because they provide a simple and easy way to implement solutions to image-processing problems; Python and Colab were used to read and write the data, since Python provides fast execution and Colab serves as a hosted environment for running it. The results show that transfer learning is an efficient and effective technique for the analysis of auto-immune diseases, since it provides high accuracy in less time and reduces errors in image classification.
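
    A minimal Keras sketch of this transfer-learning setup is shown below; the input size, augmentation operations, classifier head, and directory layout are assumptions, since the abstract does not list exact hyperparameters.

```python
# Minimal sketch (assumed Keras setup) of VGG-16 transfer learning with data
# augmentation and normalization for the six MIVIA HEp-2 staining patterns.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 6   # centromere, homogeneous, nucleolar, coarse/fine speckled, cytoplasmatic

# Frozen convolutional base pretrained on ImageNet; only the new head is trained.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),          # normalization
    layers.RandomFlip("horizontal"),      # augmentation to limit overfitting
    layers.RandomRotation(0.1),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory layout: one subfolder per staining pattern.
# train_ds = tf.keras.utils.image_dataset_from_directory("mivia_hep2/train",
#                                                        image_size=(224, 224))
# model.fit(train_ds, epochs=10)
```

    Precision, recall, and F1 can then be read off the confusion matrix of the held-out predictions, as the abstract describes.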

    Deep CNN for IIF Images Classification in Autoimmune Diagnostics

    The diagnosis and monitoring of autoimmune diseases are very important problems in medicine. The most widely used test for this purpose is the antinuclear antibody (ANA) test, and an indirect immunofluorescence (IIF) test performed with Human Epithelial type 2 (HEp-2) cells as the substrate antigen is the most common method to determine ANA. In this paper we present an automatic HEp-2 specimen classification system, based on a convolutional neural network, that is able to classify IIF images. The system consists of a feature-extraction module based on a pre-trained AlexNet network and a classification phase for the cell-pattern association using six support vector machines and a k-nearest-neighbors classifier. The classification at the image level is obtained by analyzing the pattern prevalence at the cell level. The layers of the pre-trained network and various system parameters were evaluated in order to optimize the process. The system was developed and tested on the public Indirect Immunofluorescence Image Analysis (I3A) HEp-2 image database, and the leave-one-specimen-out procedure was used to test the generalisation performance of the method. The performance analysis showed an accuracy of 96.4% and a mean class accuracy of 93.8%. The results have been evaluated by comparing them with some of the most representative works using the same database.
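
    The classification stage can be sketched as below; the pattern names and the use of a single one-vs-rest linear SVM bank are assumptions, the paper's additional k-nearest-neighbors classifier is omitted for brevity, and the random features stand in for pre-extracted AlexNet descriptors.

```python
# Minimal sketch (assumed components): six one-vs-rest SVMs on pre-extracted
# AlexNet features decide each cell's pattern, and the image label is the most
# prevalent cell-level pattern. Feature extraction itself is stubbed out.
import numpy as np
from collections import Counter
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

PATTERNS = ["homogeneous", "speckled", "nucleolar",
            "centromere", "golgi", "numem"]   # assumed pattern names

def train_cell_classifier(cell_features: np.ndarray, cell_labels: np.ndarray):
    """cell_features: (n_cells, d) descriptors; one binary SVM per pattern."""
    return OneVsRestClassifier(LinearSVC()).fit(cell_features, cell_labels)

def classify_specimen(clf, specimen_cell_features: np.ndarray) -> str:
    """Image-level decision = most prevalent pattern over the specimen's cells."""
    cell_preds = clf.predict(specimen_cell_features)
    return Counter(cell_preds).most_common(1)[0][0]

# Toy usage with random 4096-D stand-ins for AlexNet fc descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4096))
y = rng.choice(PATTERNS, size=120)
clf = train_cell_classifier(X, y)
print(classify_specimen(clf, rng.normal(size=(30, 4096))))
```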

    SEGMENTATION OF ANTI NEUTROPHIL CYTOPLASMIC ANTIBODIES (ANCA) IMAGES BASED ON WATERSHED AND WAVELET

    Autoimmune disease is a type of disease in which the immune system is unable to tell healthy tissue from harmful agents, which leads to misguided attacks on healthy cells and tissues. Autoimmune diseases can be classified into more than 80 types depending on the affected area, and the tests also vary according to the suspected type of disease. Some examples are the Enzyme-Linked Immunosorbent Assay (ELISA) test, the Indirect Immunofluorescence (IIF) test for Antinuclear Antibody (ANA) using HEp-2 cells, and the IIF test for Anti-Neutrophil Cytoplasmic Antibodies (ANCA). In this project, however, the author focuses only on ANCA images with the two major staining patterns, P-ANCA and C-ANCA. Currently, the positivity of the images depends solely on the experience of the physician, which leads to varying results and a lack of reliability; obtaining the result is also time consuming. Thus, an automatic classification system has been developed to replace the manual process, and the vital step inside this automatic system is segmentation. Many researchers suggest different segmentation techniques to segment the ANCA images before further processing. In this research, the author focuses on the Watershed technique to segment the ANCA images, implementing the algorithm in Matlab, and uses the Wavelet transform to suppress noise and so avoid over-segmentation. The segmentation results are verified using the Rand Index, and the combination of Watershed and Wavelet transform gives very promising results. A recommendation for future work is to explore automatic determination of the noise variance inside the images.
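
    A Python stand-in for the Matlab pipeline is sketched below; the Otsu thresholding and distance-transform marker selection are assumptions around the core wavelet-denoising-then-watershed idea.

```python
# Minimal sketch (Python stand-in for the author's Matlab implementation) of
# wavelet denoising followed by marker-based watershed segmentation of an IIF image.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure, morphology, segmentation
from skimage.restoration import denoise_wavelet

def segment_anca(image: np.ndarray) -> np.ndarray:
    """image: 2-D grayscale float array in [0, 1]; returns a label image."""
    # Wavelet denoising suppresses noise that would otherwise cause over-segmentation.
    smooth = denoise_wavelet(image, rescale_sigma=True)
    # Foreground mask and distance transform provide the watershed landscape.
    mask = smooth > filters.threshold_otsu(smooth)
    distance = ndi.distance_transform_edt(mask)
    # Local maxima of the distance map act as markers (ideally one per cell).
    markers = measure.label(morphology.local_maxima(distance))
    return segmentation.watershed(-distance, markers, mask=mask)

# Toy usage on a synthetic image with two bright blobs.
img = np.zeros((128, 128))
img[30:60, 30:60] = 1.0
img[70:110, 70:110] = 1.0
img += 0.1 * np.random.default_rng(0).normal(size=img.shape)
labels = segment_anca(np.clip(img, 0, 1))
print(labels.max(), "regions found")
```

    The resulting label image can then be scored against a manual segmentation with the Rand Index, as described in the abstract.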