
    Supervised Spectral Subspace Clustering for Visual Dictionary Creation in the Context of Image Classification

    When building a traditional Bag of Visual Words (BOW) representation for image classification, the K-means algorithm is usually run on a large set of high-dimensional local descriptors to build the visual dictionary. However, it is very likely that, to find a good visual vocabulary, only a sub-part of the descriptor space of each visual word is truly relevant. We propose a novel framework for creating the visual dictionary based on a spectral subspace clustering method instead of the traditional K-means algorithm. A strategy for adding supervised information during the subspace clustering process is formulated to obtain more discriminative visual words. Experimental results on a real-world image dataset show that the proposed framework for dictionary creation improves classification accuracy compared to using a traditionally built BOW.
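    As a point of reference for the K-means baseline this abstract contrasts against, the sketch below builds a visual dictionary with K-means and quantizes an image's local descriptors into a BOW histogram. The descriptors are random stand-ins and scikit-learn is an assumed dependency; this illustrates only the traditional baseline, not the paper's spectral subspace method.

```python
# Minimal sketch of the traditional K-means visual-dictionary baseline.
# The "descriptors" are random stand-ins for real SIFT-like local features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(5000, 128))   # e.g. 5000 local 128-D descriptors

n_words = 256                                # size of the visual vocabulary
codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(descriptors)

def bow_histogram(image_descriptors, codebook):
    """Quantize one image's descriptors against the codebook; return a normalized BOW histogram."""
    words = codebook.predict(image_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

hist = bow_histogram(rng.normal(size=(300, 128)), codebook)
```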

    Spatial histograms of soft pairwise similar patches to improve the bag-of-visual-words model

    In the context of category-level scene classification, the bag-of-visual-words model (BoVW) is widely used for image representation. This model is appearance based and does not contain any information about the arrangement of the visual words in the 2D image space. To overcome this problem, recent approaches try to capture information about either the absolute or the relative spatial location of visual words. In the first category, the so-called Spatial Pyramid Representation (SPR) is very popular thanks to its simplicity and good results. Alternatively, adding information about occurrences of relative spatial configurations of visual words has been shown to be effective, but at the cost of higher computational complexity, specifically when relative distances and angles are taken into account. In this paper, we introduce a novel way to incorporate both distance and angle information in the BoVW representation. The novelty is, first, to provide a computationally efficient representation adding relative spatial information between visual words and, second, to use a soft pairwise voting scheme based on the distance in the descriptor space. Experiments on the challenging MSRC-2, 15Scene, Caltech101, Caltech256 and Pascal VOC 2007 datasets demonstrate that our method outperforms or is competitive with concurrent approaches. We also show that it provides important complementary information to spatial pyramid matching and can improve the overall performance.
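    To make the pairwise idea concrete, here is a hedged sketch of one plausible reading of a pairwise spatial histogram with soft voting: votes over (word pair, distance bin, angle bin) are weighted by a Gaussian of the descriptor-space distance between the two patches. The binning scheme and Gaussian weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative pairwise spatial histogram with soft voting.
# Bin counts, the distance cutoff, and the Gaussian weighting are assumptions.
import numpy as np

def pairwise_spatial_histogram(positions, words, descriptors,
                               n_words=64, n_dist_bins=4, n_angle_bins=8,
                               max_dist=200.0, sigma=0.5):
    """Accumulate soft votes over (word_i, word_j, distance bin, angle bin)."""
    hist = np.zeros((n_words, n_words, n_dist_bins, n_angle_bins))
    n = len(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = positions[j] - positions[i]
            dist = np.linalg.norm(d)
            if dist >= max_dist:
                continue
            dist_bin = int(dist / max_dist * n_dist_bins)
            angle = (np.arctan2(d[1], d[0]) + np.pi) / (2 * np.pi)
            angle_bin = min(int(angle * n_angle_bins), n_angle_bins - 1)
            # soft vote: pairs that are close in descriptor space contribute more
            feat_dist = np.linalg.norm(descriptors[i] - descriptors[j])
            weight = np.exp(-feat_dist ** 2 / (2 * sigma ** 2))
            hist[words[i], words[j], dist_bin, angle_bin] += weight
    return hist.ravel()

rng = np.random.default_rng(0)
h = pairwise_spatial_histogram(rng.uniform(0, 100, size=(30, 2)),
                               rng.integers(0, 64, size=30),
                               rng.normal(size=(30, 16)))
```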

    An efficient method to classify GI tract images from WCE using visual words

    Digital images acquired with Wireless Capsule Endoscopy (WCE) of a patient's gastrointestinal (GI) tract are used to detect abnormalities. The large amount of WCE image data can take around two hours per patient to review for GI tract illnesses, which is highly time consuming and considerably increases healthcare costs. To address this problem, we propose a Visual Bag of Features (VBOF) method that incorporates the Scale Invariant Feature Transform (SIFT), the Center-Symmetric Local Binary Pattern (CS-LBP) and the Auto Color Correlogram (ACC). This combination of features captures interest-point, texture and color information in an image. The features computed for each image form a high-dimensional descriptor. The proposed feature descriptors are clustered by K-means into visual words, and a Support Vector Machine (SVM) is used to automatically classify multiple disease abnormalities from the GI tract. Finally, a post-processing scheme is applied to the classification results to validate the performance of multi-abnormality frame detection.
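    A minimal sketch of the overall pipeline described above (local features quantized into visual words by K-means, then classified with an SVM) is given below. The SIFT / CS-LBP / ACC feature extraction is replaced by random placeholders so the snippet stays self-contained, and scikit-learn is an assumed dependency.

```python
# Hedged sketch: visual-word quantization plus SVM classification.
# Real feature extraction (SIFT + CS-LBP + ACC) is replaced by random stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_images, feats_per_image, dim, n_words = 40, 100, 64, 32

# stand-in local features for each WCE frame
all_feats = [rng.normal(size=(feats_per_image, dim)) for _ in range(n_images)]
labels = rng.integers(0, 3, size=n_images)      # e.g. 3 abnormality classes

codebook = KMeans(n_clusters=n_words, n_init=5, random_state=0)
codebook.fit(np.vstack(all_feats))

def to_histogram(feats):
    """Map an image's local features to a normalized visual-word histogram."""
    words = codebook.predict(feats)
    h = np.bincount(words, minlength=n_words).astype(float)
    return h / h.sum()

X = np.array([to_histogram(f) for f in all_feats])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```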

    Investigation of new feature descriptors for image search and classification

    Content-based image search, classification and retrieval are an active and important research area due to their broad applications as well as the complexity of the problem. Understanding the semantics and contents of images for recognition remains one of the most difficult and prevailing problems in the machine intelligence and computer vision community. With large variations in size, pose, illumination and occlusions, image classification is a very challenging task. A good classification framework should address the key issues of discriminatory feature extraction as well as efficient and accurate classification. Towards that end, this dissertation focuses on exploring new image descriptors by incorporating cues from the human visual system, and integrating local, texture, shape as well as color information to construct robust and effective feature representations for advancing content-based image search and classification.

    Based on the Gabor wavelet transformation, whose kernels are similar to the 2D receptive field profiles of the mammalian cortical simple cells, a series of new image descriptors is developed. Specifically, first, a new color Gabor-HOG (GHOG) descriptor is introduced by concatenating the Histograms of Oriented Gradients (HOG) of the component images produced by applying Gabor filters in multiple scales and orientations to encode shape information. Second, the GHOG descriptor is analyzed in six different color spaces and grayscale to propose different color GHOG descriptors, which are further combined to present a new Fused Color GHOG (FC-GHOG) descriptor. Third, a novel Gabor-PHOG (GPHOG) descriptor is proposed which improves upon the Pyramid Histograms of Oriented Gradients (PHOG) descriptor, and subsequently a new FC-GPHOG descriptor is constructed by combining the multiple color GPHOG descriptors and employing Principal Component Analysis (PCA). Next, the Gabor-LBP (GLBP) descriptor is derived by accumulating the Local Binary Patterns (LBP) histograms of the local Gabor-filtered images to encode texture and local information of an image. Furthermore, a novel Gabor-LBP-PHOG (GLP) image descriptor is proposed which integrates the GLBP and the GPHOG descriptors as a feature set, and an innovative Fused Color Gabor-LBP-PHOG (FC-GLP) is constructed by fusing the GLP from multiple color spaces. Subsequently, the GLBP and the GHOG descriptors are combined to produce the Gabor-LBP-HOG (GLH) feature vector, which performs well on different object and scene image categories. The six color GLH vectors are further concatenated to form the Fused Color GLH (FC-GLH) descriptor. Finally, the Wigner-based Local Binary Patterns (WLBP) descriptor is proposed, which combines multi-neighborhood LBP, the Pseudo-Wigner distribution of images and the popular bag of words model to effectively classify scene images.

    To assess the feasibility of the proposed new image descriptors, two classification methods are used: one method applies the PCA and the Enhanced Fisher Model (EFM) for feature extraction and the nearest neighbor rule for classification, while the other method employs the Support Vector Machine (SVM). The classification performance of the proposed descriptors is tested on several publicly available popular image datasets. The experimental results show that the proposed new image descriptors achieve image search and classification results better than or on par with other popular image descriptors, such as the Scale Invariant Feature Transform (SIFT), the Pyramid Histograms of Visual Words (PHOW), the Pyramid Histograms of Oriented Gradients (PHOG), the Spatial Envelope (SE), the Color SIFT four Concentric Circles (C4CC), the Object Bank (OB), the Context Aware Topic Model (CA-TM), the Hierarchical Matching Pursuit (HMP), the Kernel Spatial Pyramid Matching (KSPM), the SIFT Sparse-coded Spatial Pyramid Matching (Sc-SPM), the Kernel Codebook (KC) and the LBP.
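    As an illustration of the Gabor-plus-HOG construction underlying the GHOG family, the sketch below filters an image with a small bank of Gabor kernels and concatenates a HOG descriptor per response. The frequencies, orientations and HOG cell sizes are illustrative assumptions rather than the dissertation's settings, and scikit-image is an assumed dependency.

```python
# Hedged sketch of a Gabor + HOG descriptor: HOG is computed on each Gabor
# response and the results are concatenated. Parameter values are assumptions.
import numpy as np
from skimage.filters import gabor
from skimage.feature import hog

def ghog_descriptor(gray_image, frequencies=(0.1, 0.2), n_orientations=4):
    """Concatenate HOG features of Gabor responses over several scales/orientations."""
    parts = []
    for freq in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, _ = gabor(gray_image, frequency=freq, theta=theta)
            parts.append(hog(real, orientations=9,
                             pixels_per_cell=(16, 16), cells_per_block=(2, 2)))
    return np.concatenate(parts)

descriptor = ghog_descriptor(np.random.rand(128, 128))
```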

    Bag-of-Features Image Indexing and Classification in Microsoft SQL Server Relational Database

    This paper presents a novel relational database architecture aimed at visual object classification and retrieval. The framework is based on the bag-of-features image representation model combined with Support Vector Machine classification and is integrated into a Microsoft SQL Server database.
    Comment: 2015 IEEE 2nd International Conference on Cybernetics (CYBCONF), Gdynia, Poland, 24-26 June 2015
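    As a rough illustration of storing and querying bag-of-features histograms in a relational table, the sketch below uses SQLite from the Python standard library (the paper itself targets Microsoft SQL Server); the table layout and column names are assumptions.

```python
# Hedged sketch: store sparse BoF histograms in a relational table and rank
# stored images against a query histogram. SQLite stands in for SQL Server.
import sqlite3
import numpy as np

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bow (image_id INTEGER, word_id INTEGER, count REAL)")

def store_histogram(image_id, hist):
    """Insert only the non-zero bins of an image's BoF histogram."""
    rows = [(image_id, w, float(c)) for w, c in enumerate(hist) if c > 0]
    conn.executemany("INSERT INTO bow VALUES (?, ?, ?)", rows)

store_histogram(1, np.array([0.0, 3.0, 1.0, 0.0]))
store_histogram(2, np.array([2.0, 0.0, 0.0, 5.0]))

def score_against(query_hist):
    """Dot-product similarity between a query histogram and every stored image."""
    scores = {}
    for image_id, word_id, count in conn.execute("SELECT * FROM bow"):
        scores[image_id] = scores.get(image_id, 0.0) + count * query_hist[word_id]
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(score_against(np.array([1.0, 0.0, 2.0, 1.0])))
```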