1,530 research outputs found

    VITALAS at TRECVID-2008

    In this paper, we present our experiments on the High-Level Feature Extraction task at TRECVID 2008. This is our first year of participation in TRECVID, and our system adopts several popular approaches proposed by other groups in previous years. We propose two improved low-level features: a new Gabor texture descriptor and a Compact-SIFT codeword histogram. Our system uses the well-known LIBSVM library to train the SVM base classifiers. In the fusion step, several methods are employed, including voting, an SVM-based method, HCRF and Bootstrap Average AdaBoost (BAAB).
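A minimal sketch of the base-classifier-plus-late-fusion pipeline this abstract describes, using scikit-learn's SVC (which wraps LIBSVM) on hypothetical toy features; the data, the split into two "feature types", and the score-averaging voting rule are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data standing in for per-shot low-level features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy concept labels

# One SVM base classifier per feature type (sklearn's SVC wraps LIBSVM)
clf_a = SVC(kernel="rbf", probability=True).fit(X[:, :4], y)
clf_b = SVC(kernel="rbf", probability=True).fit(X[:, 4:], y)

# Late fusion by averaging posterior scores (a simple voting variant)
scores = (clf_a.predict_proba(X[:, :4])[:, 1]
          + clf_b.predict_proba(X[:, 4:])[:, 1]) / 2
fused_pred = (scores > 0.5).astype(int)
```

In a real high-level feature extraction run, the averaged scores would be used to rank shots per concept rather than thresholded.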

    A novel fusion approach in the extraction of kernel descriptor with improved effectiveness and efficiency

    Image representation using feature descriptors is crucial. A number of histogram-based descriptors are widely used for this purpose. However, histogram-based descriptors have certain limitations, and kernel descriptors (KDES) have been shown to overcome them. Moreover, a combination of several KDES performs better than an individual KDES. Conventionally, KDES fusion is performed by concatenating the gradient, colour and shape descriptors after they have been extracted. This approach has limitations in terms of both efficiency and effectiveness. In this paper, we propose a novel approach that fuses different image features before descriptor extraction, resulting in a compact descriptor that is both efficient and effective. In addition, we investigate the effect on the proposed descriptor of fusing texture-based features along with the conventionally used ones. Our proposed descriptor is evaluated on two publicly available image databases and shown to provide outstanding performance.
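The before-extraction fusion idea can be illustrated as follows. The attribute choice (gradient magnitude plus colour channels) and the mean-pooling step are placeholder assumptions; the paper's actual kernel descriptor machinery is considerably more involved:

```python
import numpy as np

def early_fuse(gradient, colour):
    """Fuse per-pixel attributes BEFORE descriptor extraction (a sketch
    of the idea), instead of extracting separate gradient/colour/shape
    descriptors and concatenating them afterwards."""
    return np.concatenate([gradient[..., None], colour], axis=-1)

rng = np.random.default_rng(0)
img_grad = rng.random((8, 8))        # toy per-pixel gradient magnitude
img_col = rng.random((8, 8, 3))      # toy colour channels
fused = early_fuse(img_grad, img_col)  # shape (8, 8, 4)

# One compact descriptor from the fused attributes
# (mean pooling is a stand-in for the kernel-based aggregation)
descriptor = fused.reshape(-1, 4).mean(axis=0)
```

The payoff claimed in the abstract is that a single descriptor is computed over the fused attributes, rather than one full descriptor per feature type.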

    Histology Image Retrieval in Optimized Multifeature Spaces


    Image retrieval based on colour and improved NMI texture features

    This paper proposes an improved method for extracting NMI features. The method first uses Particle Swarm Optimization to optimize the two-dimensional maximum between-class variance threshold (2OTSU). The optimized 2OTSU is then introduced into the Pulse Coupled Neural Network (PCNN) to automatically determine the number of loop iterations, and this improved PCNN is used to extract the NMI features of the image. To address the low accuracy of any single feature, this paper also proposes a new multi-feature fusion method for image retrieval. It combines HSV colour features with texture features, where the texture feature extraction methods are the Grey Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP) and the improved PCNN. The experimental results show that, compared with similar algorithms, the retrieval accuracy of this method is improved by 13.6% on the Corel-1k dataset, by 13.4% on the AT&T dataset and by 17.7% on the FD-XJ dataset. The proposed algorithm therefore offers better retrieval performance and robustness than existing image retrieval algorithms based on multi-feature fusion.
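The multi-feature fusion retrieval step might be sketched as below, assuming a plain 8-neighbour LBP and per-channel colour histograms; the paper's improved-PCNN NMI feature and GLCM are not reproduced here, and the toy images, bin counts and L1 ranking are illustrative choices:

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour LBP histogram (one of the texture cues fused here)."""
    c = gray[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=np.uint8)
    h, w = gray.shape
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

def colour_histogram(img, bins=8):
    """Per-channel colour histogram (HSV in the paper; any 3-channel image here)."""
    return np.concatenate(
        [np.histogram(img[..., ch], bins=bins, range=(0, 1))[0] for ch in range(3)]
    ).astype(float)

def fused_descriptor(img):
    gray = img.mean(axis=-1)
    col = colour_histogram(img)
    return np.concatenate([col / (col.sum() + 1e-9), lbp_histogram(gray)])

# Tiny toy database: rank by L1 distance between fused descriptors
rng = np.random.default_rng(0)
db = [rng.random((16, 16, 3)) for _ in range(5)]
feats = np.stack([fused_descriptor(im) for im in db])
query = fused_descriptor(db[2])
ranked = np.argsort(np.abs(feats - query).sum(axis=1))
```

Since the query is taken from the database, its own image ranks first; the reported accuracy gains come from the richer (NMI + GLCM + LBP + colour) fusion, which this sketch only approximates.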

    Local and deep texture features for classification of natural and biomedical images

    Developing efficient feature descriptors is very important in many computer vision applications, including biomedical image analysis. In the two decades before deep learning approaches became dominant in image classification, texture features proved very effective at capturing gradient variation in images. Following the success of the Local Binary Pattern (LBP) descriptor, many variants were introduced to further improve classification performance. However, the problem of image classification becomes harder as the numbers of images and classes increase, and more robust approaches are needed. In this thesis, we address the problem of analyzing biomedical images using a combination of local and deep features. First, we propose a novel descriptor based on the motif Peano scan concept, called Joint Motif Labels (JML). We then combine the features extracted from the JML descriptor with two other descriptors, Rotation Invariant Co-occurrence among Local Binary Patterns (RIC-LBP) and Joint Adaptive Median Binary Patterns (JAMBP). In addition, we construct another descriptor, called Motif Patterns encoded by RIC-LBP, and use it in our classification framework. We further enrich the framework by combining these local descriptors with features extracted from a pre-trained deep network, VGG-19: the 4096 activations of the fully connected 'fc7' layer are extracted and combined with the proposed local descriptors. Finally, we show that a Random Forests (RF) classifier can be used to obtain superior performance in biomedical image analysis. Testing was performed on two standard biomedical datasets and three standard texture datasets.
Results show that our framework can beat state-of-the-art accuracy on biomedical image analysis, and the combination of local features produces promising results on the standard texture datasets.
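The local-plus-deep fusion and Random Forest stage might be sketched as below; the random arrays merely stand in for the JML/RIC-LBP/JAMBP histograms and the 4096-D VGG-19 'fc7' activations, and all dimensions and hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 120
local_feats = rng.random((n, 64))   # stand-in for local texture histograms
deep_feats = rng.random((n, 128))   # stand-in for VGG-19 'fc7' (4096-D in the thesis)
y = rng.integers(0, 3, size=n)      # toy class labels

# Feature-level fusion: concatenate local and deep features per image,
# then train the Random Forests classifier on the fused vectors
X = np.hstack([local_feats, deep_feats])
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = rf.predict(X)
```

In practice the fused vectors would be split into train/test folds per dataset; predicting on the training set here is only to show the pipeline shape.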