
    Plant image retrieval using color, shape and texture features

    We present a content-based image retrieval system for plant images, intended especially for the house plant identification problem. A plant image consists of a collection of overlapping leaves and possibly flowers, which makes the problem challenging. We studied the suitability of various well-known color, shape and texture features for this problem, and also introduce some new texture matching techniques and shape features. Feature extraction is applied after segmenting the plant region from the background using the max-flow min-cut technique. Results on a database of 380 plant images belonging to 78 different types of plants show the promise of the proposed new techniques and of the overall system: in 55% of the queries, the correct plant image is retrieved among the top-15 results. Furthermore, the accuracy rises to 73% when a 132-image subset of well-segmented plant images is considered.
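    A minimal sketch of the kind of pipeline the abstract describes is given below: graph-cut (min-cut/max-flow) foreground segmentation followed by simple color and texture descriptors restricted to the plant region. It uses OpenCV's GrabCut as the min-cut segmenter and HSV/Laplacian histograms as illustrative stand-ins for the paper's actual features; all parameter choices here are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def extract_plant_features(image_path):
    """Illustrative sketch only: graph-cut segmentation followed by simple
    color/texture histograms, not the authors' exact feature set."""
    img = cv2.imread(image_path)

    # Min-cut/max-flow foreground segmentation via GrabCut, initialised with a
    # rectangle that assumes the plant roughly fills the image centre.
    mask = np.zeros(img.shape[:2], np.uint8)
    rect = (10, 10, img.shape[1] - 20, img.shape[0] - 20)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

    # Color feature: hue/saturation histogram restricted to the plant region.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    color_hist = cv2.calcHist([hsv], [0, 1], fg, [16, 8], [0, 180, 0, 256])
    color_hist = cv2.normalize(color_hist, None).flatten()

    # Texture feature: histogram of Laplacian responses inside the plant region.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_32F)
    tex_hist, _ = np.histogram(lap[fg == 1], bins=32, range=(-255, 255))
    tex_hist = tex_hist / max(tex_hist.sum(), 1)

    return np.concatenate([color_hist, tex_hist])
```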

    Classification of ordered texture images using regression modelling and granulometric features

    Structural information available from the granulometry of an image has been used widely in image texture analysis and classification. In this paper we present a method for classifying texture images that follow an intrinsic ordering of textures, using polynomial regression to express granulometric moments as a function of the class label. Separate models are built for each individual moment and combined for back-prediction of the class label of a new image. The methodology was developed on synthetic images of evolving textures and tested on real images of 8 different grades of cut-tear-curl black tea leaves. For comparison, grey-level co-occurrence matrix (GLCM) based features were also computed, and both feature types were used in a range of classifiers including the regression approach. Experimental results demonstrate the superiority of the granulometric moments over GLCM-based features for classifying these tea images.
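    The core idea, granulometric moments modelled as polynomial functions of an ordered class label and then inverted for back-prediction, can be sketched roughly as follows. The granulometry here is a simple disc-opening pattern spectrum and the inversion is a grid search over the label range; both are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy import ndimage

def granulometric_moments(img, max_radius=10, n_moments=3):
    """Pattern spectrum from grayscale openings with discs of growing radius,
    summarised by its first few moments (a simplified granulometry)."""
    sums = []
    for r in range(max_radius + 1):
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        disc = (x * x + y * y) <= r * r
        sums.append(float(ndimage.grey_opening(img, footprint=disc).sum()))
    spectrum = -np.diff(sums) / max(sums[0], 1.0)   # size distribution over radii
    sizes = np.arange(1, max_radius + 1)
    return [float((spectrum * sizes ** k).sum()) for k in range(1, n_moments + 1)]

def back_predict_label(train_moments, train_labels, test_moments, degree=2):
    """Fit one polynomial per moment (moment as a function of class label) and
    back-predict the label of a new image by a grid search over the label range."""
    train_moments = np.asarray(train_moments, dtype=float)
    polys = [np.polyfit(train_labels, train_moments[:, k], degree)
             for k in range(train_moments.shape[1])]
    grid = np.linspace(min(train_labels), max(train_labels), 200)
    preds = np.stack([np.polyval(p, grid) for p in polys], axis=1)
    errors = ((preds - np.asarray(test_moments, dtype=float)) ** 2).sum(axis=1)
    return float(grid[np.argmin(errors)])
```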

    Measuring concept similarities in multimedia ontologies: analysis and evaluations

    The recent development of large-scale multimedia concept ontologies has provided new momentum for research in the semantic analysis of multimedia repositories. Different methods for generic concept detection have been extensively studied, but the question of how to exploit the structure of a multimedia ontology and existing inter-concept relations has not received similar attention. In this paper, we present a clustering-based method for modeling semantic concepts on low-level feature spaces and study the evaluation of the quality of such models with entropy-based methods. We cover a variety of methods for assessing the similarity of different concepts in a multimedia ontology. We study three ontologies and apply the proposed techniques in experiments involving visual and semantic similarities, manual annotation of video, and concept detection. The results show that modeling inter-concept relations can provide a promising resource for many different application areas in semantic multimedia processing.
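    As a rough illustration of clustering-based concept modelling with an entropy-based quality measure, the sketch below clusters low-level feature vectors with k-means and scores how purely the resulting clusters separate a binary concept; the choice of k-means and of a size-weighted entropy score are assumptions for illustration, not the paper's exact method.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_concept_entropy(features, concept_labels, n_clusters=8):
    """Cluster low-level feature vectors with k-means and score how well the
    clusters separate a binary concept via a size-weighted mean of the
    per-cluster label entropy (lower = purer clusters)."""
    labels = np.asarray(concept_labels, dtype=float)
    assignments = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    total, weighted_entropy = len(labels), 0.0
    for c in range(n_clusters):
        members = labels[assignments == c]
        if members.size == 0:
            continue
        p = members.mean()                     # fraction of positive examples
        if 0.0 < p < 1.0:
            h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
            weighted_entropy += (members.size / total) * h
    return weighted_entropy
```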

    Covariate conscious approach for Gait recognition based upon Zernike moment invariants

    Gait recognition, i.e. identification of an individual from his/her walking pattern, is an emerging field. While existing gait recognition techniques perform satisfactorily in normal walking conditions, their performance tends to suffer drastically with variations in clothing and carrying conditions. In this work, we propose a novel covariate cognizant framework to deal with the presence of such covariates. We describe gait motion by forming a single 2D spatio-temporal template from the video sequence, called the Average Energy Silhouette Image (AESI). Zernike moment invariants (ZMIs) are then computed to screen the parts of the AESI affected by covariates. Following this, features are extracted using the Spatial Distribution of Oriented Gradients (SDOGs) and novel Mean of Directional Pixels (MDPs) methods. The obtained features are fused together to form the final well-endowed feature set. Experimental evaluation of the proposed framework on three publicly available datasets, i.e. CASIA dataset B, OU-ISIR Treadmill dataset B and the USF Human-ID challenge dataset, against recently published gait recognition approaches demonstrates its superior performance.
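    The AESI and Zernike-moment steps can be illustrated roughly as follows, assuming pre-aligned binary silhouettes and using the mahotas implementation of Zernike moments as a stand-in; the covariate screening, SDOG/MDP features and fusion stages of the paper are not reproduced here.

```python
import numpy as np
import mahotas

def average_energy_silhouette(silhouettes):
    """Average Energy Silhouette Image (AESI): pixel-wise mean of aligned,
    binary gait silhouettes from one sequence."""
    stack = np.stack([s.astype(np.float64) for s in silhouettes])
    return stack.mean(axis=0)

def zernike_descriptor(aesi, radius=64, degree=8):
    """Zernike moment invariants of the AESI (the paper uses them to screen
    covariate-affected regions; here they are simply returned as a descriptor)."""
    return mahotas.features.zernike_moments(aesi, radius, degree=degree)
```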

    CONTENT-BASED IMAGE RETRIEVAL USING ENHANCED HYBRID METHODS WITH COLOR AND TEXTURE FEATURES

    Content-based image retrieval (CBIR) automatically retrieves images similar to a query image by using the visual contents (features) of the image, such as color, texture and shape. Effective CBIR depends on efficient feature extraction for indexing and on effective matching of the query image with the indexed images for retrieval. The main issue in CBIR is how to extract features efficiently, because good features describe the image well and support robust matching. This issue is the main motivation for this thesis, which develops a hybrid CBIR system with high performance in both the spatial and frequency domains. We propose various approaches in which different techniques are fused to extract statistical color and texture features efficiently in both domains. In the spatial domain, statistical color histogram features are computed from the pixel distribution of Laplacian-filtered (sharpened) images under different quantization schemes. However, a color histogram does not provide spatial information. A histogram refinement method addresses this by extracting statistical features of the regions falling into the histogram bins of the filtered image, but it incurs a high computational cost. This cost is reduced by dividing the image into sub-blocks of different sizes when extracting the color and texture features. To improve performance further, color and texture features are combined over sub-blocks, exploiting the lower computational cost.
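    A hedged sketch of the spatial-domain idea, Laplacian sharpening followed by quantised colour histograms computed over sub-blocks so that some spatial layout is retained, is shown below; the grid size, bin counts and sharpening weight are illustrative choices rather than the thesis' tuned parameters.

```python
import cv2
import numpy as np

def subblock_color_features(img_bgr, grid=(4, 4), bins=(8, 4, 4)):
    """Laplacian-sharpened image, then quantised HSV histograms per sub-block
    so that some spatial layout is preserved (illustrative parameters only)."""
    # Laplacian sharpening: subtract a scaled Laplacian from the original image.
    img_f = img_bgr.astype(np.float32)
    lap = cv2.Laplacian(img_f, cv2.CV_32F, ksize=3)
    sharpened = np.clip(img_f - 0.5 * lap, 0, 255).astype(np.uint8)

    # Quantised color histogram for each sub-block of the sharpened image.
    hsv = cv2.cvtColor(sharpened, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    features = []
    for by in range(grid[0]):
        for bx in range(grid[1]):
            block = np.ascontiguousarray(
                hsv[by * h // grid[0]:(by + 1) * h // grid[0],
                    bx * w // grid[1]:(bx + 1) * w // grid[1]])
            hist = cv2.calcHist([block], [0, 1, 2], None, list(bins),
                                [0, 180, 0, 256, 0, 256])
            features.append(cv2.normalize(hist, None).flatten())
    return np.concatenate(features)
```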