
    Color Image Clustering using Block Truncation Algorithm

    With the advancement of image capturing devices, image data is being generated at high volume. If analyzed properly, images can reveal useful information to human users. Content-based image retrieval addresses the problem of retrieving images relevant to a user's needs from image databases on the basis of low-level visual features derived from the images. Grouping images into meaningful categories to reveal useful information is a challenging and important problem. Clustering is a data mining technique that groups unsupervised data according to the conceptual clustering principle: maximizing intraclass similarity and minimizing interclass similarity. The proposed framework focuses on color as the feature. Color moments and Block Truncation Coding (BTC) are used to extract features from the image dataset. An experimental study using the K-Means clustering algorithm is conducted to group the image dataset into various clusters.
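    A minimal sketch of the pipeline this abstract describes: color-moment features (here just per-channel mean and standard deviation) fed to K-Means. The hand-rolled Lloyd's loop, the synthetic images, and the deterministic initialization are all illustrative choices, not the paper's code.

    ```python
    import numpy as np

    def color_moments(image):
        """First two color moments (mean, std) per channel of an HxWx3 image."""
        feats = []
        for c in range(image.shape[2]):
            chan = image[:, :, c].astype(float)
            feats.extend([chan.mean(), chan.std()])
        return np.array(feats)

    def kmeans(X, k, iters=20):
        """Plain Lloyd's algorithm; returns a cluster label per row of X."""
        # Deterministic spread-out initialization keeps the sketch reproducible.
        centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return labels

    # Two synthetic groups of tiny "images": dark vs. bright.
    rng = np.random.default_rng(1)
    dark = [rng.integers(0, 60, (8, 8, 3)) for _ in range(5)]
    bright = [rng.integers(200, 256, (8, 8, 3)) for _ in range(5)]
    X = np.stack([color_moments(im) for im in dark + bright])
    labels = kmeans(X, k=2)
    ```

    On these well-separated feature vectors, the two color groups end up in two different clusters.
    
    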

    Evolutionary Clustering in Indonesian Ethnic Textile Motifs

    The wide variety of Indonesian textiles reflects the diversity of Indonesian ethnic groups. The meme, as an evolutionary modeling technique, offers a way to capture the innovative process behind the production of cultural objects, in which the collective patterns acquired can be regarded as fitness in the large evolutionary landscape of cultural life. We present correlations between memeplexes that, transformed into distances, generate a phylomemetic tree over samples of Indonesian textile handicrafts and batik, designs that have lived through generations among the Javanese, the largest ethnic group in the Indonesian archipelago. The memeplexes are extracted from geometrical shape, i.e., fractal dimensions, and from histogram analysis of the colorization employed. We draw some interesting findings from the tree and open directions for future anthropological work that may merit further observation.
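    The core computation here (memeplex feature vectors, pairwise distances, then agglomerative joining into a tree) can be sketched as follows. The motif names, the fractal-dimension values, and the 3-bin color histograms are entirely made-up placeholders, not data from the paper.

    ```python
    import numpy as np

    # Hypothetical memeplex features for four motifs:
    # [fractal dimension, 3-bin color histogram]; values are illustrative only.
    motifs = {
        "batik_A": [1.80, 0.6, 0.3, 0.1],
        "batik_B": [1.78, 0.5, 0.4, 0.1],
        "weave_A": [1.40, 0.1, 0.2, 0.7],
        "weave_B": [1.45, 0.2, 0.25, 0.55],
    }
    names = list(motifs)
    X = np.array([motifs[n] for n in names])

    # Pairwise Euclidean distances between memeplexes; a phylomemetic tree
    # is then grown by repeatedly joining the closest remaining pair
    # (single-linkage agglomeration). Here we report the first join.
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    i, j = divmod(int(np.argmin(D + np.eye(len(D)) * 1e9)), len(D))
    closest_pair = {names[i], names[j]}
    ```

    With these toy numbers, the two batik motifs are joined first, as one would expect from their near-identical features.
    
    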

    Random convolution ensembles

    A novel method for creating diverse ensembles of image classifiers is proposed. The idea is that, for each base image classifier in the ensemble, a random image transformation is generated and applied to all of the images in the labeled training set. The base classifiers are then learned using features extracted from these randomly transformed versions of the training data, and the result is a highly diverse ensemble of image classifiers. This approach is evaluated on a benchmark pedestrian detection dataset and shown to be effective.
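    A toy sketch of the idea: each ensemble member draws its own random transformation (here a random 3x3 convolution kernel), trains on the transformed training set, and predictions are combined by majority vote. The nearest-centroid base learner, the two-feature extractor, and the synthetic stripe-vs-noise "pedestrian" images are stand-ins for whatever classifier and data the paper actually uses.

    ```python
    import numpy as np

    def convolve2d(img, kernel):
        """Valid-mode 2-D correlation; adequate since the kernels are random."""
        kh, kw = kernel.shape
        oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for y in range(oh):
            for x in range(ow):
                out[y, x] = (img[y:y + kh, x:x + kw] * kernel).sum()
        return out

    def features(img, kernel):
        r = convolve2d(img, kernel)
        return np.array([r.mean(), r.std()])

    class NearestCentroid:
        """Stand-in base learner; any image classifier could be used."""
        def fit(self, X, y):
            y = np.asarray(y)
            self.classes = sorted(set(y.tolist()))
            self.centroids = np.array([X[y == c].mean(0) for c in self.classes])
            return self
        def predict_one(self, x):
            return self.classes[int(np.argmin(((self.centroids - x) ** 2).sum(1)))]

    rng = np.random.default_rng(0)

    def make(label):
        """Toy 'pedestrian vs. background' image: bright stripe vs. flat noise."""
        img = rng.normal(0, 0.05, (12, 12))
        if label == 1:
            img[:, 5:7] += 2.0
        return img

    train = [(make(l), l) for l in [0, 1] * 10]

    # Each member gets its own random transformation applied to every image.
    members = []
    for _ in range(7):
        kernel = rng.normal(size=(3, 3))
        X = np.stack([features(im, kernel) for im, _ in train])
        y = [l for _, l in train]
        members.append((kernel, NearestCentroid().fit(X, y)))

    def predict(img):
        """Majority vote over the diverse ensemble members."""
        votes = [clf.predict_one(features(img, k)) for k, clf in members]
        return max(set(votes), key=votes.count)
    ```

    Diversity comes entirely from the random kernels: every member sees a differently transformed view of the same labeled data.
    
    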

    Autoencoding the Retrieval Relevance of Medical Images

    Content-based image retrieval (CBIR) of medical images is a crucial task that can contribute to more reliable diagnosis if applied to big data. Recent advances in feature extraction and classification have enormously improved CBIR results for digital images. However, given the increasing accessibility of big data in medical imaging, we still need to reduce both the memory requirements and the computational expense of image retrieval systems. This work proposes to exclude the features of image blocks that exhibit a low encoding error when learned by an n/p/n autoencoder (p < n). We examine the histogram of autoencoding errors of image blocks for each image class to decide which image regions, or roughly what percentage of an image, shall be declared relevant for the retrieval task. This reduces feature dimensionality and speeds up the retrieval process. To validate the proposed scheme, we employ local binary patterns (LBP) and support vector machines (SVM), both well-established approaches in the CBIR research community, and use the IRMA dataset with 14,410 x-ray images as test data. The results show that the dimensionality of annotated feature vectors can be reduced by up to 50%, yielding speedups greater than 27% at the expense of less than 1% decrease in retrieval accuracy when validating the precision and recall of the top 20 hits.
    Comment: To appear in proceedings of The 5th International Conference on Image Processing Theory, Tools and Applications (IPTA'15), Nov 10-13, 2015, Orleans, France.
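    The selection step (keep only blocks with high encoding error under a narrow autoencoder) can be sketched without training a network: a p-dimensional *linear* autoencoder is equivalent to projection onto the top-p principal components, so the sketch below substitutes that closed form for the paper's trained n/p/n autoencoder. The block data and the 80th-percentile threshold are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy flattened 4x4 image blocks (n = 16): "background" blocks lie on a
    # 2-D subspace and are easy to encode; "structure" blocks are full-rank.
    basis = rng.normal(size=(2, 16))
    background = rng.normal(size=(40, 2)) @ basis
    structure = rng.normal(size=(10, 16)) * 2.0
    blocks = np.vstack([background, structure])

    # A p-dimensional linear autoencoder (p < n) reduces to projection onto
    # the top-p principal components, used here in place of a trained net.
    p = 2
    mu = blocks.mean(0)
    U, S, Vt = np.linalg.svd(blocks - mu, full_matrices=False)
    recon = (blocks - mu) @ Vt[:p].T @ Vt[:p] + mu
    errors = np.linalg.norm(blocks - recon, axis=1)

    # Declare relevant only the blocks whose encoding error exceeds a
    # percentile threshold: the hard-to-compress regions.
    threshold = np.percentile(errors, 80)
    relevant = np.where(errors > threshold)[0]
    ```

    Easily-encoded background blocks fall below the threshold and their features are dropped, which is what shrinks the feature vectors and speeds up retrieval.
    
    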

    Information Quality in Web-Based eCatalogue

    Catalogues are an important business strategy: they provide customers with product descriptions and spare those with buying interest from walking the floor areas and shelves, browsing aimlessly, trying to locate items of interest. Printed catalogues are cumbersome to use, require large storage areas, become dated soon after publication, and make search and comparison activities very difficult. The situation is further worsened when the quality of the information provided is not regularly updated and falls below customers' expectations. An eCatalogue has the potential to assist customers and improve information quality. Therefore, an eCatalogue was developed in this study, and 30 potential customers tried it for a certain period. Nine information quality dimensions, namely Accuracy, Precision, Currency, Timeliness, Reliability, Completeness, Conciseness, Format, and Relevance, were used to measure the eCatalogue. Based on a three-point scale (where 1 = disagree and 3 = agree), respondents agree that the information in the eCatalogue is somewhat current (mean = 2.27), precise (2.20), accurate (2.17), reliable (2.17), and concise (2.17). However, they are unsure about the timeliness (2.00) and relevance (2.07) dimensions. They also agree, to some extent, that the eCatalogue format is satisfying (2.20). The overall mean of the quality measure is 2.15, which indicates that the quality of information in the developed eCatalogue should be improved.

    Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval

    Relevance feedback schemes based on support vector machines (SVM) have been widely used in content-based image retrieval (CBIR). However, the performance of SVM-based relevance feedback is often poor when the number of labeled positive feedback samples is small. This is mainly due to three reasons: 1) an SVM classifier is unstable on a small-sized training set, 2) SVM's optimal hyperplane may be biased when the positive feedback samples are much less than the negative feedback samples, and 3) overfitting happens because the number of feature dimensions is much higher than the size of the training set. In this paper, we develop a mechanism to overcome these problems. To address the first two problems, we propose an asymmetric bagging-based SVM (AB-SVM). For the third problem, we combine the random subspace method and SVM for relevance feedback, which is named random subspace SVM (RS-SVM). Finally, by integrating AB-SVM and RS-SVM, an asymmetric bagging and random subspace SVM (ABRS-SVM) is built to solve these three problems and further improve the relevance feedback performance.
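    The combined scheme can be sketched compactly: asymmetric bagging bootstraps only the oversized negative set (keeping every scarce positive), the random subspace method gives each member a random feature subset, and members vote. To keep the sketch dependency-free, a simple nearest-centroid classifier stands in for the SVM the paper actually trains; the imbalanced toy data and all parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class Centroid:
        """Stand-in base learner; the paper trains an SVM at this step."""
        def fit(self, X, y):
            self.c0 = X[y == 0].mean(0)
            self.c1 = X[y == 1].mean(0)
            return self
        def predict(self, X):
            d0 = ((X - self.c0) ** 2).sum(1)
            d1 = ((X - self.c1) ** 2).sum(1)
            return (d1 < d0).astype(int)

    # Imbalanced toy feedback set: 6 positives vs. 60 negatives, 20-dim
    # features where only the first 5 dimensions carry signal.
    def sample(n, mean):
        X = rng.normal(0.0, 1.0, (n, 20))
        X[:, :5] += mean
        return X

    pos, neg = sample(6, 1.5), sample(60, -1.5)

    def fit_abrs(pos, neg, n_bags=9, subspace=8):
        members = []
        for _ in range(n_bags):
            # Asymmetric bagging: bootstrap only the large negative set down
            # to the size of the positive set; keep every positive sample.
            nb = neg[rng.choice(len(neg), size=len(pos), replace=True)]
            # Random subspace: each member trains on a random feature subset.
            dims = rng.choice(pos.shape[1], size=subspace, replace=False)
            X = np.vstack([pos[:, dims], nb[:, dims]])
            y = np.array([1] * len(pos) + [0] * len(nb))
            members.append((dims, Centroid().fit(X, y)))
        return members

    def predict(members, X):
        """Majority vote across the bagged, subspaced members."""
        votes = np.mean([clf.predict(X[:, dims]) for dims, clf in members], axis=0)
        return (votes > 0.5).astype(int)

    members = fit_abrs(pos, neg)
    ```

    Each member thus sees a balanced training set (addressing the bias problem) in a reduced feature space (addressing the dimensionality problem), and aggregation stabilizes the small-sample variance.
    
    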