
    Object Feature Extraction of Songket Image Using Chain Code Algorithm

    The study aimed to determine the features of a motif found in a Songket image so that the object can be detected and read. The method used was image color segmentation, a process of segmenting image regions based on color similarity, followed by binarization, which converts the image into binary form (0 and 1) so that it has only two colors, black and white. The study also used mathematical morphology to detect objects, applying dilation and hole-filling operations. After the morphological processing was completed, motifs were extracted by applying the Moore contour-tracing algorithm and a development of the chain code algorithm. The results showed that the developed chain code algorithm can generate the number of objects, the length of the chain code, and the probable rate of appearance of each chain code in a motif, even when a motif contains several objects. These values are then stored in a database as the features of Songket motifs
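    The boundary-tracing and chain-code step described above can be sketched as follows. This is a minimal illustration of Moore-style contour tracing with 8-direction Freeman chain codes on a binary image, not the paper's exact algorithm; the direction ordering, the stopping criterion (return to the start pixel), and the feature set (code length plus per-direction frequencies) are simplifying assumptions.

    ```python
    from collections import Counter

    # 8-connected neighbour offsets in clockwise order; the list index is
    # the chain code emitted for a move in that direction (an assumed
    # convention -- the abstract does not specify its code ordering).
    DIRS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

    def chain_code(image):
        """Trace the boundary of the first object in a binary image
        (list of lists of 0/1) and return (codes, length, frequencies)."""
        rows, cols = len(image), len(image[0])
        # Start pixel: first foreground pixel in raster-scan order.
        start = next((r, c) for r in range(rows) for c in range(cols) if image[r][c])
        code, cur, d = [], start, 0
        while True:
            # Moore-style search: scan neighbours clockwise, starting
            # 90 degrees counterclockwise from the last move direction.
            for i in range(8):
                nd = (d + 6 + i) % 8
                nr, nc = cur[0] + DIRS[nd][0], cur[1] + DIRS[nd][1]
                if 0 <= nr < rows and 0 <= nc < cols and image[nr][nc]:
                    code.append(nd)
                    cur, d = (nr, nc), nd
                    break
            else:
                break  # isolated pixel: no foreground neighbour
            if cur == start:
                break  # simple stopping criterion: back at the start
        # Features named in the abstract: code length and the relative
        # frequency (probable rate of appearance) of each direction.
        freq = Counter(code)
        probs = {k: v / len(code) for k, v in freq.items()} if code else {}
        return code, len(code), probs
    ```

    On a 2x2 block of foreground pixels this traces the four boundary moves E, S, W, N and reports each direction with frequency 0.25.
    
    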

    Improving Stemming Algorithm Using Morphological Rules

    Stemming words to remove suffixes has applications in text search, machine translation, document summarization, and text classification. For example, Indonesian stemming reduces the words “kebaikan”, “perbaikan”, “memperbaiki”, and “sebaik-baiknya” to their common morphological root “baik”. In text search, this permits a search for “player” to find documents containing all words with the stem “play”. In the Indonesian language, stemming is of crucial importance: words have prefixes, suffixes, infixes, and confixes, which makes matching related words difficult. This research proposes a stemmer that produces more accurate results by employing an algorithm that generates more than one candidate word and considers more than one affix combination. The new stemming algorithm is called the CAT stemming algorithm. Here, the results do not depend on the order in which morphological rules are applied: all rules are checked, and the resulting words are kept in a candidate list. To make the stemmer efficient, two kinds of word lists (vocabularies) are used: a list of words that have more than one candidate, and a list of root words used as a candidate reference. The final result is selected by several rules. This strategy proved to give better results than the two best-known Indonesian stemmers; the experiments showed that the proposed approach achieves higher accuracy than the compared systems
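    The candidate-list idea can be illustrated with a toy sketch: every affix rule is tried regardless of order, each result becomes a candidate, and a root-word list selects the winner. This is not the actual CAT algorithm; the affix rules and the vocabulary below are illustrative assumptions.

    ```python
    # Toy affix rules and a hypothetical root-word reference list; the
    # real CAT stemmer uses the paper's full Indonesian morphological rules.
    PREFIXES = ["memper", "mem", "per", "ke", "se"]
    SUFFIXES = ["kan", "an", "i", "nya"]
    ROOT_WORDS = {"baik", "main"}

    def stem_candidates(word):
        """Apply every prefix/suffix rule and collect all candidates,
        independent of rule order."""
        candidates = {word}
        for p in PREFIXES:
            if word.startswith(p):
                candidates.add(word[len(p):])
        for s in SUFFIXES:
            if word.endswith(s):
                candidates.add(word[:-len(s)])
        # Also strip one prefix and one suffix together (a confix).
        for p in PREFIXES:
            for s in SUFFIXES:
                if word.startswith(p) and word.endswith(s) and len(word) > len(p) + len(s):
                    candidates.add(word[len(p):-len(s)])
        return candidates

    def stem(word):
        """Return the candidate found in the root-word list, else the
        word unchanged (a simplified selection rule)."""
        for c in stem_candidates(word):
            if c in ROOT_WORDS:
                return c
        return word
    ```

    With these toy rules, “kebaikan”, “perbaikan”, and “memperbaiki” all reduce to “baik”, matching the example in the abstract, while a word with no matching root is returned unchanged.
    
    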

    RGB-NIR image categorization with prior knowledge transfer

    Recent developments in image categorization, especially scene categorization, show that combining standard visible RGB image data with near-infrared (NIR) image data performs better than RGB-only data. However, RGB-NIR image collections are often small because the data are difficult to acquire, and with limited data it is hard to extract effective features using common deep learning networks. Humans, by contrast, are able to draw on prior knowledge from other tasks or from a good mentor, which helps them learn from few training samples. Inspired by this observation, we propose a novel training methodology for introducing prior knowledge into a deep architecture, which allows us to bypass the burdensome labeling of large quantities of image data otherwise needed to meet the big-data requirements of deep learning. First, transfer learning is adopted to learn single-modal features from a large source database such as ImageNet. Then, a knowledge distillation method is explored to fuse the RGB and NIR features. Finally, a global optimization method is employed to fine-tune the entire network. Experimental results on two RGB-NIR datasets demonstrate the effectiveness of the proposed approach in comparison with state-of-the-art multi-modal image categorization methods.
    https://deepblue.lib.umich.edu/bitstream/2027.42/146762/1/13640_2018_Article_388.pd
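    The knowledge-distillation step can be illustrated with the standard soft-target loss: the teacher's logits are softened with a temperature T and the student is trained to match that distribution. This is a generic sketch of distillation, not the paper's specific RGB-NIR fusion loss; the temperature value and the T-squared scaling follow the common formulation.

    ```python
    import math

    def softmax(logits, T=1.0):
        """Temperature-scaled softmax; higher T yields softer targets."""
        exps = [math.exp(z / T) for z in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        """Cross-entropy between the teacher's softened distribution and
        the student's, scaled by T^2 so gradients keep a comparable
        magnitude across temperatures (standard distillation practice)."""
        p = softmax(teacher_logits, T)   # soft targets from the teacher
        q = softmax(student_logits, T)   # student predictions
        return -T * T * sum(pi * math.log(qi) for pi, qi in zip(p, q))
    ```

    A student whose logits already agree with the teacher incurs a lower loss than one whose logits are swapped, which is what drives the feature transfer.
    
    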

    Deep InterBoost networks for small-sample image classification

    Deep neural networks have recently shown excellent performance on numerous image classification tasks. These networks often need to estimate a large number of parameters and require much training data. When the amount of training data is small, however, a highly flexible network quickly overfits the training data, resulting in large model variance and poor generalization. To address this problem, we propose a new, simple yet effective ensemble method called InterBoost for small-sample image classification. In the training phase, InterBoost first randomly generates two complementary sets of weights for the training data, which are used to separately train two base networks of the same structure; the two weight sets are then updated, through interaction between the two previously trained base networks, to refine subsequent training. This interactive training process continues iteratively until a stopping criterion is met. In the testing phase, the outputs of the two networks are combined into one final classification score. Experimental results on four small-sample datasets, UIUC-Sports, LabelMe, 15Scenes, and Caltech101, demonstrate that the proposed ensemble method outperforms existing ones. Moreover, Wilcoxon signed-rank tests show that our method is statistically significantly better than the compared methods. A detailed analysis is also provided for an in-depth understanding of the proposed method
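    The complementary-weight idea above can be sketched in a few lines: each training example receives a weight w in (0, 1) for one base network and 1 - w for the other, so the two networks emphasize different examples, and their scores are averaged at test time. This is a toy illustration; the paper's interactive weight-update rule between the two networks is not reproduced here.

    ```python
    import random

    def complementary_weights(n, seed=0):
        """Return two per-example weight lists that sum to 1.0 per
        example, so the two base networks see complementary emphases."""
        rng = random.Random(seed)
        w_a = [rng.random() for _ in range(n)]
        w_b = [1.0 - w for w in w_a]
        return w_a, w_b

    def combine_outputs(scores_a, scores_b):
        """Testing phase: average the two networks' class scores to get
        one final score for classification."""
        return [(a + b) / 2.0 for a, b in zip(scores_a, scores_b)]
    ```

    Seeding the generator makes the weight split reproducible; in the actual method the weights would then be iteratively refined through the interaction between the two trained networks.
    
    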