
    Automatic Categorization of Scientific Articles in a Digital Library Database Using the Graph Kernel Method

    Scientific articles in a digital library database are grouped into specific categories. Categorizing large numbers of scientific articles manually requires considerable human resources and time. This study aims to help library material processing teams assign scientific articles to their respective categories automatically. In this study, automatic categorization of scientific articles is performed using graph kernels applied to a bipartite graph between scientific article documents and their keywords. Five kernel functions are used to compute the kernel matrix: KEGauss, KELinear, KVGauss, KVLinear, and KRW. The kernel matrix is computed from the one-mode projection of the bipartite graph and then used as input to an SVM (support vector machine) classifier to determine the appropriate category. Categorization performance is measured by accuracy, the ratio of correctly categorized articles to the total number of articles in the dataset. Applying this method to the ISJD (Indonesian Scientific Journal Database) yields an average accuracy of 87.43% for the KVGauss kernel function, while the other kernels achieve 86.14% (KELinear), 85.86% (KEGauss), 42.23% (KVLinear), and 25.15% (KRW). These results indicate that the graph kernel method is effective for grouping scientific articles into predefined categories in a digital library database.
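    The pipeline described above (document-keyword bipartite graph, one-mode projection, kernel matrix, precomputed-kernel SVM) can be illustrated with a minimal sketch. The incidence matrix, category labels, and the plain linear kernel below are hypothetical stand-ins; the paper's KEGauss, KELinear, KVGauss, KVLinear, and KRW kernel functions are not defined in the abstract and are not reproduced here.

        import numpy as np
        from sklearn.svm import SVC

        # Hypothetical document-keyword incidence matrix of a small bipartite graph:
        # rows are article documents, columns are keywords, 1 = document contains keyword.
        A = np.array([
            [1, 1, 0, 0, 0],   # d1: graph, kernel
            [0, 1, 1, 0, 0],   # d2: kernel, svm
            [0, 0, 0, 1, 1],   # d3: library, database
            [1, 0, 0, 0, 1],   # d4: graph, database
        ])

        # One-mode projection onto the document side: entry (i, j) counts the
        # keywords shared by documents i and j.
        P = A @ A.T

        # A plain linear kernel on the projected document vectors; this is only a
        # stand-in for the paper's own kernel functions.
        K = P @ P.T

        y = np.array([0, 0, 1, 1])  # hypothetical category labels

        # Precomputed-kernel SVM, as in the pipeline described above.
        clf = SVC(kernel="precomputed").fit(K, y)
        pred = clf.predict(K)
        accuracy = (pred == y).mean()  # correctly categorized / total articles
        print(pred, accuracy)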

    Semantics-based language models for information retrieval and text mining

    The language modeling approach centers on estimating an accurate model by choosing appropriate language models as well as smoothing techniques. In this thesis, we propose a novel context-sensitive semantic smoothing method referred to as the topic signature language model. It extracts explicit topic signatures from a document and then statistically maps them onto individual words in the vocabulary. To support the new language model, we developed two automated algorithms to extract multiword phrases and ontological concepts, respectively, and an EM-based algorithm to learn semantic mapping knowledge from co-occurrence data. The topic signature language model is applied to three tasks: information retrieval, text classification, and text clustering. Evaluations on a news collection and on biomedical literature demonstrate its effectiveness.

    In the information retrieval experiments, the topic signature language model consistently outperforms the baseline two-stage language model as well as the context-insensitive semantic smoothing method in all configurations. It also beats the state-of-the-art Okapi models in all configurations. In the text classification experiments, when the number of training documents is small, the Bayesian classifier with semantic smoothing not only outperforms the classifiers with background smoothing and Laplace smoothing, but also beats the active learning classifiers and SVM classifiers. On the clustering task, whether or not the dataset to cluster is small, model-based k-means with semantic smoothing performs significantly better than model-based k-means with either background smoothing or Laplace smoothing; it is also superior to spherical k-means in terms of effectiveness.

    In addition, we show empirically that, within the framework of topic signature language models, the semantic knowledge learned from one collection can be applied effectively to other collections. The thesis also compares three types of topic signatures (words, multiword phrases, and ontological concepts) with respect to their effectiveness and efficiency for semantic smoothing. In general, extracting multiword phrases and ontological concepts is more expensive than extracting individual words, but semantic mapping based on multiword phrases and ontological concepts is more effective at handling data sparsity than mapping based on individual words.

    Ph.D., Information Science and Technology -- Drexel University, 200
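    As a rough illustration of the smoothing step, the sketch below assumes the usual mixture form of context-sensitive semantic smoothing: a background-smoothed maximum-likelihood estimate interpolated with a topic-signature-to-word translation model. The mixture weights, the toy document, and the mapping table are hypothetical; in the thesis the mapping probabilities are learned from co-occurrence data with an EM algorithm, whereas here they are simply passed in as a precomputed table.

        from collections import Counter

        def topic_signature_lm(doc_tokens, doc_signatures, p_w_given_t, p_bg,
                               alpha=0.5, lam=0.7):
            # Assumed mixture form:
            #   p(w|d) = (1 - lam) * p_b(w|d) + lam * sum_k p(w|t_k) * p(t_k|d)
            # where p_b(w|d) is a Jelinek-Mercer background-smoothed ML estimate and
            # p(w|t_k) maps topic signatures to individual words.
            counts = Counter(doc_tokens)
            n = len(doc_tokens)
            sig_counts = Counter(doc_signatures)
            m = len(doc_signatures)

            def p(w):
                p_ml = counts[w] / n if n else 0.0
                p_b = (1 - alpha) * p_ml + alpha * p_bg.get(w, 1e-9)  # background smoothing
                p_trans = 0.0
                if m:
                    # Semantic mapping term, summed over the document's topic signatures.
                    p_trans = sum(p_w_given_t.get(t, {}).get(w, 0.0) * (c / m)
                                  for t, c in sig_counts.items())
                return (1 - lam) * p_b + lam * p_trans

            return p

        # Hypothetical toy inputs: a short document, one extracted topic signature
        # (the multiword phrase "language_model"), a tiny mapping table, and a background model.
        doc = ["language", "model", "smoothing", "retrieval"]
        signatures = ["language_model"]
        mapping = {"language_model": {"smoothing": 0.4, "retrieval": 0.3, "query": 0.3}}
        background = {"language": 0.01, "model": 0.01, "smoothing": 0.005,
                      "retrieval": 0.005, "query": 0.004}

        lm = topic_signature_lm(doc, signatures, mapping, background)
        for w in ["smoothing", "query", "model"]:
            print(w, round(lm(w), 4))

    Note that "query" receives nonzero probability even though it never occurs in the document, which is the point of the semantic mapping term.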