
    Intelligent Fusion of Structural and Citation-Based Evidence for Text Classification

    This paper investigates how citation-based information and structural content (e.g., title, abstract) can be combined to improve the classification of text documents into predefined categories. We evaluate eight measures of similarity: five derived from the citation structure of the collection and three derived from the structural content, and we determine how they can be fused to improve classification effectiveness. To discover the best fusion framework, we apply Genetic Programming (GP) techniques. Our empirical experiments on documents from the ACM digital library and the ACM classification scheme show that we can discover similarity functions that work better than any single type of evidence in isolation, and whose combination through simple majority voting performs comparably to Support Vector Machine classifiers.
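
    The fusion step described above ends with simple majority voting over evidence-specific classifiers. The sketch below illustrates only that voting step; the classifier functions, similarity sources, and ACM category labels are illustrative assumptions, not the paper's actual implementation.

    from collections import Counter

    def majority_vote(predictions):
        # Category predicted by the most classifiers; ties broken arbitrarily.
        return Counter(predictions).most_common(1)[0][0]

    def classify(document, classifiers):
        # Each classifier is assumed to rely on one source of evidence
        # (e.g. a citation-based or content-based similarity function).
        votes = [clf(document) for clf in classifiers]
        return majority_vote(votes)

    # Hypothetical example: three evidence-specific classifiers voting on an ACM category.
    classifiers = [
        lambda d: "H.3",  # citation-based similarity classifier
        lambda d: "H.3",  # abstract-based similarity classifier
        lambda d: "I.2",  # title-based similarity classifier
    ]
    print(classify("some document text", classifiers))  # -> "H.3"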

    Personalization of Search Engine Services for Effective Retrieval and Knowledge Management

    The Internet and corporate intranets provide far more information than anybody can absorb. People use search engines to find the information they require. However, these systems tend to apply a single fixed term weighting strategy regardless of context, which causes serious performance problems when the characteristics of different users, queries, and text collections are taken into account. In this paper, we argue that the term weighting strategy should be context specific, that is, different term weighting strategies should be applied in different contexts, and we propose a new systematic approach, based on genetic programming (GP), that automatically generates term weighting strategies for different contexts. The proposed framework was tested on TREC data and the results are very promising.
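
    As a rough illustration of what a GP-generated term weighting strategy could look like, the sketch below scores a document against a query with one plausible evolved formula. The specific expression, constants, and term statistics are assumptions made for illustration only; they are not the strategies evolved in the paper.

    import math

    def evolved_weight(tf, df, N, doc_len, avg_len=300.0):
        # One plausible GP-evolved expression over standard primitives:
        # term frequency (tf), document frequency (df), collection size (N),
        # and document length (doc_len). Purely illustrative.
        return (tf / (tf + 0.5 + 1.5 * doc_len / avg_len)) * math.log((N + 1) / (df + 0.5))

    def score(query_terms, doc_stats, N):
        # Rank score: sum of evolved weights over query terms present in the document.
        return sum(
            evolved_weight(s["tf"], s["df"], N, s["doc_len"])
            for term, s in doc_stats.items()
            if term in query_terms
        )

    # Placeholder document statistics, purely for illustration.
    doc_stats = {
        "genetic":     {"tf": 3, "df": 120, "doc_len": 250},
        "programming": {"tf": 1, "df": 900, "doc_len": 250},
    }
    print(score({"genetic", "programming"}, doc_stats, N=10_000))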

    A cross-benchmark comparison of 87 learning to rank methods

    Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on published evaluation results is hindered by the absence of a standard set of benchmark collections. In this paper we propose a way to compare learning to rank methods based on a sparse set of evaluation results over a set of benchmark datasets. Our comparison methodology consists of two components: (1) Normalized Winning Number, which gives insight into the ranking accuracy of a learning to rank method, and (2) Ideal Winning Number, which gives insight into the degree of certainty concerning that ranking accuracy. Evaluation results of 87 learning to rank methods on 20 well-known benchmark datasets were collected through a structured literature search. ListNet, SmoothRank, FenchelRank, FSMRank, LRUF and LARF are Pareto optimal learning to rank methods in the Normalized Winning Number and Ideal Winning Number dimensions, listed in increasing order of Normalized Winning Number and decreasing order of Ideal Winning Number.
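
    A minimal sketch, under one reading of the definitions above (the paper's exact formulation may differ): Ideal Winning Number counts the pairwise comparisons for which evaluation results are available, and Normalized Winning Number is the fraction of those comparisons the method wins. The scores below are placeholders, not results from the paper.

    def winning_numbers(scores, method):
        # scores[m][d] = evaluation score (e.g. NDCG) of method m on dataset d,
        # missing when no result was reported for that method on that dataset.
        wins, comparable = 0, 0
        for other, other_scores in scores.items():
            if other == method:
                continue
            for dataset, s in scores[method].items():
                if dataset in other_scores:       # both results available
                    comparable += 1               # contributes to Ideal Winning Number
                    if s > other_scores[dataset]:
                        wins += 1                 # contributes to Winning Number
        iwn = comparable
        nwn = wins / iwn if iwn else 0.0
        return nwn, iwn

    # Placeholder scores, purely for illustration.
    scores = {
        "ListNet":    {"MQ2007": 0.49, "OHSUMED": 0.45},
        "SmoothRank": {"MQ2007": 0.50},
        "FSMRank":    {"OHSUMED": 0.46, "MQ2008": 0.48},
    }
    print(winning_numbers(scores, "ListNet"))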