
    Consistency and Variation in Kernel Neural Ranking Model

    This paper studies the consistency of the kernel-based neural ranking model K-NRM, a recent state-of-the-art neural IR model, which is important for reproducible research and for deployment in industry. We find that K-NRM has low variance on relevance-based metrics across experimental trials. Despite this low variance in overall performance, different trials produce different document rankings for individual queries. The main source of variance in our experiments is the different latent matching patterns captured by K-NRM: in the IR-customized word embeddings learned by K-NRM, the query-document word pairs follow two different matching patterns that are equally effective but align word pairs differently in the embedding space. These different latent matching patterns enable a simple yet effective approach to constructing ensemble rankers, which improves K-NRM's effectiveness and generalization ability. (Comment: 4 pages, 4 figures, 2 tables)
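
    For context on the model discussed above, the following is a minimal sketch of K-NRM-style kernel pooling: cosine similarities between query and document word embeddings are soft-counted by RBF kernels and combined by a learned linear layer into a relevance score. The function name knrm_score, the kernel means, and the stand-in weights are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def knrm_score(q_emb, d_emb, mus, sigma=0.1, w=None, b=0.0):
        """Sketch of a K-NRM-style scorer (illustrative, not the authors' code).

        q_emb: (n_q, dim) query word embeddings
        d_emb: (n_d, dim) document word embeddings
        mus:   kernel means, e.g. [-0.9, -0.7, ..., 0.9, 1.0]
        """
        mus = np.asarray(mus, dtype=float)

        # Translation matrix of cosine similarities between query and document words.
        qn = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
        dn = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
        M = qn @ dn.T                                                  # (n_q, n_d)

        # RBF kernel pooling: soft-count how many word pairs fall near each kernel mean.
        K = np.exp(-((M[:, :, None] - mus) ** 2) / (2 * sigma ** 2))  # (n_q, n_d, K)
        soft_tf = K.sum(axis=1)                                        # (n_q, K)
        phi = np.log1p(soft_tf).sum(axis=0)                            # (K,) kernel features

        # Learned ranking layer; uniform weights here stand in for trained parameters.
        if w is None:
            w = np.ones_like(mus) / len(mus)
        return float(np.tanh(phi @ w + b))
    ```

    Under this view, the ensemble rankers mentioned in the abstract amount to combining the scores of models whose learned embeddings converged to different latent matching patterns.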

    Query-Level Stability of Ranking SVM for Replacement Case

    The quality of ranking determines the success or failure of information retrieval, and the goal of ranking is to learn a real-valued ranking function that induces an ordering over an instance space. We focus on the stability and generalization ability of Ranking SVM for the replacement case. We establish the query-level stability of Ranking SVM for the replacement case and derive generalization bounds for this ranking algorithm via query-level stability, where stability is measured by changing one element in the sample set.
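
    To make the ranking-function setting concrete, here is a minimal sketch of a linear Ranking SVM objective: a real-valued scoring function w·x is learned by minimizing an L2-regularized pairwise hinge loss over preference pairs, and the induced ordering is obtained by sorting candidates by score. The function names and data layout are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def ranking_svm_objective(w, pairs, C=1.0):
        """L2-regularized pairwise hinge loss of a linear Ranking SVM (sketch).

        pairs: iterable of (x_pos, x_neg) feature vectors where x_pos should
               rank above x_neg for the same query.
        """
        hinge = sum(max(0.0, 1.0 - w @ (xp - xn)) for xp, xn in pairs)
        return 0.5 * (w @ w) + C * hinge

    def rank(w, docs):
        """Order candidate documents by the learned real-valued scoring function."""
        return sorted(docs, key=lambda x: -(w @ x))

    # Toy usage: one preference pair and two candidates to rank.
    w = np.array([0.5, -0.2])
    pairs = [(np.array([1.0, 0.0]), np.array([0.0, 1.0]))]
    print(ranking_svm_objective(w, pairs))
    print(rank(w, [np.array([0.0, 1.0]), np.array([1.0, 0.0])]))
    ```

    Query-level stability then asks how much this learned scoring function can change when one query (and its preference pairs) in the training sample is replaced, which is what the bounds in the paper quantify.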