
    Minimal k-rankings and the a-rank number of a path

    Given a graph G, a function f: V(G) → {1, 2, ..., k} is a k-ranking of G if f(u) = f(v) implies that every u–v path contains a vertex w such that f(w) > f(u). A k-ranking is minimal if reducing any label greater than 1 violates the ranking property just described. The a-rank number of G, denoted ψ_r(G), equals the largest k such that G has a minimal k-ranking. We establish new results on minimal rankings of paths and, in particular, we determine ψ_r(P_n), a problem posed by Laskar and Pillone in 2000. We show that ψ_r(P_n) = ⌊log₂(n + 1)⌋ + ⌊log₂(n + 1 − 2^(⌊log₂ n⌋ − 1))⌋.
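    The definitions above translate directly into a short brute-force check. The sketch below (Python, not from the paper; the function names and the label-range bound are our own assumptions) verifies the ranking and minimality properties on a path and compares a brute-force a-rank computation against the closed form quoted above for a small case.

    from itertools import product
    from math import floor, log2

    def is_ranking(labels):
        """labels[i] is the label of vertex i on the path v0 - v1 - ... - v(n-1)."""
        n = len(labels)
        for i in range(n):
            for j in range(i + 1, n):
                if labels[i] == labels[j]:
                    # The unique i-j path in P_n is the segment between them;
                    # it must contain a vertex with a strictly larger label.
                    if not any(labels[w] > labels[i] for w in range(i + 1, j)):
                        return False
        return True

    def is_minimal_ranking(labels):
        """Minimal: lowering any label greater than 1 destroys the ranking property."""
        if not is_ranking(labels):
            return False
        for i, lab in enumerate(labels):
            for lower in range(1, lab):
                if is_ranking(labels[:i] + [lower] + labels[i + 1:]):
                    return False
        return True

    def arank_formula(n):
        """The closed form quoted in the abstract for psi_r(P_n)."""
        return floor(log2(n + 1)) + floor(log2(n + 1 - 2 ** (floor(log2(n)) - 1)))

    # Brute-force the a-rank number of P_5: the largest label occurring in any
    # minimal ranking (labels 1..n suffice for such a small path).
    n = 5
    best = 0
    for labels in product(range(1, n + 1), repeat=n):
        if is_minimal_ranking(list(labels)):
            best = max(best, max(labels))
    print(best, arank_formula(n))  # both evaluate to 4 for P_5 under this reading

    For example, the labeling 4, 1, 2, 1, 3 of P_5 is a minimal 4-ranking: lowering any of the labels 4, 3, or 2 either collides with an equal label that has no larger label between them or places two 1s side by side.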

    Pilodyn Evaluation of Treated Waferboard in Field Exposure

    Samples of preservative-treated aspen waferboard exposed outdoors for 30 months were compared using pin penetrations of the 6-Joule Pilodyn. These results correlated well with rankings of treatment performance based on more laborious standard mechanical tests, and they demonstrate the potential of the Pilodyn as a minimally destructive tool for evaluating wood composites in test exposures.

    Pytrec_eval: An Extremely Fast Python Interface to trec_eval

    We introduce pytrec_eval, a Python interface to the trec_eval information retrieval evaluation toolkit. pytrec_eval exposes the reference implementations of trec_eval within Python as a native extension. We show that pytrec_eval is around one order of magnitude faster than invoking trec_eval as a subprocess from within Python. Compared to a native Python implementation of NDCG, pytrec_eval is twice as fast for practically-sized rankings. Finally, we demonstrate its effectiveness in an application where pytrec_eval is combined with Pyndri and the OpenAI Gym, and query expansion is learned using Q-learning. Comment: SIGIR '18, The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval.
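    As a quick illustration of the interface described above, the following is a minimal usage sketch based on pytrec_eval's documented API; the query and document identifiers, relevance judgements, and scores are made up.

    import pytrec_eval

    # Relevance judgements (qrels): query id -> document id -> graded relevance.
    qrel = {
        'q1': {'d1': 1, 'd2': 0, 'd3': 2},
    }

    # A system run: query id -> document id -> retrieval score.
    run = {
        'q1': {'d1': 1.5, 'd2': 1.1, 'd3': 0.2},
    }

    # The evaluator wraps trec_eval's reference implementations of the measures.
    evaluator = pytrec_eval.RelevanceEvaluator(qrel, {'map', 'ndcg'})
    results = evaluator.evaluate(run)

    print(results['q1']['map'], results['q1']['ndcg'])

    Because the evaluator is a native extension rather than a subprocess call, it can be invoked per query inside a training loop, which is what makes the reinforcement-learning application described above practical.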

    Fair Ranking System

    Ranking greatly assists us in decision making by allowing us to consider numerous options with a variety of attributes and distill them into an understandable model. Yet accurate rankings can be difficult to construct without extensive knowledge of what is being ranked. Thus arose the need for fair ranking systems that effectively elicit data from users and produce accurate rankings that satisfy and serve the users' needs. In pursuit of creating a fair ranking system, this team proposes Rankit_Experimenter, a user study that tests the merits of three preference collection methods. Results indicate that categorical binning is the best value model, providing a substantial amount of data to the underlying ranking algorithm while requiring minimal effort from the user.
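    The abstract does not spell out how categorical binning feeds the ranking algorithm, so the following is a purely illustrative sketch (bin names, data structures, and the conversion to pairwise preferences are our assumptions, not the Rankit implementation): the user sorts items into a few ordered bins, and each cross-bin pair is expanded into a pairwise preference.

    from itertools import product

    # Ordered bins, best to worst (assumed labels; not taken from the project).
    BINS = ["great", "good", "poor"]

    def binning_to_pairs(assignment):
        """assignment maps item -> bin label; returns (preferred, other) pairs."""
        rank_of = {b: i for i, b in enumerate(BINS)}
        pairs = []
        for a, b in product(assignment, assignment):
            if rank_of[assignment[a]] < rank_of[assignment[b]]:
                pairs.append((a, b))  # a was placed in a better bin than b
        return pairs

    # One pass of binning a handful of items already yields many pairwise
    # preferences for the underlying ranking algorithm, which is why binning
    # can collect a lot of signal for little user effort.
    prefs = binning_to_pairs({"option A": "great", "option B": "good", "option C": "poor"})
    print(prefs)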