
    Towards the Evaluation of the LarKC Reasoner Plug-ins

    Abstract. In this paper, we present an initial framework for the evaluation and benchmarking of reasoners deployed within the LarKC platform, a platform for massive distributed incomplete reasoning that aims to remove the scalability barriers of currently existing reasoning systems for the Semantic Web. We discuss the evaluation methods, measures, benchmarks, and performance targets for the plug-ins to be developed for approximate reasoning with interleaved reasoning and selection. We also propose a specification language for gold standards used in evaluation and benchmarking, and discuss how it can be applied to the evaluation of reasoner plug-ins within the LarKC platform.
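
    The abstract does not detail the proposed specification language for gold standards. As a purely illustrative sketch, and not the paper's actual specification language, a gold-standard entry for benchmarking an incomplete or approximate reasoner plug-in might pair a query with its complete expected answer set and the associated performance targets (response-time budget, recall, precision); all names and fields below are hypothetical:

    ```python
    from dataclasses import dataclass

    @dataclass
    class GoldStandardEntry:
        """Hypothetical gold-standard record for benchmarking a reasoner plug-in.

        Field names are illustrative assumptions, not the paper's specification language.
        """
        dataset: str            # identifier of the benchmark data set
        query: str              # query posed to the plug-in (e.g. SPARQL)
        expected_answers: set   # complete answer set serving as the gold standard
        time_budget_ms: int     # performance target: maximum allowed response time
        min_recall: float       # target fraction of expected answers that must be returned
        min_precision: float = 1.0  # target fraction of returned answers that must be correct

    def score(entry: GoldStandardEntry, returned: set) -> dict:
        """Compare a plug-in's (possibly incomplete) answers against the gold standard."""
        true_pos = len(returned & entry.expected_answers)
        recall = true_pos / len(entry.expected_answers) if entry.expected_answers else 1.0
        precision = true_pos / len(returned) if returned else 1.0
        return {
            "recall": recall,
            "precision": precision,
            "meets_targets": recall >= entry.min_recall and precision >= entry.min_precision,
        }
    ```

    Such a record would let the framework report whether an approximate, interleaved reasoning-and-selection plug-in meets its targets even when it returns only part of the full answer set.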