oai:dspace.spbu.ru:11701/8554

FactRuEval 2016: Evaluation of Named Entity Recognition and Fact Extraction Systems for Russian

Abstract

In this paper, we describe the rules and results of the FactRuEval information extraction competition held in 2016 as part of the Dialogue Evaluation initiative in the run-up to Dialogue 2016. The systems were to extract information from Russian texts and competed in two named entity extraction tracks and one fact extraction track. The paper describes the tasks set before the participants and presents the scores achieved by the contending systems. Additionally, we dwell upon the scoring methods employed for evaluating the results of all three tracks and provide some preliminary analysis of the state of the art in Information Extraction for Russian texts. We also provide a detailed description of the composition and general organization of the annotated corpus created for the competition by volunteers using the OpenCorpora.org platform. The corpus is publicly available and is expected to evolve in the future.
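The abstract refers to scoring methods for the named entity tracks without detailing them here. As a purely generic illustration of the kind of span-level scoring such evaluations rely on (not the official FactRuEval 2016 metric, which is defined in the paper itself), the following sketch computes strict-match precision, recall, and F1 over annotated entity spans; all names and the example data are hypothetical.

```python
# A minimal, generic sketch of strict span-matching NER evaluation.
# This is NOT the official FactRuEval 2016 scoring procedure; it only
# illustrates entity-level precision/recall/F1 under exact matching.

from typing import Set, Tuple

Span = Tuple[int, int, str]  # (start offset, end offset, entity type)

def strict_span_f1(gold: Set[Span], predicted: Set[Span]) -> Tuple[float, float, float]:
    """Precision, recall and F1 where a prediction counts only on an
    exact boundary-and-type match with a gold annotation."""
    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical example: one exact match out of two predictions
# against two gold entities (the second prediction has wrong boundaries).
gold = {(0, 6, "PER"), (10, 16, "ORG")}
pred = {(0, 6, "PER"), (10, 14, "ORG")}
print(strict_span_f1(gold, pred))  # (0.5, 0.5, 0.5)
```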


This paper was provided by Saint Petersburg State University.
