28 research outputs found

    The D-TUNA Corpus: A Dutch dataset for the evaluation of referring expression generation algorithms


    Learning preferences for referring expression generation: Effects of domain, language and algorithm

    One important subtask of Referring Expression Generation (REG) algorithms is to select the attributes in a definite description for a given object. In this paper, we study how much training data is required for algorithms to do this properly. We compare two REG algorithms in terms of their performance: the classic Incremental Algorithm and the more recent Graph algorithm. Both rely on a notion of preferred attributes that can be learned from human descriptions. In our experiments, preferences are learned from training sets that vary in size, in two domains and languages. The results show that, depending on the algorithm and the complexity of the domain, training on a handful of descriptions can already lead to a performance that is not significantly different from training on a much larger data set.
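    The attribute-selection subtask described in this abstract can be illustrated with a minimal sketch of the classic Incremental Algorithm (Dale & Reiter 1995): attributes are considered in a fixed preference order, and an attribute is added to the description whenever it rules out at least one remaining distractor. The entity representation (plain attribute-value dicts) and the example domain below are illustrative assumptions, not taken from the paper.

    ```python
    # Minimal sketch of the Incremental Algorithm's attribute selection.
    # Entities are assumed to be plain dicts mapping attribute -> value;
    # the preference order would, as in the paper, be learned from data.

    def incremental_algorithm(target, distractors, preferred_attributes):
        """Select attributes distinguishing `target` from `distractors`,
        considering attributes in the given preference order."""
        description = {}
        remaining = list(distractors)
        for attr in preferred_attributes:
            value = target.get(attr)
            if value is None:
                continue
            # Does this attribute rule out at least one remaining distractor?
            if any(d.get(attr) != value for d in remaining):
                description[attr] = value
                remaining = [d for d in remaining if d.get(attr) == value]
            if not remaining:
                break  # description is now distinguishing
        return description

    # Hypothetical example: describe a small red chair among other furniture.
    target = {"type": "chair", "colour": "red", "size": "small"}
    distractors = [
        {"type": "chair", "colour": "blue", "size": "small"},
        {"type": "table", "colour": "red", "size": "large"},
    ]
    print(incremental_algorithm(target, distractors, ["type", "colour", "size"]))
    # -> {'type': 'chair', 'colour': 'red'}
    ```

    Note the algorithm's incremental character: once an attribute is included it is never retracted, so the learned preference order directly shapes which (possibly overspecified) description is produced.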

    Need I say more?: On overspecification in definite reference

