
Improved extraction assessment through better language models

A variety of information extraction techniques rely on the fact that instances of the same relation are "distributionally similar," in that they tend to appear in similar textual contexts. We demonstrate that extraction accuracy depends heavily on the accuracy of the language model used to estimate distributional similarity. An unsupervised model selection technique based on this observation is shown to reduce extraction and type-checking error by 26% over previous results in experiments with Hidden Markov Models. The results suggest that optimizing statistical language models over unlabeled data is a promising direction for improving weakly supervised and unsupervised information extraction.
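
The abstract only sketches the approach, so the following is a minimal, hypothetical Python illustration of the two ideas it names: estimating distributional similarity of candidate extractions from their textual contexts, and selecting a language model without labels. It substitutes a simple add-k smoothed context model for the paper's Hidden Markov Models, and the function names, toy corpus, and smoothing grid are all illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch -- not the paper's HMM implementation. It illustrates:
#   1. unsupervised model selection by perplexity on unlabeled held-out text,
#   2. scoring candidate extractions by distributional (context) similarity.
from collections import Counter
from math import exp, log, sqrt

def context_counts(corpus, target, window=2):
    """Count tokens appearing within `window` positions of `target`."""
    counts = Counter()
    for sent in corpus:
        for i, tok in enumerate(sent):
            if tok == target:
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                counts.update(sent[lo:i] + sent[i + 1:hi])
    return counts

def add_k_log_prob(counts, vocab_size, k):
    """Add-k smoothed log-probability function over the context vocabulary."""
    total = sum(counts.values()) + k * vocab_size
    return lambda w: log((counts[w] + k) / total)

def perplexity(log_prob, tokens):
    """Per-token perplexity of a model on (unlabeled) held-out text."""
    return exp(-sum(log_prob(w) for w in tokens) / len(tokens))

def cosine(c1, c2):
    """Cosine similarity between two context-count vectors."""
    dot = sum(v * c2[w] for w, v in c1.items() if w in c2)
    norm = sqrt(sum(v * v for v in c1.values())) * sqrt(sum(v * v for v in c2.values()))
    return dot / norm if norm else 0.0

# Toy corpus and held-out text (purely illustrative).
corpus = [["paris", "is", "the", "capital", "of", "france"],
          ["berlin", "is", "the", "capital", "of", "germany"],
          ["the", "cat", "sat", "on", "the", "mat"]]
heldout = ["the", "capital", "of", "france", "is", "paris"]
vocab_size = len({w for sent in corpus for w in sent})

# Model selection: keep the smoothing constant whose language model gives
# the unlabeled held-out text the lowest perplexity.
seed = context_counts(corpus, "capital")
best_k = min((0.01, 0.1, 1.0),
             key=lambda k: perplexity(add_k_log_prob(seed, vocab_size, k), heldout))
print("selected k:", best_k)

# Distributionally similar instances share contexts, so they score high.
print(cosine(context_counts(corpus, "paris"),
             context_counts(corpus, "berlin")))  # ~1.0: near-identical contexts
print(cosine(context_counts(corpus, "paris"),
             context_counts(corpus, "cat")))     # lower: different contexts
```

In the paper's setting the language model would be an HMM rather than this add-k context model, and held-out perplexity is only one plausible unsupervised selection signal; the point of the sketch is that a better-fitting language model yields more reliable distributional-similarity scores for extraction.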