
A Maximum-Entropy-Inspired Parser

By Eugene Charniak

Abstract

We present a new parser for parsing down to Penn tree-bank style parse trees that achieves 90.1% average precision/recall for sentences of length 40 and less, and 89.5% for sentences of length 100 and less, when trained and tested on the previously established [5,9,10,15,17] "standard" sections of the Wall Street Journal treebank. This represents a 13% decrease in error rate over the best single-parser results on this corpus [9]. The major technical innovation is the use of a "maximum-entropy-inspired" model for conditioning and smoothing that lets us successfully test and combine many different conditioning events. We also present some partial results showing the effects of different conditioning information, including a surprising 2% improvement due to guessing the lexical head's pre-terminal before guessing the lexical head itself.
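The abstract's "conditioning and smoothing" refers to estimating probabilities from many conditioning events of varying specificity. A minimal sketch of one common way to do this, interpolating a specific conditional estimate with successively backed-off estimates, is shown below. The contexts, counts, and interpolation weight here are purely illustrative assumptions, not the paper's actual model or parameter values.

```python
# Toy counts of head-word events under nested conditioning contexts,
# ordered from most to least specific. All labels and numbers are
# illustrative, not taken from the paper.
counts = [
    # most specific context: (parent label, head pre-terminal)
    {("VP", "VBD"): {"bought": 3, "sold": 1}},
    # backed-off context: parent label only
    {"VP": {"bought": 5, "sold": 4, "ran": 1}},
    # unconditioned unigram counts
    {None: {"bought": 7, "sold": 6, "ran": 2, "walked": 1}},
]

def relfreq(table, context, outcome):
    """Relative frequency of `outcome` given `context`; 0 if unseen."""
    dist = table.get(context, {})
    total = sum(dist.values())
    return dist[outcome] / total if outcome in dist and total else 0.0

def smoothed(contexts, outcome, lam=0.6):
    """Interpolate each level's estimate with the backed-off one:
    p = lam * p_specific + (1 - lam) * p_backed_off.
    `lam` is a fixed illustrative weight (real systems tune it)."""
    p = relfreq(counts[-1], None, outcome)  # unigram base case
    # walk from least to most specific, interpolating at each level
    for level in range(len(counts) - 2, -1, -1):
        p = lam * relfreq(counts[level], contexts[level], outcome) + (1 - lam) * p
    return p

p = smoothed([("VP", "VBD"), "VP"], "bought")
# With these toy counts: 0.6*(3/4) + 0.4*(0.6*(5/10) + 0.4*(7/16)) = 0.64
```

The point of the back-off chain is that rich contexts (here, parent label plus head pre-terminal) give sharp but sparse estimates, while coarse contexts are reliable but vague; interpolation lets a parser add new conditioning events without being undone by sparse data.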

Year: 1999
OAI identifier: oai:CiteSeerX.psu:10.1.1.32.5549
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • http://www.cs.brown.edu/people... (external link)

