Learning grammars for different parsing tasks by partition search

By Anja Belz


This paper describes a comparative application of Grammar Learning by Partition Search to four different learning tasks: deep parsing, NP identification, flat phrase chunking and NP chunking. In the experiments, base grammars were extracted from a treebank corpus. From this starting point, new grammars optimised for the different parsing tasks were learnt by Partition Search. No lexical information was used. In half of the experiments, local structural context in the form of parent phrase category information was incorporated into the grammars. Results show that grammars which contain this information outperform grammars which do not by large margins in all tests for all parsing tasks. Parent category information makes the biggest difference for deep parsing, typically corresponding to an improvement of around 5%. Overall, Partition Search with parent phrase category information is shown to be a successful method for learning grammars optimised for a given parsing task, and for minimising grammar size. The biggest margin of improvement over a base grammar was a 5.4% increase in the F-Score for deep parsing. The biggest size reductions were 93.5% fewer nonterminals (for NP identification) and 31.3% fewer rules (for XP chunking).

Topics: G400 Computing, Q100 Linguistics
Publisher: Association for Computational Linguistics
Year: 2002
DOI identifier: 10.3115/1072228.1072296
OAI identifier: oai:eprints.brighton.ac.uk:3098
