The Morphosyntactic Parser: Developing and testing a sentence processor that uses underspecified morphosyntactic features

Abstract

This dissertation presents a fundamentally new approach to describing not only the architecture of the language system but also the processes behind its capability to predict, analyze, and integrate linguistic input into its representation in a parsimonious way. Taking morphosyntax as an example, underspecified case, that is, the use of decomposed binary case, number, and gender features to account for syncretism, offers insights into both the representation and the processing of language: carrying this idea over to language processing raises the question of whether the language system, limited in its storage capacity, makes use of similar means of representational parsimony during the processing of linguistic input. This thesis proposes a processing system that is tightly related to the aforementioned architectural assumption of morphosyntactically underspecified lexical entries as a parsimonious form of representation. In this view, prediction is the language system's drive to avoid feature deviance from one incrementally available linguistic element to the next incoming one. The parser's goal is thus to maintain minimal feature deviance, or at best feature identity, in order to keep processing load as low as possible. This approach allows for position-dependent hypotheses about the expected processing load. To test the processor's claims, electrophysiological data from a series of event-related brain potential (ERP) experiments are presented. The results suggest that as the input's feature deviance increases, so does the amplitude of an ERP component sensitive to prediction error. By comparison, elements that maintain feature identity, neither lacking features nor introducing additional ones into the analysis, do not increase processing difficulty. These results indicate that the language processing system uses the available features of morphosyntactically underspecified mental entries to build up larger constituents, and the experiments showed that this buildup process is determined by the language system's drive to avoid feature deviance.
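The notion of feature deviance between underspecified feature bundles can be made concrete with a minimal sketch. The representation and the deviance measure below are illustrative assumptions, not the dissertation's own formalism: bundles are modeled as partial assignments of binary features (an underspecified entry simply omits a feature), and deviance is counted as the number of conflicting feature values plus the number of features the incoming element newly introduces.

```python
# Hypothetical sketch: binary morphosyntactic features as partial dicts.
# An underspecified entry omits features rather than valuing them.

def feature_deviance(current: dict, incoming: dict) -> int:
    """Count how far an incoming element deviates from the current analysis.

    Assumed measure (illustrative): one point per feature both bundles
    specify with conflicting values, plus one point per feature the
    incoming element introduces that the current analysis lacks.
    """
    conflicts = sum(1 for f in current
                    if f in incoming and incoming[f] != current[f])
    introduced = sum(1 for f in incoming if f not in current)
    return conflicts + introduced

# A fully underspecified incoming element maintains feature identity:
print(feature_deviance({"oblique": True, "feminine": False}, {}))   # 0
# A conflicting case feature increases deviance, and thus predicted load:
print(feature_deviance({"oblique": True}, {"oblique": False}))      # 1
```

Under this toy measure, the parser's preference for feature identity corresponds to choosing, at each incremental step, the analysis with the lowest deviance score.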