    Is linguistics a part of psychology?

    Noam Chomsky, the founding father of generative grammar and the instigator of some of its core research programs, claims that linguistics is a part of psychology, concerned with a class of cognitive structures employed in speaking and understanding. In a recent book, Ignorance of Language, Michael Devitt has challenged certain core aspects of linguistics, as prominent practitioners of the science conceive of it. Among Devitt’s major conclusions is that linguistics is not a part of psychology. In this thesis I defend Chomsky’s psychological conception of grammatical theory. My case for the psychological conception involves defending a set of psychological goals for generative grammars, centring on conditions of descriptive and explanatory adequacy. I argue that generative grammar makes an explanatory commitment to a distinction between a psychological system of grammatical competence and the performance systems engaged in putting that competence to use. I then defend the view that this distinction can be investigated by probing speakers’ linguistic intuitions. Building on the psychological goals of generative grammar and its explanatory commitment to a psychological theory of grammatical competence, I argue that generative grammar neither targets nor presupposes non-psychological grammatical properties. These non-psychological properties are dispensable to grammarians’ explanations because their explanatory goals can be met by the theory of grammatical competence to which they are committed. So generative grammars have psychological properties as their subject matter, and linguistics is a part of psychology.

    A journey through learner language: tracking development using POS tag sequences in large-scale learner data

    This PhD study comes at a crossroads of SLA studies and corpus linguistics methodology, using a bottom-up, data-first approach to throw light on second language development. Taking POS tag n-gram sequences as a starting point, searching the data from the outermost syntactic layer available in corpus tools, it is an investigation of grammatical development in learner language across the six proficiency levels in the 52-million-word, CEFR-benchmarked, quasi-longitudinal Cambridge Learner Corpus. It takes a mixed-methods approach, first examining the frequency and distribution of POS tag sequences by level, identifying convergence and divergence, and secondly looking qualitatively at form-meaning mappings of sequences at differing levels. It seeks to observe whether there are sequences which characterise levels and which might index the transition between levels. It investigates sequence use at a lexical and functional level and explores whether this can contribute to our understanding of how a generic repertoire of learner language develops. It aims to contribute to the theoretical debate by looking critically at how current theories of language development and description might account for learner language development. It responds to the call to look at large-scale learner data, and benefits from privileged access to such longitudinal data, acknowledging the limitations of any corpus data and the need to triangulate across different datasets. It seeks to illustrate how L2 language use converges and diverges across proficiency levels and to investigate convergence and divergence between L1 and L2 usage.
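    The counting step at the heart of this design, tagging learner text for part of speech and tallying tag n-gram sequences by level, can be sketched in a few lines of Python. The sketch below assumes NLTK's default tokenizer and tagger; the sample sentences and CEFR labels are invented stand-ins, not Cambridge Learner Corpus data.

        # A sketch of POS-tag n-gram counting; the sentences and CEFR level
        # labels below are invented stand-ins, not Cambridge Learner Corpus data.
        from collections import Counter

        import nltk

        nltk.download("punkt", quiet=True)
        nltk.download("punkt_tab", quiet=True)  # resource name in newer NLTK
        nltk.download("averaged_perceptron_tagger", quiet=True)
        nltk.download("averaged_perceptron_tagger_eng", quiet=True)  # newer NLTK

        def pos_ngrams(text: str, n: int = 3) -> Counter:
            """Count POS-tag n-gram sequences (here trigrams) in a text."""
            tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
            return Counter(tuple(tags[i:i + n]) for i in range(len(tags) - n + 1))

        # Compare tag-trigram distributions across (hypothetical) proficiency
        # levels to look for convergence and divergence in sequence use.
        levels = {
            "A2": "I am liking very much the football.",
            "C1": "I have always been fond of watching football.",
        }
        for level, text in levels.items():
            print(level, pos_ngrams(text).most_common(2))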

    Words: structure, meaning, acquisition, processing

    By bringing together experts from various scientific domains and of different theoretical inclinations, the second NetWordS Summer School contributed to advancing current awareness of theoretical, typological, psycholinguistic, computational and neurophysiological evidence on the structure and processing of words, with a view to fostering novel methods of research and assessment for grammar architecture and language physiology.

    Harmony, Head Proximity, and the Near Parallels between Nominal and Clausal Linkers

    This paper puts forward a notion of harmonic word order that leads to a new generalisation over the presence or absence of disharmony: specific functional heads must cross-linguistically obey this notion of harmony absolutely, while for other categories the presence of harmony is simply a tendency. The difference between the two classes is defined by semantics. This approach allows us both to draw certain parallels between restrictions on word order in nominals and in clauses, and furthermore to explain why other expected parallels should fail to be realised completely, specifically as regards differences in the distribution of relative clauses in the NP and complement clauses in the sentence.

    Syntactically independent relative clause markers and subordinating complementisers share a striking restriction as regards ordering: relative clause markers are always initial in postnominal relative clauses, and final in prenominal relative clauses (Andrews 1975; Downing 1978; Lehmann 1984; Keenan 1985; De Vries 2002, 2005); similarly, initial subordinating Cs only appear in postverbal complement clauses, while final subordinating Cs are only possible where the complement clause is preverbal (Bayer 1996, 1997, 1999; Kayne 2000). In this paper, I provide new evidence from eighty genetically and geographically diverse languages of a third category sharing precisely the same restriction: linkers in the complex NP. These are syntactically independent, semantically vacuous heads, serving to mark the presence of a relationship between a noun and any kind of phrasal dependent (Rubin 2002; Den Dikken and Singhapreecha 2004; Philip 2009). The class of linkers in the NP therefore includes the ezafe in Indo-Iranian, the associative marker -a in Bantu, and purely functional adpositions such as of in English. Like relative clause markers and subordinating Cs, the linker always intervenes linearly between the superordinate head (the noun) and the subordinate dependent. Crucially, relative clause markers, subordinating Cs, and linkers in the NP form a natural class: they are syntactically independent, semantically vacuous words serving purely to mark the presence of a relationship between head and dependent. Any member of this class is a ‘linker’.

    I propose a theory of disharmony whereby linearisation rules targeting heads with specified semantics can require such heads to appear in a prominent position, either initial or final, irrespective of the general headedness of the language. Linkers, being semantically vacuous, are of course impervious to such rules; they will therefore always conform to the harmonic, or optimal, word order. I propose a theory of harmony whereby the optimal word order is determined by the interaction of three independently motivated harmonic word order constraints: Head Proximity (adapted from Rijkhoff 1984, 1986; cf. the Head-Final Filter, Williams 1982), the preference for uniformity in headedness (initial or final), and the preference for clausal dependents to appear in final position (Dryer 1980, 1992). Where the three constraints compete, it is always Head Proximity that takes precedence. I show that the distribution of all three types of linker is fully captured by this proposal.

    Moreover, this theory of ordering also accounts for another well-observed near parallel between clauses and nominals, as well as its exceptions. This concerns a left-right asymmetry in the distribution of clausal dependents: while in OV languages complement clauses appear with near equal frequency in both preverbal and postverbal position, in VO languages they are found uniquely in postverbal position (Dryer 1980; Hawkins 1994; Dryer 2009); similarly, in OV languages relative clauses are distributed relatively evenly between prenominal and postnominal position, whereas in VO languages they are almost always postnominal, with very few exceptions (Mallinson & Blake 1981; Hawkins 1983, 1990; Lehmann 1984; Keenan 1985; Dryer 1992, 2007, 2008; De Vries 2005). The theory predicts these exceptions to be permitted only in languages that are rigidly N-final. Hawkins’ (1983) Noun Modifier Hierarchy suggests that this prediction is borne out; apparent exceptions (cf. Dryer 2008) turn out to be underlyingly N-final.
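    The proposed constraint interaction can be made concrete with a toy evaluator. The sketch below illustrates the lexicographic ranking described above, Head Proximity over uniform headedness over clausal-dependents-final, applied to orderings of a noun head, a linker, and a relative clause; the constraint definitions are simplified assumptions of mine, not the paper's formal statement.

        # Toy evaluator for the ranked constraints: Head Proximity >>
        # uniform headedness >> clausal-dependents-final, compared
        # lexicographically over orderings of a noun head (N), a linker
        # (LK), and a relative clause (RC).
        from itertools import permutations

        N, LK, RC = "N", "LK", "RC"

        def violations(order, head_initial):
            pos = {w: i for i, w in enumerate(order)}
            # Head Proximity: the semantically vacuous linker must intervene
            # linearly between the superordinate head and its dependent.
            hp = 0 if min(pos[N], pos[RC]) < pos[LK] < max(pos[N], pos[RC]) else 1
            # Uniform headedness: head-initial languages want N before RC.
            uh = 0 if (pos[N] < pos[RC]) == head_initial else 1
            # Clausal dependents prefer final position.
            cf = 0 if pos[RC] == len(order) - 1 else 1
            return (hp, uh, cf)  # tuple order encodes the constraint ranking

        for head_initial in (True, False):
            best = min(permutations((N, LK, RC)),
                       key=lambda o: violations(o, head_initial))
            print("head-initial:" if head_initial else "head-final: ", *best)

    Run as is, the toy yields N LK RC in the head-initial setting (postnominal relative, initial linker) and RC LK N in the head-final setting (prenominal relative, final linker), matching the ordering restriction on linkers described above.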

    Guidance for awarding institutions on teacher roles and initial teaching qualifications: units of assessment for additional diplomas

    Contents: Literacy, ESOL & Numeracy

    The emergence of word-internal repetition through iterated learning: Explaining the mismatch between learning biases and language design

    The idea that natural language is shaped by biases in learning plays a key role in our understanding of how human language is structured, but its corollary that there should be a correspondence between typological generalisations and ease of acquisition is not always supported. For example, natural languages tend to avoid close repetitions of consonants within a word, but developmental evidence suggests that, if anything, words containing sound repetitions are more, not less, likely to be acquired than those without. In this study, we use word-internal repetition as a test case to provide a cultural evolutionary explanation of when and how learning biases impact on language design. Two artificial language experiments showed that adult speakers possess a bias for both consonant and vowel repetitions when learning novel words, but the effects of this bias were observable in language transmission only when there was a relatively high learning pressure on the lexicon. Based on these results, we argue that whether the design of a language reflects biases in learning depends on the relative strength of pressures from learnability and communication efficiency exerted on the linguistic system during cultural transmission.
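    The transmission dynamic at issue can be caricatured in a short simulation. The sketch below is a deliberately simple model of iterated learning, not the authors' experimental design: pressure stands in for the strength of the learning bottleneck, bias for the learner's preference for consonant repetition, and both values are arbitrary assumptions.

        # A deliberately simple iterated-learning simulation; `pressure` and
        # `bias` are arbitrary stand-ins, not the authors' parameters.
        import random

        CONS, VOWELS = "ptkbdg", "aiu"

        def make_word():
            """A random CVCV word."""
            return "".join(random.choice(CONS) + random.choice(VOWELS)
                           for _ in range(2))

        def has_repetition(word):
            return word[0] == word[2]  # same consonant in both syllables

        def learn(lexicon, pressure, bias=0.7):
            """One generation: each word survives intact or, with probability
            `pressure`, is relearned imperfectly; the repetition bias then
            favours a consonant-repeating replacement."""
            out = []
            for word in lexicon:
                if random.random() < pressure:
                    word = make_word()
                    if not has_repetition(word) and random.random() < bias:
                        word = word[0] + word[1] + word[0] + word[3]
                out.append(word)
            return out

        random.seed(1)
        for pressure in (0.02, 0.3):
            lexicon = [make_word() for _ in range(100)]
            for _ in range(10):  # ten generations of transmission
                lexicon = learn(lexicon, pressure)
            rate = sum(map(has_repetition, lexicon)) / len(lexicon)
            print(f"pressure={pressure}: repetition rate = {rate:.2f}")

    Under the weak bottleneck the repetition rate rises only modestly above chance; under the strong one the bias compounds across generations and comes to dominate the lexicon, echoing the claim that learning pressure gates whether a bias surfaces in language design.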