Phonological planning during sentence production: beyond the verb
The current study addresses the extent of phonological planning during spontaneous sentence production. Previous work shows that at articulation, phonological encoding occurs for entire phrases, but encoding beyond the initial phrase may be due to the syntactic relevance of the verb in planning the utterance. I conducted three experiments to investigate whether phonological planning crosses multiple grammatical phrase boundaries (as defined by the number of lexical heads of phrase) within a single phonological phrase. Using the picture-word interference paradigm, I found in two separate experiments a significant phonological facilitation effect for both the verb and the noun of sentences like "He opens the gate." I also manipulated the frequency of the direct object and found longer utterance initiation times for sentences ending with a low-frequency vs. high-frequency object, offering further support that the direct object was phonologically encoded at the time of utterance initiation. That phonological information for post-verbal elements was activated suggests that the grammatical importance of the verb does not restrict the extent of phonological planning. These results suggest that the phonological phrase is a unit of planning, where all elements within a phonological phrase are encoded before articulation. Thus, consistent with other action sequencing behavior, there is significant phonological planning ahead in sentence production.
On becoming a physicist of mind
In 1976, the German Max Planck Society established a new research enterprise in psycholinguistics, which became the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands. I was fortunate enough to be invited to direct this institute. It enabled me, with my background in visual and auditory psychophysics and the theory of formal grammars and automata, to develop a long-term chronometric endeavor to dissect the process of speaking. It led, among other work, to my book Speaking (1989) and to my research team's article in Behavioral and Brain Sciences, "A Theory of Lexical Access in Speech Production" (1999). When I later became president of the Royal Netherlands Academy of Arts and Sciences, I helped initiate the Women for Science research project of the InterAcademy Council, a project chaired by my physicist sister at the National Institute of Standards and Technology. As an emeritus I published a comprehensive History of Psycholinguistics (2013). As will become clear, many people inspired and joined me in these undertakings.
Where is the length effect? A cross-linguistic study.
Many models of speech production assume that one cannot begin to articulate a word before all its segmental units are inserted into the articulatory plan. Moreover, some of these models assume that segments are serially inserted from left to right. As a consequence, latencies to name words should increase with word length. In a series of five experiments, however, we showed that the time to name a picture or retrieve a word associated with a symbol is not affected by the length of the word. Experiments 1 and 2 used French materials and participants, while Experiments 3, 4, and 5 were conducted with English materials and participants. These results are discussed in relation to current models of speech production, and previous reports of length effects are reevaluated in light of these findings. We conclude that if words are encoded serially, then articulation can start before an entire phonological word has been encoded.
What can co-speech gestures in aphasia tell us about the relationship between language and gesture?: A single case study of a participant with Conduction Aphasia
Cross-linguistic evidence suggests that language typology influences how people gesture when using 'manner-of-motion' verbs (Kita 2000; Kita & Özyürek 2003) and that this is due to 'online' lexical and syntactic choices made at the time of speaking (Kita, Özyürek, Allen, Brown, Furman & Ishizuka, 2007). This paper attempts to relate these findings to the co-speech iconic gesture used by an English speaker with conduction aphasia (LT) and five controls describing a Sylvester and Tweety cartoon. LT produced co-speech gesture showing distinct patterns, which we relate to different aspects of her language impairment and to the lexical and syntactic choices she made during her narrative.
The influence of semantic and phonological factors on syntactic decisions: An event-related brain potential study
During language production and comprehension, information about a word's syntactic properties is sometimes needed. While the decision about the grammatical gender of a word requires access to syntactic knowledge, it has also been hypothesized that semantic (i.e., biological gender) or phonological information (i.e., sound regularities) may influence this decision. Event-related potentials (ERPs) were measured while native speakers of German processed written words that were or were not semantically and/or phonologically marked for gender. Behavioral and ERP results showed that participants were faster in making a gender decision when words were semantically and/or phonologically gender marked than when this was not the case, although the phonological effects were less clear. In conclusion, our data provide evidence that even though participants performed a grammatical gender decision, this task can be influenced by semantic and phonological factors.
On the automaticity of language processing
People speak and listen to language all the time. Given this high frequency of use, it is often suggested that at least some aspects of language processing are highly overlearned and therefore occur "automatically". Here we critically examine this suggestion. We first sketch a framework that views automaticity as a set of interrelated features of mental processes and a matter of degree rather than a single feature that is all-or-none. We then apply this framework to language processing. To do so, we carve up the processes involved in language use according to (a) whether language processing takes place in monologue or dialogue, (b) whether the individual is comprehending or producing language, (c) whether the spoken or written modality is used, and (d) the linguistic processing level at which they occur, that is, phonology, the lexicon, syntax, or conceptual processes. This exercise suggests that while conceptual processes are relatively non-automatic (as is usually assumed), there is also considerable evidence that syntactic and lexical lower-level processes are not fully automatic. We close by discussing entrenchment as a set of mechanisms underlying automatization.
Is the scope of phonological planning constrained by the syntactical role of the utterance constituents?
Five experiments examined the effect of repeated phonemes in the production of color adjective+noun phrases in English ("green gun"), or noun+color adjective phrases in Spanish and French. Whereas phoneme repetition sped up naming latencies in the case of prenominal color adjectives, it induced inhibition in the postnominal case. We argue that this dissociation is not compatible with a genuine crosslinguistic difference in the scope of phonological encoding. Rather, we explain it in terms of the interplay between an activation gradient, coding word order, and an activation bias, coding the syntactical role of the utterance constituents.
ERP correlates of word production before and after stroke in an aphasic patient
No abstract available