Self-, other-, and joint monitoring using forward models
In the psychology of language, most accounts of self-monitoring assume that it is based on comprehension. Here we outline and develop the alternative account proposed by Pickering and Garrod (2013), in which speakers construct forward models of their upcoming utterances and compare these predictions with the utterance as it is produced. We propose that speakers compute inverse models derived from the discrepancy (error) between the utterance and the predicted utterance and use this error to modify their production command or (occasionally) begin anew. We then propose that comprehenders monitor other people's speech by simulating their utterances using covert imitation and forward models, and then comparing those forward models with what they hear. They use the discrepancy to compute inverse models and modify their representation of the speaker's production command, or realize that their representation is incorrect and develop a new production command. We then discuss monitoring in dialogue, paying attention to sequential contributions, concurrent feedback, and the relationship between monitoring and alignment
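The comparator loop described in this abstract (predict via a forward model, compare with the actual outcome, and either correct the production command via an inverse model or restart) can be illustrated with a deliberately minimal sketch. Everything here is an assumption made for illustration: the scalar "utterance" representation, the linear predictor, the function names, and the thresholded restart rule are not part of Pickering and Garrod's formal proposal.

```python
# Toy sketch of a forward-model self-monitoring loop.
# All names and the scalar representation are illustrative assumptions.

def forward_model(command):
    """Predict the utterance outcome expected from a production command."""
    return 0.9 * command  # assumed simple linear predictor

def produce(command):
    """The actual production system: here, a slightly biased 'plant',
    so prediction and outcome systematically diverge."""
    return 0.8 * command

def monitor(command, restart_threshold=0.5, gain=1.0):
    """Compare predicted and actual output; use the discrepancy to
    adjust the command (inverse-model-style correction), or restart
    when the error is too large to repair."""
    predicted = forward_model(command)
    actual = produce(command)
    error = predicted - actual
    if abs(error) > restart_threshold:
        return "restart", None
    # Inverse model: map the output error back onto a command correction.
    corrected = command + gain * error
    return "adjust", corrected
```

For a large command the prediction error exceeds the threshold and the monitor signals a restart (`monitor(10.0)` yields `("restart", None)`); for a small command it returns an adjusted command instead. The point is only the control-theoretic shape of the account: monitoring operates on predicted-versus-actual discrepancies rather than on comprehension of the produced utterance.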
Referential and visual cues to structural choice in visually situated sentence production
We investigated how conceptually informative (referent preview) and conceptually uninformative (pointer to the referent's location) visual cues affect structural choice during production of English transitive sentences. Cueing the Agent or the Patient prior to presenting the target event reliably predicted the likelihood of selecting this referent as the sentential Subject, thereby determining the choice between active and passive voice. Importantly, there was no difference in the magnitude of the general Cueing effect between the informative and uninformative cueing conditions, suggesting that attentionally driven structural selection relies on a direct, automatic mapping mechanism from attentional focus to the Subject position in a sentence. This mechanism is, therefore, independent of accessing conceptual, and possibly lexical, information about the cued referent provided by referent preview
Experimental Semiotics: A Review
In the last few years a new line of research has appeared in the literature. This line of research, which may be referred to as experimental semiotics (ES; Galantucci, 2009; Galantucci and Garrod, 2010), focuses on the experimental investigation of novel forms of human communication. In this review we will (a) situate ES in its conceptual context, (b) illustrate the main varieties of studies thus far conducted by experimental semioticians, (c) illustrate three main themes of investigation which have emerged within this line of research, and (d) consider implications of this work for cognitive neuroscience
Attention and memory play different roles in syntactic choice during sentence production
Attentional control of referential information is an important contributor to the structure of discourse (Sanford, 2001; Sanford & Garrod, 1981). We investigated how attention and memory interact during visually situated sentence production. We manipulated speakers' attention to the agent or the patient of a described event by means of a referential or a dot visual cue (Posner, 1980). We also manipulated whether the cue was implicit or explicit by varying its duration (70 ms versus 700 ms). Participants used passive voice more often when their attention was directed to the patient's location, regardless of cue duration. This effect was stronger when the cue was explicit rather than implicit, especially for passive-voice sentences. Analysis of sentence onset latencies showed a divergent pattern: Latencies were shorter (1) when the agent was cued, (2) when the cue was explicit, and (3) when the (explicit) cue was referential; (1) and (2) indicate facilitated sentence planning when the cue supports a canonical (active-voice) sentence frame and when speakers had more time to plan their sentences; (3) suggests that sentence planning was sensitive to whether the cue was informative with regard to the cued referent. We propose that differences between production likelihoods and production latencies indicate distinct contributions from attentional focus and memorial activation to sentence planning: While the former partly predicts syntactic choice, the latter facilitates syntactic assembly (i.e., initiating overt sentence generation)
Speech rhythms and multiplexed oscillatory sensory coding in the human brain
Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations
Mechanisms of alignment: Shared control, social cognition and metacognition
In dialogue, speakers process a great deal of information, take and give the floor to each other, and plan and adjust their contributions on the fly. Despite the level of coordination and control that it requires, dialogue is the easiest way speakers possess to come to similar conceptualizations of the world. In this paper, we show how speakers align with each other by mutually controlling the flow of the dialogue and constantly monitoring their own and their interlocutors' way of representing information. Through examples of conversation, we introduce the notions of shared control, meta-representations of alignment and commentaries on alignment, and show how they support mutual understanding and the collaborative creation of abstract concepts. Indeed, whereas speakers can share similar representations of concrete concepts just by mutually attending to a tangible referent or by recalling it, they are likely to need more negotiation and mutual monitoring to build similar representations of abstract concepts. This article is part of the theme issue ‘Concepts in interaction: social engagement and inner experiences’
How to create shared symbols
Human cognition and behavior are dominated by symbol use. This paper examines the social learning strategies that give rise to symbolic communication. Experiment 1 contrasts an individual-level account, based on observational learning and cognitive bias, with an inter-individual account, based on social coordinative learning. Participants played a referential communication game in which they tried to communicate a range of recurring meanings to a partner by drawing, but without using their conventional language. Individual-level learning, via observation and cognitive bias, was sufficient to produce signs that became increasingly effective, efficient, and shared over games. However, breaking a referential precedent eliminated these benefits. The most effective, most efficient, and most shared signs arose when participants could directly interact with their partner, indicating that social coordinative learning is important to the creation of shared symbols. Experiment 2 investigated the contribution of two distinct aspects of social interaction: behavior alignment and concurrent partner feedback. Each played a complementary role in the creation of shared symbols: Behavior alignment primarily drove communication effectiveness, and partner feedback primarily drove the efficiency of the evolved signs. In conclusion, inter-individual social coordinative learning is important to the evolution of effective, efficient, and shared symbols
Conversational interaction in the scanner: mentalizing during language processing as revealed by MEG
Humans are especially good at taking another's perspective: representing what others might be thinking or experiencing. This "mentalizing" capacity is apparent in everyday human interactions and conversations. We investigated its neural basis using magnetoencephalography. We focused on whether mentalizing was engaged spontaneously and routinely to understand an utterance's meaning, or largely on demand, to restore "common ground" when expectations were violated. Participants conversed with 1 of 2 confederate speakers and established tacit agreements about objects' names. In a subsequent "test" phase, some of these agreements were violated by either the same or a different speaker. Our analysis of the neural processing of test-phase utterances revealed recruitment of neural circuits associated with language (temporal cortex), episodic memory (e.g., medial temporal lobe), and mentalizing (temporo-parietal junction and ventromedial prefrontal cortex). Theta oscillations (3-7 Hz) were modulated most prominently, and we observed phase coupling between functionally distinct neural circuits. The episodic memory and language circuits were recruited in anticipation of upcoming referring expressions, suggesting that context-sensitive predictions were spontaneously generated. In contrast, the mentalizing areas were recruited on demand, as a means for detecting and resolving perceived pragmatic anomalies, with little evidence they were activated to make partner-specific predictions about upcoming linguistic utterances
Prediction at all levels: forward model predictions can enhance comprehension
We discuss two limitations of Hickok's account. First, we propose that ideas from motor control and planning should be brought wholesale into psycholinguistics, so that processing at every level of the linguistic hierarchy (from concepts to sounds) is recast in terms of forward model predictions and implementation. Second, we argue that motor involvement can sometimes enhance perception. We conclude that our account is consistent with a dual-route model of comprehension in which different routes to prediction can interact