Contact points between lexical retrieval and sentence production
Speakers retrieve words to use them in sentences. Errors in incorporating words into sentential frames are revealing with respect to the lexical units as well as the lexical retrieval mechanism; hence they constrain theories of lexical access. We present a reanalysis of a corpus of spontaneously occurring lexical exchange errors that highlights the contact points between lexical and sentential processes
Dissociation between regular and irregular in connectionist architectures: Two processes, but still no special linguistic rules
Dual-mechanism models of language maintain a distinction between a lexicon and a computational system of linguistic rules. In his target article, Clahsen provides support for such a distinction, presenting evidence from German inflections. He argues for a structured lexicon, going beyond the strict lexicon versus rules dichotomy. We agree with the author in assuming a dual mechanism; however, we argue that a further step must be taken, going beyond the notion of the computational system as specific rules applying to a linguistic domain. By assuming a richer lexicon, the computational system can be conceived as a more general binding process that applies to different linguistic levels: syntax, morphology, reading, and spelling
I don’t see what you’re saying: The maluma/takete effect does not depend on the visual appearance of phonemes as they are articulated
In contrast to the principle of arbitrariness, recent work has shown that language can iconically depict the referents being talked about. One such example is the maluma/takete effect: an association between certain phonemes (e.g., those in maluma) and round shapes, and between other phonemes (e.g., those in takete) and spiky shapes. An open question has been whether this association is crossmodal (arising from phonemes’ sound or kinesthetics) or unimodal (arising from phonemes’ visual appearance). In the latter case, individuals may associate a person’s rounded lips as they pronounce the /u/ in maluma with round shapes. We examined this hypothesis by having participants pair nonwords with shapes in either an audio-only condition (they only heard the nonwords) or an audiovisual condition (they both heard the nonwords and saw them articulated). We found no evidence that seeing nonwords articulated enhanced the maluma/takete effect; in fact, there was evidence that it decreased the effect in some cases. This was confirmed with a Bayesian analysis. These results rule out one plausible explanation of the maluma/takete effect, as an instance of visual matching. We discuss the alternative possibility that it involves crossmodal associations
Reading Sky and Seeing a Cloud: On the Relevance of Events for Perceptual Simulation
Previous research has shown that processing words with an up/down association (e.g., bird, foot) can influence the subsequent identification of visual targets in congruent locations (at the top/bottom of the screen). However, as facilitation and interference were found under similar conditions, the nature of the underlying mechanisms remained unclear. To provide a general account of these different findings, we propose that word comprehension relies on the perceptual simulation of a prototypical event involving the entity denoted by a word. In 3 experiments, participants had to discriminate between 2 target pictures appearing at the top or the bottom of the screen by pressing the left versus right button. Immediately before the targets appeared, they saw an up/down word belonging to the target’s event, an up/down word unrelated to the target, or a spatially neutral control word. Prime words belonging to the target’s event facilitated identification of targets at a stimulus onset asynchrony (SOA) of 250 ms (Experiment 1), but only when the targets were presented in the vertical location where they are typically seen, indicating that targets were integrated into the simulations activated by the prime words. Moreover, at the same SOA, there was a robust facilitation effect for targets appearing in their typical location regardless of the prime type. However, when words were presented for 100 ms (Experiment 2) or 800 ms (Experiment 3), only a location-nonspecific priming effect was found, suggesting that the visual system was not activated. Implications for theories of semantic processing are discussed
Situating Language in the Real-World: The Role of Multimodal Iconicity and Indexicality
In the last decade, a growing body of work has convincingly demonstrated that languages embed a certain degree of non-arbitrariness (mostly in the form of iconicity, namely the presence of imagistic links between linguistic form and meaning). Most of this previous work has been limited to assessing the degree (and role) of non-arbitrariness in speech (for spoken languages) or in the manual components of signs (for sign languages). When approached in this way, non-arbitrariness is acknowledged but still considered to have little presence and purpose, showing a diachronic movement towards more arbitrary forms. However, this perspective is limited in that it does not take into account the situated nature of language use in face-to-face interactions, where language comprises not only the categorical components of speech and signs but also multimodal cues such as prosody, gestures, eye gaze, etc. We review work concerning the role of context-dependent iconic and indexical cues in language acquisition and processing to demonstrate the pervasiveness of non-arbitrary multimodal cues in language use, and we discuss their function. We then argue that the omnipresence of multimodal non-arbitrary cues in online language use supports children and adults in dynamically developing situational models
The impact of child-directed language on children’s lexical development
This study investigated (1) whether and how English caregivers adjust their speech (i.e., mean length of utterances, lexical diversity, lexical sophistication, sentence types, and deixis) according to different contexts, children’s knowledge, and age, and (2) which aspects of parental speech input predict children’s immediate learning of novel words as well as their vocabulary size. We studied a semi-naturalistic corpus in which English caregivers talked to their children (3-4 years old) about toys that were present or absent, and known or unknown to the children. We found that caregivers flexibly adjusted various aspects of their speech to maintain an informative and engaging learning environment. Furthermore, we found that a rich lexicon and the use of yes-no questions predicted better immediate word learning, whereas caregivers’ lexical diversity, lexical frequency, and use of yes-no questions were related to children’s general vocabulary size. In conclusion, higher-quality caregiver language predicts better immediate word learning and a larger vocabulary size
Mapping language to the world: the role of iconicity in the sign language input
Most research on the mechanisms underlying referential mapping has assumed that learning occurs in ostensive contexts, where label and referent co-occur, and that form and meaning are linked by arbitrary convention alone. In the present study, we focus on iconicity in language, that is, resemblance relationships between form and meaning, and on non-ostensive contexts, where label and referent do not co-occur. We approach the question of language learning from the perspective of the language input. Specifically, we look at child-directed language (CDL) in British Sign Language (BSL), a language rich in iconicity due to the affordances of the visual modality. We ask whether child-directed signing exploits iconicity in the language by highlighting the similarity mapping between form and referent. We find that CDL modifications occur more often with iconic signs than with non-iconic signs. Crucially, for iconic signs, modifications are more frequent in non-ostensive contexts than in ostensive contexts. Furthermore, we find that pointing dominates in ostensive contexts, and suggest that caregivers adjust the semiotic resources recruited in CDL to context. These findings offer the first evidence of a role for iconicity in the language input and suggest that iconicity may be involved in referential mapping and language learning, particularly in non-ostensive contexts
Social interaction is a catalyst for adult human learning in online contexts
Human learning is highly social. Advances in technology have increasingly moved learning online, and the recent coronavirus disease 2019 (COVID-19) pandemic has accelerated this trend. Online learning can vary in terms of how “socially” the material is presented (e.g., live or recorded), but there are limited data on which format is most effective: the majority of studies have been conducted with children, and results for adults are inconclusive. Here, we examine how young adults (aged 18–35) learn information about unknown objects, systematically varying the social contingency (live versus recorded lecture) and social richness (viewing the teacher’s face, hands, or slides) of the learning episodes. Recall was tested immediately and after 1 week. Experiment 1 (n = 24) showed better learning for live presentation and a full view of the teacher (hands and face). Experiment 2 (n = 27; pre-registered) replicated the live-presentation advantage. Both experiments showed an interaction between social contingency and social richness: the presence of social cues affected learning differently depending on whether teaching was interactive or not. Live social interaction with a full view of the teacher’s face provided the optimal setting for learning new factual information. However, during observational learning, social cues may be more cognitively demanding and/or distracting, resulting in less learning from rich social information if there is no interactivity. We suggest that being part of a genuine social interaction catalyzes learning, possibly via mechanisms of joint attention, common ground, or (inter-)active discussion, and as such, interactive learning benefits from a rich social setting
Speaking of shape: The effects of language-specific encoding on semantic representations
The question of whether different linguistic patterns differentially influence semantic and conceptual representations is of central interest in cognitive science. In this paper, we investigate whether the regular encoding of shape within a nominal classification system leads to an increased salience of shape in speakers' semantic representations by comparing English, (Amazonian) Spanish, and Bora, a shape-based classifier language spoken in the Amazonian regions of Colombia and Peru. Crucially, in displaying obligatory use, pervasiveness in grammar, high discourse frequency, and phonological variability of forms corresponding to particular shape features, the Bora classifier system differs in important ways from those in previous studies investigating effects of nominal classification, thereby allowing better control of factors that may have influenced previous findings. In addition, the inclusion of Spanish monolinguals living in the Bora village allowed us to control for the possibility that differences found between English and Bora speakers may be attributed to their very different living environments. We found that shape is more salient in the semantic representation of objects for speakers of Bora, which systematically encodes shape, than for speakers of English and Spanish, which do not. Our results are consistent with the assumption that semantic representations are shaped and modulated by our specific linguistic experiences
The role of iconic gestures and mouth movements in face-to-face communication
Human face-to-face communication is multimodal: it comprises speech as well as visual cues, such as articulatory and limb gestures. In the current study, we assess how iconic gestures and mouth movements influence audiovisual word recognition. We presented video clips of an actress uttering single words accompanied, or not, by more or less informative iconic gestures. For each word, we also measured the informativeness of the mouth movements in a separate lipreading task. We manipulated whether gestures were congruent or incongruent with the speech, and whether the words were audible or noise-vocoded. The task was to decide whether the speech from the video matched a previously seen picture. We found that congruent iconic gestures aided word recognition, especially in the noise-vocoded condition, and that the effect was larger (in terms of reaction times) for more informative gestures. Moreover, more informative mouth movements facilitated performance in challenging listening conditions when the speech was accompanied by gestures (either congruent or incongruent), suggesting an enhancement when both cues are present relative to just one. We also observed a trend for more informative mouth movements to speed up word recognition across clarity conditions, but only when gestures were absent. We conclude that listeners use and dynamically weight the informativeness of gestures and mouth movements available during face-to-face communication