What do English speakers know about gera-gera and yota-yota?: A cross-linguistic investigation of mimetic words for laughing and walking
The relation between word form and meaning is considered arbitrary; however, Japanese mimetic words, giseigo and gitaigo, are exceptions. For giseigo (words mimicking voices), there is a direct resemblance ("iconicity") between the sound of the word and the sound it refers to; for gitaigo (words that mimic manners/states) there is a symbolic relationship ("sound symbolism") between the sound and the manner/state to which the word refers. While native speakers intuitively recognize these relationships, it is questionable whether speakers of other languages are able to access the meaning of Japanese mimetic words from their sounds. In the current study, we asked native English speakers with no prior experience with the Japanese language to listen to Japanese mimetic words for laughing (giseigo) and for walking (gitaigo), and rate each word's meaning on semantic differential scales (e.g., "GRACEFUL-VULGAR" for laughing, "GRACEFUL-CLUMSY" for walking). We compared English and Japanese speakers' ratings and found that English speakers construed many of the features of laughing in a similar manner as Japanese native speakers (e.g., words containing /a/ were rated as more amused, cheerful, nice and pleasant laughs). They differed only with regard to a few sound-meaning relationships of an evaluative nature (e.g., words for laughing containing /u/ were rated as more feminine and graceful, and those containing /e/ were rated as less graceful and unpleasant). In contrast, for the words referring to walking, English speakers' ratings differed greatly from native Japanese speakers'. Native Japanese speakers rated words beginning with voiced consonants as referring to a big person walking with big strides, and words beginning with voiceless consonants as referring to more even-paced, feminine and formal walking; English speakers were sensitive only to the relation between voiced consonants and a big person walking. Hence, some sound-meaning associations were language-specific. This study also confirmed the more conventional and lexicalized nature of the mimetic words of manner.
Are words equally surprising in audio and audio-visual comprehension?
We report a controlled study investigating the effect of visual information (i.e., seeing the speaker) on spoken language comprehension. We compare the ERP signature (N400) associated with each word in audio-only and audio-visual presentations of the same verbal stimuli. We assess the extent to which surprisal measures (which quantify the predictability of words in their lexical context) generated on the basis of different types of language models (specifically n-gram and Transformer models) predict N400 responses for each word. Our results indicate that cognitive effort differs significantly between multimodal and unimodal settings. In addition, our findings suggest that while Transformer-based models, which have access to a larger lexical context, provide a better fit in the audio-only setting, 2-gram language models are more effective in the multimodal setting. This highlights the significant impact of local lexical context on cognitive processing in a multimodal environment.

Comment: In CogSci 202
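As a rough illustration of the surprisal measure this study relies on (not the authors' pipeline), the sketch below computes word-by-word surprisal, -log2 P(word | context), under a 2-gram model with add-one smoothing; the corpus and test sentence are toy placeholders. Lower-probability words carry higher surprisal and are the ones expected to elicit larger N400 responses.

```python
# Minimal sketch of 2-gram (bigram) surprisal, assuming a toy corpus.
from collections import Counter
import math

# Toy corpus; in practice this would be a large training corpus.
corpus = ("the speaker smiled and the listener nodded "
          "while the speaker talked and the listener smiled").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigrams)

def bigram_surprisal(prev: str, word: str) -> float:
    """Surprisal in bits: -log2 P(word | prev), add-one smoothed."""
    prob = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
    return -math.log2(prob)

# Word-by-word surprisal for a test utterance (skipping the first word,
# which has no bigram context here).
sentence = "the listener talked".split()
for prev, word in zip(sentence, sentence[1:]):
    print(f"surprisal({word!r} | {prev!r}) = {bigram_surprisal(prev, word):.2f} bits")
```

A Transformer-based surprisal estimate would follow the same definition but replace the bigram probability with the model's softmax probability for the next token given the full preceding context.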
Speaking Rate in 3-4-Year-Old Children: Its Correlation with Gesture Rate and Word Learning
Past research has shown that children younger than three often use gesture to supplement speech rather than as a system fully integrated with speech, and that the relationship between speech and gesture may relate to vocabulary development. However, such a relationship is unknown in 3-4-year-old children, a period in which we can capture key developmental changes from using gestures alone to using them along with speech. Using a new corpus of semi-naturalistic interaction between caregivers and their 3-4-year-old children (ECOLANG Corpus), this study investigates (1) the effect of age on children's speaking and gesture rate, (2) the relationship between speaking and gesture rates and (3) their correlation with word learning. Specifically, we studied speaking and gesture rates of 32 English-speaking children while talking with their caregivers about sets of pre-selected toys. The children completed a vocabulary test at the time of the experiment and one year later. Results show that there was no effect of age on speaking and gesture rates at this age range, but we found that children with a fast speaking rate also had a higher gesture rate. Additionally, neither speaking rate nor gesture rate correlated with word learning. Thus, our findings show that by this age, children use gestures that are integrated with speech, and their relationship is no longer a predictor of vocabulary learning. We speculate that the transition in the relationship is mainly a result of enhanced conceptual representation ability.
Poor written pragmatic skills are associated with internalising symptoms in childhood: evidence from a UK birth cohort study
Introduction: This study examined the relation between pragmatic language and internalising (depressive and anxiety) symptoms in 11-year-olds, using data from the 1958 British birth cohort study.
Methods: The cohort children were asked at age 11 to write an essay on their life as they imagined it would be at age 25. We analysed 200 of these essays for relevance, organisation and context-dependent references.
Results: We found associations between these aspects of pragmatic language and children's internalising symptom scores across parent and teacher ratings, even after adjustment for cognitive ability, socioeconomic position and structural language. Most notably, children writing more coherent essays had fewer teacher-rated internalising symptoms, after adjustment for confounders. Additionally, children who provided more relevant and varied information about their imagined future home-lives had fewer parent-rated internalising symptoms, after adjustment for confounders.
Discussion: The unique associations between pragmatic language skills and internalising symptoms observed are notable but preliminary, highlighting both the need for further research and potential applications for risk-assessment tools.
From Words to Behaviour via Semantic Networks
The contents and structure of semantic networks have been the focus of much recent research, with major advances in the development of distributional models. In parallel, connectionist modeling has extended our knowledge of the processes engaged in semantic activation. However, these two lines of investigation have rarely been brought together. Here, starting from a standard textual model of semantics, we allow activation to spread throughout its associated semantic network, as dictated by the patterns of semantic similarity between words. We find that the activation profile of the network, measured at various time points, can successfully account for response times in the lexical decision task, as well as for subjective concreteness and imageability ratings.
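The core mechanism described above, activation diffusing over a similarity-weighted word network and being read out at several time points, can be sketched as follows. This is a minimal illustration and not the authors' model: the word list, the random placeholder vectors, and the retention parameter are all assumptions standing in for a real distributional model.

```python
# Minimal sketch of spreading activation over a word-similarity network.
import numpy as np

rng = np.random.default_rng(0)
words = ["dog", "cat", "bone", "table", "idea"]
vectors = rng.normal(size=(len(words), 50))   # placeholder word embeddings

# Cosine similarities, negative values clipped to zero, self-links removed,
# rows normalised so each word passes on a fixed share of its activation.
unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
sim = np.clip(unit @ unit.T, 0.0, None)
np.fill_diagonal(sim, 0.0)
weights = sim / (sim.sum(axis=1, keepdims=True) + 1e-12)

def spread(cue: str, steps: int, retention: float = 0.5) -> np.ndarray:
    """Activation of every word after `steps` updates from a single cue."""
    activation = np.zeros(len(words))
    activation[words.index(cue)] = 1.0
    for _ in range(steps):
        activation = retention * activation + (1 - retention) * (weights.T @ activation)
    return activation

# Activation profile at several time points.
for t in (1, 3, 10):
    profile = spread("dog", steps=t)
    print(f"t={t}:", {w: round(float(a), 3) for w, a in zip(words, profile)})
```

Activation values at a given time step could then serve as predictors of behavioural measures such as lexical decision times, in line with the analysis the abstract describes.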
Iconicity as a General Property of Language: Evidence from Spoken and Signed Languages
Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings found in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to "hook up" to motor, perceptual, and affective experience.
Processing advantage for emotional words in bilingual speakers
Effects of emotion on word processing are well established in monolingual speakers. However, studies that have assessed whether affective features of words undergo the same processing in a native and a non-native language have provided mixed results: studies that found differences between L1 and L2 processing attributed them to the fact that a second language (L2) learned late in life would not be processed affectively, because affective associations are established during childhood. Other studies suggest that adult learners show similar effects of emotional features in L1 and L2. Differences in affective processing of L2 words can be linked to age and context of learning, proficiency, language dominance, and degree of similarity between the L2 and the L1. Here, in a lexical decision task on tightly matched negative, positive and neutral words, highly proficient English speakers from typologically different L1s showed the same facilitation in processing emotionally valenced words as native English speakers, regardless of their L1, the age of English acquisition or the frequency and context of English use.
- …