THE CHILD AND THE WORLD: How Children acquire Language
Over the last few decades, research into child language acquisition has been revolutionized by ingenious new techniques that allow researchers to investigate what infants (that is, children not yet able to speak) can perceive when exposed to a stream of speech sound, and the discriminations they can make between different speech sounds, different speech-sound sequences, and different words. However, on the central features of the mystery, the extraordinarily rapid acquisition of the lexicon and of complex syntactic structures, little solid progress has been made. The questions being researched are how infants acquire and produce the speech sounds (phonemes) of the community language; how infants find words in the stream of speech; and how they link words to perceived objects or actions, that is, discover meanings. In a recent general review in Nature of children's language acquisition, Patricia Kuhl also asked why we do not learn new languages as easily at 50 as at 5, and why computers have not cracked the human linguistic code. The motor theory of language function and origin makes possible a plausible account of child language acquisition generally, from which answers to these further questions can also be derived. Why computers have so far been unable to 'crack' the language problem becomes apparent in the light of the motor theory account: computers can have no natural relation between words and their meanings; they have no conceptual store to which the network of words is linked, nor do they have the innate aspects of language functioning represented by function words; computers have no direct links between speech sounds and movement patterns, and they do not have the instantly integrated neural patterning underlying thought, since they necessarily operate serially and hierarchically. Adults find the acquisition of a new language much more difficult than children do because they are already neurally committed to the link between the words of their first language and the elements in their conceptual store. A second language being acquired by an adult is in direct competition for neural space with the network structures established for the first language.
The use and function of gestures in word-finding difficulties in aphasia
Background: Gestures are spontaneous hand and arm movements that are part of everyday communication. The roles of gestures in communication are disputed. Most agree that they augment the information conveyed in speech. More contentiously, some argue that they facilitate speech, particularly when word-finding difficulties (WFD) occur. Exploring gestures in aphasia may further illuminate their role.
Aims: This study explored the spontaneous use of gestures in the conversation of participants with aphasia (PWA) and neurologically healthy participants (NHP). It aimed to examine the facilitative role of gesture by determining whether gestures particularly accompanied WFD and whether those difficulties were resolved.
Methods & Procedures: Spontaneous conversation data were collected from 20 PWA and 21 NHP. Video samples were analysed for gesture production, speech production, and WFD. Analysis 1 examined whether the production of semantically rich gestures in these conversations was affected by whether the person had aphasia, and/or whether there were difficulties in the accompanying speech. Analysis 2 identified all WFD in the data and examined whether these were more likely to be resolved if accompanied by a gesture, again for both groups of participants.
Outcomes & Results: Semantically rich gestures were frequently employed by both groups of participants, but with no effect of group. There was an effect of the accompanying speech, with gestures occurring most commonly alongside resolved WFD. An interaction showed that this was particularly the case for PWA. NHP, on the other hand, employed semantically rich gestures most frequently alongside fluent speech. Analysis 2 showed that WFD were common in both groups of participants. Unsurprisingly, these were more likely to be resolved for NHP than PWA. For both groups, resolution was more likely if a WFD was accompanied by a gesture.
Conclusions: These findings shed light on the different functions of gesture within conversation. They highlight the importance of gesture during WFD, both in aphasic and neurologically healthy language, and suggest that gesture may facilitate word retrieval.
Conceptual and lexical effects on gestures: the case of vertical spatial metaphors for time in Chinese
The linguistic metaphors of time appear to influence how people gesture about time. This study finds that Chinese-English bilinguals produce more vertical gestures when talking about Chinese time references with vertical spatial metaphors than (1) when talking about time conceptions in the English translations, and (2) when talking about Chinese time references with no spatial metaphors. Additionally, Chinese-English bilinguals prefer vertical gestures to lateral gestures when perceiving Chinese time references with vertical spatial metaphors and the corresponding English translations, whereas there is no such preference when perceiving time references without spatial metaphors. Furthermore, this vertical tendency is not due to the fact that vertical gestures are generally less ambiguous than lateral gestures for addressees. In conclusion, the vertical gesturing about time by Chinese-English bilinguals is shaped by both the stable language-specific conceptualisations and the online changes in linguistic choices.
The Verbal and Non-Verbal Signals of Depression -- Combining Acoustics, Text and Visuals for Estimating Depression Level
Depression is a serious medical condition suffered by a large number of people around the world. It significantly affects the way one feels, causing a persistent lowering of mood. In this paper, we propose a novel attention-based deep neural network which facilitates the fusion of various modalities. We use this network to regress the depression level. Acoustic, text and visual modalities have been used to train our proposed network. Various experiments have been carried out on the benchmark dataset, namely, the Distress Analysis Interview Corpus - Wizard of Oz (DAIC-WOZ). From the results, we empirically justify that the fusion of all three modalities helps in giving the most accurate estimation of depression level. Our proposed approach outperforms the state-of-the-art by 7.17% on root mean squared error (RMSE) and 8.08% on mean absolute error (MAE).
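The two evaluation metrics named in the abstract, RMSE and MAE, can be sketched as follows; the depression scores and predictions below are made-up illustrations, not values from the paper or the DAIC-WOZ corpus.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between true and predicted depression scores."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error between true and predicted depression scores."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Hypothetical severity scores for five interviews (illustrative only)
truth = [4, 10, 15, 7, 20]
preds = [5, 9, 17, 6, 18]
print(rmse(truth, preds))  # sqrt(mean of squared errors) ≈ 1.483
print(mae(truth, preds))   # mean of absolute errors = 1.4
```

RMSE penalizes large errors more heavily than MAE, which is why papers such as this one typically report both.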
A systematic investigation of gesture kinematics in evolving manual languages in the lab
Silent gestures consist of complex multi-articulatory movements but are now primarily studied through categorical coding of the referential gesture content. The relation of categorical linguistic content with continuous kinematics is therefore poorly understood. Here, we reanalyzed the video data from a gestural evolution experiment (Motamedi, Schouwstra, Smith, Culbertson, & Kirby, 2019), which showed increases in the systematicity of gesture content over time. We applied computer vision techniques to quantify the kinematics of the original data. Our kinematic analyses demonstrated that gestures become more efficient and less complex in their kinematics over generations of learners. We further detect the systematicity of gesture form at the level of the gesture kinematic interrelations, which directly scales with the systematicity obtained from semantic coding of the gestures. Thus, from continuous kinematics alone, we can tap into linguistic aspects that were previously only approachable through categorical coding of meaning. Finally, going beyond issues of systematicity, we show how unique gesture kinematic dialects emerged over generations as isolated chains of participants gradually diverged over iterations from other chains. We thereby conclude that gestures can come to embody the linguistic system at the level of interrelationships between communicative tokens, which should calibrate our theories about form and linguistic content.
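The core move in the abstract, relating the structure of kinematic interrelations to the structure obtained from semantic coding, can be illustrated with a minimal sketch: compute pairwise distances between gesture trajectories and correlate that distance structure with a semantic distance structure over the same pairs. All data here are made up, and the plain Euclidean-distance/Pearson pipeline is an illustrative assumption, not the paper's actual method.

```python
import numpy as np

def pairwise_trajectory_distances(trajectories):
    """Mean Euclidean distance between equal-length 2-D keypoint
    trajectories, returned as a condensed vector of pairwise distances."""
    n = len(trajectories)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            # Frame-by-frame distance between the two wrist paths, averaged
            out.append(np.mean(np.linalg.norm(trajectories[i] - trajectories[j], axis=1)))
    return np.array(out)

# Toy data: four gestures, each a 10-frame (x, y) wrist trajectory
rng = np.random.default_rng(0)
gestures = [rng.normal(size=(10, 2)) for _ in range(4)]
kin = pairwise_trajectory_distances(gestures)  # 6 pairwise distances

# Hypothetical condensed vector of semantic distances for the same 6 pairs,
# e.g. derived from categorical coding of gesture meaning
sem = np.array([0.1, 0.8, 0.9, 0.7, 0.85, 0.2])

# Correlate the two distance structures (simple Pearson r; the paper's
# exact statistic may differ)
r = np.corrcoef(kin, sem)[0, 1]
print(round(r, 3))
```

A high correlation between the two condensed distance vectors would indicate that gestures with similar kinematics also tend to have similar coded meanings, i.e. systematicity detectable from kinematics alone.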