
    Sign Language and Successful Bilingual Development of Deaf Children

    This paper reviews research on the language development of deaf children, comparing those who have early access to a natural sign language with those who do not. Early learning of a sign language does not create concerns for the child's development of other languages, speech, reading, or other cognitive skills. In fact, it can contribute directly to the establishment of more of the high-level skills needed for successful bilingual development. The global benefit of learning a sign language as a first language is that, in the resulting bilingual communicative setting, teachers and learners can take advantage of one language to assist in acquiring the other and in the transfer of general knowledge. As part of this discussion, English and ASL are compared as representatives of spoken and signed natural languages to provide explicit examples of their similarities and differences.

    Prosody in Sign Languages

    This chapter addresses the debate over whether nonmanuals (head, face, body) are prosodic by exploring in detail how prosody is structured in speech and what the parallels and differences in sign might be. Prosody is divided into two parts: rhythmic phrasing (timing, syllables, stress) and intonation. To maximize accessibility, each part opens with an introduction to what is known for speech, followed by what is known and/or claimed for sign languages. With the exception of the internal structure of syllables, sign languages are very similar to spoken languages in the rhythmic domain. In the intonational domain, the parallels are less strong, in part because analogies of nonmanual functions to spoken intonation tend to be based on older, simpler models of intonation. Much more detailed research on sign languages is needed to catch up with recent research on spoken intonation.

    #ALL versus ALL in American Sign Language (ASL)

    This paper extends a visible pattern (“iconicity”) that has been observed in sign language verbs and adjectives to quantification in American Sign Language (ASL). The Event Visibility Hypothesis (EVH) states that boundedness is morphophonologically encoded as a rapid deceleration of movement at the end of a sign (known as end-marking). Here the EVH is applied to the two ASL quantifiers glossed #ALL and ALL. Doing so accounts for the semantic distinction between them: ALL is definite (bounded), whereas #ALL is underspecified for definiteness (unbounded).
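
    As a minimal illustration of how this contrast can be written down, here is a sketch in generalized-quantifier notation; the notation and the domain variables C and D are our assumptions, not the paper's own formalism.

        % A sketch in assumed generalized-quantifier notation (\llbracket
        % requires the stmaryrd package); not the paper's own formalism.
        % ALL: bounded, hence definite -- its domain C is contextually closed.
        \llbracket \mathrm{ALL} \rrbracket = \lambda P.\ \forall x\,[x \in C \rightarrow P(x)]
        % #ALL: unbounded -- its domain D need not be contextually closed,
        % so definiteness is left underspecified.
        \llbracket \#\mathrm{ALL} \rrbracket = \lambda P.\ \forall x\,[x \in D \rightarrow P(x)]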

    EEG analysis based on dynamic visual stimuli: best practices in analysis of sign language data

    This paper reviews best practices for experimental design and analysis in sign language research using neurophysiological methods with high temporal resolution, such as electroencephalography (EEG), and identifies methodological challenges in neurophysiological research on natural sign language processing. In particular, we outline the considerations for generating linguistically and physically well-controlled stimuli, accounting for 1) the layering of manual and non-manual information at different timescales, 2) possible unknown linguistic and non-linguistic visual cues that can affect processing, 3) variability across linguistic stimuli, and 4) predictive processing. Two specific concerns regarding the analysis and interpretation of observed event-related potential (ERP) effects for dynamic stimuli are discussed in detail. First, we discuss the “trigger/effect assignment problem”, which describes the difficulty of determining the time point for calculating ERPs. This issue is related to the problem of determining the onset of a critical sign (i.e., stimulus onset time), and the lack of clarity as to how the border between lexical (sign) and transitional movement (the motion trajectory between individual signs) should be defined. Second, we discuss possible differences in the dynamics within signing that might influence ERP patterns and should be controlled for when creating natural sign language material for ERP studies. In addition, we outline alternative approaches to EEG data analysis for natural signing stimuli, such as timestamping the continuous EEG with trigger markers for each potentially relevant cue in dynamic stimuli. Throughout the discussion, we present empirical evidence for the need to account for the dynamic, multi-channel, multi-timescale visual signal that characterizes sign languages, in order to ensure the ecological validity of neurophysiological research on sign languages.
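
    To make the recommended timestamping approach concrete, here is a minimal sketch using MNE-Python; the file name, annotation labels, cue times, and epoch windows are illustrative assumptions, not the authors' actual pipeline.

        # Sketch: timestamp continuous EEG with a trigger marker for each
        # potentially relevant cue, then epoch per cue type. File name,
        # labels, times, and windows are illustrative assumptions.
        import mne

        raw = mne.io.read_raw_fif("signing_session_raw.fif", preload=True)

        # Hypothetical cue-level annotations for one stimulus: sign onset,
        # end of the transitional movement, and a nonmanual onset.
        raw.set_annotations(mne.Annotations(
            onset=[12.35, 12.61, 12.80],    # seconds into the recording
            duration=[0.0, 0.0, 0.0],
            description=["sign_onset", "transition_end", "nonmanual_onset"],
        ))

        # Convert annotations to events and epoch around each cue, so ERPs
        # can be computed per cue rather than once per whole sign.
        events, event_id = mne.events_from_annotations(raw)
        epochs = mne.Epochs(raw, events, event_id=event_id,
                            tmin=-0.2, tmax=0.8, baseline=(-0.2, 0.0))
        evoked_sign_onset = epochs["sign_onset"].average()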

    Visual boundaries in sign motion: processing with and without lip-reading cues

    Sign languages demonstrate a higher degree of iconicity than spoken languages. Studies of a number of unrelated sign languages show that the event structure of verb signs is reflected in their phonological form (Wilbur 2008; Malaia & Wilbur 2012; Krebs et al. 2021). Previous research showed that hearing non-signers (with no prior exposure to sign language) can use the iconicity inherent in the visual dynamics of a verb sign to correctly identify its event structure (telic vs. atelic). In two EEG experiments, hearing non-signers were presented with telic and atelic verb signs unfamiliar to them, which they had to classify in a two-choice lexical decision task in their native language. The first experiment assessed the timeline of neural processing mechanisms in non-signers processing telic/atelic signs without access to lip-reading cues in their native language, to understand the pathways for incorporating physical perceptual motion features into linguistic processing. The second experiment further probed the impact of visual information provided by lip-reading (speech decoding based on visual information from the face of the speaker, most importantly the lips) on the processing of telic/atelic signs in non-signers.

    The Compositional Nature of Verb and Argument Representations in the Human Brain

    How does the human brain represent simple compositions of objects, actors, and actions? We had subjects view action-sequence videos during neuroimaging (fMRI) sessions and identified lexical descriptions of those videos by decoding (SVM) the brain representations based only on their fMRI activation patterns. As a precursor to this result, we had demonstrated that we could reliably and with high probability decode action labels corresponding to one of six action videos (dig, walk, etc.), again while subjects viewed the action sequence during scanning (fMRI). This result was replicated at two different brain-imaging sites with common protocols but different subjects, showing common brain areas, including areas known for episodic memory (PHG, MTL, high-level visual pathways, etc., i.e. the 'what' and 'where' systems, and TPJ, i.e. 'theory of mind'). Given these results, we were also able to successfully show a key aspect of language compositionality based on simultaneous decoding of object class and actor identity. Finally, combining these novel steps in 'brain reading' allowed us to accurately estimate brain representations supporting compositional decoding of a complex event composed of an actor, a verb, a direction, and an object.
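
    For readers unfamiliar with this kind of decoding, here is a minimal sketch of SVM classification of activation patterns using scikit-learn on synthetic data; the voxel count, trial count, and cross-validation scheme are stand-in assumptions, not the study's protocol.

        # Sketch: linear-SVM decoding of one-of-six action labels from
        # (synthetic) fMRI activation patterns; chance level is 1/6.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 120, 5000
        X = rng.standard_normal((n_trials, n_voxels))  # activation patterns
        y = rng.integers(0, 6, size=n_trials)          # six action labels

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"mean decoding accuracy: {scores.mean():.2%}")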

    The Compositional Nature of Event Representations in the Human Brain

    How does the human brain represent simple compositions of constituents: actors, verbs, objects, directions, and locations? Subjects viewed videos during neuroimaging (fMRI) sessions, from which sentential descriptions of those videos were identified by decoding the brain representations based only on their fMRI activation patterns. Constituents (e.g., fold and shirt) were independently decoded from a single presentation. Independent constituent classification was then compared to joint classification of aggregate concepts (e.g., fold-shirt); results were similar as measured by accuracy and correlation. The brain regions used for independent constituent classification are largely disjoint and largely cover those used for joint classification. This allows recovery of sentential descriptions of stimulus videos by composing the results of the independent constituent classifiers. Furthermore, classifiers trained on the words one set of subjects thinks of when watching a video can recognise sentences a different subject thinks of when watching a different video.
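
    The contrast between independent constituent classification and joint classification can be sketched as follows; the classifiers, label sets, and synthetic data are illustrative assumptions rather than the paper's materials.

        # Sketch: decode verb and object independently, compose the
        # predictions, and compare against decoding the aggregate
        # verb-object concept directly. Data are synthetic stand-ins.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(1)
        n, d = 200, 1000
        X = rng.standard_normal((n, d))        # activation patterns
        verbs = rng.integers(0, 3, size=n)     # e.g. fold, pick up, put down
        objects = rng.integers(0, 3, size=n)   # e.g. shirt, chair, bag

        # Independent constituent classifiers, composed post hoc.
        verb_hat = cross_val_predict(SVC(kernel="linear"), X, verbs, cv=5)
        obj_hat = cross_val_predict(SVC(kernel="linear"), X, objects, cv=5)
        composed = verb_hat * 3 + obj_hat      # e.g. fold-shirt

        # Joint classifier over the aggregate concept.
        joint = verbs * 3 + objects
        joint_hat = cross_val_predict(SVC(kernel="linear"), X, joint, cv=5)

        print("composed accuracy:", np.mean(composed == joint))
        print("joint accuracy:   ", np.mean(joint_hat == joint))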

    Seeing is Worse than Believing: Reading People’s Minds Better than Computer-Vision Methods Recognize Actions

    We had human subjects perform a one-out-of-six-class action recognition task from video stimuli while undergoing functional magnetic resonance imaging (fMRI). Support-vector machines (SVMs) were trained on the recovered brain scans to classify actions observed during imaging, yielding average classification accuracy of 69.73% when tested on scans from the same subject and 34.80% when tested on scans from different subjects. An apples-to-apples comparison was performed with all publicly available software that implements state-of-the-art action recognition, on the same video corpus, with the same cross-validation regimen and the same partitioning into training and test sets, yielding classification accuracies between 31.25% and 52.34%. This indicates that one can read people's minds better than state-of-the-art computer-vision methods can perform action recognition.

    This work was supported, in part, by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. AB, DPB, NS, and JMS were supported, in part, by Army Research Laboratory (ARL) Cooperative Agreement W911NF-10-2-0060; AB, in part, by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216; WC, CX, and JJC, in part, by ARL Cooperative Agreement W911NF-10-2-0062 and NSF CAREER grant IIS-0845282; CDF, in part, by NSF grant CNS-0855157; CH and SJH, in part, by the McDonnell Foundation; and BAP, in part, by Science Foundation Ireland grant 09/IN.1/I2637.
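
    The apples-to-apples evaluation hinges on both approaches sharing one cross-validation regimen; a minimal sketch of that setup in scikit-learn is below, with the data, fold count, and baseline model chosen purely for illustration.

        # Sketch: score two different methods on exactly the same
        # train/test partitions by fixing one cross-validation splitter.
        import numpy as np
        from sklearn.model_selection import StratifiedKFold, cross_val_score
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(2)
        X = rng.standard_normal((180, 400))  # scans or video descriptors
        y = rng.integers(0, 6, size=180)     # one-out-of-six action classes

        cv = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
        for name, model in [("fMRI SVM", SVC(kernel="linear")),
                            ("vision baseline", KNeighborsClassifier())]:
            scores = cross_val_score(model, X, y, cv=cv)
            print(f"{name}: {scores.mean():.2%}")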