
    Three event-related potential studies on phonological, morpho-syntactic, and semantic aspects

    Sign languages have often been the subject of imaging studies investigating the underlying neural correlates of sign language processing. By contrast, much less research has been conducted on the time-course of sign language processing: only a small number of event-related potential (ERP) studies investigate semantic or morpho-syntactic anomalies in signed sentences. Due to specific properties of the manual-visual modality, sign languages differ from spoken languages in two respects: on the one hand, they are produced in a three-dimensional signing space; on the other hand, sign languages can use several (manual and nonmanual) articulators simultaneously. Thus, sign languages have modality-specific characteristics that have an impact on the way they are processed. This thesis presents three ERP studies on different linguistic aspects processed in German Sign Language (DGS) sentences. Chapter 1 investigates the hypothesis of a forward-model perspective on prediction. In a semantic expectation mismatch design, deaf native signers saw videos with DGS sentences that ended in semantically expected or unexpected signs. Since sign languages entail relatively long transition phases between one sign and the next, we tested whether a prediction error for the upcoming sign is already detectable prior to the actual sign onset. Unexpected signs engendered an N400 prior to the critical sign onset, which was thus elicited by properties of the transition phase. Chapter 2 presents a priming study on cross-modal cross-language co-activation. Deaf bimodal bilingual participants saw DGS sentences that contained prime-target pairs in one of two priming conditions. In overt phonological priming, prime and target signs were phonologically minimal pairs; in covert orthographic priming, the German translations of prime and target were orthographic minimal pairs, but there was no overlap between the signs.
Target signs with overt phonological or with covert orthographic overlap engendered a reduced negativity in the electrophysiological signal. Thus, deaf bimodal bilinguals unconsciously co-activate their second language, (written) German, while processing sentences in their native sign language. Chapter 3 presents two ERP studies investigating morpho-syntactic aspects of agreement in DGS. One study tested DGS sentences with incorrect, i.e. unspecified, agreement verbs; the other tested DGS sentences with plain verbs incorrectly inflected for 3rd-person agreement. Agreement verbs that ended in an unspecified location engendered two independent ERP effects: a positive deflection on posterior electrodes (220-570 ms relative to the trigger nonmanual cues) and an anterior effect on left frontal electrodes (300-600 ms relative to the sign onset). In contrast, incorrect plain verbs resulted in a broadly distributed positive deflection (420-730 ms relative to the mismatch onset). These results contradict previous findings on agreement violations in sign languages and are discussed as reflecting a violation of well-formedness or processes of context-updating. The stimulus materials of all four studies were presented as continuously signed sentences in non-manipulated videos. This methodological innovation enabled a distinctive perspective on the time-course of sign language processing.

    Parametric synthesis of sign language

    The isolation of the deaf community from mainstream society is in part due to the lack of knowledge most hearing people have of sign language. To most, there seems to be little need to learn a language spoken by such a small minority unless, perhaps, a relative is unable to hear. Even with a desire to learn, the task may seem insurmountable due to the unique formational and grammatical rules of the language. This linguistic rift has led to the call for an automatic translation system with the ability to take voice or written text as input and produce a comprehensive sequence of signed gestures through computing. This thesis focused on developing the foundation of a system that would receive English-language input and generate a sequence of related signed gestures, each synthesized from its basic kinematic parameters. A technique of sign specification for a computer-based translation system was developed through the use of Python objects and functions. Sign definitions, written as Python algorithms, were used to drive the simulation engine of the human-modeling software known as Jack. This research suggests that 3-dimensional computer graphics can be utilized to produce sign representations that are intelligible and natural in appearance.
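As an illustration of what such a parametric sign specification might look like, here is a minimal Python sketch: a sign object holding kinematic parameters, plus a method that expands them into keyframes an animation engine could consume. All class names, fields, and values here are hypothetical, not the actual interface used to drive Jack.

```python
from dataclasses import dataclass

@dataclass
class SignSpec:
    """Hypothetical parametric description of one sign (illustrative only)."""
    gloss: str
    handshape: str           # e.g. "flat-B", "fist-A"
    start_location: tuple    # (x, y, z) in signer space, metres
    end_location: tuple
    duration_s: float        # nominal signing time

    def keyframes(self, fps: int = 30):
        """Linearly interpolate hand position between start and end location."""
        n = max(2, int(self.duration_s * fps))
        frames = []
        for i in range(n):
            t = i / (n - 1)
            pos = tuple(a + t * (b - a)
                        for a, b in zip(self.start_location, self.end_location))
            frames.append({"t": i / fps, "handshape": self.handshape, "pos": pos})
        return frames

# A sign definition written as plain Python data, then expanded to keyframes.
hello = SignSpec("HELLO", "flat-B", (0.2, 1.5, 0.3), (0.5, 1.6, 0.4), 0.5)
frames = hello.keyframes()
```

A real system would add handshape blending, orientation, and nonmanual channels, but the point of the parametric approach is just this: a sign is data, and the animation is computed from it.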

    Deaf children need language, not (just) speech

    Deaf and Hard of Hearing (DHH) children need to master at least one language (spoken or signed) to reach their full potential. Providing access to a natural sign language supports this goal. Despite evidence that natural sign languages are beneficial to DHH children, many researchers and practitioners advise families to focus exclusively on spoken language. We critique the Pediatrics article ‘Early Sign Language Exposure and Cochlear Implants’ (Geers et al., 2017) as an example of research that makes unsupported claims against the inclusion of natural sign languages. We refute claims (1) that there are harmful effects of sign language and (2) that listening and spoken language are necessary for optimal development of deaf children. While practical challenges remain (and are discussed) for providing a sign language-rich environment, research evidence suggests that such challenges are worth tackling in light of the host of benefits natural sign languages provide for DHH children – especially in the prevention and reduction of language deprivation. Accepted manuscript.

    Towards Subject Independent Sign Language Recognition: A Segment-Based Probabilistic Approach

    Ph.D. (Doctor of Philosophy)

    Generating realistic, animated human gestures in order to model, analyse and recognize Irish Sign Language

    The aim of this thesis is to generate a gesture recognition system which can recognize several signs of Irish Sign Language (ISL). This project is divided into three parts. The first part provides background information on ISL. An overview of the ISL structure is a prerequisite to identifying and understanding the difficulties encountered in the development of a recognition system. The second part involves the generation of a data repository of synthetic and real-time video. Initially, the synthetic data is created in a 3D animation package in order to simplify the creation of motion variations of the animated signer. The animation environment in our implementation allows for the generation of different versions of the same gesture with slight variations in the parameters of the motion. Secondly, a database of ISL real-time video was created. This database contains 1400 different signs, including motion variation in each gesture. The third part details, step by step, my novel classification system and the associated prototype recognition system. The classification system is constructed as a decision tree to identify each sign uniquely. The recognition system is based on only one component of the classification system and has been implemented as a Hidden Markov Model (HMM).
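To make the HMM-based recognition stage concrete, here is a minimal sketch of a discrete-observation forward algorithm that scores a quantized motion-feature sequence against per-sign models and classifies by maximum likelihood. All parameters and sign labels are toy values of my own invention, not taken from the thesis.

```python
def forward_likelihood(obs, pi, A, B):
    """P(obs | model) for a discrete HMM.
    pi[i]: initial state probs; A[i][j]: transition; B[i][o]: emission."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Two toy 2-state left-to-right models for two hypothetical signs.
pi = [1.0, 0.0]
A = [[0.7, 0.3],
     [0.0, 1.0]]
B_up   = [[0.9, 0.1], [0.2, 0.8]]   # sign whose quantized feature moves 0 -> 1
B_down = [[0.1, 0.9], [0.8, 0.2]]   # the reverse pattern
models = {"UP": B_up, "DOWN": B_down}

obs = [0, 0, 1, 1]                  # quantized motion features from one video
best = max(models, key=lambda g: forward_likelihood(obs, pi, A, models[g]))
```

In a real recognizer each sign would have its own topology and the emission symbols would come from vector-quantized hand features, but the decision rule — pick the model with the highest forward likelihood — is the same.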

    Regularization of Dynamic Time Warping Barycenter Averaging, with Applications in Sign Classification

    Sign language synthesis is a useful tool in addressing many of the issues faced by deaf communities. Sign languages are as different from spoken languages as spoken languages are from each other, and hence deaf persons raised learning sign language are not automatically proficient in communicating in written language. Existing methods of generating signing avatars are clunky and often unintuitive; hence the ability to classify gestures common to sign language using only data recorded by video would simplify the process dramatically. Methods of gesture classification require a way to compare time series, and often (in particular, for k-means clustering) require a notion of average or mean. However, computing the average of a collection of time series is difficult. A time series carries no meaning in the index of a particular frame; only the order of features, not their time indices, confers meaning. Dynamic time warping was developed as a similarity measure between time series, but does not in itself provide a method of averaging. Recently, a method of averaging called DTW Barycenter Averaging (DBA) was developed that is consistent with dynamic time warping. This method produces results suitable for classification and clustering of time series data, and is based on minimizing the within-group sum of squares (WGSS) of the data. Because dynamic time warping is time-scale invariant, the average is not unique; other warpings of an average may also be averages. We propose a modification to DBA that allows for more flexibility in choosing the time scale of the resulting average. Time-penalized DBA (TBA) adds a cooling regularization term to WGSS, making the problem well-posed. The regularization term penalizes the amount of total warping between the average and each other time series; hence features in the average appear closer to the average time at which they appear in the collection.
    We cool the regularization term to prevent it from altering the solution in undesirable ways. Time-penalized DBA is an effective method to average a collection both spatially and temporally, and also reduces the algorithm's sensitivity to the initial guess. Unfortunately, the extra parameters it requires make its use more complicated. We will show, for a selection of parameters, that TBA performs favorably over classical DBA on both artificial signals and on data captured from videos of signs from American Sign Language.
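To make the averaging problem concrete, here is a minimal Python sketch of classical DTW alignment plus one unregularized DBA update, in which each frame of the average is re-estimated as the mean of all frames aligned to it, decreasing the WGSS. The time-penalized variant described above would add a warping-cost term inside the alignment; the sequences below are toy 1-D data, not thesis material.

```python
def dtw_path(a, b):
    """Squared-distance DTW between 1-D sequences a and b.
    Returns (total cost, alignment path as (i, j) index pairs)."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = (a[i-1] - b[j-1]) ** 2 + min(D[i-1][j], D[i][j-1], D[i-1][j-1])
    # Backtrack the optimal warping path from (n, m) to (1, 1).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((D[i-1][j-1], i-1, j-1),
                      (D[i-1][j],   i-1, j),
                      (D[i][j-1],   i,   j-1))
    return D[n][m], path[::-1]

def dba_update(average, series):
    """One DBA iteration: each average frame becomes the mean of all
    frames (across the collection) that DTW aligns to it."""
    buckets = [[] for _ in average]
    for s in series:
        _, path = dtw_path(average, s)
        for i, j in path:
            buckets[i].append(s[j])
    return [sum(b) / len(b) for b in buckets]

seqs = [[0, 0, 1, 2, 2], [0, 1, 1, 2, 3], [0, 0, 2, 2, 3]]
avg = seqs[0][:]          # DBA needs an initial guess; any member works
for _ in range(5):
    avg = dba_update(avg, seqs)
```

The non-uniqueness discussed above shows up here directly: the result depends on the initial guess's length and time scale, which is exactly the degree of freedom the time penalty is meant to control.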

    Beyond lexical meaning: probabilistic models for sign language recognition

    Ph.D. (Doctor of Philosophy)

    English Reading Ability in Young Deaf Signers: An Investigation of Sentence Comprehension

    We performed a detailed investigation of the robust correlation between ASL and English reading ability in 54 deaf students aged 7;3 to 19;0. Skilled and unskilled signers were assessed on four English sentence structures (actives, passives, pronouns, reflexive pronouns) using a four-alternative forced-choice sentence-to-picture-matching task, providing a window into how ASL skill is related to English sentence comprehension. Of interest was the extent to which proficiency in L1 provided a foundation for L2 learning, as predicted by Cummins’ developmental interdependence hypothesis. Skilled signers outperformed unskilled signers on all sentence types. Error analysis indicated greater word-recognition difficulties in unskilled signers. Syntactic structures mapping directly from L1 to L2 were more accurately understood than structures mapping in less obvious ways, consistent with MacWhinney’s unified competition model. Our findings provide evidence that increased ASL ability supports English sentence comprehension at the levels of individual words and syntax.