
    Selectional Restrictions in HPSG

    Selectional restrictions are semantic sortal constraints imposed on the participants of linguistic constructions to capture contextually dependent constraints on interpretation. Despite their limitations, selectional restrictions have proven very useful in natural language applications, where they have been used frequently in word sense disambiguation, syntactic disambiguation, and anaphora resolution. Given their practical value, we explore two methods to incorporate selectional restrictions into HPSG, assuming that the reader is familiar with the theory. The first method employs HPSG's Background feature and a constraint-satisfaction component pipelined after the parser. The second method uses subsorts of referential indices, and blocks readings that violate selectional restrictions during parsing. While theoretically less satisfactory, we have found the second method particularly useful in the development of practical systems.
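    The second method above checks sortal constraints during parsing. A minimal sketch of such a check, with an invented toy sort hierarchy and predicate names (not the paper's actual HPSG feature geometry), might look like this:

    ```python
    # Toy sort hierarchy: each sort maps to its immediate supersort.
    # All sorts here are invented for illustration.
    SORT_HIERARCHY = {
        "human": "animate",
        "animal": "animate",
        "animate": "entity",
        "machine": "entity",
        "entity": None,
    }

    def is_subsort(sort, supersort):
        """True if `sort` equals or inherits from `supersort`."""
        while sort is not None:
            if sort == supersort:
                return True
            sort = SORT_HIERARCHY.get(sort)
        return False

    def satisfies(arg_sort, required_sort):
        """Check a selectional restriction: does the argument's sort
        fall under the sort the predicate requires?"""
        return is_subsort(arg_sort, required_sort)
    ```

    A parser using the subsort method would reject a reading as soon as `satisfies` fails, e.g. a verb requiring an `animate` subject would rule out a `machine`-sorted index.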

    A survey on mouth modeling and analysis for Sign Language recognition

    © 2015 IEEE. Around 70 million Deaf people worldwide use Sign Languages (SLs) as their native languages. At the same time, they have limited reading/writing skills in the spoken language. This puts them at a severe disadvantage in many contexts, including education, work, and use of computers and the Internet. Automatic Sign Language Recognition (ASLR) can support the Deaf in many ways, e.g. by enabling the development of systems for Human-Computer Interaction in SL and translation between sign and spoken language. Research in ASLR usually revolves around automatic understanding of manual signs. Recently, the ASLR research community has started to appreciate the importance of non-manuals, since they are related to the lexical meaning of a sign, the syntax, and the prosody. Non-manuals include body and head pose, movement of the eyebrows and the eyes, as well as blinks and squints. Arguably, the mouth is one of the most involved parts of the face in non-manuals. Mouth actions related to ASLR can be either mouthings, i.e. visual syllables articulated with the mouth while signing, or non-verbal mouth gestures. Both are very important in ASLR. In this paper, we present the first survey on mouth non-manuals in ASLR. We start by showing why mouth motion is important in SL and the relevant techniques that exist within ASLR. Since limited research has been conducted on automatic analysis of mouth motion in the context of ASLR, we proceed by surveying relevant techniques from the areas of automatic mouth expression and visual speech recognition which can be applied to the task. Finally, we conclude by presenting the challenges and potentials of automatic analysis of mouth motion in the context of ASLR.

    Does Meaning Evolve?

    A common method of improving how well a theory is understood is to compare it to another theory which has been better developed. Radical interpretation is a theory which attempts to explain how communication has meaning. Here, radical interpretation is treated as another time-dependent theory and compared to the time-dependent theory of biological evolution. Several similarities and differences are uncovered. Biological evolution can be gradual or punctuated. Whether radical interpretation is gradual or punctuated depends on how the question is framed: on the coarse-grained time scale it proceeds gradually, but on the fine-grained time scale it proceeds by punctuated equilibria. Biological evolution proceeds by natural selection; the counterpart to this is the increase in both correspondence and coherence. Exaptation, mutations, and spandrels have as counterparts metaphor, speech errors, and puns, respectively. Homologs and analogs have direct counterparts in specific words. The most important differences originate from the existence of a unit of inheritance (the traditional gene) in biological evolution: there is no such unit in language.

    Computer-based tracking, analysis, and visualization of linguistically significant nonmanual events in American Sign Language (ASL)

    Our linguistically annotated American Sign Language (ASL) corpora have formed a basis for research to automate detection by computer of essential linguistic information conveyed through facial expressions and head movements. We have tracked head position and facial deformations, and used computational learning to discern specific grammatical markings. Our ability to detect, identify, and temporally localize the occurrence of such markings in ASL videos has recently been improved by incorporation of (1) new techniques for deformable model-based 3D tracking of head position and facial expressions, which provide significantly better tracking accuracy and recover quickly from temporary loss of track due to occlusion; and (2) a computational learning approach incorporating 2-level Conditional Random Fields (CRFs), suited to the multi-scale spatio-temporal characteristics of the data, which analyses not only low-level appearance characteristics, but also the patterns that enable identification of significant gestural components, such as periodic head movements and raised or lowered eyebrows. Here we summarize our linguistically motivated computational approach and the results for detection and recognition of nonmanual grammatical markings; demonstrate our data visualizations, and discuss the relevance for linguistic research; and describe work underway to enable such visualizations to be produced over large corpora and shared publicly on the Web.

    Recognition of nonmanual markers in American Sign Language (ASL) using non-parametric adaptive 2D-3D face tracking

    This paper addresses the problem of automatically recognizing linguistically significant nonmanual expressions in American Sign Language from video. We develop a fully automatic system that is able to track facial expressions and head movements, and detect and recognize facial events continuously from video. The main contributions of the proposed framework are the following: (1) We have built a stochastic and adaptive ensemble of face trackers to address factors resulting in lost face track; (2) We combine 2D and 3D deformable face models to warp input frames, thus correcting for any variation in facial appearance resulting from changes in 3D head pose; (3) We use a combination of geometric features and texture features extracted from a canonical frontal representation. The proposed framework makes it possible to detect grammatically significant nonmanual expressions from continuous signing and to differentiate successfully among linguistically significant expressions that involve subtle differences in appearance. We present results based on a dataset of 330 sentences from videos that were collected and linguistically annotated at Boston University.

    PadChest: A large chest x-ray image dataset with multi-label annotated reports

    We present a large-scale, high-resolution labeled chest x-ray dataset for the automated exploration of medical images along with their associated reports. This dataset includes more than 160,000 images obtained from 67,000 patients, which were interpreted and reported by radiologists at San Juan Hospital (Spain) from 2009 to 2017, covering six different position views and additional information on image acquisition and patient demographics. The reports were labeled with 174 different radiographic findings, 19 differential diagnoses, and 104 anatomic locations, organized as a hierarchical taxonomy and mapped onto standard Unified Medical Language System (UMLS) terminology. Of these reports, 27% were manually annotated by trained physicians, and the remaining set was labeled using a supervised method based on a recurrent neural network with attention mechanisms. The labels generated were then validated on an independent test set, achieving a 0.93 Micro-F1 score. To the best of our knowledge, this is one of the largest public chest x-ray databases suitable for training supervised models on radiographs, and the first to contain radiographic reports in Spanish. The PadChest dataset can be downloaded from http://bimcv.cipf.es/bimcv-projects/padchest/
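    The 0.93 figure above is a micro-averaged F1 over multi-label predictions. A minimal sketch of how such a score is computed (toy data; not the PadChest evaluation code):

    ```python
    def micro_f1(gold, pred):
        """Micro-averaged F1 for multi-label data.
        gold, pred: lists of label sets, one per report.
        Counts are pooled over all reports before computing
        precision and recall, so frequent labels dominate."""
        tp = sum(len(g & p) for g, p in zip(gold, pred))  # true positives
        fp = sum(len(p - g) for g, p in zip(gold, pred))  # false positives
        fn = sum(len(g - p) for g, p in zip(gold, pred))  # false negatives
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)
    ```

    Micro-averaging pools counts across the full label set, which is the usual choice when, as here, label frequencies are highly skewed across 174 findings.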

    Does Meaning Evolve?

    A common method of making a theory more understandable is to compare it to another theory which has been better developed. Radical interpretation is a theory which attempts to explain how communication has meaning. Radical interpretation is treated as another time-dependent theory and compared to the time-dependent theory of biological evolution. The main reason for doing this is to find the nature of the time dependence; producing analogs between the two theories is a necessary prerequisite to this and brings up many problems. Once the nature of the time dependence is better known, it might allow the underlying mechanism to be uncovered. Several similarities and differences are uncovered; there appear to be more differences than similarities. Comment: title changed, completely rewritten; new version 37 pages, previous version 28 pages; to appear in Behaviour and Philosophy.

    Language control is not a one-size-fits-all languages process: Evidence from simultaneous interpretation students and the n-2 repetition cost

    Simultaneous interpretation is an impressive cognitive feat which necessitates the simultaneous use of two languages and therefore raises the question: how is language management accomplished during interpretation? One possibility is that both languages are maintained active and inhibitory control is reduced. To examine whether inhibitory control is reduced after experience with interpretation, students with varying experience were assessed on a three-language switching paradigm. This paradigm provides an empirical measure of the inhibition applied to abandoned languages, the n-2 repetition cost. The groups showed different patterns of n-2 repetition costs across the three languages. These differences, however, were not connected to experience with interpretation. Instead, they may be due to other language characteristics. Specifically, the L2 n-2 repetition cost negatively correlated with self-rated oral L2 proficiency, suggesting that language proficiency may affect the use of inhibitory control. The differences seen in the L1 n-2 repetition cost, alternatively, may be due to the differing predominant interactional contexts of the groups. These results suggest that language control may be more complex than previously thought, with different mechanisms used for different languages. Further, these data represent the first use of the n-2 repetition cost as a measure to compare language control between groups.
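    The n-2 repetition cost contrasts trials whose language matches the language used two trials back (ABA sequences) with those that do not (CBA). A minimal sketch of the trial coding and the cost, with an invented trial layout (the study's actual design and exclusion criteria may differ):

    ```python
    def classify_trials(languages):
        """Label each trial from the third onward:
        'repeat' if it repeats the immediately preceding language
        (typically excluded), 'aba' if it matches the language two
        trials back (n-2 repetition), else 'cba' (n-2 switch)."""
        labels = []
        for i in range(2, len(languages)):
            if languages[i] == languages[i - 1]:
                labels.append("repeat")
            elif languages[i] == languages[i - 2]:
                labels.append("aba")
            else:
                labels.append("cba")
        return labels

    def n2_repetition_cost(rts, labels):
        """Mean reaction time on ABA trials minus mean on CBA trials;
        a positive cost indicates lingering inhibition of the
        recently abandoned language."""
        aba = [rt for rt, lab in zip(rts, labels) if lab == "aba"]
        cba = [rt for rt, lab in zip(rts, labels) if lab == "cba"]
        return sum(aba) / len(aba) - sum(cba) / len(cba)
    ```

    Computed per language, this yields the per-language costs that the study compares across groups.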

    A Study of The Feasibility of Establishing a Legal and Court Interpretation Service in Cook County, Illinois

    The purpose of this study is to explore the feasibility of establishing a Legal and Court Interpreting Service, modeled upon Heartland Alliance's successful Medical Interpreting Services program. As conceptualized, the Legal and Court Interpreting Service would benefit Chicago's immigrant and refugee population in two ways: providing needed language interpretation services in the legal and court system of Cook County, and offering employment opportunities for immigrants and refugees.