
    Beyond lexical meaning: probabilistic models for sign language recognition

    Ph.D. (Doctor of Philosophy)

    Radiation Injury After a Nuclear Detonation: Medical Consequences and the Need for Scarce Resources Allocation

    A 10-kiloton (kT) nuclear detonation within a US city could expose hundreds of thousands of people to radiation. The Scarce Resources for a Nuclear Detonation Project was undertaken to guide community planning and response in the aftermath of a nuclear detonation, when demand will greatly exceed available resources. This article reviews the pertinent literature on radiation injuries from human exposures and animal models to provide a foundation for the triage and management approaches outlined in this special issue. Whole-body doses >2 Gy can produce clinically significant acute radiation syndrome (ARS), which classically involves the hematologic, gastrointestinal, cutaneous, and cardiovascular/central nervous systems. The severity and presentation of ARS are affected by several factors, including radiation dose and dose rate, interindividual variability in radiation response, type of radiation (eg, gamma alone, gamma plus neutrons), partial-body shielding, and possibly age, sex, and certain preexisting medical conditions. The combination of radiation with trauma, burns, or both (ie, combined injury) confers a worse prognosis than the same dose of radiation alone. Supportive care measures, including fluid support, antibiotics, and possibly myeloid cytokines (eg, granulocyte colony-stimulating factor), can improve the prognosis for some irradiated casualties. Finally, expert guidance and surge capacity for casualties with ARS are available from the Radiation Emergency Medical Management Web site and the Radiation Injury Treatment Network.

    Cultural differences in the decoding and representation of facial expression signals

    Summary. In this thesis, I will challenge one of the most fundamental assumptions of psychological science – the universality of facial expressions. I will do so by first reviewing the literature to reveal major flaws in the supporting arguments for universality. I will then present new data demonstrating how culture has shaped the decoding and transmission of facial expression signals. A summary of both sections is presented below.

    Review of the Literature

    To obtain a clear understanding of how the universality hypothesis developed, I will present the historical course of the emotion literature, reviewing relevant works supporting notions of a ‘universal language of emotion.’ Specifically, I will examine work on the recognition of facial expressions across cultures, as it constitutes a main component of the evidence for universality. First, I will reveal that a number of ‘seminal’ works supporting the universality hypothesis are critically flawed, precluding them from further consideration. Second, by questioning the validity of the statistical criteria used to demonstrate ‘universal recognition,’ I will show that long-standing claims of universality are both misleading and unsubstantiated. On a related note, I will detail the creation of the ‘universal’ facial expression stimulus set (Facial Action Coding System [FACS]-coded facial expressions) to reveal that it is in fact a biased, culture-specific representation of Western facial expressions of emotion. The implications for future cross-cultural work are discussed in relation to the limited FACS-coded stimulus set.

    Experimental Work

    In reviewing the literature, I will reveal a latent phenomenon which has so far remained unexplained – the East Asian (EA) recognition deficit. Specifically, EA observers consistently perform significantly more poorly when categorising certain ‘universal’ facial expressions compared to Western Caucasian (WC) observers – a surprisingly neglected finding given the importance of emotion communication for human social interaction. To address this neglected issue, I examined both the decoding and transmission of facial expression signals in WC and EA observers.

    Experiment 1: Cultural Decoding of ‘Universal’ Facial Expressions of Emotion

    To examine the decoding of ‘universal’ facial expressions across cultures, I used eye tracking technology to record the eye movements of WC and EA observers while they categorised the 6 ‘universal’ facial expressions of emotion. My behavioural results demonstrate the robustness of the phenomenon by replicating the EA recognition deficit (i.e., EA observers are significantly poorer at recognising facial expressions of ‘fear’ and ‘disgust’). Further inspection of the data showed that EA observers systematically miscategorise ‘fear’ as ‘surprise’ and ‘disgust’ as ‘anger.’ Using spatio-temporal analyses of fixations, I will show that WC and EA observers use culture-specific fixation strategies to decode ‘universal’ facial expressions of emotion. Specifically, while WC observers distribute fixations across the face, sampling the eyes and mouth, EA observers persistently bias fixations towards the eyes and neglect critical features, especially for facial expressions eliciting significant confusion (i.e., ‘fear,’ ‘disgust,’ and ‘anger’). Analysis of my eye movement data thus showed that EA observers repetitively sample information from the eye region during facial expression decoding, particularly for expressions eliciting significant behavioural confusions. To objectively examine whether the EA culture-specific fixation pattern could give rise to the reported behavioural confusions, I built a model observer that samples information from the face to categorise facial expressions. Using this model observer, I will show that the EA decoding strategy is inadequate to distinguish ‘fear’ from ‘surprise’ and ‘disgust’ from ‘anger,’ thus giving rise to the reported EA behavioural confusions. For the first time, I will reveal the origins of a latent phenomenon – the EA recognition deficit. I discuss the implications of culture-specific decoding strategies during facial expression categorisation in light of current theories of cross-cultural emotion communication.

    Experiment 2: Cultural Internal Representations of Facial Expressions of Emotion

    In the previous experiments, I presented data that question the universality of facial expressions. As replicated in Experiment 1, WC and EA observers differ significantly in their recognition performance for certain ‘universal’ facial expressions. In Experiment 1, I also showed culture-specific fixation patterns, demonstrating cultural differences in the predicted locations of diagnostic information. Together, these data predict cultural specificity in facial expression signals, supporting notions of cultural ‘accents’ and/or ‘dialects.’ To examine whether facial expression signals differ across cultures, I used a powerful reverse correlation (RC) technique to reveal the internal representations of the 6 ‘basic’ facial expressions of emotion in WC and EA observers. Using complementary statistical image processing techniques to examine the signal properties of each internal representation, I will directly reveal cultural specificity in the representations of the 6 ‘basic’ facial expressions of emotion. Specifically, I will show that while WC representations of facial expressions predominantly featured the eyebrows and mouth, EA representations were biased towards the eyes, as predicted by my eye movement data in Experiment 1. I will also show gaze avoidance as a unique feature of the EA group. In sum, these data show clear cultural contrasts in facial expression signals, demonstrating that culture shapes the internal representations of emotion.

    Future Work

    My review of the literature will show that pivotal concepts such as ‘recognition’ and ‘universality’ are currently flawed and have misled both the interpretation of empirical work and the direction of theoretical developments. Here, I will examine each concept in turn and propose more accurate criteria with which to demonstrate ‘universal recognition’ in future studies. In doing so, I will also detail possible future studies designed to address current gaps in knowledge created by the use of inappropriate criteria. On a related note, having questioned the validity of FACS-coded facial expressions as ‘universal’ facial expressions, I will highlight an area for empirical development – the creation of a culturally valid facial expression stimulus set – and detail future work required to address this question. Finally, I will discuss broader areas of interest (i.e., the lexical structure of emotion) which could elevate current knowledge of cross-cultural facial expression recognition and emotion communication in the future.
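    A minimal sketch may help make the reverse correlation (classification image) logic concrete. This is a hypothetical illustration of the general technique, not the thesis's actual pipeline: white noise is added to a base face on each trial, the observer gives a binary response, and averaging the noise fields by response reveals which pixels drive the decision. The toy observer, its "eye region" coordinates, and the noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def classification_image(base_face, n_trials, observer):
    """Estimate an internal representation by reverse correlation:
    add white noise to a base image, record a binary response on
    each trial, and average the noise fields by response."""
    acc = {0: np.zeros_like(base_face), 1: np.zeros_like(base_face)}
    counts = {0: 0, 1: 0}
    for _ in range(n_trials):
        noise = rng.normal(0.0, 0.15, size=base_face.shape)
        response = observer(base_face + noise)  # e.g. "looks fearful?" -> 0/1
        acc[response] += noise
        counts[response] += 1
    # Difference of mean noise fields: pixels driving "yes" come out positive.
    return acc[1] / max(counts[1], 1) - acc[0] / max(counts[0], 1)

# Toy observer with an eye-biased strategy: it answers "yes" whenever an
# (assumed) eye region is brighter than average, ignoring the mouth.
def toy_observer(img):
    return int(img[8:12, 10:22].mean() > 0.0)

base = np.zeros((32, 32))
ci = classification_image(base, n_trials=5000, observer=toy_observer)
print("peak of classification image:", np.unravel_index(ci.argmax(), ci.shape))
```

    The recovered image peaks inside the region the toy observer actually uses, which is the sense in which classification images expose an observer's internal representation.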

    Language bias in visually driven decisions: Computational neurophysiological mechanisms


    Windows into Sensory Integration and Rates in Language Processing: Insights from Signed and Spoken Languages

    This dissertation explores the hypothesis that language processing proceeds in "windows" that correspond to representational units, where sensory signals are integrated according to time-scales that correspond to the rate of the input. To investigate universal mechanisms, a comparison of signed and spoken languages is necessary. Underlying the seemingly effortless process of language comprehension is the perceiver's knowledge about the rate at which linguistic form and meaning unfold in time and the ability to adapt to variations in the input. The vast body of work in this area has focused on speech perception, where the goal is to determine how linguistic information is recovered from acoustic signals. Testing some of these theories in the visual processing of American Sign Language (ASL) provides a unique opportunity to better understand how sign languages are processed and which aspects of speech perception models are in fact about language perception across modalities. The first part of the dissertation presents three psychophysical experiments investigating temporal integration windows in sign language perception by testing the intelligibility of locally time-reversed sentences. The findings demonstrate the contribution of modality to the time-scales of these windows, with signing integrated over longer durations (~250-300 ms) than speech (~50-60 ms), while also pointing to modality-independent mechanisms, where integration occurs over durations that correspond to the size of linguistic units. The second part of the dissertation focuses on production rates in sentences taken from natural conversations in English, Korean, and ASL. Data on word, sign, morpheme, and syllable rates suggest that while the rate of words and signs can vary from language to language, the relationship between the rate of syllables and the rate of morphemes is relatively consistent among these typologically diverse languages. The results from rates in ASL also complement the findings of the perception experiments by confirming that the time-scales at which phonological units fluctuate in production match the temporal integration windows in perception. These results are consistent with the hypothesis that there are modality-independent time pressures for language processing; the discussion provides a synthesis of converging findings from other domains of research and proposes ideas for future investigations.
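    The local time-reversal manipulation used in the perception experiments is easy to sketch. Below is a minimal illustration under stated assumptions (the window length and ramp test signal are arbitrary; this is not the dissertation's stimulus code): the signal is cut into consecutive fixed-length windows, each window is reversed in place, and window order is preserved, so intelligibility as a function of window length indexes the temporal integration window.

```python
import numpy as np

def locally_time_reverse(signal, window_ms, sample_rate):
    """Reverse the samples inside consecutive fixed-length windows
    while keeping the order of the windows themselves."""
    win = max(1, int(round(window_ms / 1000.0 * sample_rate)))
    out = np.empty_like(signal)
    for start in range(0, len(signal), win):
        seg = signal[start:start + win]
        out[start:start + len(seg)] = seg[::-1]
    return out

# Toy usage: a 1 s ramp sampled at 1 kHz; with 50 ms windows each
# 50-sample chunk runs backwards, but the global upward trend survives.
sr = 1000
x = np.linspace(0.0, 1.0, sr, endpoint=False)
y = locally_time_reverse(x, window_ms=50, sample_rate=sr)
print(y[:3])  # first window counts down from ~0.049
```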

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
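    The spoke-shift manipulation has a simple geometric reading. A hypothetical sketch (the coordinate conventions and function name are assumptions, not the study's code): treat each rectangle's centre as a point in degrees of visual angle relative to fixation, and move it along the spoke through fixation by ±1 degree.

```python
import numpy as np

def shift_along_spoke(x, y, shift_deg):
    """Move a point (x, y), in degrees of visual angle relative to central
    fixation, along the imaginary spoke through fixation by shift_deg
    (positive = outward, negative = inward)."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)  # the spoke's direction is unchanged
    r_new = r + shift_deg
    return r_new * np.cos(theta), r_new * np.sin(theta)

# A rectangle centred 4 deg right of fixation, shifted 1 deg outward,
# lands at 5 deg eccentricity on the same spoke.
print(shift_along_spoke(4.0, 0.0, +1.0))  # -> (5.0, 0.0)
print(shift_along_spoke(0.0, 3.0, -1.0))  # -> (~0.0, 2.0)
```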

    Towards Subject Independent Sign Language Recognition: A Segment-Based Probabilistic Approach

    Ph.D. (Doctor of Philosophy)

    On Polysemy: A Philosophical, Psycholinguistic, and Computational Study

    Most words in natural languages are polysemous; that is, they have related but different meanings in different contexts. These polysemous meanings (senses) are marked by their structuredness, flexibility, productivity, and regularity. Previous theories have focused on some of these features but not all of them together. Thus, I propose a new theory of polysemy, which has two components. First, word meaning is actively modulated by broad contexts in a continuous fashion. Second, clustering arises from contextual modulations of a word and is then entrenched in our long-term memory to facilitate future production and processing. Hence, polysemous senses are entrenched clusters in the contextual modulation of word meaning, and a word is polysemous if and only if it has entrenched clustering in its contextual modulation. I argue that this theory explains all the features of polysemous senses. In order to demonstrate more thoroughly how clusters emerge from meaning modulation during processing, and to provide evidence for this new theory, I implement the theory by training a recurrent neural network (RNN) that learns distributional information through exposure to a large corpus of English. Clusters of contextually modulated meanings emerge from how the model processes individual words in sentences. This trained model is validated against (i) a human-annotated corpus of polysemy, focusing on the gradedness and flexibility of polysemous sense individuation; (ii) a human-annotated corpus of regular polysemy, focusing on the regularity of polysemy; and (iii) behavioral findings of offline sense-relatedness ratings and online sentence processing. Last, the implications of this new theory of polysemy for philosophy are discussed. I focus on the debate between semantic minimalism and semantic contextualism. I argue that the phenomenon of polysemy poses a severe challenge to semantic minimalism: no solution is foreseeable if the minimalist thesis is kept and the existence of contextual modulation within the literal truth conditions of an utterance is denied.
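    The pipeline in which sense clusters emerge from contextual modulation can be sketched at a high level. The toy below is illustrative only: an untrained LSTM stands in for the corpus-trained RNN (so the clusters it produces are arbitrary), and the sentences, dimensions, and cluster count are assumptions. The point is the shape of the approach: collect a word's context-dependent hidden states, then cluster its occurrences into candidate senses.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Tiny illustrative contexts for the polysemous word "paper".
sentences = [
    "she read the paper at breakfast",   # newspaper sense
    "the paper reported the election",   # newspaper sense
    "he wrote a paper on polysemy",      # article sense
    "the journal rejected the paper",    # article sense
]
vocab = {w: i for i, w in enumerate(sorted({w for s in sentences for w in s.split()}))}

emb = nn.Embedding(len(vocab), 32)
rnn = nn.LSTM(32, 32, batch_first=True)  # stand-in for a trained corpus model

def contextual_vector(sentence, target):
    """Hidden state at the target word: a context-modulated
    representation of that particular occurrence."""
    ids = torch.tensor([[vocab[w] for w in sentence.split()]])
    with torch.no_grad():
        states, _ = rnn(emb(ids))
    pos = sentence.split().index(target)
    return states[0, pos].numpy()

# Cluster the occurrence vectors; under the theory above, each entrenched
# cluster approximates one polysemous sense.
vectors = [contextual_vector(s, "paper") for s in sentences]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)
```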