
    Metaphors are physical and abstract: ERPs to metaphorically modified nouns resemble ERPs to abstract language

    Metaphorical expressions very often involve words referring to physical entities and experiences. Yet figures of speech such as metaphors are not intended to be understood literally, word by word. We used event-related brain potentials (ERPs) to determine whether metaphorical expressions are processed more like physical or more like abstract expressions. To this end, novel adjective-noun word pairs were presented visually in three conditions: (1) Physical, easy to experience with the senses (e.g., printed schedule); (2) Abstract, difficult to experience with the senses (e.g., conditional schedule); and (3) Metaphorical, expressions with a physical adjective but a figurative meaning (e.g., thin schedule). We replicated the N400 lexical concreteness effect for concrete versus abstract adjectives. To increase the sensitivity of the concreteness manipulation on the expressions, we divided each condition into high and low groups according to rated concreteness. Mirroring the adjective result, we observed an N400 concreteness effect at the noun for physical expressions with high concreteness ratings versus abstract expressions with low concreteness ratings, even though the nouns per se did not differ in lexical concreteness. Paradoxically, the N400 to nouns in the metaphorical expressions was indistinguishable from that to nouns in the literal abstract expressions, but only for the more concrete subgroup of metaphors; the N400 to the less concrete subgroup of metaphors patterned with that to nouns in the literal concrete expressions. In sum, we find not only evidence for a conceptual concreteness separable from lexical concreteness, but also that the processing of metaphorical expressions is not driven strictly by either lexical or conceptual concreteness.

    Projectors, associators, visual imagery, and the time course of visual processing in grapheme-color synesthesia

    In grapheme-color synesthesia, seeing particular letters or numbers evokes the experience of specific colors. We investigated the brain's real-time processing of words in this population by recording event-related brain potentials (ERPs) from 15 grapheme-color synesthetes and 15 controls as they judged the validity of word pairs ('yellow banana' vs. 'blue banana') presented under high and low visual contrast. Low contrast words elicited delayed P1/N170 visual ERP components in both groups, relative to high contrast. When color concepts were conveyed to synesthetes by individually tailored achromatic grapheme strings ('55555 banana'), visual contrast effects were like those for color words: P1/N170 components were delayed but unchanged in amplitude. When controls saw equivalent colored grapheme strings, visual contrast modulated P1/N170 amplitude but not latency. Color induction in synesthetes thus differs from color perception in controls. Independent of these experimental effects, all orthographic stimuli elicited larger N170 and P2 components in synesthetes than in controls. While the P2 (150-250 ms) enhancement was similar across all synesthetes, N170 (130-210 ms) amplitude varied with individual differences in synesthesia and visual imagery. The results suggest that immediate cross-activation in visual areas processing color and shape is most pronounced in so-called projector synesthetes, whose concurrent colors are experienced as originating in external space.

    Additional Tennessee Eastman Process Simulation Data for Anomaly Detection Evaluation

    User Agreement, Public Domain Dedication, and Disclaimer of Liability. By accessing or downloading the data or work provided here, you, the User, agree that you have read this agreement in full and agree to its terms. The person who owns, created, or contributed a work to the data or work provided here dedicated the work to the public domain and has waived his or her rights to the work worldwide under copyright law. You can copy, modify, distribute, and perform the work, for any lawful purpose, without asking permission. In no way are the patent or trademark rights of any person affected by this agreement, nor are the rights that any other person may have in the work or in how the work is used, such as publicity or privacy rights. Pacific Science & Engineering Group, Inc., its agents and assigns, make no warranties about the work and disclaim all liability for all uses of the work, to the fullest extent permitted by law. When you use or cite the work, you shall not imply endorsement by Pacific Science & Engineering Group, Inc., its agents or assigns, or by another author or affirmer of the work. This Agreement may be amended, and the use of the data or work shall be governed by the terms of the Agreement at the time that you access or download the data or work from this Website.

    Description. This dataverse contains the data referenced in Rieth et al. (2017), "Issues and Advances in Anomaly Detection Evaluation for Joint Human-Automated Systems," to be presented at Applied Human Factors and Ergonomics 2017. Each .RData file is an external representation of an R dataframe that can be read into an R environment with the 'load' function. The loaded variables are named 'fault_free_training', 'fault_free_testing', 'faulty_testing', and 'faulty_training', corresponding to the four .RData files. Each dataframe contains 55 columns:

    - Column 1 ('faultNumber') ranges from 1 to 20 in the "Faulty" datasets and represents the fault type in the Tennessee Eastman Process (TEP); the "FaultFree" datasets contain only fault 0 (i.e., normal operating conditions).
    - Column 2 ('simulationRun') ranges from 1 to 500, each value representing a different random number generator state from which a full TEP dataset was generated (note: the seeds used to generate the training and testing datasets were non-overlapping).
    - Column 3 ('sample') ranges from 1 to 500 in the "Training" datasets and from 1 to 960 in the "Testing" datasets; the process variables were sampled every 3 minutes, for total durations of 25 hours (training) and 48 hours (testing), respectively. The faults were introduced 1 hour into the Faulty Training runs and 8 hours into the Faulty Testing runs.
    - Columns 4 to 55 contain the TEP process variables; the column names retain the original variable names.

    Acknowledgments. This work was sponsored by the Office of Naval Research, Human & Bioengineered Systems (ONR 341), program officer Dr. Jeffrey G. Morrison, under contract N00014-15-C-5003. The views expressed are those of the authors and do not reflect the official policy or position of the Office of Naval Research, the Department of Defense, or the US Government.
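    Since each .RData file deserializes directly into one of the dataframes named above, a few lines of R suffice to load and slice the data. Below is a minimal sketch, assuming the fault-free and faulty training files; the file names 'TEP_FaultFree_Training.RData' and 'TEP_Faulty_Training.RData' are hypothetical, so substitute whichever files you downloaded.

```r
# Load the data; per the description, each file deserializes to a dataframe
# ('fault_free_training' and 'faulty_training' here). File names are hypothetical.
load("TEP_FaultFree_Training.RData")
load("TEP_Faulty_Training.RData")

# Inspect the three metadata columns and the first two process variables.
str(fault_free_training[, 1:5])

# One simulation run of fault-free training data: 500 samples, i.e. 25 hours
# sampled every 3 minutes (20 samples/hour x 25 hours = 500).
run1 <- subset(fault_free_training, faultNumber == 0 & simulationRun == 1)
nrow(run1)  # 500

# For the faulty data, filter by fault type; the fault begins 1 hour
# (i.e., after sample 20) into each faulty training run.
fault1 <- subset(faulty_training, faultNumber == 1 & simulationRun == 1)
```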