
    Reader responses to literary depictions of rape

    This study explored reader responses to different literary depictions of rape. Four literary excerpts were used, divided as aesthetic versus nonaesthetic (style) and allusive versus explicit (detail). The general question was how readers would react to literary fragments depicting rape and whether the level of aesthetics and the level of explicitness influenced readers' thoughts and feelings. An open-ended question asked readers to report how the style had influenced their thoughts and feelings, whereas 7-point scales addressed the following variables: experienced distance, perceptions of realism and of beauty, emotional versus intellectual reaction, empathy, tension, and arousal. In a 2 (detail: explicit vs. allusive) × 2 (style: aesthetic vs. nonaesthetic) within-participants design (N = 34), gender functioned as a between-participants variable. Results indicate that the personal tendency to feel engaged with fiction overrides the effects of aesthetics and explicitness. Principal-components factor analysis suggests that readers who are easily engaged with the characters feel unsettled when reading rape scenes they find brutal and intellectualize to handle these feelings. These "high empathizers" are not likely to be detached or to evaluate the fragment negatively: once absorbed, they will try to take something positive even from an unsettling experience.

    Delexicalized Word Embeddings for Cross-lingual Dependency Parsing

    This paper presents a new approach to the problem of cross-lingual dependency parsing, aiming at leveraging training data from different source languages to learn a parser in a target language. Specifically, this approach first constructs word vector representations that exploit structural (i.e., dependency-based) contexts but consider only the morpho-syntactic information associated with each word and its contexts. These delexicalized word embeddings, which can be trained on any set of languages and capture features shared across languages, are then used in combination with standard language-specific features to train a lexicalized parser in the target language. We evaluate our approach through experiments on a set of eight different languages that are part of the Universal Dependencies project. Our main results show that using such delexicalized embeddings, whether trained in a monolingual or multilingual fashion, achieves significant improvements over monolingual baselines.
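    The core idea of delexicalization can be illustrated with a minimal sketch. The following toy example (not the paper's actual implementation; the data structures and feature scheme are assumptions for illustration) replaces word forms with their POS tags and builds count-based vectors over dependency contexts of the form (relation, head POS), so the resulting representations carry no language-specific lexical information:

    ```python
    from collections import Counter, defaultdict

    # Hypothetical toy treebank: each token is (pos_tag, head_index, deprel);
    # head_index = -1 marks the root. Word forms are deliberately absent:
    # delexicalized representations use only morpho-syntactic information.
    sentences = [
        [("DET", 1, "det"), ("NOUN", 2, "nsubj"), ("VERB", -1, "root")],
        [("PRON", 1, "nsubj"), ("VERB", -1, "root"), ("NOUN", 1, "obj")],
    ]

    def delexicalized_vectors(treebank):
        """Build a count vector per POS tag over (deprel, head-POS) contexts.

        Because the features reference only POS tags and dependency relations,
        counts from treebanks of different languages can be pooled directly.
        """
        counts = defaultdict(Counter)
        for sent in treebank:
            for pos, head, rel in sent:
                head_pos = sent[head][0] if head >= 0 else "ROOT"
                counts[pos][(rel, head_pos)] += 1
        return counts

    vecs = delexicalized_vectors(sentences)
    ```

    A real system would train dense embeddings (e.g., with a skip-gram objective) over such delexicalized dependency contexts rather than raw counts, but the key property is the same: the feature space is shared across languages, so vectors trained on source languages transfer to the target.
    
    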