
    Description Theory, LTAGs and Underspecified Semantics

    An attractive way to model the relation between an underspecified syntactic representation and its completions is to let the underspecified representation correspond to a logical description and the completions to the models of that description. This approach, which underlies the Description Theory of (Marcus et al. 1983), was integrated in (Vijay-Shanker 1992) with a pure unification approach to Lexicalized Tree-Adjoining Grammars (Joshi et al. 1975, Schabes 1990). We generalize Description Theory by integrating semantic information; that is, we propose to tackle both syntactic and semantic underspecification using descriptions.
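    The description-as-models idea can be sketched in a toy form (this is a hypothetical illustration, not the authors' formalism): an underspecified description is a set of dominance constraints between tree fragments, and its completions are exactly the pluggings consistent with those constraints.

    ```python
    from itertools import permutations

    # Toy illustration: fragments of a scope-ambiguous sentence plus
    # dominance constraints. Each quantifier fragment must dominate the
    # semantic core; the quantifiers are unordered relative to each other.
    fragments = ["every_x", "a_y", "core"]
    constraints = [("every_x", "core"), ("a_y", "core")]

    def completions(fragments, constraints):
        """Enumerate top-down orderings (models) satisfying all dominance constraints."""
        result = []
        for order in permutations(fragments):
            pos = {f: i for i, f in enumerate(order)}
            if all(pos[a] < pos[b] for a, b in constraints):
                result.append(order)
        return result

    # One underspecified description, two completions: the two scope readings.
    print(completions(fragments, constraints))
    ```

    The single description stands in for both readings, and disambiguation amounts to adding constraints until one model remains.
    
    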

    Realizing the Costs: Template-Based Surface Realisation in the GRAPH Approach to Referring Expression Generation

    We describe a new realiser developed for the TUNA 2009 Challenge, and present its evaluation scores on the development set, showing a clear increase in performance compared to last year's simple realiser.

    NeuralREG: An end-to-end approach to referring expression generation

    Traditionally, Referring Expression Generation (REG) models first decide on the form and then on the content of references to discourse entities in text, typically relying on features such as salience and grammatical function. In this paper, we present a new approach (NeuralREG), relying on deep neural networks, which makes decisions about form and content in one go without explicit feature extraction. Using a delexicalized version of the WebNLG corpus, we show that the neural model substantially improves over two strong baselines. Data and models are publicly available. Comment: Accepted for presentation at ACL 201
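    Delexicalization of the kind applied to the WebNLG corpus can be sketched roughly as follows (a hypothetical simplification, not the NeuralREG preprocessing pipeline): entity mentions are replaced by numbered tags, so the model learns where and how to realize references independently of the specific names involved.

    ```python
    import re

    def delexicalize(text, entities):
        """Replace each entity mention with a numbered placeholder tag."""
        mapping = {}
        for i, entity in enumerate(entities, start=1):
            tag = f"ENTITY-{i}"
            mapping[tag] = entity
            text = re.sub(re.escape(entity), tag, text)
        return text, mapping

    # Example sentence in the WebNLG style (invented for illustration).
    sent = "Alan Bean was born in Wheeler, Texas."
    delex, mapping = delexicalize(sent, ["Alan Bean", "Wheeler, Texas"])
    print(delex)    # ENTITY-1 was born in ENTITY-2.
    print(mapping)  # {'ENTITY-1': 'Alan Bean', 'ENTITY-2': 'Wheeler, Texas'}
    ```

    At generation time the model predicts the delexicalized template, and the tags are then replaced by generated referring expressions.
    
    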

    Cross-linguistic Attribute Selection for REG: Comparing Dutch and English

    In this paper we describe a cross-linguistic experiment in attribute selection for referring expression generation. We used a graph-based attribute selection algorithm that was trained and cross-evaluated on English and Dutch data. The results indicate that attribute selection can be done in a largely language-independent way.
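    The core idea behind cost-based attribute selection can be sketched as follows (a brute-force toy version; the domain, attribute costs, and search strategy here are invented for illustration, whereas in the graph-based approach costs are learned from corpus data): find the cheapest set of attribute–value pairs that rules out every distractor.

    ```python
    from itertools import combinations

    # Toy domain: the target d1 and two distractors, each described by
    # three attributes.
    domain = {
        "d1": {"type": "chair", "colour": "red",  "size": "large"},
        "d2": {"type": "chair", "colour": "blue", "size": "large"},
        "d3": {"type": "sofa",  "colour": "red",  "size": "small"},
    }
    costs = {"type": 1, "colour": 2, "size": 3}  # illustrative; learned in practice

    def select(target, domain, costs):
        """Return the cheapest distinguishing attribute set and its cost."""
        attrs = sorted(domain[target])
        best = None
        for r in range(1, len(attrs) + 1):
            for combo in combinations(attrs, r):
                # A set is distinguishing if every distractor differs from
                # the target on at least one of its attributes.
                distinguishing = all(
                    any(domain[d][a] != domain[target][a] for a in combo)
                    for d in domain if d != target
                )
                if distinguishing:
                    cost = sum(costs[a] for a in combo)
                    if best is None or cost < best[0]:
                        best = (cost, combo)
        return best

    print(select("d1", domain, costs))  # → (3, ('colour', 'type'))
    ```

    No single attribute rules out both distractors here, so the algorithm returns the cheapest pair; swapping in costs trained on English versus Dutch data is what makes the cross-linguistic comparison possible.
    
    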

    On the Role of Visuals in Multimodal Answers to Medical Questions

    This paper describes two experiments carried out in order to investigate the role of visuals in multimodal answer presentations for a medical question answering system. First, a production experiment was carried out to determine which modalities people choose to answer different types of questions. In this experiment, participants had to create (multimodal) presentations of answers to general medical questions. The collected answer presentations were coded on the presence of visual media (i.e., photos, graphics, and animations) and their function. The results indicated that participants presented the information in a multimodal way. Moreover, significant differences were found in the presentation of different answer and question types. Next, an evaluation experiment was conducted to investigate how users evaluate different types of multimodal answer presentations. In this second experiment, participants had to assess the informativity and attractiveness of answer presentations for different types of medical questions. These answer presentations, originating from the production experiment, were manipulated in their answer length (brief vs. extended) and their type of picture (illustrative vs. informative). After the participants had assessed the answer presentations, they received a post-test in which they had to indicate how much they had recalled from the presented answer presentations. The results showed that answer presentations with an informative picture were evaluated as more informative and more attractive than answer presentations with an illustrative picture. The results for the post-test tentatively indicated that learning from answer presentations with an informative picture leads to a better learning performance than learning from purely textual answer presentations.

    Talking about Relations: Factors Influencing the Production of Relational Descriptions

    In a production experiment (Experiment 1) and an acceptability rating experiment (Experiment 2), we assessed two factors, spatial position and salience, which may influence the production of relational descriptions (such as the ball between the man and the drawer). In Experiment 1, speakers were asked to refer unambiguously to a target object (a ball). In Experiment 1a, we addressed the role of spatial position, specifically whether speakers mention the entity positioned leftmost in the scene as (first) relatum. The results showed a preference to start with the left entity, though only as a trend, which leaves room for other factors that could influence spatial reference. Thus, in the following studies, we varied salience systematically, by making one of the relatum candidates animate (Experiment 1b), and by adding attention capture cues, first subliminally by priming one relatum candidate with a flash (Experiment 1c), then explicitly by using salient colors for objects (Experiment 1d). Results indicate that spatial position played a dominant role. Entities on the left were mentioned more often as (first) relatum than those on the right (Experiments 1a, 1b, 1c, 1d). Animacy affected reference production in one out of three studies (Experiment 1d). When salience was manipulated by priming visual attention or by using salient colors, there were no significant effects (Experiments 1c, 1d). In the acceptability rating study (Experiment 2), participants expressed their preference for specific relata by ranking descriptions on the basis of how well they thought the descriptions fitted the scene. Results show that participants most preferred the description that had an animate entity as the first mentioned relatum. The relevance of these results for models of reference production is discussed.