43 research outputs found

    Towards conflictual narrative mechanics

    We propose a five-step methodology to retrieve, reconstruct, and analyse conflict-related narratives in a standardized and automated way. Our methodology combines AI and network-analysis techniques to build a visual representation of the key agents and entities involved in a conflict and to characterize their relations. Unlike the majority of existing methods, ours can be applied to any type of conflict: through two data-downloading phases, it first generates a bird’s-eye representation and then a fine-grained map of the conflict. Given the broad applicability of the proposed methodology, we believe that this work takes the first steps towards a better understanding of conflictual narrative mechanics.
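    The abstract gives no implementation details; purely as an illustrative sketch, the network-analysis step could look roughly like the Python fragment below, where the sentence list, the agent list, and the co-occurrence weighting are all assumptions made for the example (the paper's actual pipeline relies on AI-based retrieval and entity extraction, not a hand-made list).

        # Hedged sketch (not the authors' code): build a co-occurrence network of
        # agents mentioned in conflict-related sentences, roughly in the spirit of
        # the "bird's-eye representation" step. All inputs are placeholders.
        import itertools
        import networkx as nx

        sentences = [
            "Group A accused Group B of breaking the ceasefire.",
            "Group B and the mediators met to discuss the ceasefire.",
            "The mediators urged Group A to resume talks.",
        ]
        agents = ["Group A", "Group B", "mediators"]  # would come from NER in practice

        G = nx.Graph()
        for sent in sentences:
            present = [a for a in agents if a in sent]
            for u, v in itertools.combinations(present, 2):
                # accumulate co-occurrence counts as edge weights
                w = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
                G.add_edge(u, v, weight=w)

        # rank agents by centrality to surface the key actors in the narrative
        print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]))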

    An Ontological Model of User Preferences

    The notion of preferences plays an important role in many disciplines including service robotics which is concerned with scenarios in which robots interact with humans. These interactions can be favored by robots taking human preferences into account. This raises the issue of how preferences should be represented to support such preference-aware decision making. Several formal accounts for a notion of preferences exist. However, these approaches fall short on defining the nature and structure of the options that a robot has in a given situation. In this work, we thus investigate a formal model of preferences where options are non-atomic entities that are defined by the complex situations they bring about
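    The abstract describes the model only informally; one very loose way to picture "options defined by the situations they bring about" is the toy Python sketch below, in which the robot scenario, the situation facts, and the comparison rule are all invented for illustration and are not the paper's formalism.

        # Hedged sketch (not the paper's ontology): an option is non-atomic and is
        # characterized by the set of situation facts it would bring about.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Option:
            name: str
            brings_about: frozenset  # the (complex) situation the option leads to

        # hypothetical household-robot example: two ways of serving tea
        serve_tea_quietly = Option("serve tea quietly", frozenset({"tea served", "low noise"}))
        serve_tea_fast = Option("serve tea fast", frozenset({"tea served", "high noise"}))

        # a user preference expressed over situation facts, not over option names
        dispreferred_facts = {"high noise"}

        def prefers(a: Option, b: Option) -> bool:
            """True if a brings about strictly fewer dispreferred facts than b."""
            return len(a.brings_about & dispreferred_facts) < len(b.brings_about & dispreferred_facts)

        print(prefers(serve_tea_quietly, serve_tea_fast))  # True under these toy facts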

    Do (and say) as I say: Linguistic adaptation in human-computer dialogs

    © Theodora Koulouri, Stanislao Lauria, and Robert D. Macredie. This article has been made available through the Brunel Open Access Publishing Fund.
    There is strong research evidence showing that people naturally align to each other’s vocabulary, sentence structure, and acoustic features in dialog, yet little is known about how the alignment mechanism operates in the interaction between users and computer systems, let alone how it may be exploited to improve the efficiency of the interaction. This article provides an account of lexical alignment in human–computer dialogs, based on empirical data collected in a simulated human–computer interaction scenario. The results indicate that alignment is present, resulting in the gradual reduction and stabilization of the vocabulary-in-use, and that it is reciprocal. Further, the results suggest that when system and user errors occur, the development of alignment is temporarily disrupted and users tend to introduce novel words into the dialog. The results also indicate that alignment in human–computer interaction may have a strong strategic component and is used as a resource to compensate for less optimal (visually impoverished) interaction conditions. Moreover, lower alignment is associated with less successful interaction, as measured by user perceptions. The article distills the results of the study into design recommendations for human–computer dialog systems and uses them to outline a model of dialog management that supports and exploits alignment through mechanisms for in-use adaptation of the system’s grammar and lexicon.
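    The article itself outlines the dialog-management model at the design level; the short Python sketch below is only one hypothetical way such in-use lexical adaptation could be wired up, with the synonym table, counting scheme, and function names all assumed for the example rather than taken from the article.

        # Hedged sketch (not the article's system): a dialog-manager fragment that
        # tracks which variant of a concept the user says and echoes it back,
        # i.e. a minimal form of lexical alignment. All data is illustrative.
        from collections import Counter

        synonyms = {"picture": ["image", "photo", "picture"]}  # system-known variants
        user_counts = Counter()

        def observe_user_utterance(utterance: str) -> None:
            # count which variant of each concept the user actually uses
            tokens = utterance.lower().split()
            for concept, variants in synonyms.items():
                for v in variants:
                    user_counts[(concept, v)] += tokens.count(v)

        def realise(concept: str) -> str:
            # align: generate with the user's most frequent variant, else a default
            variants = synonyms[concept]
            best = max(variants, key=lambda v: user_counts[(concept, v)])
            return best if user_counts[(concept, best)] > 0 else variants[0]

        observe_user_utterance("can you send me the photo from yesterday")
        print(realise("picture"))  # -> "photo", mirroring the user's own term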

    How Entrainment Increases Dialogical Effectiveness

    Recent work on spoken and multimodal dialogue systems aims at more conversational and adaptive systems. We show that, in certain dialogical situations, it is important for such systems to adapt linguistically to their users. We report ongoing work on these tasks, describing the empirical experiments, an investigation of whether the data can be annotated reliably by human annotators, analyses of the collected data, and results from a Wizard of Oz experiment. Additionally, entrainment as such is examined for the first time in the domain of multimodal dialogical assistance systems.