
    Judgment of the Humanness of an Interlocutor Is in the Eye of the Beholder

    Despite tremendous advances in artificial language synthesis, no machine has so far succeeded in deceiving a human. Most research has focused on analyzing the behavior of “good” machines. We here choose the opposite strategy, analyzing the behavior of “bad” humans, i.e., humans perceived as machines. The Loebner Prize in Artificial Intelligence features humans and artificial agents trying to convince judges of their humanness via computer-mediated communication. Using this setting as a model, we investigated whether the linguistic behavior of human subjects perceived as non-human would enable us to identify some of the core parameters involved in the judgment of an agent's humanness. We analyzed descriptive and semantic aspects of dialogues in which subjects succeeded or failed to convince judges of their humanness. Using cognitive and emotional dimensions in a global behavioral characterization, we demonstrate important differences in the judges' patterns of behavioral expressiveness depending on whether they perceived their interlocutor as human or machine. Furthermore, the indicators of interest displayed by the judges were predictive of the final judgment of humanness. Thus, we show that the judgment of an interlocutor's humanness during a social interaction depends not only on the interlocutor's behavior, but also on the judge. Our results thus demonstrate that the judgment of humanness is in the eye of the beholder.

    Why People Use Chatbots

    There is a growing interest in chatbots, which are machine agents serving as natural language user interfaces for data and service providers. However, no studies have empirically investigated people’s motivations for using chatbots. In this study, an online questionnaire asked chatbot users (N = 146, aged 16–55 years) from the US to report their reasons for using chatbots. The study identifies key motivational factors driving chatbot use. The most frequently reported motivational factor is “productivity”; chatbots help users obtain timely and efficient assistance or information. Chatbot users also reported motivations pertaining to entertainment, social and relational factors, and curiosity about what they view as a novel phenomenon. The findings are discussed in terms of the uses and gratifications theory, and they provide insight into why people choose to interact with automated agents online. The findings can help developers facilitate better human–chatbot interaction experiences in the future. Possible design guidelines are suggested, reflecting different chatbot user motivations.

    Implicit causality bias in English: a corpus of 300 verbs

    This study provides implicit verb causality norms for a corpus of 305 English verbs. A web-based sentence completion study was conducted, with 96 respondents completing fragments such as “John liked Mary because...” The resulting bias scores are provided as supplementary material in the Psychonomic Society Archive, where we also present lexical and semantic verb features, such as frequency, semantic class, and emotional valence. Our results replicate those of previous studies with much smaller numbers of verbs and respondents. Novel effects of gender and its interaction with verb valence illustrate the type of issues that can be investigated using stable norms for a large number of verbs. The corpus will facilitate future studies in a range of areas, including psycholinguistics and social psychology.
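
    The abstract does not state how the bias scores were computed. A common convention in implicit-causality norming is to score each verb by how many continuations refer back to the subject (NP1) versus the object (NP2). The Python sketch below illustrates that convention only; the formula and the example counts are assumptions for illustration, not figures from the study.

        # Hypothetical sketch of an implicit-causality bias score.
        # The scoring formula is an assumption: the abstract only says that
        # bias scores were derived from sentence-completion responses.
        def bias_score(np1_continuations: int, np2_continuations: int) -> float:
            """Return a score in [-100, 100]: positive values indicate a
            subject (NP1) bias, negative values an object (NP2) bias."""
            total = np1_continuations + np2_continuations
            if total == 0:
                raise ValueError("No codable continuations for this verb.")
            return 100.0 * (np1_continuations - np2_continuations) / total

        # Illustrative counts for "John liked Mary because...": most respondents
        # continue with the object ("...she was kind"), giving an NP2 bias.
        print(bias_score(np1_continuations=26, np2_continuations=70))  # ≈ -45.8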