Judgment of the humanness of an interlocutor is in the eye of the beholder.

Abstract

Despite tremendous advances in artificial language synthesis, no machine has so far succeeded in deceiving a human. Most research has focused on analyzing the behavior of "good" machines. Here we choose the opposite strategy and analyze the behavior of "bad" humans, i.e., humans perceived as machines. The Loebner Prize in Artificial Intelligence features humans and artificial agents trying to convince judges of their humanness via computer-mediated communication. Using this setting as a model, we investigated whether the linguistic behavior of human subjects perceived as non-human would enable us to identify some of the core parameters involved in the judgment of an agent's humanness. We analyzed descriptive and semantic aspects of dialogues in which subjects succeeded or failed to convince judges of their humanness. Using cognitive and emotional dimensions in a global behavioral characterization, we demonstrate important differences in the patterns of behavioral expressiveness of the judges depending on whether they perceived their interlocutor as human or machine. Furthermore, the indicators of interest displayed by the judges were predictive of the final judgment of humanness. Thus, we show that the judgment of an interlocutor's humanness during a social interaction depends not only on the interlocutor's behavior, but also on the judge. Our results thus demonstrate that the judgment of humanness is in the eye of the beholder.


This paper was published in Directory of Open Access Journals.
