    Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction

    The question of whether artificial intelligence (AI) can be considered conscious, and therefore should be evaluated through a moral lens, has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that users can consider AI conscious during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, how people treat AI appears to carry over into how they treat other people, because interacting with such AI activates schemas congruent with those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans, drawing on literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation can allow us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, and thereby activating congruent mind schemas during interaction, drives behaviors toward and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is worth considering as well, regardless of AI's inherent conscious or moral status.

    Using Theory of Mind to assess users' sense of agency in social chatbots

    Technological advancement in the field of chatbot research is booming. Despite this, it is still difficult to assess which social characteristics a chatbot needs for a user to interact with it as if it had a mind of its own. Review studies have highlighted two main causes: the small number of research papers dedicated to this question, and the lack of a consistent protocol across the papers that do address it. In the current paper, we suggest the use of a Theory of Mind task to measure the implicit social behaviour users exhibit towards a text-based chatbot. We present preliminary findings suggesting that participants adapt towards this basic chatbot significantly more than when they conduct the task alone (p < .017). The task is quick to administer and does not require a second chatbot for comparison, making it an efficient universal measure. With it, a database of scores for all existing chatbots could be built, allowing fast and efficient meta-analyses to discover which characteristics make a chatbot appear more 'human'.
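
    As an illustration of the comparison the abstract describes, the sketch below tests whether per-participant adaptation scores are higher with the chatbot than when performing the task alone. It assumes a within-subject design and a paired t-test; the variable names and score values are invented for illustration and are not the authors' data or protocol.

```python
# Hypothetical sketch of the adaptation comparison: scores with the chatbot
# vs. performing the Theory of Mind task alone. All values are invented.
from scipy import stats

# One adaptation score per participant and condition (e.g., the proportion
# of trials on which the participant adjusted their referential choices).
with_chatbot = [0.62, 0.71, 0.58, 0.66, 0.74, 0.69, 0.61, 0.68]
alone = [0.45, 0.52, 0.49, 0.40, 0.55, 0.47, 0.51, 0.44]

# Paired test, since each participant contributes a score in both conditions.
t_stat, p_value = stats.ttest_rel(with_chatbot, alone)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

    The abstract's threshold of p < .017 may reflect an alpha corrected for multiple comparisons (e.g., .05/3); if so, the resulting p-value would be judged against that corrected level rather than .05.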