Bowdoin Orient v.135, no.1-25 (2005-2006)
Artificial Intelligence - Intelligent Art? Human-Machine Interaction and Creative Practice
As algorithmic data processing increasingly pervades everyday life, it is also making its way into the worlds of art, literature and music. In doing so, it shifts notions of creativity and evokes non-anthropocentric perspectives on artistic practice. This volume brings together contributions from the fields of cultural studies, literary studies, musicology and sound studies as well as media studies, sociology of technology, and beyond, presenting a truly interdisciplinary, state-of-the-art picture of the transformation of creative practice brought about by various forms of AI.
Determining the effect of human cognitive biases in social robots for human-robot interactions
The research presented in this thesis describes a model for aiding human-robot interactions based on
the principle of a robot showing behaviours shaped by 'human' cognitive biases. The aim of this work
is to study how cognitive biases affect human-robot interactions in the long term.
Currently, most human-robot interactions follow a set of well-ordered, structured rules, which repeat
regardless of the person or social situation. This tends to produce unrealistic interactions, which can
make it difficult for humans to relate 'naturally' to the social robot after a number of encounters. The
core problem with these interactions is that the social robot displays a very structured set of
behaviours and, as such, acts unnaturally and mechanically in social terms. On the other hand, fallible
behaviours (e.g. forgetfulness, inability to understand others' emotions, bragging, blaming others) are
common in humans and can be seen in everyday social interactions. Some of these fallible behaviours
are caused by various cognitive biases. Researchers have studied and developed various humanlike
skills (e.g. personality, emotion expression, traits) in social robots to make their behaviours more
humanlike; as a result, social robots can perform various humanlike actions, such as walking, talking,
gazing or expressing emotions. But common human behaviours such as forgetfulness, inability to
understand others' emotions, bragging or blaming are absent from current social robots; such
behaviours, which exist in and influence people, have not yet been explored in social robots.
The study presented in this thesis implemented five cognitive biases in three different robots across
four separate experiments to understand the influence of such cognitive biases on human–robot
interactions. The results show that participants initially preferred interacting with the robot exhibiting
cognitively biased behaviours over the robot without such behaviours. In my first two experiments, the
robots (ERWIN and MyKeepon) each interacted with participants using a single cognitive bias
(misattribution and the empathy gap, respectively), and participants enjoyed the interactions shaped by
these bias effects: for example, forgetfulness, source confusion, and exaggerated displays of happiness
or sadness. In my later experiments, participants interacted with the robot (MARC) three times, with a
time interval between interactions; the results show that liking of the interactions declined less when
the robot showed biased behaviours than when it did not.
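One of the biased behaviours above, forgetfulness, can be sketched as a simple probabilistic memory model. This is purely an illustrative assumption, not the implementation used in the thesis; the names `BiasedMemory` and `decay` are hypothetical:

```python
import random

# Hypothetical sketch of a forgetfulness bias: stored facts become harder
# to recall as they age, so the robot occasionally "forgets" in conversation.
class BiasedMemory:
    def __init__(self, decay=0.1, seed=None):
        self.decay = decay            # per-interaction growth of forgetting probability (assumed)
        self.facts = {}               # key -> (value, age in interactions)
        self.rng = random.Random(seed)

    def remember(self, key, value):
        # A newly learned fact has age 0 and is always recallable.
        self.facts[key] = (value, 0)

    def tick(self):
        # Advance one interaction: every stored fact ages by one step.
        self.facts = {k: (v, age + 1) for k, (v, age) in self.facts.items()}

    def recall(self, key):
        if key not in self.facts:
            return None
        value, age = self.facts[key]
        p_forget = min(1.0, self.decay * age)
        if self.rng.random() < p_forget:
            return None               # biased outcome: the robot has "forgotten"
        return value
```

Under this sketch, a fact learned in the current interaction is always recalled, while older facts fail with growing probability, which is one plausible way to produce the fallible, humanlike recall described above.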
In this thesis, I describe the investigation of these traits of forgetfulness, the inability to understand
others' emotions, and bragging and blaming behaviours, which are influenced by cognitive biases, and
I also analyse people's responses to robots displaying such biased behaviours in human–robot
interactions.