8 research outputs found

    Trusting Intelligent Machines: Deepening Trust Within Socio-Technical Systems

    Intelligent machines have reached capabilities that go beyond what a human being can fully comprehend without a sufficiently detailed understanding of the underlying mechanisms. The choice of moves in the game of Go generated by DeepMind's AlphaGo Zero [1] is an impressive example of an artificial intelligence system producing results that even a human expert in the game can hardly retrace [2]. But this is, quite literally, a toy example. In reality, intelligent algorithms are encroaching more and more on our everyday lives, be it through algorithms that recommend products for us to buy or whole systems such as driverless vehicles. We are delegating ever more aspects of our daily routines to machines, and this trend looks set to continue in the future. Indeed, continued economic growth is set to depend on it. The nature of human-computer interaction in the world that the digital transformation is creating will require (mutual) trust between humans and intelligent, or seemingly intelligent, machines. But what does it mean to trust an intelligent machine? How can trust be established between human societies and intelligent machines?

    Artificial social constructivism for long term human computer interaction

    No full text
    Connected devices, like smart speakers or autonomous vehicles, are becoming more common within society. As interactions with these devices increase, the likelihood of encountering errors will also increase. Errors are inevitable in any system but can be unexpected by human users and can therefore lead to trust breakdowns. This thesis proposes a new socio-technical system theory based on social constructivism which would encourage human users to continue using smart devices after unexpected errors occur and to recover trust. The theory, called Artificial Social Constructivism (ASC), hypothesises that mutual education of the human and computer agents creates a relationship between them that allows norms and values to be created and maintained within the system, even in the face of errors. Nine online experiments were conducted with a total of 4771 unique participants to investigate the computational viability of ASC. The experiments were framed as coordination games between a human and an artificial intelligence (AI) player. Participants undertook training and then played a game with the AI player. During the game they encountered an unexpected error. While the type of training did not reduce negative feedback, undertaking any form of training changed participants' attitudes and responses. Participants who undertook training blamed themselves more than the AI player for the unexpected error, and increasingly blamed themselves as the task difficulty increased. Participants who undertook training were additionally 1.5 times more likely than those who did not to change their responses to align with the AI player's when considering norms, and 2.5 times more likely when it came to values. The experimental results supported the concept that an element of education caused participants to blame the AI player less for errors. ASC could therefore be implemented as a computational model. However, it may be necessary to address users' preconceived expectations of AI beforehand to prevent unethical applications of the theory. (A minimal illustration of how such relative-likelihood figures can be computed is sketched below.)
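
    For illustration only: the abstract reports that trained participants were about 1.5 times more likely to align their responses with the AI player on norms (2.5 times on values). The sketch below shows one way such a relative likelihood (a risk ratio between trained and untrained groups) could be computed from raw counts. The group sizes and counts are invented for demonstration and are not taken from the thesis, nor is this necessarily the analysis the thesis used.

    ```python
    # Hypothetical sketch: deriving a "1.5 times more likely" figure as a
    # risk ratio between a trained and an untrained group. The counts below
    # are invented for demonstration; they are NOT the thesis data.

    def risk_ratio(changed_a, total_a, changed_b, total_b):
        """Ratio of the proportion changing responses in group A vs group B."""
        return (changed_a / total_a) / (changed_b / total_b)

    # Invented counts: participants who changed their response to align
    # with the AI player's, out of each group's total.
    trained_changed, trained_total = 180, 400      # 45% changed
    untrained_changed, untrained_total = 120, 400  # 30% changed

    rr_norms = risk_ratio(trained_changed, trained_total,
                          untrained_changed, untrained_total)
    print(f"Trained participants were {rr_norms:.1f}x as likely to align "
          "with the AI player (hypothetical data).")  # -> 1.5x
    ```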

    Ethics and System Design in a New Era of Human-Computer Interaction [Guest Editorial]

    No full text

    Odorveillance and the Ethics of Robotic Olfaction [Opinion]

    No full text