6 research outputs found

    Perceived autonomy of robots: effects of appearance and context

    No full text
    Due to advances in technology, the world around us contains an increasing number of robots, virtual agents, and other intelligent systems. These systems all have a certain degree of autonomy. For the people who interact with an intelligent system, it is important to obtain a good understanding of its degree of autonomy: which tasks can the system perform autonomously, and to what extent? In this paper we therefore present a study on how a system’s characteristics affect people’s perception of its autonomy. This was investigated by asking firefighters to rate the autonomy of a number of search-and-rescue robots of different shapes in different situations. We identify the following seven aspects of perceived autonomy: time interval of interaction, obedience, informativeness, task complexity, task implication, physical appearance, and physical distance to the human operator. The study showed that increased disobedience, task complexity, and physical distance of a robot can increase its perceived autonomy.

    Responsible innovation, development, and deployment of automated technology

    No full text
    A heated international debate is taking place about the innovation, development, and deployment of automated military technology, such as remotely controlled aerial vehicles. Recently, the scope of the debate has been extended to moral concerns about (future) automated technology that may be able to make decisions about the application of kinetic force (e.g., firing a bullet) without human intervention. In this abstract, we will argue that it is hardly possible to discuss the dangers of automated technology in general, because automated technology is specialized in nature, capable of performing specific tasks within an often narrow context. Furthermore, we will argue that automated technology should be designed and developed in a way that supports responsible use, from an early design stage all the way to its correct deployment.

    Designing for responsibility: five desiderata of military robots

    No full text
    Recently, the use of military robots – which may be drones, unmanned aerial vehicles (UAVs), remotely piloted systems, autonomous weapon systems, or ‘killer robots’ – has been debated in the media, politics, and academia. Military robots are increasingly automated, which means that they can perform tasks with decreased human involvement. On the one hand, this may lead to faster and better outcomes; on the other hand, it raises the concern ‘Who is responsible for the (failed) actions of military robots?’. The issue becomes particularly pressing given the prospect of a future in which armies may deploy military robots that apply lethal force without human interference. In this abstract, we approach the responsibility question from an engineering perspective and suggest a solution that lies in the design of military robots. First, we would like to make a distinction between legal and moral responsibility. Legally, the person or organization deploying military robots, i.e., in this case the army, is responsible for their behavior, rather than the designer, programmer, manufacturer, or the robot itself. The army’s legal responsibility, however, does not imply that it is in a position to take moral responsibility. In accordance with the Value Sensitive Design approach, we argue that the way technology is designed affects moral responsibility. For instance, most people will agree that in principle the person firing a gun, and not the manufacturer or the gun itself, should be held responsible for the consequences of a shot. In this case, the gun’s design supports moral responsibility. Acting responsibly is harder, however, when you rely on a decision support system that is incomprehensible, or when you have to use a weapon that may fire accidentally. In these examples, the system’s design hinders moral responsibility. A gap between moral and legal responsibility is undesirable. We therefore argue that military robots should be designed such that the army is in a position to take moral responsibility for their behavior. In other words, we have to design for responsibility.

    Hybrid collective intelligence in a human–AI society

    Get PDF
    Within current debates about the future impact of Artificial Intelligence (AI) on human society, roughly three different perspectives can be recognised: (1) the technology-centric perspective, claiming that AI will soon outperform humankind in all areas, and that the primary threat for humankind is superintelligence; (2) the human-centric perspective, claiming that humans will always remain superior to AI when it comes to social and societal aspects, and that the main threat of AI is that humankind’s social nature is overlooked in technological designs; and (3) the collective intelligence-centric perspective, claiming that true intelligence lies in the collective of intelligent agents, both human and artificial, and that the main threat for humankind is that technological designs create problems at the collective, systemic level that are hard to oversee and control. The current paper offers the following contributions: (a) a clear description of each of the three perspectives, along with their history and background; (b) an analysis and interpretation of current applications of AI in human society according to each of the three perspectives, thereby disentangling miscommunication in the debate concerning the threats of AI; and (c) a new integrated and comprehensive research design framework that addresses all aspects of the above three perspectives and includes principles that support developers in reflecting on and anticipating potential effects of AI in society.
