Abstract. Today’s robots are used mainly as advanced tools and have no capability of taking moral responsibility. However, autonomous, learning intelligent systems are developing rapidly, resulting in a new division of tasks between humans and robots. The biggest worry about autonomous intelligent systems seems to be the fear of human loss of control and of robots running amok. We argue that for all practical purposes, moral responsibility in autonomous intelligent systems is best handled as a regulatory mechanism whose aim is to assure desirable behavior. “Responsibility” can thus be ascribed to an intelligent artifact in much the same way as (artificial) “intelligence”: we simply expect a (morally) responsible artificial intelligent agent to behave in a way that is traditionally thought to require human (moral) responsibility. Technological artifacts are always part of a broader socio-technological system with distributed responsibilities. The development of autonomous, learning, morally responsible intelligent systems must consequently rely on several responsibility feedback loops: the awareness of and preparedness for handling risks on the part of designers, producers, implementers, users, and maintenance personnel, as well as the support of society at large, which provides feedback on the consequences of the use of robots back to designers and producers. This complex system of shared responsibilities should secure the safe functioning of distributed responsibility systems, including autonomous, (morally) responsible intelligent robots (softbots).