
    Toward a general logicist methodology for engineering ethically correct robots

    Abstract: It is hard to deny that robots will become increasingly capable, and that humans will increasingly exploit this capability by deploying them in ethically sensitive environments; i.e., in environments (e.g., hospitals) where ethically incorrect behavior on the part of robots could have dire effects on humans. But then how will we ensure that the robots in question always behave in an ethically correct manner? How can we know ahead of time, via rationales expressed in clear English (and/or other so-called natural languages), that they will so behave? How can we know in advance that their behavior will be constrained specifically by the ethical codes selected by human overseers? In general, it seems clear that one reply worth considering, put in encapsulated form, is this one: "By insisting that our robots only perform actions that can be proved ethically permissible in a human-selected deontic logic." (A deontic logic is simply a logic that formalizes an ethical code.) This approach ought to be explored for a number of reasons. One is that ethicists themselves work by rendering ethical theories and dilemmas in declarative form, and by reasoning over this declarative information using informal and/or formal logic. Other reasons in favor of pursuing the logicist solution are presented in the paper itself. To illustrate the feasibility of our methodology, we describe it in general terms free of any commitment to particular systems, and show it solving a challenge regarding robot behavior in an intensive care unit.
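    By way of illustration only (the abstract notes that the paper itself stays free of commitment to particular systems), the encapsulated reply above can be sketched in standard deontic logic (SDL), where permission is the dual of obligation. In the minimal LaTeX sketch below, the symbols O, P, does, and Gamma are assumed notation for this sketch, not notation taken from the paper.

        % Minimal sketch, assuming SDL; O (obligation), P (permission),
        % does, and \Gamma are illustrative names, not the paper's own.
        \documentclass{article}
        \usepackage{amsmath,amssymb}
        \begin{document}
        % In SDL, permission is the dual of obligation:
        \[ \mathbf{P}\varphi \;\equiv\; \neg\mathbf{O}\neg\varphi \]
        % Gating condition: the robot performs action $a$ only if the
        % permissibility of doing $a$ is provable from the human-selected
        % ethical code $\Gamma$:
        \[ \Gamma \vdash \mathbf{P}\,\mathit{does}(a) \]
        \end{document}

    Nothing in this sketch depends on SDL in particular; any human-selected deontic logic with a proof theory could play the role of Gamma and the provability relation, which is consistent with the system-neutral methodology the abstract describes.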