Motivations, Values and Emotions: 3 sides of the same coin
This position paper speaks to the interrelationships between the three concepts of motivations, values, and emotions. Motivations prime actions, values serve to choose between motivations, emotions provide a common currency for values, and emotions implement motivations. While conceptually distinct, the three are so pragmatically intertwined that they differ primarily as a result of our taking different points of view. To make these points more transparent, we briefly describe the three in the context of a cognitive architecture, the LIDA model, for software agents and robots that models human cognition, including a developmental period. We also compare the LIDA model with other models of cognition, some involving learning and emotions. Finally, we conclude that artificial emotions will prove most valuable as implementers of motivations in situations requiring learning and development.
Assistive robotics: research challenges and ethics education initiatives
Assistive robotics is a fast-growing field aimed at helping healthcare workers in hospitals, rehabilitation centers, and nursing homes, as well as empowering people with reduced mobility at home, so that they can autonomously carry out their daily living activities. The need to function in dynamic human-centered environments poses new research challenges: robotic assistants need to have friendly interfaces, be highly adaptable and customizable, be very compliant and intrinsically safe for people, and be able to handle deformable materials.
Besides technical challenges, assistive robotics also raises ethical challenges, which have led to the emergence of a new discipline: Roboethics. Several institutions are developing regulations and standards, and many ethics education initiatives include content on human-robot interaction and human dignity in assistive situations.
In this paper, the state of the art in assistive robotics is briefly reviewed, and educational materials from a university course on Ethics in Social Robotics and AI focusing on the assistive context are presented.
Equal Rights for Zombies?: Phenomenal Consciousness and Responsible Agency
Intuitively, moral responsibility requires conscious awareness of what one is doing, and why one is doing it, but what kind of awareness is at issue? Neil Levy argues that phenomenal consciousness—the qualitative feel of conscious sensations—is entirely unnecessary for moral responsibility. He claims that only access consciousness—the state in which information (e.g., from perception or memory) is available to an array of mental systems (e.g., such that an agent can deliberate and act upon that information)—is relevant to moral responsibility. I argue that numerous ethical, epistemic, and neuroscientific considerations entail that the capacity for phenomenal consciousness is necessary for moral responsibility. I focus in particular on considerations inspired by P. F. Strawson, who puts a range of qualitative moral emotions—the reactive attitudes—front and center in the analysis of moral responsibility.
Taking Turing by Surprise? Designing Digital Computers for morally-loaded contexts
There is much to learn from what Turing hastily dismissed as Lady Lovelace's objection. Digital computers can indeed surprise us. Just like a piece of art, algorithms can be designed in such a way as to lead us to question our understanding of the world, or our place within it. Some humans do lose the capacity to be surprised in that way. It might be fear, or it might be the comfort of ideological certainties. As lazy normative animals, we do need to be able to rely on authorities to simplify our reasoning: that is OK. Yet the growing sophistication of systems designed to free us from the constraints of normative engagement may take us past a point of no return. What if, through lack of normative exercise, our moral muscles became so atrophied as to leave us unable to question our social practices? This paper makes two distinct normative claims:
1. Decision-support systems should be designed with a view to regularly jolting us out of our moral torpor.
2. Without the depth of habit to somatically anchor model certainty, a computer's experience of something new is very different from that which in humans gives rise to non-trivial surprises.
This asymmetry has key repercussions when it comes to the shape of ethical agency in artificial moral agents. The worry is not just that they would be likely to leap morally ahead of us, unencumbered by habits. The main reason to doubt that the moral trajectories of humans vs. autonomous systems might remain compatible stems from the asymmetry in the mechanisms underlying moral change. Whereas in humans surprises will continue to play an important role in waking us to the need for moral change, cognitive processes will rule when it comes to machines. This asymmetry will translate into increasingly different moral outlooks, to the point of likely unintelligibility. The latter prospect is enough to doubt the desirability of autonomous moral agents.