Logic Programming and Machine Ethics
Transparency is a key requirement for ethical machines. Verified ethical behavior is not enough to establish justified trust in autonomous intelligent agents: it needs to be supported by the ability to explain decisions. Logic Programming (LP) has great potential for developing such ethical systems, as logic rules are easily comprehensible by humans. Furthermore, LP is able to model causality, which is crucial for ethical decision making.
Comment: In Proceedings ICLP 2020, arXiv:2009.09158. Invited paper for the ICLP2020 Panel on "Machine Ethics". arXiv admin note: text overlap with arXiv:1909.0825
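As an illustration of the explainability claim above, the following is a minimal sketch, not taken from the paper: a tiny forward-chaining rule engine in Python in which every derived conclusion can be traced back to the facts and rules that produced it, which is the sense in which rule-based decisions are easy to explain. The fact and rule names (e.g. patient_refuses_medication, notify_caregiver) are hypothetical.

```python
# Minimal sketch of a forward-chaining rule engine with an explanation trace.
# Fact and rule names are illustrative, not from the paper.

facts = {"patient_refuses_medication", "medication_is_critical"}

# Each rule: (conclusion, set of premises that must all hold)
rules = [
    ("risk_to_patient", {"patient_refuses_medication", "medication_is_critical"}),
    ("notify_caregiver", {"risk_to_patient"}),
]

def derive(facts, rules):
    """Forward-chain until no new conclusions; record which premises fired each rule."""
    explanation = {}  # conclusion -> premises that justified it
    changed = True
    while changed:
        changed = False
        for conclusion, premises in rules:
            if conclusion not in facts and premises <= facts:
                facts = facts | {conclusion}
                explanation[conclusion] = premises
                changed = True
    return facts, explanation

if __name__ == "__main__":
    derived, why = derive(facts, rules)
    for conclusion, premises in why.items():
        print(f"{conclusion} because {sorted(premises)}")
```

Running the sketch prints each conclusion together with the premises that produced it, e.g. "notify_caregiver because ['risk_to_patient']", which is the kind of human-readable justification the abstract refers to.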
A Logic-based Multi-agent System for Ethical Monitoring and Evaluation of Dialogues
In Proceedings ICLP 2021, arXiv:2109.0791
Can We Agree on What Robots Should be Allowed to Do? An Exercise in Rule Selection for Ethical Care Robots
Future Care Robots (CRs) should be able to balance a patient's often conflicting rights without ongoing supervision. Many of the trade-offs faced by such a robot will require a degree of moral judgment. Some progress has been made on methods to guarantee that robots comply with a predefined set of ethical rules. In contrast, methods for selecting these rules are lacking. Approaches that depart from existing philosophical frameworks often do not result in implementable robotic control rules. Machine learning approaches are sensitive to biases in the training data and suffer from opacity. Here, we propose an alternative, empirical, survey-based approach to rule selection. We suggest this approach has several advantages, including transparency and legitimacy. The major challenge for this approach, however, is that a workable solution, or social compromise, has to be found: it must be possible to obtain a consistent and agreed-upon set of rules to govern robotic behavior. In this article, we present an exercise in rule selection for a hypothetical CR to assess the feasibility of our approach. We assume the role of robot developers using a survey to evaluate which robot behavior potential users deem appropriate in a practically relevant setting, i.e., patient non-compliance. We evaluate whether it is possible to find such behaviors through a consensus. Assessing a set of potential robot behaviors, we surveyed the acceptability of robot actions that potentially violate a patient's autonomy or privacy. Our data support the empirical approach as a promising and cost-effective way to query ethical intuitions, allowing us to select behavior for the hypothetical CR.
Synchronous Online Philosophy Courses: An Experiment in Progress
There are two main ways to teach a course online: synchronously or asynchronously. In an asynchronous course, students can log on at their convenience and do the course work. In a synchronous course, all students are required to be online at specific times, to allow for a shared course environment. In this article, the author discusses the strengths and weaknesses of synchronous online learning for the teaching of undergraduate philosophy courses, along with the specific strategies and technologies he uses in teaching such courses. In particular, the author discusses how he uses videoconferencing to create a classroom-like environment in an online class.
Artificial morality: Making of the artificial moral agents
Abstract:
Artificial Morality is a new, emerging interdisciplinary field centred around the idea of creating artificial moral agents (AMAs) by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. The demand for moral machines comes from changes in everyday practice, where artificial systems are frequently used in a variety of situations, from home help and elderly care to banking and court algorithms. It is therefore important to create reliable and responsible machines based on the same ethical principles that society demands from people. New challenges arise in creating such agents. There are philosophical questions about a machine's potential to be an agent, or moral agent, in the first place. Then comes the problem of social acceptance of such machines, regardless of their theoretic agency status. In response to this problem, it has been suggested that otherwise cold moral machines need additional psychological (emotional and cognitive) competence. What makes the endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as top-down, bottom-up and hybrid aim to find the best way of developing fully moral agents, but each encounters its own problems along the way.