25 research outputs found

    Ethical trust and social moral norms simulation: a bio-inspired agent-based modelling approach

    Understanding the micro-macro link is an urgent need in the study of social systems. The complex adaptive nature of social systems adds to the challenges of understanding social interactions and system feedback, and presents substantial scope and potential for extending the frontiers of computer-based research tools such as simulations and agent-based technologies. In this project, we seek to address key research questions concerning the interplay of ethical trust at the individual level and the development of collective social moral norms, as a representative sample of the broader micro-macro link of social systems. We outline our computational model of ethical trust (CMET), informed by research findings from trust research, machine ethics and neuroscience. Guided by the CMET architecture, we discuss key implementation ideas for the simulation of ethical trust and social moral norms.
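    The abstract above describes an agent-based route to the micro-macro link. As a rough illustration of that idea (not the CMET architecture itself, whose internals are not given here), the following Python sketch lets individual trust updates feed a slowly adapting collective norm; all class names, thresholds and update rules are illustrative assumptions.

    # Minimal agent-based sketch: individual "ethical trust" updates (micro level)
    # feeding into a collective norm (macro level). Illustrative assumptions only.
    import random

    class EthicalAgent:
        def __init__(self):
            self.trust = random.random()          # individual ethical trust in [0, 1]

        def interact(self, other, norm):
            # Cooperate if own trust plus the social norm's pull exceeds a random threshold.
            cooperated = (self.trust + norm) / 2 > random.random()
            # Reinforce or erode trust depending on the partner's apparent trustworthiness.
            if cooperated and other.trust > 0.5:
                self.trust = min(1.0, self.trust + 0.05)
            else:
                self.trust = max(0.0, self.trust - 0.05)
            return cooperated

    def run(num_agents=100, steps=200):
        agents = [EthicalAgent() for _ in range(num_agents)]
        norm = 0.5                                # collective moral norm (macro level)
        for _ in range(steps):
            a, b = random.sample(agents, 2)
            a.interact(b, norm)
            # The macro-level norm slowly tracks the mean micro-level trust.
            norm += 0.01 * (sum(ag.trust for ag in agents) / num_agents - norm)
        return norm

    if __name__ == "__main__":
        print("emergent norm strength:", round(run(), 3))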

    A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents

    Recently there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of Artificial General Intelligence, or AGI. Moral decision making is arguably one of the most challenging tasks for computational approaches to higher order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics or Friendly AI. In this paper we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global Workspace Theory (GWT), proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin et al. 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent’s selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. We will describe how the LIDA model helps integrate emotions into the human decision making process, and elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors
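    As a rough illustration of the Global Workspace style of processing described above (percepts competing for attention, a broadcast, and action selection shaped by affect), here is a heavily simplified Python sketch. It is not the LIDA codebase; all names, fields and weights are illustrative assumptions.

    # Simplified Global-Workspace-style decision cycle: the most salient percept is
    # broadcast, and action selection weighs utility against an affective/moral cost.
    from dataclasses import dataclass

    @dataclass
    class Percept:
        content: str
        activation: float      # bottom-up salience
        moral_valence: float   # affective "alarm" attached to ethically relevant content

    @dataclass
    class Action:
        name: str
        utility: float
        moral_cost: float

    def cognitive_cycle(percepts, actions, moral_weight=2.0):
        # Attention phase: the most activated percept wins the global workspace.
        broadcast = max(percepts, key=lambda p: p.activation + p.moral_valence)
        # Action selection: utility is discounted by moral cost, amplified when the
        # broadcast content is ethically charged (top-down influence of affect).
        sensitivity = 1.0 + moral_weight * broadcast.moral_valence
        return max(actions, key=lambda a: a.utility - sensitivity * a.moral_cost)

    percepts = [Percept("pedestrian ahead", 0.9, 0.8), Percept("late for meeting", 0.7, 0.0)]
    actions = [Action("brake", utility=0.2, moral_cost=0.0),
               Action("speed up", utility=0.9, moral_cost=0.7)]
    print(cognitive_cycle(percepts, actions).name)   # prints "brake"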

    A Step towards Medical Ethics Modeling


    From Games to Moral Agents: Towards a Model for Moral Actions

    In order to be successfully integrated into our society, artificial moral agents need to know not only how to act in a moral scenario, but also how to identify the scenario as morally relevant in the first place. This work looks at certain complex video games as simulations of artificial societies and studies the way in which morally-qualifiable actions are identified and assessed in them. This analysis is then used to distill a general formal model for moral actions, intended as a first step towards identifying morally-qualifiable actions in the field of artificial morality. After discussing which elements are represented in this model, and how they are enhanced with respect to those already existing in the analyzed games, this work points to some caveats that those games fail to address and which would need to be tackled properly by artificial moral systems.
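    One way to picture the kind of formal model the abstract describes is as a data structure that records who acts, on whom, with what intention and foreseeable consequences, plus a check for moral relevance before any assessment. The sketch below is a hypothetical encoding; its fields, weights and threshold are assumptions rather than the paper's distilled model.

    # Toy encoding of a morally-qualifiable action: agent, patient, act, intended and
    # foreseen harm, a moral-relevance test, and a crude assessment.
    from dataclasses import dataclass, field

    @dataclass
    class MoralAction:
        agent: str                      # who performs the action
        patient: str                    # who is affected by it
        act: str                        # what is done
        intended_harm: float            # 0..1, harm the agent means to cause
        foreseen_harm: float            # 0..1, harm the agent can anticipate
        context: dict = field(default_factory=dict)

        def is_morally_relevant(self, threshold=0.1):
            # A scenario counts as morally relevant when a moral patient exists and
            # some non-trivial harm is intended or foreseeable.
            return self.patient is not None and max(self.intended_harm, self.foreseen_harm) > threshold

        def assess(self):
            # Toy assessment: intended harm weighs more heavily than side effects.
            return -(2.0 * self.intended_harm + self.foreseen_harm)

    steal = MoralAction("npc_thief", "villager", "steal bread", intended_harm=0.3, foreseen_harm=0.4)
    if steal.is_morally_relevant():
        print("moral score:", steal.assess())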

    Machine ethics: stato dell’arte e osservazioni critiche

    This essay aims to introduce the field of study known as machine ethics, viz. the aspect of the ethics of artificial intelligence concerned with the moral behavior of AI systems. I discuss the present potential of this technology and put forward some ethical considerations about the benefits and perils deriving from the development of this field of research. Debate is urgently needed, given the increasing use of machines in ethically sensitive domains, to ensure that future technology becomes advantageous for humans and their well-being.

    Can Automated Smart-Homes increase Energy Efficiency and Grid Flexibility? - A Case Study of Stavanger, Norway investigating barriers and justice implications -

    Artificial intelligence (AI) advocates deem it essential for the energy transition. Such a complex and pervasive set of technologies that impacts everyday lives must be implemented cautiously. This thesis examines barriers to the diffusion of AI-based, automated smart homes at the household and industry scales. It examines an AI system that acts as an intermediary between households, electricity distribution companies and energy producers for domestic energy efficiency and grid flexibility. The thesis focuses on the ethical and justice implications of AI. It draws on a case study of Stavanger in Norway to investigate how AI can fairly enable energy efficiency and grid flexibility. The methods used include a small questionnaire survey, semi-structured interviews, and secondary research. Grounded theory is used to theorise barriers for households, qualitative content analysis identifies barriers for industry, and findings are also interpreted through an energy justice lens. The findings reveal multi-layered barriers and justice concerns related to the diffusion of automated smart homes. The main barriers for households include functionality, saturation, and data management. For industry, barriers relate to economic, technical, regulatory, and market aspects. Justice and ethical implications linked with AI in the energy context are identified in terms of the distributive, procedural and recognition streams of energy justice. The thesis argues that economic incentives, supportive policies, and an enabling market that involves the relevant actors are necessary to make complex AI systems feasible for smart grids. For consumers, technologies must target a wide range of lifestyles and preferences to achieve sufficient market saturation to make AI systems viable. Moreover, ethical AI requires a combination of regulations anchored in energy policies and the development and operationalisation of internal guidelines. The thesis concludes that while AI can aid transitions to low-carbon societies, failure to account for the humans involved in and affected by its roll-out risks doing more harm than good.

    Discrimination-aware data analysis for criminal intelligence

    The growing use of Machine Learning (ML) algorithms in many application domains such as healthcare, business, education and criminal justice has brought great promise as well as challenges. ML promises to analyse large amounts of data quickly and effectively, identifying patterns and providing insight into the data that would otherwise be impossible for a human to extract at this scale. However, the use of ML algorithms in sensitive domains such as Criminal Intelligence Analysis (CIA) systems demands extremely careful deployment. Data has an important impact on the ML process. To understand the ethical and privacy issues related to data and ML, the VALCRI (Visual Analytics for sense-making in the CRiminal Intelligence analysis) system was used. VALCRI is a CIA system that integrates machine-learning techniques to improve the effectiveness of crime data analysis. At the most basic level, our research found that a lack of harmonised interpretation of different privacy principles, trade-offs between competing ethical principles, and algorithmic opacity are among the main ethical and privacy concerns. This research aims to alleviate these issues by investigating awareness of ethical and privacy issues related to data and ML. Document analysis and interviews were conducted to examine the way different privacy principles are understood in selected EU countries. The study takes a qualitative and quantitative research approach and is guided by various methods of analysis including interviews, observation, case study, experiment and legal document analysis. The findings of this research indicate that a lack of ethical awareness around data has an impact on ML outcomes. Also, the opaque nature of ML systems makes them difficult to scrutinise and, as a consequence, leads to a lack of clarity about how certain decisions were made. This thesis provides some novel solutions that can be used to tackle these issues.
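    A concrete, if generic, example of the discrimination-aware checks such a system calls for is a demographic parity comparison: measuring whether positive decisions are distributed evenly across a protected attribute. The Python sketch below is a standard fairness-metric illustration, not VALCRI's actual analysis pipeline; the record fields and keys are assumptions.

    # Compute the demographic parity gap: the largest difference in positive-decision
    # rates between groups defined by a protected attribute.
    from collections import defaultdict

    def demographic_parity_gap(records, protected_key="group", decision_key="flagged"):
        """Return (largest rate difference between groups, per-group rates)."""
        totals, positives = defaultdict(int), defaultdict(int)
        for r in records:
            g = r[protected_key]
            totals[g] += 1
            positives[g] += 1 if r[decision_key] else 0
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    records = [
        {"group": "A", "flagged": True}, {"group": "A", "flagged": False},
        {"group": "B", "flagged": True}, {"group": "B", "flagged": True},
    ]
    gap, rates = demographic_parity_gap(records)
    print(rates, "gap:", gap)   # a large gap warrants scrutiny of the model and data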

    Reinforcement Learning for Value Alignment

    As autonomous agents become increasingly sophisticated and we allow them to perform more complex tasks, it is of utmost importance to guarantee that they will act in alignment with human values. This problem has received, in the AI literature, the name of the value alignment problem. Current approaches apply reinforcement learning to align agents with values due to its recent successes at solving complex sequential decision-making problems. However, they follow an agent-centric approach by expecting that the agent applies the reinforcement learning algorithm correctly to learn an ethical behaviour, without formal guarantees that the learnt behaviour will be ethical. This thesis proposes a novel environment-designer approach for solving the value alignment problem with theoretical guarantees. Our proposed environment-designer approach advances the state of the art with a process for designing ethical environments wherein it is in the agent's best interest to learn ethical behaviours. Our process specifies the ethical knowledge of a moral value in terms that can be used in a reinforcement learning context. Next, our process embeds this knowledge in the agent's learning environment to design an ethical learning environment. The resulting ethical environment incentivises the agent to learn an ethical behaviour while pursuing its own objective. We further contribute to the state of the art by providing a novel algorithm that, following our ethical environment design process, is formally guaranteed to create ethical environments. In other words, this algorithm guarantees that it is in the agent's best interest to learn value-aligned behaviours. We illustrate our algorithm by applying it in a case study environment wherein the agent is expected to learn to behave in alignment with the moral value of respect. In it, a conversational agent is in charge of conducting surveys, and we expect it to ask the users questions respectfully while trying to get as much information as possible. In the designed ethical environment, results confirm our theoretical results: the agent learns an ethical behaviour while pursuing its individual objective.
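    The environment-designer idea above can be pictured as reward shaping: ethical knowledge is embedded in the environment so that value-aligned behaviour is also reward-maximising. The sketch below is a minimal illustration under that reading; the wrapper interface, the ethical_penalty function, the weight and the SurveyEnv name in the usage comment are all hypothetical, and the sketch carries none of the thesis's formal guarantees.

    # Wrap an environment so that unethical actions incur a shaping penalty, making
    # the value-aligned behaviour the one a reward-maximising learner converges to.
    class EthicalEnvWrapper:
        def __init__(self, env, ethical_penalty, weight=10.0):
            self.env = env
            self.ethical_penalty = ethical_penalty   # maps (state, action) -> penalty >= 0
            self.weight = weight                     # chosen large enough that violations never pay off

        def reset(self):
            return self.env.reset()

        def step(self, state, action):
            next_state, reward, done = self.env.step(state, action)
            # Individual objective minus a shaping term for unethical actions.
            shaped = reward - self.weight * self.ethical_penalty(state, action)
            return next_state, shaped, done

    # Hypothetical usage with a survey-agent environment: asking a question rudely
    # might yield slightly more raw reward but incurs a penalty, so the learner is
    # steered toward the respectful behaviour.
    # env = EthicalEnvWrapper(SurveyEnv(), ethical_penalty=lambda s, a: 1.0 if a == "ask_rudely" else 0.0)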

    Roboterethik

    Introduction: robot ethics as a domain-specific ethics. Robot ethics is repeatedly confronted with two objections that call its status as a domain-specific ethics into question. First, that it has no specific subject matter, since ethics does not concern itself with the inanimate. Second, that even if artificial systems rightly came into the focus of ethical reflection, considering them would raise no new questions, only ones long since formulated in other ethical arenas..