Implementations in Machine Ethics: A Survey
Increasingly complex and autonomous systems require machine ethics to maximize the benefits and minimize the risks to society arising from the new technology. It is challenging to decide which type of ethical theory to employ and how to implement it effectively. This survey provides a threefold contribution. First, it introduces a trimorphic taxonomy to analyze machine ethics implementations with respect to their object (ethical theories), as well as their nontechnical and technical aspects. Second, an exhaustive selection and description of relevant works is presented. Third, applying the new taxonomy to the selected works, dominant research patterns and lessons for the field are identified, and future directions for research are suggested.
Machine ethics via logic programming
Machine ethics is an interdisciplinary field of inquiry that emerges from the need to
imbue autonomous agents with the capacity for moral decision-making. While some
approaches provide implementations in Logic Programming (LP) systems, they have not
exploited LP-based reasoning features that appear essential for moral reasoning.
This PhD thesis aims to investigate further the appropriateness of LP, notably a
combination of LP-based reasoning features, including techniques available in LP systems,
to machine ethics. Moral facets, as studied in moral philosophy and psychology, that
are amenable to computational modeling are identified and mapped to appropriate LP
concepts for representing and reasoning about them.
The main contributions of the thesis are twofold.
First, novel approaches are proposed for employing tabling in contextual abduction
and updating, individually and combined, plus an LP approach to counterfactual reasoning; the latter is implemented on top of the combined abduction and updating technique with tabling. All are important for modeling various aspects of the aforementioned moral facets.
Second, a variety of LP-based reasoning features are applied to model the identified
moral facets, through off-the-shelf moral examples from the morality literature.
These applications include: (1) Modeling moral permissibility according to the Doctrines of Double Effect (DDE) and Triple Effect (DTE), demonstrating deontological and utilitarian judgments via integrity constraints (in abduction) and preferences over abductive scenarios; (2) Modeling moral reasoning under uncertainty of actions, via abduction and probabilistic LP; (3) Modeling moral updating (which allows other, possibly overriding, moral rules to be adopted by an agent, on top of those it currently follows) via the integration of tabling in contextual abduction and updating; and (4) Modeling moral permissibility and its justification via counterfactuals, where counterfactuals are used for formulating DDE.
Fundação para a Ciência e a Tecnologia (FCT) grant SFRH/BD/72795/2010; CENTRIA and DI/FCT/UNL for supplementary funding.
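The DDE reading in item (1) can be illustrated outside LP with a minimal Python sketch. The scenario names, fields, and permissibility test below are hypothetical simplifications of the thesis's abduction-based treatment (integrity constraints rejecting impermissible abductive scenarios, preferences selecting among the rest), not its actual code.

```python
# Toy sketch of DDE-style permissibility over candidate "scenarios"
# (stand-ins for abductive explanations). All names are illustrative.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    saved: int            # lives saved by the action
    killed: int           # lives lost
    harm_is_means: bool   # True if the harm is the intended means to the good end

def dde_permissible(s: Scenario) -> bool:
    """Doctrine of Double Effect, crudely: harm may only be a foreseen
    side effect, never the intended means, and the good must outweigh it."""
    return (not s.harm_is_means) and s.saved > s.killed

def preferred(scenarios):
    """Among DDE-permissible scenarios, prefer the highest net utility
    (a simple stand-in for preferences over abductive scenarios)."""
    admissible = [s for s in scenarios if dde_permissible(s)]
    return max(admissible, key=lambda s: s.saved - s.killed, default=None)

scenarios = [
    Scenario("divert_trolley", saved=5, killed=1, harm_is_means=False),
    Scenario("push_bystander", saved=5, killed=1, harm_is_means=True),
    Scenario("do_nothing",     saved=0, killed=5, harm_is_means=False),
]
best = preferred(scenarios)  # the classic DDE verdict: diverting is permissible
```

Pushing the bystander is filtered out by the integrity-constraint analogue (harm as means), while doing nothing fails the proportionality check, leaving diversion as the preferred scenario.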
Principais dilemas éticos das novas tecnologias de informação: survey teórico exploratório (Main ethical dilemmas of the new information technologies: an exploratory theoretical survey)
Master's dissertation in Human Resource Management. Over the years we have witnessed historic technological development, whose advances open up new paths, enable the sharing of information around the world at the speed of light, and drive the formation and transformation of new concepts. The fast pace of technological and scientific progress offers opportunities for the future while, at the same time, confronting us with new questions and ethical dilemmas.
The goal of this study is to identify the ethical dilemmas that arise in the new information technologies. To that end, the definitions of ethics and of ethical dilemmas were analyzed according to the positions of several authors, and the dimensions of technique, artificial intelligence, and Big Data were then examined with respect to the ethical challenges they pose in modern societies.
This study is an exploratory theoretical survey, carried out through documentary bibliographic research and a search of articles in the Scopus database between 2016 and 2019.
Finally, the ethical dilemmas identified in technique, artificial intelligence, and Big Data are compared.
Every normal logic program has a 2-valued semantics: theory, extensions, applications, implementations
Work presented within the scope of the Doctoral Programme in Informatics, as a partial requirement for the degree of Doctor in Informatics.
After a very brief introduction to the general subject of Knowledge Representation and Reasoning with Logic Programs, we analyse the syntactic structure of a logic program and how it can influence the semantics. We outline the important properties of a 2-valued semantics for Normal Logic Programs, proceed to define the new Minimal Hypotheses semantics with those properties, and explore how it can be used to benefit some knowledge representation and reasoning mechanisms.
The main original contributions of this work, whose connections will be detailed in
the sequel, are:
• The Layering for generic graphs which we then apply to NLPs yielding the Rule
Layering and Atom Layering — a generalization of the stratification notion;
• The Full shifting transformation of Disjunctive Logic Programs into (highly non-stratified) NLPs;
• The Layer Support — a generalization of the classical notion of support;
• The Brave Relevance and Brave Cautious Monotony properties of a 2-valued semantics;
• The notions of Relevant Partial Knowledge Answer to a Query and Locally Consistent
Relevant Partial Knowledge Answer to a Query;
• The Layer-Decomposable Semantics family — the family of semantics that reflect
the above mentioned Layerings;
• The Approved Models argumentation approach to semantics;
• The Minimal Hypotheses 2-valued semantics for NLP — a member of the Layer-Decomposable Semantics family, rooted in a minimization of positive hypotheses assumption approach;
• The definition and implementation of the Answer Completion mechanism in XSB
Prolog — an essential component to ensure XSB’s WAM full compliance with the
Well-Founded Semantics;
• The definition of the Inspection Points mechanism for Abductive Logic Programs;
• An implementation of the Inspection Points workings within the Abdual system [21].
We recommend reading the chapters in this thesis in the sequence they appear. However,
if the reader is not interested in all the subjects, or is keener on some topics
than others, we provide alternative reading paths as shown below.
1-2-3-4-5-6-7-8-9-12 Definition of the Layer-Decomposable Semantics family and the Minimal Hypotheses semantics (1 and 2 are optional)
3-6-7-8-10-11-12 All main contributions – assumes the reader
is familiar with logic programming topics
3-4-5-10-11-12 Focus on abductive reasoning and applications.
FCT-MCTES (Fundação para a Ciência e Tecnologia do Ministério da Ciência, Tecnologia e Ensino Superior) grant no. SFRH/BD/28761/2006.
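The idea that every normal logic program gains a 2-valued model once minimal positive hypotheses are assumed can be illustrated by brute force. The sketch below is only a didactic approximation: it enumerates hypothesis sets by size and checks stability of the program extended with them, whereas the thesis's actual Minimal Hypotheses semantics is defined via the Layerings and differs in important details.

```python
# Brute-force illustration (hypothetical, not the thesis's definition):
# assume a smallest set of atoms as positive hypotheses so that the
# extended program has a 2-valued (stable-style) model.

from itertools import combinations

# Propositional NLP: rules are (head, positive_body, negative_body).
# Classic example with no stable model: p :- not p.
RULES = [("p", (), ("p",))]
ATOMS = {"p"}

def gl_reduct_lfp(rules, hyps, M):
    """Least fixpoint of the Gelfond-Lifschitz reduct of the program
    plus hypothesis facts, evaluated against the guess M."""
    facts = set(hyps)
    reduct = [(h, pos) for (h, pos, neg) in rules if not (set(neg) & M)]
    changed = True
    while changed:
        changed = False
        for h, pos in reduct:
            if h not in facts and set(pos) <= facts:
                facts.add(h)
                changed = True
    return facts

def stable_models_with_hyps(rules, atoms, hyps):
    """All interpretations M that are stable for program + hypotheses."""
    alist = sorted(atoms)
    models = []
    for bits in range(1 << len(alist)):
        M = {a for i, a in enumerate(alist) if bits >> i & 1}
        if gl_reduct_lfp(rules, hyps, M) == M:
            models.append(M)
    return models

def minimal_hypotheses_models(rules, atoms):
    """Try hypothesis sets of increasing size; return the models from
    the first size that yields any model at all."""
    alist = sorted(atoms)
    for size in range(len(alist) + 1):
        found = [(set(h), M)
                 for h in combinations(alist, size)
                 for M in stable_models_with_hyps(rules, atoms, set(h))]
        if found:
            return found
    return []

models = minimal_hypotheses_models(RULES, ATOMS)
# p :- not p has no stable model, but assuming hypothesis {p} yields model {p}
```

With no hypotheses the program `p :- not p` has no 2-valued stable model; assuming the single hypothesis `p` repairs this, which is the intuition the semantics generalizes.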
Can Machines Learn Morality? The Delphi Experiment
As AI systems become increasingly powerful and pervasive, there are growing
concerns about machines' morality or a lack thereof. Yet, teaching morality to
machines is a formidable task, as morality remains among the most intensely
debated questions in humanity, let alone for AI. Existing AI systems deployed
to millions of users, however, are already making decisions loaded with moral
implications, which poses a seemingly impossible challenge: teaching machines
moral sense, while humanity continues to grapple with it.
To explore this challenge, we introduce Delphi, an experimental framework
based on deep neural networks trained directly to reason about descriptive
ethical judgments, e.g., "helping a friend" is generally good, while "helping a
friend spread fake news" is not. Empirical results yield novel insights into the
promise and limits of machine ethics: Delphi demonstrates strong
generalization in the face of novel ethical situations, while
off-the-shelf neural network models exhibit markedly poor judgment, including
unjust biases, confirming the need for explicitly teaching machines moral
sense.
Yet, Delphi is not perfect, exhibiting susceptibility to pervasive biases and
inconsistencies. Despite that, we demonstrate positive use cases of imperfect
Delphi, including using it as a component model within other imperfect AI
systems. Importantly, we interpret the operationalization of Delphi in light of
prominent ethical theories, which leads us to important future research
questions.
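The task format Delphi is trained on, mapping free-text situations to descriptive judgments, can be shown with a toy stand-in. The lookup below is purely illustrative (a word-overlap match over a few hand-judged examples), not Delphi's neural model; the example phrases echo the abstract.

```python
# Toy illustration of the descriptive-judgment task format (not Delphi):
# map a situation string to a judgment via nearest judged example.

JUDGED = {
    "helping a friend": "good",
    "helping a friend spread fake news": "bad",
    "ignoring a phone call": "okay",
}

def judge(situation: str) -> str:
    """Return the judgment of the example sharing the most words with
    the input -- a crude stand-in for a trained model's generalization."""
    def overlap(a: str, b: str) -> int:
        return len(set(a.split()) & set(b.split()))
    best = max(JUDGED, key=lambda ex: overlap(ex, situation))
    return JUDGED[best]

verdict = judge("helping a friend spread fake news")
```

Even this crude matcher distinguishes the two example situations from the abstract; the hard part, which Delphi tackles with deep networks, is generalizing such judgments to genuinely novel situations.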
Adapting a Kidney Exchange Algorithm to Align with Human Values
The efficient and fair allocation of limited resources is a classical problem
in economics and computer science. In kidney exchanges, a central market maker
allocates living kidney donors to patients in need of an organ. Patients and
donors in kidney exchanges are prioritized using ad-hoc weights decided on by
committee and then fed into an allocation algorithm that determines who gets
what--and who does not. In this paper, we provide an end-to-end methodology for
estimating weights of individual participant profiles in a kidney exchange. We
first elicit from human subjects a list of patient attributes they consider
acceptable for the purpose of prioritizing patients (e.g., medical
characteristics, lifestyle choices, and so on). Then, we ask subjects
comparison queries between patient profiles and estimate weights in a
principled way from their responses. We show how to use these weights in kidney
exchange market clearing algorithms. We then evaluate the impact of the weights
in simulations and find that the precise numerical values of the weights we
computed matter little beyond the ordering of profiles they imply. However,
compared to not prioritizing patients at all, there is a significant
effect, with certain classes of patients being (de)prioritized based on the
human-elicited value judgments.
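The elicitation-to-prioritization pipeline can be sketched in a few lines. The profile names and the win-rate scoring below are hypothetical simplifications; the paper estimates weights in a more principled way and feeds them into full market-clearing algorithms rather than a simple sort.

```python
# Illustrative sketch (profiles and numbers are hypothetical): turn
# pairwise comparison answers into profile weights, then use the
# weights to order patients, since the paper finds the induced
# ordering, not the exact values, is what drives the allocation.

from collections import defaultdict

# "A was preferred over B" answers from comparison queries.
comparisons = [
    ("young_nonsmoker", "older_smoker"),
    ("young_nonsmoker", "older_nonsmoker"),
    ("older_nonsmoker", "older_smoker"),
    ("young_nonsmoker", "older_smoker"),
]

def estimate_weights(comparisons):
    """Score each profile by its win rate across comparison queries
    (a crude stand-in for the paper's principled weight estimation)."""
    wins, seen = defaultdict(int), defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        seen[winner] += 1
        seen[loser] += 1
    return {p: wins[p] / seen[p] for p in seen}

def prioritize(patients, weights):
    """Order patients by estimated weight, highest first."""
    return sorted(patients, key=lambda p: weights.get(p, 0.0), reverse=True)

weights = estimate_weights(comparisons)
order = prioritize(["older_smoker", "young_nonsmoker", "older_nonsmoker"], weights)
```

Any monotone rescaling of these weights would produce the same ordering and hence, per the paper's finding, essentially the same allocations.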
Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions
The rapid adoption of artificial intelligence (AI) necessitates careful
analysis of its ethical implications. In addressing ethics and fairness
implications, it is important to examine the whole range of ethically relevant
features rather than looking at individual agents alone. This can be
accomplished by shifting perspective to the systems in which agents are
embedded, which is encapsulated in the macro ethics of sociotechnical systems
(STS). Through the lens of macro ethics, the governance of systems, in which
participants try to promote outcomes and norms that reflect their values,
is key. However, multiple-user social dilemmas arise in an STS when
stakeholders of the STS have different value preferences or when norms in the
STS conflict. To develop equitable governance which meets the needs of
different stakeholders, and resolve these dilemmas in satisfactory ways with a
higher goal of fairness, we need to integrate a variety of normative ethical
principles in reasoning. Normative ethical principles are understood as
operationalizable rules inferred from philosophical theories. A taxonomy of
ethical principles is thus beneficial to enable practitioners to utilise them
in reasoning.
This work develops a taxonomy of normative ethical principles which can be
operationalized in the governance of STS. We identify an array of ethical
principles, with 25 nodes on the taxonomy tree. We describe the ways in which
each principle has previously been operationalized, and suggest how the
operationalization of principles may be applied to the macro ethics of STS. We
further explain potential difficulties that may arise with each principle. We
envision that this taxonomy will facilitate the development of methodologies to
incorporate ethical principles in reasoning capacities for governing equitable
STS.