15 research outputs found
Ethical Risks Towards Artificial Intelligence in Digital-Art Creation
The use of modern technologies, including those related to artificial intelligence, leads to certain ethical problems. In connection with digital art, these problems are transformed into ethical risks associated with three ethical dimensions: consequentialist ethics, deontological ethics, and virtue ethics. The article describes the ethical risks of artificial intelligence and ways to minimize them.
Building Ethics into Artificial Intelligence
As artificial intelligence (AI) systems become increasingly ubiquitous, the
topic of AI governance for ethical decision-making by AI has captured public
imagination. Within the AI research community, this topic remains less familiar
to many researchers. In this paper, we complement existing surveys, which have
largely focused on the psychological, social, and legal discussions of the
topic, with an analysis of recent advances in technical solutions for AI
governance. By reviewing publications in leading AI conferences including AAAI,
AAMAS, ECAI and IJCAI, we propose a taxonomy which divides the field into four
areas: 1) exploring ethical dilemmas; 2) individual ethical decision
frameworks; 3) collective ethical decision frameworks; and 4) ethics in
human-AI interactions. We highlight the intuitions and key techniques used in
each approach, and discuss promising future research directions towards
successful integration of ethical AI systems into human societies.
Logic Programming and Machine Ethics
Transparency is a key requirement for ethical machines. Verified ethical
behavior is not enough to establish justified trust in autonomous intelligent
agents: it needs to be supported by the ability to explain decisions. Logic
Programming (LP) has great potential for developing such ethical
systems, as logic rules are easily comprehensible by humans.
Furthermore, LP is able to model causality, which is crucial for ethical
decision making.

Comment: In Proceedings ICLP 2020, arXiv:2009.09158. Invited paper for the ICLP 2020 Panel on "Machine Ethics". arXiv admin note: text overlap with arXiv:1909.0825
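The claim above, that logic rules yield decisions humans can inspect, can be illustrated with a minimal sketch. This is not code from the paper: it is a small backward-chaining rule engine in Python, with hypothetical Horn-clause-style ethical rules, showing how a derivation trace doubles as an explanation of the decision.

```python
# Hypothetical rules in Horn-clause style: (conclusion, [premises]).
# These example predicates are illustrative, not taken from the paper.
RULES = [
    ("permissible", ["consented", "no_harm"]),
    ("no_harm", ["risk_low"]),
]

# Known facts about the current situation.
FACTS = {"consented", "risk_low"}

def prove(goal, facts, rules, trace):
    """Backward-chain over the rules; record each step as an explanation."""
    if goal in facts:
        trace.append(f"fact: {goal}")
        return True
    for head, body in rules:
        if head == goal and all(prove(p, facts, rules, trace) for p in body):
            trace.append(f"rule: {goal} :- {', '.join(body)}")
            return True
    return False

trace = []
ok = prove("permissible", FACTS, RULES, trace)
print(ok)      # True
print(trace)   # the chain of facts and rules justifying the decision
```

Because every conclusion is reached only through an explicit rule or fact, the `trace` list is itself the explanation the abstract calls for; a full LP system (e.g. Prolog) additionally gives negation, causality modeling, and query semantics that this sketch omits.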
Landscape of Machine Implemented Ethics
This paper surveys the state-of-the-art in machine ethics, that is,
considerations of how to implement ethical behaviour in robots, unmanned
autonomous vehicles, or software systems. The emphasis is on covering the
breadth of ethical theories being considered by implementors, as well as the
implementation techniques being used. There is no consensus on which ethical
theory is best suited for any particular domain, nor is there any agreement on
which technique is best placed to implement a particular theory. Another
unresolved problem in these implementations of ethical theories is how to
objectively validate the implementations. The paper discusses the dilemmas
being used as validating 'whetstones' and whether any alternative validation
mechanism exists. Finally, it speculates that an intermediate step of creating
domain-specific ethics might be a possible stepping stone towards creating
machines that exhibit ethical behaviour.

Comment: 25 pages
Inteligência Artificial e sociedade: avanços e riscos
This article aims to provide information so that the general reader can better understand the main aspects of AI, how it differs from conventional computing, and how it can be embedded in the organizational processes of human society. It also seeks to highlight the major advances and potential risks that this technology, like any other, may bring about if the actors involved in its production, use, and regulation do not create an adequate space for discussing these issues.
Ethical AI: Proposal to bridge the gap in EU regulation on trustworthy AI and to support practical implementation of ethical perspectives
In 2020, GPT-3 defined itself as a thinking robot. The history of AI development is identified with machines becoming increasingly intelligent, but behind it lies the human factor, the soaring of the human mind. The question of machine ethics, however, is also a question of cultural ethics. Based on in-depth interviews conducted in seven industries, the author reveals that ethical considerations are not yet taken into account in the development of AI systems. To support practical implementation, the author identifies two shortcomings based on a comparative analysis of the EU's AI Act and the Ethics Guidelines for Trustworthy AI: (1) missing ethical sensitisation and training of AI system developers and supervisors; (2) the handling of harmful feedback loops and decision-making biases. The author uses the philosophical and ethical heritage of 21 philosophers as a compass to propose solutions for the identified gaps and deficiencies of organisational integration.
Implementations in Machine Ethics: A Survey
Increasingly complex and autonomous systems require machine ethics to
maximize the benefits and minimize the risks to society arising from the new
technology. It is challenging to decide which type of ethical theory to employ
and how to implement it effectively. This survey provides a threefold
contribution. First, it introduces a trimorphic taxonomy to analyze machine
ethics implementations with respect to their object (ethical theories), as well
as their nontechnical and technical aspects. Second, an exhaustive selection
and description of relevant works is presented. Third, applying the new
taxonomy to the selected works, dominant research patterns, and lessons for the
field are identified, and future directions for research are suggested.

Comment: published version, journal paper, ACM Computing Surveys, 38 pages, 7 tables, 4 figures