Building Ethics into Artificial Intelligence
As artificial intelligence (AI) systems become increasingly ubiquitous, the
topic of AI governance for ethical decision-making by AI has captured public
imagination. Within the AI research community, this topic remains less familiar
to many researchers. In this paper, we complement existing surveys, which
have largely focused on the psychological, social and legal discussions of the
topic, with an analysis of recent advances in technical solutions for AI
governance. By reviewing publications in leading AI conferences including AAAI,
AAMAS, ECAI and IJCAI, we propose a taxonomy which divides the field into four
areas: 1) exploring ethical dilemmas; 2) individual ethical decision
frameworks; 3) collective ethical decision frameworks; and 4) ethics in
human-AI interactions. We highlight the intuitions and key techniques used in
each approach, and discuss promising future research directions towards
successful integration of ethical AI systems into human societies.
Post-Ethnic Humanistic Care in Chinese American Science Fictions
In recent years, Chinese American science fiction has integrated science, literature and humanistic care into an organic whole. Chinese American science fiction writers combine elements of science and technology with realistic social problems in their works to expose the dilemmas between technological development and human society; thus the issues of artificial intelligence ethics, technological alienation, and the rights and interests of marginalized groups have gradually become central concerns in Chinese American science fiction. In essence, the focus of Chinese American science fiction writers transcends ethnic barriers and shows a kind of post-ethnic universal humanistic care, which has positive and practical significance for building a new world order of harmony and fraternity.
A Case for Machine Ethics in Modeling Human-Level Intelligent Agents
This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on whether human values and norms would survive such an event. To help ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents: machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks for arriving at these agents have been put forward. However, there is no firm consensus as to which framework is likely to yield a positive result. With the body of work they have contributed to the study of moral agency, philosophers may contribute to the growing literature on artificial moral agency. In doing so, they could also consider how the said concept affects other important philosophical concepts.
Building Ethically Bounded AI
The more AI agents are deployed in scenarios with possibly unexpected
situations, the more they need to be flexible, adaptive, and creative in
achieving the goal we have given them. Thus, a certain level of freedom to
choose the best path to the goal is inherent in making AI robust and flexible
enough. At the same time, however, the pervasive deployment of AI in our life,
whether AI is autonomous or collaborating with humans, raises several ethical
challenges. AI agents should be aware of and follow appropriate ethical
principles, and should thus exhibit properties such as fairness and other
virtues. These ethical principles should define the boundaries of AI's freedom
and creativity.
However, it is still a challenge to understand how to specify and reason with
ethical boundaries in AI agents and how to combine them appropriately with
subjective preferences and goal specifications. Some initial attempts employ
either a data-driven example-based approach for both, or a symbolic rule-based
approach for both. We envision a modular approach where any AI technique can be
used for any of these essential ingredients in decision making or decision
support systems, paired with a contextual approach to define their combination
and relative weight. In a world where neither humans nor AI systems work in
isolation, but are tightly interconnected, e.g., the Internet of Things, we
also envision a compositional approach to building ethically bounded AI, where
the ethical properties of each component can be fruitfully exploited to derive
those of the overall system. In this paper we define and motivate the notion of
ethically-bounded AI, we describe two concrete examples, and we outline some
outstanding challenges. Comment: Published at AAAI Blue Sky Track, winner of the Blue Sky Award.
Philosophical Signposts for Artificial Moral Agent Frameworks
This article focuses on a particular issue within machine ethics—namely, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, in turn, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could account for the nature of Artificial Moral Agents may draw on certain philosophical ideas, such as the standard characterizations of agency, rational agency, moral agency, and artificial agency. At the very least, these philosophical concepts may be treated as signposts for further research on how to truly account for the nature of Artificial Moral Agents.
Ethics in AIED: Who cares?
The field of AIED raises far-reaching ethical questions with important implications for students and educators. However, most AIED research, development and deployment has taken place in what is essentially a moral vacuum (for example, what happens if a child is subjected to a biased set of algorithms that impact negatively and incorrectly on their school progress?). Around the world, virtually no research has been undertaken, no guidelines have been provided, no policies have been developed, and no regulations have been enacted to address the specific ethical issues raised by the use of Artificial Intelligence in Education.
This workshop, ETHICS in AIED: Who Cares?, is proposed as a first step towards addressing this critical problem for the field. It will be an opportunity for researchers who are exploring ethical issues critical for AIED to share their research, to identify the key ethical issues, and to map out how to address the multiple challenges, towards establishing a basis for meaningful ethical reflection necessary for innovation in the field of AIED.
The workshop will be in three parts. It will begin with ETHICS in AIED: What’s the Problem?, a round-table discussion introduced and led by Professor Beverly Woolf, one of the world’s most accomplished AIED researchers. This will be followed by Mapping the Landscape, in which up to six AIED conference participants will each give a five-minute ‘lightning’ presentation on ethics in AIED research. The workshop will conclude with Addressing the Challenges, a round-table discussion session in which we will agree on a core list of ethical questions and areas of necessary research for the field of AIED, and will set out to identify next steps.