How AI Systems Challenge the Conditions of Moral Agency?
The article explores the effects of increasing automation on our conceptions of human agency. We conceptualize the central features of human agency as ableness, intentionality, and rationality, and define responsibility as a central feature of moral agency. We discuss suggestions in favor of regarding AI systems as moral agents on the basis of their functions but join those who reject this view. We consider the possibility of assigning moral agency to automated AI systems in settings of machine-human cooperation but conclude that AI systems are not genuine participants in joint action and cannot be held morally responsible. Philosophical issues notwithstanding, the functions of AI systems change human agency, as they affect how we set and pursue goals by influencing our conceptions of what is attainable. Recommendation algorithms on news sites, social media platforms, and in search engines shape our possibilities for receiving accurate and comprehensive information, thereby influencing our decision making. Sophisticated AI systems are replacing human workers even in such demanding fields as medical surgery, language translation, visual arts, and musical composition. Being second to a machine in a growing number of fields of expertise will affect how human beings regard their own abilities. We need a deeper understanding of how technological progress takes place and how it is intertwined with economic and political realities. Moral responsibility remains a human characteristic. It is our duty to develop AI to serve morally good ends and purposes. Protecting and strengthening the conditions of human agency in any AI environment is part of this task.