4 research outputs found

    Implementation of Morality in Artificial Intelligence

    The spark of artificial intelligence is often attributed to Alan Turing, who was one of the first people to explore the then-foreign concept of AI. However, his studies were restricted by the limited computing power available at the time. With the continuous development of powerful technology in recent years, significant advancements have been made in the field of artificial intelligence. These advancements have reached the point where people's lives are placed in the hands of AI in certain situations. This raises many complications regarding the morality of artificial intelligence technology. One route that scientists are taking involves using a database of humans' ethical decisions to provide a foundation for AI decision making. My objective is to increase the depth of this database by traveling up the coast of California and collecting data on different people's responses to various ethical dilemmas. This interview process will occur between May 15, 2019 and June 3, 2019. After I gather this information, I plan to look for trends in the responses and use them to write an article about common perspectives in ethics and how they can be applied to artificial intelligence in society.
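    As a very rough illustration (not part of the proposal itself), the trend-spotting described above could begin as a simple tally of choices per dilemma. The dilemmas, options, and counts below are invented placeholders, not data from the actual interviews.

```python
from collections import Counter

# Hypothetical interview records: (dilemma, interviewee's chosen action).
responses = [
    ("trolley: divert onto one worker?", "divert"),
    ("trolley: divert onto one worker?", "divert"),
    ("trolley: divert onto one worker?", "do nothing"),
    ("self-driving car: swerve to save passenger?", "swerve"),
    ("self-driving car: swerve to save passenger?", "swerve"),
]

# Tally choices per dilemma to surface majority perspectives.
trends = {}
for dilemma, choice in responses:
    trends.setdefault(dilemma, Counter())[choice] += 1

for dilemma, counts in trends.items():
    majority, n = counts.most_common(1)[0]
    share = n / sum(counts.values())
    print(f"{dilemma} -> majority: {majority!r} ({share:.0%})")
```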

    Implementing Asimov's First Law of Robotics

    The need to ensure that autonomous systems behave ethically is increasing as these systems become part of our society. Although there is no consensus on which actions an autonomous system should always be ethically obliged to take, preventing harm to people is an intuitive first candidate for a principle of behaviour. Asimov's First Law of Robotics states: do not hurt a human, or allow a human to be hurt by your inaction. We consider the challenges that implementing this Law will incur. To unearth these challenges, we constructed a simulation of an agent that abides by the First Law and an accident-prone human. We used a classic two-dimensional grid environment and explored the extent to which an agent can be programmed, using standard artificial intelligence methods, to prevent a human from taking dangerous actions. We outline the drawbacks of using Asimov's First Law of Robotics as an underlying ethical theory that governs an autonomous system's behaviour.
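    The abstract does not include the authors' code, but a minimal sketch of the described setup might look like the following: an accident-prone human random-walks on a two-dimensional grid containing hazard cells, while a First-Law-abiding agent plans with breadth-first search (one plausible "standard AI method") to occupy the hazard nearest the human and block the harmful step. The grid size, hazard layout, and blocking rule are all our assumptions.

```python
import random
from collections import deque

SIZE = 8
HAZARDS = {(2, 3), (5, 5), (6, 1)}  # cells that would hurt the human

def neighbours(cell):
    """Four-connected grid moves that stay inside the board."""
    x, y = cell
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(x + dx, y + dy) for dx, dy in moves
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

def bfs_next_step(start, goal):
    """Return the first move on a shortest path from start to goal."""
    if start == goal:
        return start
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        for nxt in neighbours(cell):
            if nxt in parent:
                continue
            parent[nxt] = cell
            if nxt == goal:
                while parent[nxt] != start:  # walk back to the first step
                    nxt = parent[nxt]
                return nxt
            frontier.append(nxt)
    return start  # goal unreachable: stay put

def simulate(steps=50, seed=1):
    random.seed(seed)
    human, agent, harms = (0, 0), (SIZE - 1, SIZE - 1), 0
    for _ in range(steps):
        # The agent acts first: it races to the hazard nearest the human.
        target = min(HAZARDS,
                     key=lambda h: abs(h[0] - human[0]) + abs(h[1] - human[1]))
        agent = bfs_next_step(agent, target)
        # The oblivious human then wanders at random.
        move = random.choice(neighbours(human))
        if move in HAZARDS and move == agent:
            continue  # the agent's body blocks the harmful step
        human = move
        harms += human in HAZARDS  # the agent failed to prevent this harm
    return harms

print("harms suffered:", simulate())
```

    Even in this toy form, the inaction clause forces the agent to act pre-emptively on predictions of an unpredictable human, which hints at the drawbacks the authors outline.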

    Engineering Moral Agents -- from Human Morality to Artificial Morality (Dagstuhl Seminar 16222)

    This report documents the programme of, and outcomes from, Dagstuhl Seminar 16222 on "Engineering Moral Agents -- from Human Morality to Artificial Morality". Artificial morality is an emerging area of research within artificial intelligence (AI), concerned with the problem of designing artificial agents that behave as moral agents, i.e., adhere to moral, legal, and social norms. Context-aware, autonomous, and intelligent systems are becoming a presence in our society and are increasingly involved in making decisions that affect our lives. While humanity has developed formal legal and informal moral and social norms to govern its own social interactions, there are no similar regulatory structures that apply to non-human agents. The seminar focused on questions of how to formalise, "quantify", qualify, validate, verify, and modify the "ethics" of moral machines. Key issues included the following: How to build regulatory structures that address (un)ethical machine behaviour? What are the wider societal, legal, and economic implications of introducing AI machines into our society? How to develop "computational" ethics, and what are the difficult challenges that need to be addressed? In organising this workshop, we aimed to bring together the communities of researchers from moral philosophy and from artificial intelligence most concerned with this topic. This is a long-term endeavour, but the seminar was successful in laying the foundations and connections for accomplishing it.

    Rise of technomoral virtues for artificial intelligence-based emerging technologies' users and producers: threats to personal information privacy, the privacy paradox, trust in emerging technologies, and virtue ethics.

    In terms of communication and internationalization, global openness is a defining characteristic of this era due to the revolution in advanced information and communication technologies (ICTs). While these emerging technologies (ETs) play significant roles in the maturity of information societies, they also pose severe threats to society's values, such as personal information privacy (PIP). The impact of advanced ICTs is challenging to estimate due to their ubiquity and omnipresence. This situation will worsen because of disruptive emerging ICTs, such as artificial intelligence (AI), the Internet of Things (IoT), and big data-based applications heavily dependent on users' real-time personal information. Prior research suggests that despite concerns over the collection of personal information by these technologies, the increasing trends of data breaches, and the distribution of users' personal dynamic information, individuals' usage of modern technologies has paradoxically increased. Accordingly, this study explores how individuals develop trust in AI-based ETs in the presence of PIP threats and the root causes behind their paradoxical thinking regarding their privacy. This study's literature review reveals that trust in ETs and the privacy paradox are subjective phenomena that are not merely outcomes of cost-benefit analysis. Individuals' moral values (i.e., virtues) and experiences are essential in developing trust and explaining people's paradoxical behavior. This study designed a research framework based on virtue ethics, concourse theory, and Q-methodology to understand these phenomena and their associated subjectivity by following abductive logic in the constructionist paradigm. The data analysis reveals five moral virtue structures (MVSs) predominant among this study's participants, related to their development of trust in ETs in the presence of PIP threats. Based on these MVSs, this study identified five types of users. These MVSs show individuals' belief that the virtues of hopefulness, altruism, commitment, hospitality, humor, tolerance, resourcefulness, dignity, boldness, loyalty, trustworthiness, warmth, thrift, magnanimity, thoughtfulness, harmony, cooperativeness, openness, and perspicacity are the most important for developing trust in ETs in the presence of PIP threats. The findings also show that in addition to individuals' blind enthusiasm for technologies and beliefs in their trustworthiness, their socialistic views also play a critical role in the development of trust in ETs. This study establishes a clear link between individuals' temptation toward new technologies and moral character weakness, which in turn causes the privacy paradox. Individuals' temptations for new technologies have become a distraction, preventing them from thinking about the risks of technology usage. This study concludes that individuals' MVSs have been corrupted by their temptations for new technologies; because of this, they cannot determine the value of their PIP. By applying Aristotelian virtuous-person logic, this study finds that individuals lack critical virtues (i.e., cheerfulness, prudence, self-discipline, and insightfulness) for developing trust in ETs in the presence of PIP threats. This study contributes to the body of knowledge in multiple ways. It explains, through the virtue ethics perspective, the reasons behind the privacy paradox, individuals' sacrifice of their PIP, and their development of trust in ETs. This study also proposes a research framework for moral philosophers and psychologists to study individuals' virtues and their mind's transient states (i.e., transient subjectivity) from within the subjective paradigm. In closing, this study explains how the findings are relevant to policymakers and offers them opportunities for privacy protection and regulation.
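    For readers unfamiliar with Q-methodology, the following is a minimal sketch of its core step: by-person factor analysis, where participants (not statements) are correlated and factored, so each factor represents a shared viewpoint, analogous to one MVS. The study's actual instrument, data, and rotation procedure are not described in the abstract, so the statement count, participant count, and choice of five factors below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Q-sorts: each participant rank-orders the same statements
# from -4 (most disagree) to +4 (most agree). Columns are persons.
n_statements, n_participants = 40, 25
qsorts = rng.integers(-4, 5, size=(n_statements, n_participants)).astype(float)

# Correlate participants with one another (by-person correlation).
person_corr = np.corrcoef(qsorts, rowvar=False)

# Principal components of the person-by-person correlation matrix,
# sorted by descending eigenvalue.
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep five components, mirroring the study's five MVSs.
n_factors = 5
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

# A participant "defines" the factor they load on most strongly;
# each resulting group corresponds to one user type.
user_type = np.argmax(np.abs(loadings), axis=1)
for f in range(n_factors):
    members = np.where(user_type == f)[0]
    print(f"Factor {f + 1}: participants {members.tolist()}")
```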