8 research outputs found

    Data, Power and Bias in Artificial Intelligence

    Artificial Intelligence has the potential to exacerbate societal bias and set back decades of advances in equal rights and civil liberty. Data used to train machine learning algorithms may capture social injustices, inequality or discriminatory attitudes that may be learned and perpetuated in society. Attempts to address this issue are rapidly emerging from different perspectives, involving technical solutions, social justice and data governance measures. While each of these approaches is essential to the development of a comprehensive solution, the discourse associated with each often seems disparate. This paper reviews ongoing work to ensure data justice, fairness and bias mitigation in AI systems across different domains, exploring the interrelated dynamics of each and examining whether the inevitability of bias in AI training data may in fact be used for social good. We highlight the complexity associated with defining policies for dealing with bias. We also consider the technical challenges in addressing issues of societal bias.

    A Survey on Ethical Principles of AI and Implementations

    © 2020 IEEE. AI has powerful capabilities in prediction, automation, planning, targeting, and personalisation. Generally, it is assumed that AI can enable machines to exhibit human-like intelligence, and it is claimed to benefit many different areas of our lives. Since AI is fueled by data and is a distinct form of autonomous and self-learning agency, we are seeing increasing ethical concerns related to its use. To mitigate these concerns, national and international organisations, including governmental bodies, the private sector and research institutes, have made extensive efforts by drafting ethical principles of AI and holding active discussions on the ethics of AI within and beyond the AI community. This paper investigates these efforts with a focus on identifying the fundamental ethical principles of AI and their implementations. The review found that there is a convergence around a limited set of principles, the most prevalent being transparency, justice and fairness, responsibility, non-maleficence, and privacy. The investigation suggests that ethical principles need to be integrated into every stage of the AI lifecycle to ensure that the AI system is designed, implemented and deployed in an ethical manner. Similar to the ethical frameworks used in biomedical and clinical research, this paper suggests checklist-style questionnaires as benchmarks for the implementation of ethical principles of AI.
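    The checklist-style questionnaires the survey proposes could be represented, in the simplest case, as structured yes/no audit items grouped by principle. The sketch below is a hypothetical illustration of that idea; the question texts, class names and scoring scheme are illustrative assumptions, not taken from the paper.

    ```python
    # Hypothetical sketch: a checklist-style questionnaire for auditing an AI
    # system against the most prevalent ethical principles identified in the
    # survey. Questions and scoring are illustrative assumptions.
    from dataclasses import dataclass, field


    @dataclass
    class ChecklistItem:
        principle: str          # e.g. "transparency", "privacy"
        question: str           # yes/no audit question
        answer: bool | None = None


    @dataclass
    class EthicsChecklist:
        items: list[ChecklistItem] = field(default_factory=list)

        def coverage(self) -> dict[str, float]:
            """Fraction of items answered 'yes' per principle."""
            counts: dict[str, list[int]] = {}
            for item in self.items:
                yes_total = counts.setdefault(item.principle, [0, 0])
                yes_total[1] += 1
                if item.answer:
                    yes_total[0] += 1
            return {p: yes / total for p, (yes, total) in counts.items()}


    checklist = EthicsChecklist([
        ChecklistItem("transparency", "Is the training data provenance documented?", True),
        ChecklistItem("transparency", "Can individual decisions be explained to users?", False),
        ChecklistItem("privacy", "Is personal data minimised and access-controlled?", True),
    ])
    print(checklist.coverage())  # {'transparency': 0.5, 'privacy': 1.0}
    ```

    A per-principle coverage score like this would let such a checklist act as the benchmark the paper envisions, flagging lifecycle stages where a principle is under-addressed.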

    Ethics and Morality in AI - A Systematic Literature Review and Future Research

    Artificial intelligence (AI) has become an integral part of our daily lives in recent years. At the same time, ethics and morality in the context of AI have been discussed in both practical and scientific discourse. This discussion addresses ethical concerns, concrete application areas, the programming of AI, or its moral status. However, no article can be found that provides an overview of the combination of ethics, morality and AI and systematizes it. This paper therefore provides a systematic literature review on ethics and morality in the context of AI, examining the scientific literature published between 2017 and 2021. The search resulted in 1,641 articles across five databases, of which 224 articles were included in the evaluation. The literature was systematized into the seven topics presented in this paper. The implications of this review can be valuable not only for academia, but also for practitioners.

    An Ontology for Standardising Trustworthy AI

    Worldwide, a multiplicity of parallel activities is underway to develop international standards, regulations and individual organisational policies related to AI and its trustworthiness characteristics. The current lack of mappings between these activities presents the danger of a highly fragmented global landscape emerging in AI trustworthiness. This could present society, government and industry with competing standards, regulations and organisational practices that would then serve to undermine, rather than build, trust in AI. This chapter presents a simple ontology that can be used for checking the consistency and overlap of concepts from different standards, regulations and policies. The concepts in this ontology are grounded in an overview of AI standardisation currently being undertaken in ISO/IEC JTC 1/SC 42, which identifies the committee's project to define an AI management system standard (AIMS, or ISO/IEC WD 42001) as the starting point for establishing conceptual mappings between different initiatives. We propose a minimal, high-level ontology to support conceptual mapping between different documents and show, in the first instance, how it can help map out the overlaps and gaps between and among the SC 42 standards currently under development.
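    At its core, the kind of concept mapping the chapter describes can be pictured as comparing the vocabularies of two documents and surfacing their overlaps and gaps. The sketch below is a deliberately minimal illustration of that operation; the concept names are illustrative assumptions and not the actual SC 42 or AIMS vocabulary.

    ```python
    # Hypothetical sketch: treat the trustworthiness concepts defined by two
    # standards documents as labelled sets, then compute overlap and gaps.
    # Concept names below are illustrative assumptions only.
    def concept_map(doc_a: set[str], doc_b: set[str]) -> dict[str, set[str]]:
        return {
            "shared": doc_a & doc_b,   # consistent concepts across documents
            "only_a": doc_a - doc_b,   # gaps in document B
            "only_b": doc_b - doc_a,   # gaps in document A
        }


    # Assumed concept lists for an AIMS-style management standard and a policy.
    aims_42001 = {"risk management", "transparency", "accountability", "bias"}
    eu_policy = {"transparency", "human oversight", "accountability"}

    print(sorted(concept_map(aims_42001, eu_policy)["shared"]))
    # ['accountability', 'transparency']
    ```

    A real ontology would of course also capture relations between concepts (broader/narrower terms, definitions per document), but even this set-based view shows how a shared conceptual layer can make fragmentation between initiatives visible.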

    Operationalizing the Ethics of Connected and Automated Vehicles. An Engineering Perspective

    In response to the many social impacts of automated mobility, in September 2020 the European Commission published Ethics of Connected and Automated Vehicles, a report in which recommendations on road safety, privacy, fairness, explainability, and responsibility are drawn from a set of eight overarching principles. This paper presents the results of an interdisciplinary research project in which philosophers and engineers joined efforts to operationalize the guidelines advanced in the report. To this aim, we endorse a function-based working approach to support the implementation of values and recommendations in the design of automated vehicle technologies. On this basis, we develop methodological tools to tackle issues related to personal autonomy, explainability, and privacy, as the domains that most urgently require fine-grained guidance due to the associated ethical risks. Even though each tool still requires further inquiry, we believe that our work already demonstrates the productivity of the function-based approach and may foster its adoption in the CAV scientific community.

    Coordinated Control Design for Ethical Maneuvering of Autonomous Vehicles

    This paper proposes a coordinated control design method with which an autonomous vehicle is able to perform ethical maneuvers. The starting point of the method is a thorough analysis of ethical concepts for autonomous vehicle control design. Using the results of this analysis, a concept of our own is provided, based on some principles of Protestant ethics. The concept focuses on improving trust in vehicle control through clear rules and predictable vehicle motion, and it is in line with state-of-the-art ethical vehicle control methods. Moreover, an optimal Model Predictive Control (MPC) design method is formed, in which the provided ethical concept is incorporated. The outputs of the optimal controller are the steering angle and the velocity profile with which the ethical maneuvering can be achieved. The contributions of the paper are a coordinated control design method that is able to incorporate ethical principles and, as a further novelty, the application of Protestant ethics in this context. The effectiveness of the method is illustrated through different simulation scenarios.
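    The general shape of such an MPC, where an ethical rule enters as a hard, predictable constraint, can be sketched in a few lines. The following is a minimal illustrative sketch only, not the paper's controller: it uses an assumed 1-D longitudinal model and encodes a rule-imposed speed limit as the "clear rule", producing only a velocity profile (the paper's controller also outputs a steering angle).

    ```python
    # Hypothetical MPC sketch: minimise tracking cost over a horizon while a
    # hard speed-limit constraint encodes a clear, predictable rule.
    # Model, horizon and weights are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize

    DT, N = 0.1, 20            # step size [s], horizon length
    V_REF, V_MAX = 15.0, 13.0  # desired speed vs. rule-imposed limit [m/s]

    def cost(u: np.ndarray, v0: float) -> float:
        """Speed-tracking cost plus control effort over the horizon."""
        v, J = v0, 0.0
        for a in u:                  # u = acceleration sequence
            v = v + DT * a           # forward-Euler velocity update
            J += (v - V_REF) ** 2 + 0.1 * a ** 2
        return J

    def speeds(u: np.ndarray, v0: float) -> np.ndarray:
        """Predicted velocity profile for an acceleration sequence."""
        return v0 + DT * np.cumsum(u)

    v0 = 10.0
    res = minimize(
        cost, np.zeros(N), args=(v0,),
        bounds=[(-3.0, 2.0)] * N,     # comfort limits on acceleration
        constraints={"type": "ineq",  # rule: v <= V_MAX at every step
                     "fun": lambda u: V_MAX - speeds(u, v0)},
    )
    profile = speeds(res.x, v0)
    print(f"max speed over horizon: {profile.max():.2f} m/s")
    ```

    The point of the sketch is the division of roles: preferences (reach the reference speed comfortably) live in the cost function, while the ethical rule is a constraint the optimiser may never trade away, which is what makes the resulting motion predictable.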

    Designing a Value-Driven Future for Ethical Autonomous and Intelligent Systems
