3 research outputs found

    Social and Governance Implications of Improved Data Efficiency

    Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the socio-economic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher-performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency -- as more actors gain access to any level of capability -- the net effect crucially depends on how much defensive measures improve. More investigation into data efficiency, as well as research into the "AI production function", will be key to understanding the development of the AI industry and its societal impacts. Comment: 7 pages, 2 figures, accepted to Artificial Intelligence, Ethics, and Society 2020

    Can AI Achieve Common Good and Well-being? Implementing the NSTC's R&D Guidelines with a Human-Centered Ethical Approach

    This paper delves into the significance and challenges of Artificial Intelligence (AI) ethics and justice in terms of Common Good and Well-being, fairness and non-discrimination, rational public deliberation, and autonomy and control. Initially, the paper establishes the groundwork for subsequent discussions using the Academia Sinica LLM incident and the AI Technology R&D Guidelines of the National Science and Technology Council (NSTC) as a starting point. In terms of justice and ethics in AI, this research investigates whether AI can fulfill human common interests and welfare. Taking AI injustice as an example, I analyze the practical assessment of AI's regional, industrial, and social impacts. Further, this paper discusses the challenges of fairness and non-discrimination in AI, specifically addressing training on biased data, how AI acquires bias, and issues of post-processing supervision, and emphasizing the importance of rational public deliberation in this process. Then, this research examines the challenges a rational public faces in public deliberation and possible countermeasures, such as education in STEM scientific literacy and technological capability. Finally, in discussing AI and autonomy, I propose a 'Human-Centered Approach' rather than relying solely on the 'Technological Utility Maximization' brought by AI to achieve substantial AI justice. Keywords: AI Ethics and Justice, Fairness and Non-Discrimination, Biased Data Training, Public Deliberation, Autonomy, Human-Centered Approach

    The Contribution of Ethical Governance of Artificial Intelligence & Machine Learning in Healthcare

    With the Internet age and technology advancing every year, the use of Artificial Intelligence (AI) and Machine Learning (ML) algorithms has only increased since their introduction to society. In the healthcare field specifically, AI/ML has proven to its end-users how beneficial its assistance can be. However, despite its effectiveness and efficiency, AI/ML has also come under scrutiny due to its unethical outcomes. As a result, two polarized views are typically debated when discussing AI/ML. One side believes that AI/ML usage should continue despite its uncertainties, while the other side argues that this technology is too dangerous and should not be used at all. Given that AI/ML can provide prompt and fairly accurate results, it is unrealistic to assume that AI/ML usage will end any time soon. Therefore, governance of AI/ML is needed to ensure that these technologies are reliable. Notably, AI governance has been positively reviewed and advocated by scholars in the field. While AI governance does guarantee a sense of oversight of AI/ML, this form of governance is not sustainable. AI governance primarily focuses on the safety of the technology, with ethical, legal, and social factors serving as elements of AI governance. The safety of AI/ML is only one of the considerations for producing and ensuring ethical AI/ML. Ethical governance of AI/ML, which concentrates on incorporating ethics into all aspects of AI/ML and focuses in particular on the stakeholders involved, will lead not only to a safer product but to a more viable one as well. Thus, ethical governance of AI/ML must be advocated for in order to raise awareness, which would lead to greater research on and implementation of this type of governance. Although AI/ML can be applied in a multitude of areas, the healthcare industry is particularly significant because these technologies directly affect patients' health. This dissertation explores the contribution of ethical governance of AI/ML in several facets of healthcare. As AI/ML requires big data to produce outcomes, the context of data analytics is discussed. Other areas the dissertation explores are clinical decision-making, end-of-life decisions, and biotechnology. While these topics certainly do not cover the whole healthcare field, the dissertation attempts to include a wide range of AI/ML functions, from the beginning of the process (data analytics) to the future of AI/ML (biotechnology). For each of these areas of interest, various ethical governance principles are introduced and endorsed to develop ethical AI/ML. The goal of this dissertation, in discussing the contribution of ethical governance of AI/ML in healthcare, is to provide a foundational groundwork for future research on the ethical governance of AI/ML.