
    Mitigating Bias in Organizational Development and Use of Artificial Intelligence

    We theorize why some artificial intelligence (AI) algorithms unexpectedly treat protected classes unfairly. We hypothesize that the mechanisms by which AI assumes the agencies, rights, and responsibilities of its stakeholders can affect AI bias by increasing complexity and irreducible uncertainty: e.g., the AI’s learning method, anthropomorphism level, stakeholder utility optimization approach, and acquisition mode (make, buy, collaborate). In a sample of 726 agentic AI, we find that unsupervised and hybrid learning methods increase the likelihood of AI bias, whereas “strict” supervised learning reduces it. Highly anthropomorphic AI increases the likelihood of AI bias. Using AI to optimize one stakeholder’s utility increases AI bias risk, whereas jointly optimizing the utilities of multiple stakeholders reduces it. User organizations that co-create AI with developer organizations, instead of developing it in-house or acquiring it off-the-shelf, reduce AI bias risk. The proposed theory and findings advance our understanding of the responsible development and use of agentic AI.

    Roadmap to competitive and socially responsible artificial intelligence

    The roadmap to competitive and socially responsible artificial intelligence (AI) offers an overview of AI governance drivers and tasks. It is intended for organizations using or planning to use information systems that include AI functionalities, such as machine learning, natural language processing, and computer vision. Responsible AI is still an emerging topic, but legal and stakeholder requirements for AI systems to comply with societally agreed standards are growing. In particular, the European Union’s proposed Artificial Intelligence Act is set to introduce new rules for AI systems used in high-risk application domains. However, beyond binding legislation, soft governance, such as guidelines and ethics principles, already seeks to differentiate between socially responsible and irresponsible AI development and use practices. The roadmap report begins by laying out its target group, instructions, and structure and then moves on to definitions. Next, we introduce the institutionalization of AI as a necessary background to the consideration of AI governance. The main roadmap section includes a visual representation and explanation of the six key drivers of competitive and socially responsible AI: 1) movement from AI ethics principles to AI governance; 2) responsible AI commercialization potential and challenges; 3) AI standardization; 4) automation of AI governance; 5) responsible AI business ecosystems; and 6) stakeholder pressure for responsible AI. The roadmap is followed by a future research agenda highlighting five emerging research areas: 1) operational governance mechanisms for complex AI systems; 2) connections to corporate sustainability; 3) automation of AI governance; 4) the future of responsible AI ecosystems; and 5) sociotechnical activities to implement responsible AI. Researchers and research funding bodies play a key role in advancing competitive and socially responsible AI by deepening these knowledge areas.
Advancing socially responsible AI is important because the benefits of AI technologies can be reaped only if organizations and individuals can trust the technologies to operate fairly, transparently, and according to socially defined rules. This roadmap was developed by the Artificial Intelligence Governance and Auditing (AIGA) co-innovation project funded by Business Finland during the years 2020 to 2022. The roadmap was co-created by researchers, company practitioners, and other AIGA project stakeholders.

    Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era--The Human-like Authors Are Already Here--A New Model

    Artificial intelligence (AI) systems are creative, unpredictable, independent, autonomous, rational, evolving, capable of data collection, communicative, efficient, accurate, and have free choice among alternatives. Similar to humans, AI systems can autonomously create and generate creative works. The use of AI systems in the production of works, either for personal or manufacturing purposes, has become common in the 3A era of automated, autonomous, and advanced technology. Despite this progress, there is a deep and common concern in modern society that AI technology will become uncontrollable. There is therefore a call for social and legal tools for controlling AI systems’ functions and outcomes. This Article addresses the questions of the copyrightability of artworks generated by AI systems: ownership and accountability. The Article debates who should enjoy the benefits of copyright protection and who should be responsible for the infringement of rights and damages caused by AI systems that independently produce creative works. Subsequently, this Article presents the AI Multi-Player paradigm, arguing against the imposition of these rights and responsibilities on the AI systems themselves or on the different stakeholders, mainly the programmers who develop such systems. Most importantly, this Article proposes the adoption of a new model of accountability for works generated by AI systems: the AI Work Made for Hire (WMFH) model, which views the AI system as a creative employee or independent contractor of the user. Under this proposed model, ownership, control, and responsibility would be imposed on the humans or legal entities that use AI systems and enjoy their benefits. This model accurately reflects the human-like features of AI systems; it is justified by the theories behind copyright protection; and it serves as a practical solution to assuage the fears behind AI systems.
In addition, this model unveils the powers behind the operation of AI systems; hence, it efficiently imposes accountability on clearly identifiable persons or legal entities. Since AI systems are copyrightable algorithms, this Article reflects on the accountability for AI systems in other legal regimes, such as tort or criminal law, and in various industries using these systems.

    Across the Great Digital Divide: Investigating the Impact of AI on Rural SMEs

    Rural SMEs are generally at a digital disadvantage due to their size and location. The addition of AI to many business processes has the potential to minimize the existing divide. However, without access to this technology and its responsible usage, rural SMEs could be placed at an even more significant disadvantage. To understand the current situation, we conducted interviews with rural SMEs and related stakeholders. This paper draws on Activity Theory to develop a holistic understanding of the influence AI is having on the business processes of rural SMEs. We also consider the role of AI in terms of existing digital divide frameworks, as well as the newly proposed fourth wave that captures the novel forms of disadvantage AI can perpetuate.

    Progressing Towards Responsible AI

    The field of Artificial Intelligence (AI) and, in particular, the Machine Learning area, counts on a wide range of performance metrics and benchmark data sets to assess the problem-solving effectiveness of its solutions. However, the appearance of research centres, projects, and institutions addressing AI solutions from a multidisciplinary and multi-stakeholder perspective suggests a new approach to assessment comprising ethical guidelines, reports, tools, and frameworks to help both academia and business move towards a responsible conceptualisation of AI. They all highlight the relevance of three key aspects: (i) enhancing cooperation among the different stakeholders involved in the design, deployment, and use of AI; (ii) promoting multidisciplinary dialogue, including different domains of expertise in this process; and (iii) fostering public engagement to maximise trust in new technologies and practitioners. In this paper, we introduce the Observatory on Society and Artificial Intelligence (OSAI), an initiative that grew out of the AI4EU project and is aimed at stimulating reflection on a broad spectrum of AI issues (ethical, legal, social, economic, and cultural). In particular, we describe our work in progress around OSAI and suggest how this and similar initiatives can promote a wider appraisal of progress in AI. This gives us the opportunity to present our vision and our modus operandi for enhancing the implementation of these three fundamental dimensions.

    Beneficial Artificial Intelligence Coordination by means of a Value Sensitive Design Approach

    This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups implicated in AI design and use share some common values that can be used to further strengthen design coordination efforts. VSD is shown to be able both to distill these common values and to provide a framework for stakeholder coordination.

    Ethical decision making

    The self-centeredness of modern organizations leads to environmental destruction and human deprivation. The principle of responsibility developed by Hans Jonas requires caring for the beings affected by our decisions and actions. Ethical decision-making creates a synthesis of reverence for ethical norms, rationality in goal achievement, and respect for stakeholders. The maximin rule selects the "least worst alternative" in the multidimensional decision space of deontological, goal-achievement, and stakeholder values. The ethical decision-maker can be characterized by the ability to take multiple perspectives and strike an appropriate balance across diverse value dimensions. Modern organizations should develop a critical sensitivity to, and empathy toward, the human and non-human beings with which they share a common environment.
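    The maximin rule the abstract describes can be sketched in a few lines: score each alternative on the three value dimensions named above, then choose the option whose worst dimension score is highest. The option names and scores below are purely illustrative, not drawn from the paper.

```python
# Minimal sketch of the maximin decision rule: pick the alternative
# whose lowest-scoring value dimension is least bad.

def maximin(alternatives):
    """Return the alternative with the highest minimum dimension score.

    `alternatives` maps each option to its scores on the three value
    dimensions: deontological, goal-achievement, and stakeholder values.
    """
    return max(alternatives, key=lambda a: min(alternatives[a].values()))

# Hypothetical options scored on a 0-10 scale.
options = {
    "layoffs":       {"deontological": 2, "goal": 9, "stakeholder": 3},  # min 2
    "pay_cuts":      {"deontological": 6, "goal": 4, "stakeholder": 5},  # min 4
    "hiring_freeze": {"deontological": 7, "goal": 5, "stakeholder": 6},  # min 5
}

print(maximin(options))  # hiring_freeze: its worst dimension (5) is the least bad
```

    Note the contrast with a simple utility sum: "layoffs" has the highest total score (14), but maximin rejects it because its deontological score is unacceptably low, which is exactly the "least worst alternative" behavior the abstract describes.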