
    Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era--The Human-like Authors are Already Here--A New Model

    Artificial intelligence (AI) systems are creative, unpredictable, independent, autonomous, rational, evolving, capable of data collection, communicative, efficient, accurate, and have free choice among alternatives. Similar to humans, AI systems can autonomously create and generate creative works. The use of AI systems in the production of works, whether for personal or manufacturing purposes, has become common in the 3A era of automated, autonomous, and advanced technology. Despite this progress, there is a deep and common concern in modern society that AI technology will become uncontrollable, and hence a call for social and legal tools to control AI systems' functions and outcomes. This Article addresses the questions of the copyrightability of artworks generated by AI systems: ownership and accountability. The Article debates who should enjoy the benefits of copyright protection and who should be responsible for the infringement of rights and damages caused by AI systems that independently produce creative works. Subsequently, this Article presents the AI Multi-Player paradigm, arguing against the imposition of these rights and responsibilities on the AI systems themselves or on the different stakeholders, mainly the programmers who develop such systems. Most importantly, this Article proposes the adoption of a new model of accountability for works generated by AI systems: the AI Work Made for Hire (WMFH) model, which views the AI system as a creative employee or independent contractor of the user. Under this proposed model, ownership, control, and responsibility would be imposed on the humans or legal entities that use AI systems and enjoy their benefits. This model accurately reflects the human-like features of AI systems; it is justified by the theories behind copyright protection; and it serves as a practical solution to assuage the fears surrounding AI systems. In addition, this model unveils the powers behind the operation of AI systems; hence, it efficiently imposes accountability on clearly identifiable persons or legal entities. Since AI systems are copyrightable algorithms, this Article also reflects on accountability for AI systems in other legal regimes, such as tort or criminal law, and in the various industries using these systems.

    Accountability in Managing Artificial Intelligence: State of the Art and a way forward for Information Systems Research

    Establishing accountability for Artificial Intelligence (AI) systems is challenging due to the distribution of responsibilities among the multiple actors involved in their development, deployment, and use. Nonetheless, AI accountability is crucial. As AI can affect all aspects of private and professional life, the actors involved in AI lifecycles need to take responsibility for their decisions and actions, be ready to respond to interrogations by those affected by AI, and be held liable when AI works in unacceptable ways. Despite the significance of AI accountability, the Information Systems research community has not engaged much with the topic and lacks a systematic understanding of existing approaches to it. This paper presents the results of a comprehensive conceptual literature review that synthesizes current knowledge on AI accountability. The paper contributes to the IS literature by providing (i) conceptual clarification mapping different accountability conceptualizations; (ii) a comprehensive framework of AI accountability challenges and actionable responses at three different levels: system, process, and data; and (iii) a framing of AI accountability as a socio-technical and organizational problem that IS researchers are well-equipped to study, highlighting the need to balance instrumental and humanistic outcomes.

    Accountability-Based User Interface Design Artifacts and Their Implications for User Acceptance of AI-Enabled Services

    Although AI-enabled interactive decision aids (IDAs) have been shown to provide reliable advice, users are rather reluctant to follow it. One recently much-discussed reason for this reluctance is users’ perception that accountability for the decisions of these AI-based IDAs is unclear. Drawing on accountability theory, we designed user-interface (UI) design artifacts for AI-enabled IDAs based on the dimensions of identifiability, expectation of evaluation, awareness of monitoring, and social presence, and tested them through a scenario-based factorial survey (N = 629). We show that accountability-emphasizing UI design artifacts individually raise users’ accountability perceptions of the AI-enabled service, which in turn influence users’ compliance with the advice from the AI-enabled service. These findings have important theoretical and practical implications, particularly as they inform how to increase the transparency of accountability of AI-enabled services and thus user compliance.

    How AI Developers’ Perceived Accountability Shapes Their AI Design Decisions

    While designing artificial intelligence (AI)-based systems, AI developers usually have to justify their design decisions and, thus, are accountable for their actions and for how they design AI-based systems. Crucial facets of AI (i.e., autonomy, inscrutability, and learning) notably cause potential accountability issues that AI developers must consider in their design decisions, yet these have received little attention in prior literature. Drawing on self-determination theory and the accountability literature, we conducted a scenario-based survey (n = 132). We show that AI developers who perceive themselves as accountable tend to design AI-based systems to be less autonomous and inscrutable but more capable of learning when deployed. Our mediation analyses suggest that perceived job autonomy can partially explain these direct effects. Therefore, AI design decisions depend on individual and organizational settings and must be considered from different perspectives. Thus, we contribute to a better understanding of the effects of AI developers’ perceived accountability when designing AI-based systems.
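    The partial mediation reported in this abstract can be illustrated with a minimal sketch on synthetic data (all variable names, coefficients, and values here are hypothetical illustrations, not the study's actual data): a simple Baron–Kenny-style decomposition in which the total effect of perceived accountability on a design outcome splits exactly into a direct effect plus an indirect effect routed through the mediator, perceived job autonomy.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 132  # matches the survey's sample size; the data itself is synthetic

    # Hypothetical variables: X = perceived accountability,
    # M = perceived job autonomy (mediator), Y = designed system autonomy.
    X = rng.normal(size=n)
    M = 0.5 * X + rng.normal(scale=0.8, size=n)             # true path a = 0.5
    Y = -0.4 * X - 0.3 * M + rng.normal(scale=0.8, size=n)  # c' = -0.4, b = -0.3

    def ols(y, *xs):
        """Least-squares coefficients, intercept first."""
        A = np.column_stack([np.ones(len(y)), *xs])
        return np.linalg.lstsq(A, y, rcond=None)[0]

    a = ols(M, X)[1]               # path X -> M
    _, c_prime, b = ols(Y, X, M)   # direct effect c' and path M -> Y
    c_total = ols(Y, X)[1]         # total effect of X on Y

    indirect = a * b               # indirect (mediated) effect
    # For OLS, the decomposition c_total = c' + a*b holds exactly;
    # a nonzero c' alongside a nonzero a*b indicates partial mediation.
    print(f"total={c_total:.3f} direct={c_prime:.3f} indirect={indirect:.3f}")
    ```

    In practice, studies like this would add significance tests (e.g. bootstrapped confidence intervals for the indirect effect) rather than relying on point estimates alone.
    
    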

    Accountable, Explainable Artificial Intelligence Incorporation Framework for a Real-Time Affective State Assessment Module

    The rapid growth of artificial intelligence (AI) and machine learning (ML) solutions has seen them adopted across various industries. However, concern over ‘black-box’ approaches has increased demand for high accuracy, transparency, accountability, and explainability in AI/ML approaches. This work contributes an accountable, explainable AI (AXAI) framework for delineating and assessing AI systems. The framework has been incorporated into the development of a real-time, multimodal affective state assessment system.