
    Regulating ChatGPT and other Large Generative AI Models

    Large generative AI models (LGAIMs), such as ChatGPT or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper situates these new generative models in the current debate on trustworthy AI regulation and asks how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, and recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest four strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. In particular, regulation should focus on concrete high-risk applications, not the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA content moderation rules should be expanded to cover LGAIMs, including notice and action mechanisms and trusted flaggers. In all areas, regulators and lawmakers need to act fast to keep pace with the dynamics of ChatGPT et al. (Comment: under review)

    Measuring Trustworthiness of AI Systems: A Holistic Maturity Model

    Artificial intelligence (AI) has an impact on business and society at large while posing challenges and risks. For AI adoption, trustworthiness is paramount, yet there appears to be a gap between theory and practice. Organizations need guidance in quantitatively assessing and improving the trustworthiness of AI systems. To address such challenges, maturity models have proven to be a valuable instrument. However, recent AI maturity models address trustworthiness only at the highest maturity level. In response, we propose a model that integrates the concept of trustworthiness across AI lifecycle management. In doing so, we follow Design Science Research to develop a holistic model highlighting the importance of trustworthiness throughout the AI adoption journey in order to realize its full value potential. This research-in-progress contributes to the emerging research on human-AI systems and managing AI. Our objective is to use the model for assessing, evaluating, and improving trustworthy AI at the organizational level.

    The Importance of Distrust in AI

    In recent years, the use of Artificial Intelligence (AI) has become increasingly prevalent in a growing number of fields. As AI systems are adopted in more high-stakes areas such as medicine and finance, ensuring that they are trustworthy is of increasing importance. This concern is prominently addressed by the development and application of explainability methods, which are purported to increase trust from users and wider society. While an increase in trust may be desirable, an analysis of literature from different research fields shows that an exclusive focus on increasing trust may not be warranted, as is well exemplified by recent AI chatbots, which, while highly coherent, tend to make up facts. In this contribution, we investigate the concepts of trust, trustworthiness, and user reliance. To foster appropriate reliance on AI, we need to prevent both disuse of these systems and overtrust. From our analysis of research on interpersonal trust, trust in automation, and trust in (X)AI, we identify the potential merit of the distinction between trust and distrust (in AI). We propose that, alongside trust, a healthy amount of distrust is of additional value for mitigating disuse and overtrust. We argue that by considering and evaluating both trust and distrust, we can ensure that users rely appropriately on trustworthy AI, which can be both useful and fallible. (Comment: This preprint has not undergone peer review or any post-submission improvements or corrections. The version of record of this contribution is published in Explainable Artificial Intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26-28, 2023, Proceedings, Part III (CCIS, volume 1903) and is available at https://doi.org/10.1007/978-3-031-44070-)

    On the path to the future: mapping the notion of transparency in the EU regulatory framework for AI

    Transparency is the currency of trust. It offers clarity and certainty, which is essential when dealing with intelligent systems that increasingly make impactful decisions. Such decisions need to be sufficiently explained. With the goal of establishing ‘trustworthy AI’, the European Commission has recently published a legislative proposal for AI. However, there are important gaps in this framework which have not yet been addressed. This article identifies these gaps through a systematic overview of the transparency considerations therein. Since transparency is an important means to improve procedural rights, this article argues that the AI Act should contain clear transparency obligations to avoid asymmetries and enable the explainability of automated decisions to those affected by them. The transparency framework in the proposed AI Act leaves open a risk of abuse by companies, because their interests do not encompass considerations of AI systems’ ultimate impact on individuals. However, the dangers of keeping transparency as a value without legal force justify further reflection when regulating AI systems in a way that aims to safeguard opposing interests. To this end, this article proposes inclusive co-regulation instead of self-regulation, so that impacted individuals as well as innovators are empowered to use and trust AI systems.

    User Perceptions of Algorithmic Decisions in the Personalized AI System: Perceptual Evaluation of Fairness, Accountability, Transparency, and Explainability

    © 2020 Broadcast Education Association. With the growing presence of algorithms and their far-reaching effects, artificial intelligence (AI) is set to become mainstream in the near future. Despite this surging popularity, little is known about the processes through which people perceive and make sense of trust through algorithmic characteristics in a personalized algorithm system. This study examines the extent to which trust can be linked to perceptions of automated personalization by AI, and how such perceptions influence users’ heuristic and systematic processing. It examines how fair, accountable, transparent, and explainable people perceive algorithmic recommendations by digital platforms to be. When users perceive the algorithm as fairer, more accountable, more transparent, and more explainable, they see it as more trustworthy and useful. This demonstrates that trust is of particular value to users and further implies the heuristic role of algorithmic characteristics in terms of their underlying links to trust and subsequent attitudes toward algorithmic decisions. These processes offer a useful perspective on the conceptualization of AI experience and interaction. The user cognitive processes identified provide solid foundations for algorithm design and development and a stronger basis for the design of sensemaking AI services.

    FROM COMMERCIAL AGREEMENTS TO THE SOCIAL CONTRACT: HUMAN-CENTERED AI GUIDELINES FOR PUBLIC SERVICES

    Human-centered Artificial Intelligence (HCAI) is a term frequently used in the discourse on how to guide the development and deployment of AI in responsible and trustworthy ways. Major technology actors, including Microsoft, Apple, and Google, are fostering their own AI ecosystems and providing HCAI guidelines that operationalize theoretical concepts to inform the practice of AI development. Yet their commonality seems to be an orientation to commercial contexts. This paper focuses on AI for public services and on the special relationship between governmental organizations and the public. Approaching human-AI interaction through the lens of social contract theory, we identify amendments to improve the suitability of an existing HCAI framework for the public sector. Following the Action Design Research methodological approach, we worked with a public organization to apply, assess, and adapt the “Google PAIR guidelines”, a well-known framework for human-centered AI development. The guidelines informed the design of an interactive prototype for AI in public services, and through this process we revealed gaps and potential enhancements. Specifically, we found that it is important to a) articulate a clear value proposition by weighing the public good against the individual benefit, b) define boundaries for repurposing public data given the relationship between citizens and their government, and c) accommodate user group diversity by considering citizens’ different levels of technical and administrative literacy. We aim to shift the perspective within human-AI interaction, acknowledging that exchanges are not always subject to commercial agreements but can also be based on the mechanisms of a social contract.

    ARTIFICIAL INTELLIGENCE AND CULTURAL HERITAGE: DESIGN AND ASSESSMENT OF AN ETHICAL FRAMEWORK

    The pioneering use of Artificial Intelligence (AI) in various fields and sectors, and the growing ethical debate about its application, have led research centers and public and private institutions to establish ethical guidelines for a trustworthy implementation of these powerful algorithms. Despite the recognized definition of ethical principles for a responsible or trustworthy use of AI, there is a lack of a sector-specific perspective that highlights the ethical risks and opportunities for different areas of application, especially in the field of Cultural Heritage (CH). In particular, there is still a lack of formal frameworks that evaluate algorithms’ adherence to the ethical standards set by the European Union for the use of AI in protecting CH and its inherent value. Because of this, it is necessary to investigate a sectoral viewpoint to supplement the widely used horizontal approach. This paper represents a first attempt to design an ethical framework that embeds AI in CH conservation practices and assesses the various risks arising from the use of AI in the field of CH. The contribution presents a synthesis of the different AI applications that improve the preservation process of CH. It explores and analyses in depth the ethical challenges and opportunities presented by the use of AI to improve CH preservation. In addition, the study aims to design an ethical framework of principles to assess the application of this ground-breaking technology to CH.

    Data-Centric Distrust Quantification for Responsible AI: When Data-driven Outcomes Are Not Reliable

    At the same time that AI and machine learning are becoming central to human life, their potential harms are becoming more vivid. In the presence of such drawbacks, a critical question one needs to address before using these data-driven technologies to make a decision is whether to trust their outcomes. Aligned with recent efforts on data-centric AI, this paper proposes a novel approach to address the trust question through the lens of data, by associating data sets with distrust quantifications that specify their scope of use for predicting future query points. The distrust values raise warning signals when a prediction based on a dataset is questionable and are valuable alongside other techniques for trustworthy AI. We propose novel algorithms for computing the distrust values in the neighborhood of a query point efficiently and effectively. Learning the necessary components of the measures from the data itself, our sub-linear algorithms scale to very large and multi-dimensional settings. Besides demonstrating the efficiency of our algorithms, our extensive experiments show a consistent correlation between distrust values and model performance. This underscores the message that when the distrust value of a query point is high, the prediction outcome should be discarded or at least not considered for critical decisions.
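    To make the idea concrete, a minimal sketch follows. It is not the authors' sub-linear algorithms or learned measures; it simply assumes a density-based notion of distrust, scoring a query point by how far it lies from its nearest neighbors in the dataset relative to the dataset's typical neighbor distance, so that sparsely covered regions yield higher distrust.

```python
# Hypothetical, simplified illustration of data-centric distrust (not the paper's method):
# score a query point by how sparsely the training data covers its neighborhood.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def distrust_score(X_train: np.ndarray, x_query: np.ndarray, k: int = 10) -> float:
    """Return a value between 0 and 1: higher means the query lies in a region
    the dataset covers poorly, so predictions there deserve less trust."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)

    # Average distance from the query to its k nearest training points.
    query_dists, _ = nn.kneighbors(x_query.reshape(1, -1))
    local = query_dists.mean()

    # Typical neighbor distance inside the training set (skip the zero self-distance).
    train_dists, _ = nn.kneighbors(X_train)
    typical = train_dists[:, 1:].mean()

    return float(local / (local + typical))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))               # data concentrated around the origin
    print(distrust_score(X, np.zeros(5)))        # lower distrust: dense neighborhood
    print(distrust_score(X, np.full(5, 6.0)))    # higher distrust: far outside the data
```

    In such a setup, a threshold on the score (for instance, deferring to a human reviewer above some cutoff) would operationalize the paper's recommendation that high-distrust predictions not be used for critical decisions.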