Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study
Auditing plays a pivotal role in the development of trustworthy AI. However,
current research primarily focuses on creating auditable AI documentation,
which is intended for regulators and experts rather than end-users affected by
AI decisions. How to communicate to members of the public that an AI has been
audited and considered trustworthy remains an open challenge. This study
empirically investigated certification labels as a promising solution. Through
interviews (N = 12) and a census-representative survey (N = 302), we
examined end-users' attitudes toward certification labels and their
effectiveness in communicating trustworthiness in low- and high-stakes AI
scenarios. Based on the survey results, we demonstrate that labels can
significantly increase end-users' trust and willingness to use AI in both low-
and high-stakes scenarios. However, end-users' preferences for certification
labels and their effect on trust and willingness to use AI were more pronounced
in high-stakes scenarios. Qualitative content analysis of the interviews
revealed opportunities and limitations of certification labels, as well as
facilitators and inhibitors for the effective use of labels in the context of
AI. For example, while certification labels can mitigate data-related concerns
expressed by end-users (e.g., privacy and data protection), other concerns
(e.g., model performance) are more challenging to address. Our study provides
valuable insights and recommendations for designing and implementing
certification labels as a promising constituent within the trustworthy AI
ecosystem.