Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study
Auditing plays a pivotal role in the development of trustworthy AI. However,
current research primarily focuses on creating auditable AI documentation,
which is intended for regulators and experts rather than end-users affected by
AI decisions. How to communicate to members of the public that an AI has been
audited and considered trustworthy remains an open challenge. This study
empirically investigated certification labels as a promising solution. Through
interviews (N = 12) and a census-representative survey (N = 302), we
investigated end-users' attitudes toward certification labels and their
effectiveness in communicating trustworthiness in low- and high-stakes AI
scenarios. Based on the survey results, we demonstrate that labels can
significantly increase end-users' trust and willingness to use AI in both low-
and high-stakes scenarios. However, end-users' preferences for certification
labels and their effect on trust and willingness to use AI were more pronounced
in high-stakes scenarios. Qualitative content analysis of the interviews
revealed opportunities and limitations of certification labels, as well as
facilitators and inhibitors for the effective use of labels in the context of
AI. For example, while certification labels can mitigate data-related concerns
expressed by end-users (e.g., privacy and data protection), other concerns
(e.g., model performance) are more challenging to address. Our study provides
valuable insights and recommendations for designing and implementing
certification labels as a promising constituent within the trustworthy AI
ecosystem.
Investigating Employees’ Concerns and Wishes Regarding Digital Stress Management Interventions With Value Sensitive Design: Mixed Methods Study
Background: Work stress places a heavy economic and disease burden on society. Recent technological advances include digital health interventions for helping employees prevent and manage their stress at work effectively. Although such digital solutions come with an array of ethical risks, especially if they involve biomedical big data, the incorporation of employees' values in their design and deployment has been widely overlooked.
Objective: To bridge this gap, we used the value sensitive design (VSD) framework to identify relevant values concerning a digital stress management intervention (dSMI) at the workplace, assess how users comprehend these values, and derive specific requirements for an ethics-informed design of dSMIs. VSD is a theoretically grounded framework that front-loads ethics by accounting for values throughout the design process of a technology.
Methods: We conducted a literature search to identify relevant values of dSMIs at the workplace. To understand how potential users comprehend these values and derive design requirements, we conducted a web-based study that contained closed and open questions with employees of a Swiss company, allowing both quantitative and qualitative analyses.
Results: The values health and well-being, privacy, autonomy, accountability, and identity were identified through our literature search. Statistical analysis of 170 responses from the web-based study revealed that the intention to use and perceived usefulness of a dSMI were moderate to high. Employees' moderate to high health and well-being concerns included worries that a dSMI would not be effective or would even amplify their stress levels. Privacy concerns were also rated on the higher end of the score range, whereas concerns regarding autonomy, accountability, and identity were rated lower. Moreover, a personalized dSMI with a monitoring system involving a machine learning-based analysis of data led to significantly higher privacy (P=.009) and accountability concerns (P=.04) than a dSMI without a monitoring system. In addition, integrability, user-friendliness, and digital independence emerged as novel values from the qualitative analysis of 85 text responses.
Conclusions: Although most surveyed employees were willing to use a dSMI at the workplace, there were considerable health and well-being concerns with regard to effectiveness and problem perpetuation. For a minority of employees who value digital independence, a nondigital offer might be more suitable. In terms of the type of dSMI, privacy and accountability concerns must be particularly well addressed if a machine learning-based monitoring component is included. To help mitigate these concerns, we propose specific requirements to support the VSD of a dSMI at the workplace. The results of this work and our research protocol will inform future research on VSD-based interventions and further advance the integration of ethics in digital health
Creative Uses of AI Systems and their Explanations: A Case Study from Insurance
Recent works have recognized the need for human-centered perspectives when
designing and evaluating human-AI interactions and explainable AI methods. Yet,
current approaches fall short of anticipating and managing the unexpected
user behavior that arises when different stakeholder groups interact with AI
systems and explainability methods. In this work, we explore the use of AI
and explainability methods in the insurance domain. In a qualitative case
study with participants with different roles and professional backgrounds, we
show that AI and explainability methods are used in creative ways in daily
workflows, resulting in a divergence between their intended and actual use.
Finally, we discuss some recommendations for the design of human-AI
interactions and explainable AI methods to manage the risks and harness the
potential of unexpected user behavior.

Comment: Accepted at the ACM CHI 2022 Workshop on Human-Centered Explainable
AI (HCXAI).