10 research outputs found
New Frontiers in Explainable AI: Understanding the GI to Interpret the GO
In this paper we focus on the importance of interpreting the quality of the input of predictive models (potentially a GI, i.e., Garbage In) to make sense of the reliability of their output (potentially a GO, i.e., Garbage Out) in support of human decision making, especially in critical domains such as medicine. To this aim, we propose a framework that distinguishes between the Gold Standard (or Ground Truth) and the set of annotations from which it is derived, and a set of quality dimensions that help assess and interpret the AI advice: fineness, trueness, representativeness, conformity, and dryness. We then discuss implications for obtaining more informative training sets and for the design of more usable Decision Support Systems.
When Persuasive Technology Gets Dark?
Influencing systems and persuasive technology (PT) should give their users a positive experience. While that sounds attractive and many rush to implement novel ideas such as gamification, a serious, professional, and scientifically rich discussion is needed to portray a holistic picture of technology influence. Relatively little research has explored the negative aspects, outcomes, and side effects of PT. This research therefore addresses that gap by reviewing the existing knowledge on dark patterns, demonstrating how intended PT designs can be critically examined, introducing the Visibility-Darkness matrix to categorize and locate dark patterns, and proposing a Framework for Evaluating the Darkness of Persuasive Technology (FEDPT). The framework is instrumental for designers and developers of influential technology, as it clarifies an area where their products and services can have a negative impact on well-being; in other words, where they can become harmful to users.
