Designing trust: Evolving models and frameworks towards prospective design futures in highly automated systems

Abstract

This Ph.D. thesis explores how trust can be designed in the context of highly automated systems (HASs). The case is made that HASs are not simply representations of logical and rational systems that perform a limited set of pre-programmed, supervised tasks on behalf of the user. These systems are largely unsupervised and have the ability to learn and change over time. They can dynamically set their own goals, can adapt to local conditions via external information (sensors/input), and can potentially evolve in unexpected ways. Such characteristics are crucial for drawing informed conclusions about HASs, and they can be addressed through appropriately designed tools and frameworks. Through this process, this study generates knowledge for applying ethical directionalities to the design of highly automated digital systems. I argue that new ethical frameworks in design are needed to address the main requirements for design in the exponential digital technological age in which we live: preparedness, readiness, and appropriateness. This thesis is interested in applied ethics in large part because we are concerned, even obsessed, with the question of whom we can trust in a world where risk and uncertainty exist. In this context, trust plays a fundamental role as a mechanism for dealing with uncertainty and risk. Trust formation is a dynamic process, starting before the user’s first contact with the system and continuing long thereafter. Understanding how contexts, actions, and the unintended consequences that derive from them affect trust is therefore fundamental for the effective design of HASs. In this thesis, the author proposes Prospective Design (PrD) as a future-led mixed methodology to mitigate unintended consequences in the context of HASs. This framework combines systems analysis with extrapolations and constructivist perspectives to reconcile conflicting models of designing futures.
It does so by exploring the context of the future development of virtual assistants (VAs). Although VAs are still in their infancy, they are expected to dominate digital interactions between humans and systems in the coming years. Investigating the prospective developments of this type of interaction device reveals the particular challenges of highly automated interactions for scholarly research. In this context, the intersection between the key issues of automation and accountability acts as a focal point. Departing from authored multi-dimensional strategies and modes of calculation in ethical computing that address the rising concerns about, and impact of, HASs in society, this research examines how design decisions affect interactions between humans and systems, how these decisions may be made accessible to practitioners in design frameworks, and how Prospective Design strategies are better suited to addressing the emerging concerns regarding these systems. This thesis contributes a new understanding of the ethical implications of designing HASs and provides the practical and conceptual means for making this knowledge accessible and usable to designers.