
    Advancing Human-Centred Algorithm Design Through Reflective Practice

    Autonomous vehicle (AV) algorithms tend to be designed with a techno-solutionist mindset, causing them to fail in real-world applications. This can be attributed to algorithm developers’ lack of routines and knowledge for considering the environments and circumstances in which AVs are intended to operate. This paper argues for a shift towards a more responsible human-centred algorithm design (HCAD). It addresses this by demonstrating the different reflective-practice qualities obtained by engaging algorithm designers from four companies with ethnographic materials. The study shows that a Design Ethnographic (DE) approach allowed developers to consider the value of AVs from sociotechnical perspectives and facilitated collaborative learning and debate about which problems truly need solving to bring societal value. This demonstrates how ethnographically infused HCAD helps expand algorithm developers’ opportunities to participate responsibly in value co-creation for society.

    Tailoring Co-creation for Responsible Innovation: A Design Ethnographic Approach

    It is hard to predict the impact of a technology on society before it is sufficiently developed. One reason is the lack of cross-sectoral collaboration in the design process. A solution for anticipating such outcomes has been proposed through the quadruple helix innovation model, which holds that the involvement of government, academia, industry, and the public is essential in innovation systems. How this collaboration can successfully be staged to foresee possible impacts is an empirical question. This paper presents an iterative case study of how ethnographic material can be used to continually tailor speculative co-creation to facilitate responsible innovation (RI) principles. The results are reflected through two lenses: the tools developed in the project to facilitate co-creation activities, and the stakeholder reflections evoked through these tools.

    Towards trustworthy intelligent vehicle technology development

    This thesis addresses the unresolved issues of responsibility and accountability in autonomous vehicle (AV) development, advocating for human-centred approaches to enhance trustworthiness. While AVs hold the potential for improved safety, mobility, and environmental impact, poorly designed algorithms pose risks, leading to public distrust. Existing trust research focuses on technology-related aspects but overlooks trust within broader social and cultural contexts. Efforts are underway to understand algorithm design practices, acknowledging their potential unintended consequences. For example, Baumer (2017) advocates human-centred algorithm design (HCAD) to align with user perspectives and reduce risks. HCAD incorporates theoretical, participatory, and speculative approaches, emphasising user and stakeholder engagement. This aligns with broader calls for prioritising societal considerations in technology development (Stilgoe, 2013). The research in this thesis responds to these calls by integrating theories on trust and trustworthiness, autonomous vehicle development, and human-centred approaches in empirical investigations guided by the following research question: “How can human-centred approaches support the development of trustworthy intelligent vehicle technology?” This thesis approaches the question through design ethnography to ground the explorations in people’s real-life routines, practices and anticipations, and to demonstrate how design ethnographic techniques can infuse AV development with human-centred understandings of people’s trust in AVs. The studies reported in this thesis include a) interviews and participatory observations of algorithm designers, b) interviews and probing with residents, and c) staging collaborative, reflective practice through design ethnographic materials and co-creation with citizens, city, academic and industry stakeholders, including AV algorithm designers.
Through these empirical explorations, this thesis suggests an answer to the research question by coining a novel and timely framework for intelligent vehicle development: trustworthy algorithm design (TAD). TAD frames trustworthiness as an ongoing process, not just a measurable outcome of human-technology interactions. It calls for considering autonomous vehicle algorithms as construed through a network of stakeholders, practices, and technologies, and therefore defines trustworthy algorithm design as a continuous process of collaborative learning and evolution across disciplines and sectors. Furthermore, the TAD framework suggests that for autonomous vehicle algorithm design to be trustworthy, it must be responsive, interventional, intentional and transdisciplinary. The TAD framework integrates ideas and strategies from well-known trajectories of research in the field of responsible and human-centred technology development: Human-Centred Algorithm Design (Baumer, 2017), algorithms as culture (Seaver, 2017) and Responsible Innovation (Stilgoe et al., 2013). The thesis contributes to this field by empirically investigating how this integrated framework helps expand existing understandings of interactional trust in intelligent technologies, includes the relevance of participatory processes of trustworthiness, and shows how these processes are nurtured through cross-sector co-learning and design ethnographic materials.

Funding: 2017-03058_Vinnova / Trust in Intelligent Cars (TIC)

    Trust in autonomous vehicles: insights from a Swedish suburb

    This paper investigates elements of trust in autonomous vehicles (AVs). We contextualise autonomous vehicles as part of people's everyday settings to extend previous understandings of trust and explore trust in autonomous vehicles in concrete social contexts. We conducted online co-creation workshops with 22 participants, using design probes to explore trust and AVs in relation to people's everyday lives. Using a socio-technical perspective, we show how trust and acceptance depend not only on the underlying AV technology but also – if not more so – on human-to-human relationships and real-life social circumstances. We argue that when investigating issues of trust and automation, the scope of analysis needs to be broadened to include a more complex socio-technical set of (human and non-human) agents, to extend from momentary human-computer interactions to a wider timescale, and to be situated in concrete spaces, social networks, and situations.