27 research outputs found

    An exploration of how trust online relates to psychological and subjective wellbeing

    Internet users often report feelings of stress, anxiety, and a lack of control, frequently related to uncertainty about the use of algorithms and autonomous systems (AS) behind what they encounter. This may lead to a loss of trust in the services, content, and websites people encounter online. To ensure that the online world contributes to human flourishing, it is important to understand how both trust and wellbeing manifest online. This paper describes an online questionnaire exploring the relationships between factors related to trust and psychological and subjective wellbeing, as well as online activity and digital confidence. Results suggest that trust is important to people online but is quite low in practice, and that positive measures of wellbeing outweigh the negative; however, more could be done to design AS in a responsible, trustworthy, and wellbeing-affirming manner, particularly by considering ways to enhance human autonomy and competence. Suggestions are made for how designers might consider trust and wellbeing when approaching the creation and presentation of online AS.

    Authority as an interactional achievement: Exploring Deference to Smart Devices in hospital-based resuscitation

    Over the years, healthcare has been an important domain for CSCW research. One significant theme carried through this body of work concerns how hospital workers coordinate their work both spatially and temporally. Much has been made of the coordinative roles played by the natural rhythms present in hospital life, and by webs of mundane artefacts such as whiteboards, post-it notes, and medical records. This paper draws upon the coordinating role of rhythms and artefacts to explore the nested rhythms of the Cardio-Pulmonary Resuscitation (CPR) protocol, conducted to restore the proper heart rhythm in a patient who has suffered a cardiac arrest. We are interested in how the teams delivering CPR use various ‘smart’ assistive devices. The devices contain encoded versions of the CPR protocol and are able to sense (in a limited way) the situation in order to give instructions or feedback to the team. Using an approach informed by ethnomethodology and conversation analysis (EM/CA), we analysed video of trainee nurses using these devices as they delivered CPR in dramatized training scenarios. This analysis helped us to understand concepts such as autonomy and authority as interactional accomplishments, thus filling a gap in the CSCW literature, which often glosses over how authority is formed and how it is exercised in medical teams. It also helps us consider how to respond to devices that are becoming more active, in that they are increasingly imbued with the ability to sense, discriminate, and direct activity in medical settings.

    Editorial responsibilities arising from personalization algorithms

    Social media platforms routinely apply personalization algorithms to ensure the content presented to the user is relevant and engaging. These algorithms are designed to prioritize and make some pieces of information more visible than others. However, there is typically no transparency in the criteria used for ranking the information and, more importantly, the consequences that the resulting content could have on users. Social media platforms argue that because they do not alter content, only reshape the way it is presented to the user, they are merely technological companies (not media companies). We highlight the value of a Responsible Research and Innovation (RRI) approach to the design, implementation, and use of personalization algorithms. Based on this, and in combination with reasoned analysis and case studies, we suggest that social media platforms should take editorial responsibility and adopt a code of ethics to promote corporate social responsibility.

    “It would be pretty immoral to choose a random algorithm”: Opening up algorithmic interpretability and transparency

    Purpose: The purpose of this paper is to report on empirical work conducted to open up algorithmic interpretability and transparency. In recent years, significant concerns have arisen regarding the increasing pervasiveness of algorithms and the impact of automated decision-making in our lives. Particularly problematic is the lack of transparency surrounding the development of these algorithmic systems and their use. It is often suggested that to make algorithms more fair, they should be made more transparent, but exactly how this can be achieved remains unclear. Design/methodology/approach: An empirical study was conducted to begin unpacking issues around algorithmic interpretability and transparency. The study involved discussion-based experiments centred around a limited resource allocation scenario which required participants to select their most and least preferred algorithms in a particular context. In addition to collecting quantitative data about preferences, qualitative data captured participants’ expressed reasoning behind their selections. Findings: Even when provided with the same information about the scenario, participants made different algorithm preference selections and rationalised their selections differently. The study results revealed diversity in participant responses but consistency in the emphasis they placed on normative concerns and the importance of context when accounting for their selections. The issues raised by participants as important to their selections resonate closely with values that have come to the fore in current debates over algorithm prevalence. Originality/value: This work developed a novel empirical approach that demonstrates the value in pursuing algorithmic interpretability and transparency while also highlighting the complexities surrounding their accomplishment.

    TrustScapes: A Visualisation Tool to Capture Stakeholders’ Concerns and Recommendations About Data Protection, Algorithmic Bias, and Online Safety

    This paper presents a new methodological approach, TrustScapes, an open access tool designed to identify and visualise stakeholders’ concerns and policy recommendations on data protection, algorithmic bias, and online safety for a fairer and more trustworthy online world. We first describe how the tool was co-created with young people and other stakeholders through a series of workshops. We then present two sets of TrustScapes focus groups to illustrate how the tool can be used and the data analysed. The paper then provides methodological insights, including the strengths of TrustScapes and lessons for future research using TrustScapes. A key strength of this method is that it allows people to visualise their ideas and thoughts on the worksheet, using the keywords and sketches provided. The flexibility in the mode of delivery is another strength of the TrustScapes method. The TrustScapes focus groups can be conducted in a relatively short time (1.5–2 hours), either in person or online depending on the participants’ needs, geographical locations, and practicality. Our experience with TrustScapes offers some lessons (related to data collection and analysis) for researchers who wish to use this method in the future. Finally, we describe how the outcomes from the TrustScapes focus groups should help to inform future policy decisions.

    Building Trust in Human-Machine Partnerships

    Artificial Intelligence (AI) is bringing radical change to our lives. Fostering trust in this technology requires it to be transparent, and one route to transparency is to make the decisions reached by AIs explainable to the humans who interact with them. This paper lays out an exploratory approach to developing explainability and trust, describing the specific technologies that we are adopting, the social and organizational context in which we are working, and some of the challenges that we are addressing.