
    Potential of using visual imagery to revolutionise measurement of emotional health

    Appropriate measurement of emotional health by all those working with children and young people is an increasing focus for professional practice. Most of the tools used for assessment or self-assessment of emotional health were designed in the mid-20th century using language and technology derived from pen-and-paper written texts. Are they fit for purpose in an age of pervasive computing, with increasingly rich audio-visual media devices in the hands of young people? This thought piece explores how increased use of visual imagery, including smiley faces, other emojis, and other potential forms of visual imagery, especially forms that can be viewed or created on digital devices, might provide a way forward for more effective measurement of emotional health. The authors bring together perspectives from healthcare, counselling, youth advocacy, academic research, primary care, and school-based mental health support to explore these issues.

    Measuring Online Wellbeing: A Scoping Review of Subjective Wellbeing Measures

    With the increasing importance of the internet to our everyday lives, questions are rightly being asked about how its use affects our wellbeing. Effective measurement in the online context matters because it allows us to assess the impact of specific online contexts on wellbeing that may not apply to offline wellbeing. This paper describes a scoping review of English-language, peer-reviewed articles published in MEDLINE, EMBASE, and PsycINFO between 1st January 2015 and 31st December 2019 to identify what measures are used to assess subjective wellbeing, and in particular to identify any measures used in the online context. 240 studies were identified; 160 were removed by abstract screening and 17 by full-text screening, leaving 63 included studies. 56 subjective wellbeing scales were identified, with 18 excluded and 38 included for further analysis. Only one study was identified researching online wellbeing, and no specific online wellbeing scale was found. Common features of the existing scales, such as the number and type of questions, are therefore compared to offer recommendations for building an online wellbeing scale. Such a scale is recommended to be between 3 and 20 questions, using mainly 5-point Likert or Likert-like scales, to measure at least positive and negative affect and ideally life satisfaction, and to rely mainly on subjective evaluation. Further research is needed to establish how these findings for the offline world translate effectively into an online measure of wellbeing.
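The design parameters the review recommends can be sketched as a toy scoring routine. This is a hypothetical illustration, not a validated instrument: the item wordings, dimension labels, and reverse-scoring choices below are assumptions; only the design parameters (3-20 items, 5-point Likert responses, covering positive and negative affect plus life satisfaction) come from the abstract.

```python
# Hypothetical sketch of the kind of short online wellbeing scale the review
# recommends: 3-20 items, 5-point Likert responses, covering positive affect,
# negative affect, and life satisfaction. Items and scoring are illustrative.
ITEMS = [
    ("I felt good about my time online this week", "positive", False),
    ("I felt anxious while online this week", "negative", True),  # reverse-scored
    ("Overall, I am satisfied with my online life", "satisfaction", False),
]

def score(responses):
    """Sum 5-point Likert responses (1-5), reverse-scoring negative-affect items."""
    total = 0
    for (_, _, reverse), r in zip(ITEMS, responses):
        if not 1 <= r <= 5:
            raise ValueError("Likert responses must be between 1 and 5")
        total += (6 - r) if reverse else r
    return total  # range: 3 (lowest wellbeing) to 15 (highest)

print(score([5, 1, 5]))  # → 15
```

Reverse-scoring the negative-affect item keeps higher totals consistently meaning higher wellbeing, a common convention in such scales.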

    Try to see it my way: exploring the co-design of visual presentations of well-being through a workshop process

    Aims: A 10-month project funded by the NewMind network sought to develop the specification of a visualisation toolbox that could be applied on digital platforms (web- or app-based) to support adults with lived experience of mental health difficulties to present and track their personal well-being in a multimedia format. Methods: A participant co-design methodology, Double Diamond from the Design Council (Great Britain), was used, consisting of four phases: Discover, a set of literature and app searches of well-being and health visualisation material; Define, an initial workshop with participants with lived experience of mental health problems to discuss well-being and visualisation techniques and to share personal visualisations; Develop, a second workshop to add detail to personal visualisations, e.g., forms of media to be employed and degree of control over sharing; and Deliver, disseminating the learning from the exercise. Results: Two design workshops were held in December 2017 and April 2018 with 13 and 12 experts-by-experience involved respectively, including 2 peer researchers (co-authors) and 2 individual-carer dyads in each workshop, with over 50% of those present in both workshops. Twenty detailed visualisations were produced, the majority focussing on highly personal and detailed presentations of well-being. Discussion: Whilst participants concurred on a range of typical dimensions of well-being, the individual visualisations generated contrasted with the techniques currently employed by existing digital well-being apps, and there was great diversity in preference for different visualisation types. Participants considered personal visualisations to be useful as self-administered interventions or as a step towards seeking help, as well as being tools for self-appraisal. Conclusions: The results suggest that an authoring approach using existing apps may provide the high degree of flexibility required. Training on such tools, delivered via a module on a recovery college course, could be offered.

    TrustScapes: A visualisation tool to capture stakeholders' concerns and recommendations about data protection, algorithmic bias, and online safety

    This paper presents a new methodological approach, TrustScapes, an open access tool designed to identify and visualise stakeholders’ concerns and policy recommendations on data protection, algorithmic bias, and online safety for a fairer and more trustworthy online world. We first describe how the tool was co-created with young people and other stakeholders through a series of workshops. We then present two sets of TrustScapes focus groups to illustrate how the tool can be used and the data analysed. The paper then provides methodological insights, including the strengths of TrustScapes and lessons for future research using it. A key strength of this method is that it allows people to visualise their ideas and thoughts on the worksheet, using the keywords and sketches provided. The flexibility in the mode of delivery is another strength of the TrustScapes method. The TrustScapes focus groups can be conducted in a relatively short time (1.5–2 hours), either in person or online depending on the participants’ needs, geographical locations, and practicality. Our experience with TrustScapes offers some lessons (related to data collection and analysis) for researchers who wish to use this method in the future. Finally, we describe how the outcomes from the TrustScapes focus groups should help to inform future policy decisions.

    Direct to public peer support and e-therapy program versus information to aid self-management of depression and anxiety: protocol for a randomized controlled trial

    Background: Regardless of geography or income, effective help for depression and anxiety reaches only a small proportion of those who might benefit from it. The scale of the problem suggests a role for effective, safe, anonymised, public-health-driven online services such as Big White Wall, which offers immediate peer support at low cost. Objectives: Using RE-AIM methodology, we aim to determine the population reach, effectiveness, cost-effectiveness, and barriers and drivers to implementation of Big White Wall (BWW) compared to online information compiled by the UK’s National Health Service (NHS Choices Moodzone) in people with probable mild to moderate depression and anxiety disorder. Method/Design: A pragmatic, parallel-group, single-blind RCT is being conducted using a fully automated trial website in which eligible participants are randomised to receive either 6 months’ access to BWW or signposting to the NHS Moodzone site. The recruitment of 2200 people to the study will be facilitated by a public health engagement campaign involving general marketing and social media, primary care clinical champions, healthcare staff, large employers, and third-sector groups. People will refer themselves to the study and will be eligible if they are over 16 years old, have probable mild to moderate depression or anxiety disorders, and have access to the internet. The primary outcome will be the Warwick-Edinburgh Mental Well-being Scale at six weeks. We will also explore the reach, maintenance, cost-effectiveness, barriers and drivers to implementation, and possible mechanisms of action using a range of qualitative and quantitative methods. Discussion: This will be the first fully digital trial of a direct-to-public online peer support programme for common mental disorders. The potential advantages of adding this to current NHS mental health services are considered, along with the challenges of designing a public health campaign and a randomised controlled trial of two digital interventions for people with depression and anxiety using a fully automated digital enrolment and data collection process.

    Developing an Automated Assessment of In-session Patient Activation for Psychological Therapy: Codevelopment Approach

    Background: Patient activation is defined as a patient’s confidence and perceived ability to manage their own health. Patient activation has been a consistent predictor of long-term health and care costs, particularly for people with multiple long-term health conditions. However, there is currently no means of measuring patient activation from what is said in health care consultations. This may be particularly important for psychological therapy because most current methods for evaluating therapy content cannot be used routinely due to time and cost constraints. Natural language processing (NLP) has been used increasingly to classify and evaluate the contents of psychological therapy. This approach promises to make routine, systematic evaluation of psychological therapy contents more accessible in terms of time and cost. However, comparatively little attention has been paid to algorithmic trust and interpretability, with few studies in the field involving end users or stakeholders in algorithm development. Objective: This study applied a responsible design to use NLP in the development of an artificial intelligence model to automate the ratings assigned by a psychological therapy process measure: the Consultation Interactions Coding Scheme (CICS). The CICS assesses the level of patient activation observable from turn-by-turn psychological therapy interactions. Methods: With consent, 128 sessions of remotely delivered cognitive behavioral therapy from 53 participants experiencing multiple physical and mental health problems were anonymously transcribed and rated by trained human CICS coders. Using participatory methodology, a multidisciplinary team proposed candidate language features that they thought would discriminate between high and low patient activation. The team included service-user researchers, psychological therapists, applied linguists, digital research experts, artificial intelligence ethics researchers, and NLP researchers. Identified language features were extracted from the transcripts alongside demographic features, and machine learning was applied using k-nearest neighbors and bagged trees algorithms to assess whether in-session patient activation and interaction types could be accurately classified. Results: The k-nearest neighbors classifier obtained 73% accuracy (82% precision and 80% recall) in a test data set. The bagged trees classifier obtained 81% accuracy for test data (87% precision and 75% recall) in differentiating between interactions rated high in patient activation and those rated low or neutral. Conclusions: Coproduced language features identified through a multidisciplinary collaboration can be used to discriminate among psychological therapy session contents based on patient activation among patients experiencing multiple long-term physical and mental health conditions.
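The pipeline described above (hand-crafted language features per therapy turn, then a classifier separating high from low/neutral activation) can be illustrated with a minimal sketch. The study's actual CICS features and data are not public, so the features, example turns, and the pure-Python 1-nearest-neighbour classifier below are stand-ins for illustration only; the paper itself used k-nearest neighbors and bagged trees over coproduced features.

```python
import math

def extract_features(turn):
    """Toy proxies for candidate language features (assumed, not from the paper)."""
    words = turn.lower().split()
    first_person = sum(w in {"i", "i'll", "i've", "my"} for w in words)
    agency_verbs = sum(w in {"will", "plan", "decide", "try"} for w in words)
    return (first_person, agency_verbs)

def knn_predict(train, query, k=1):
    """Classify by majority vote among the k nearest labelled feature vectors."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Tiny illustrative corpus: 1 = high activation, 0 = low/neutral
turns = [
    ("i will try a walking plan and i'll track my sleep", 1),
    ("i plan to decide on my next steps this week", 1),
    ("nothing really helps and it never changes", 0),
    ("i don't know what you mean by that", 0),
]
train = [(extract_features(t), y) for t, y in turns]

print(knn_predict(train, extract_features("i will plan my week")))  # → 1
```

The point of the sketch is the shape of the task, not the features themselves: in the study, candidate features were proposed by the multidisciplinary team precisely so that the resulting classifier stays interpretable to stakeholders.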

    Identifying research priorities for digital technology in mental healthcare: results of the James Lind Alliance Priority Setting Partnership

    Digital technology, including the use of the internet, smartphones and wearables, holds the promise to bridge the mental health treatment gap by offering a more accessible, potentially less stigmatising, flexible and tailored approach to mental healthcare. However, the evidence base for digital mental health interventions and demonstration of clinical- and cost-effectiveness in real-world settings remains inadequate. The James Lind Alliance (JLA) Priority Setting Partnership (PSP) for digital technology in mental healthcare was established to identify research priorities that reflected the perspectives and unmet needs of people with lived experience of mental health problems, mental health service users, their carers, and healthcare practitioners. 644 participants contributed over 1350 separate questions, which were reduced by qualitative thematic analysis into six overarching themes. Following removal of out-of-scope questions and a comprehensive search of existing evidence, 134 questions were verified as uncertainties suitable for research. These questions were then ranked online and in workshops by 628 participants to produce a shortlist of 26. The top ten research priorities were identified by consensus at a stakeholder workshop and should inform research policy and funding in this field. Identified priorities primarily relate to the safety and efficacy of digital technology interventions in comparison with face-to-face interventions, evidence of population reach, mechanisms of therapeutic change, and how best to optimise the effectiveness of digital interventions in combination with human support.

    Involving psychological therapy stakeholders in responsible research to develop an automated feedback tool: Learnings from the ExTRAPPOLATE project

    Understanding stakeholders’ views on novel autonomous systems in healthcare is essential to ensure these are not abandoned after substantial investment has been made. The ExTRAPPOLATE project applied the principles of Responsible Research and Innovation (RRI) in the development of an automated feedback system for psychological therapists, ‘AutoCICS’. A Patient and Practitioner Reference Group (PPRG) was convened over three online workshops to inform the system’s development. Iterative workshops allowed proposed changes to the system (based on stakeholder comments) to be scrutinised. The PPRG provided valuable insights, differentiated by role, including concerns and suggestions related to the applicability and acceptability of the system to different patients, as well as ethical considerations. The RRI approach enabled the anticipation of barriers to use, reflection on stakeholders’ views, effective engagement with stakeholders, and action to revise the design and proposed use of the system prior to testing in future planned feasibility and effectiveness studies. Many best practices and lessons can be drawn from the application of RRI in the development of the AutoCICS system.

    Building Trust – The People’s Panel for AI

    This paper describes The People’s Panel for AI, a mechanism to build public trust in AI products and services from conceptualisation to deployment. To increase public awareness of how AI and data-driven systems are affecting the lives of ordinary people, a series of Artificial Intelligence Roadshows were delivered in community centres. Community members were recruited to the People’s Panel and completed two days of training about key aspects of data, AI and ethics, including learning a technique for exploring ethical aspects of new technologies (consequence scanning). As part of a pilot study, four People’s Panel sessions were held where tech businesses and researchers pitched their ideas and discussed questions and concerns of the panel members. Through participating in the panel, panel members reported increased confidence in being able to question businesses, and businesses heard a diverse stakeholder voice on the ethical impacts of their products and services, leading to change.