
    The impact of patient involvement on participant opinions of information sheets

    Background: Patient and public involvement (PPI) groups can provide valuable input to create more accessible study documents with less jargon. However, we do not know whether this process improves accessibility for potential participants. Aims: We assessed whether participant information sheets were rated as more accessible after PPI review, and which aspects of information sheets and study design were important to mental health patients compared with a control group with no mental health service use. Method: This was a double-blind quasi-experimental study using a mixed-methods explanatory design. Patients and control participants quantitatively rated pre- and post-review documents. Semi-structured interviews were thematically analysed to gain qualitative feedback on opinions of information sheets and studies. Two-way multivariate analysis of variance was used to detect differences in ratings between pre- and post-review documents. Results: We found no significant (P < 0.05) improvement in patient (n = 15) or control group (n = 21) ratings after PPI review. Patients and controls both rated PPI as of low importance in studies and considered the study rationale the most important element. However, PPI was often misunderstood, with participants believing that it meant lay patients would take over the design and administration of the study. Qualitative findings highlight the importance of clear, friendly and visually appealing information sheets. Conclusions: Researchers should be aware of what participants want to know so they can create information sheets addressing these priorities, for example, explaining why the research is necessary. PPI is poorly understood by the wider population and efforts must be made to increase diversity in participation.

    Comparing professional and consumer ratings of mental health apps: mixed methods study

    Background: As the number of mental health apps has grown, increasing efforts have been focused on establishing quality tailored reviews. These reviews prioritize clinician and academic views rather than the views of those who use the apps, particularly people with lived experience of mental health problems. Given that the COVID-19 pandemic has increased reliance on web-based and mobile mental health support, understanding the views of those with mental health conditions is increasingly important. Objective: This study aimed to understand the opinions of people with mental health problems on mental health apps and how they differ from established ratings by professionals. Methods: A mixed methods study was conducted using a web-based survey administered between December 2020 and April 2021, assessing 11 mental health apps. We recruited individuals who had experienced mental health problems to download and use 3 apps for 3 days and complete a survey. The survey consisted of the One Mind PsyberGuide Consumer Review Questionnaire and 2 items from the Mobile App Rating Scale (star and recommendation ratings from 1 to 5). The consumer review questionnaire contained a series of open-ended questions, which were thematically analyzed and, using a predefined protocol, converted into binary (positive or negative) ratings, which were then compared with app ratings by professionals and star ratings from app stores. Results: We found low agreement between the participants’ and professionals’ ratings. More than half of the app ratings showed disagreement between participants and professionals (198/372, 53.2%). Compared with participants, professionals gave the apps higher star ratings (3.58 vs 4.56) and were more likely to recommend the apps to others (3.44 vs 4.39). Participants’ star ratings were weakly positively correlated with app store ratings (r=0.32, P=.01).
Thematic analysis found 11 themes, including issues of user experience, ease of use and interactivity, privacy concerns, customization, and integration with daily life. Participants particularly valued certain aspects of mental health apps that appear to be overlooked by professional reviewers, including functions such as the ability to track and measure mental health and the provision of general mental health education. The cost of apps was among the most important factors for participants; although this is already considered by professionals, the information is not always easily accessible. Conclusions: As reviews on app stores and by professionals differ from those by people with lived experience of mental health problems, these alone are not sufficient to give people with mental health problems the information they need when choosing a mental health app. App rating measures must include the perspectives of mental health service users to ensure ratings represent their priorities. Additional work should be done to incorporate the features most important to mental health service users into mental health apps.
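The two headline comparisons in this abstract, percent disagreement between rater groups and a Pearson correlation between star ratings, can be reproduced in miniature. This is an illustrative sketch with invented rating data, not the study's dataset or analysis code:

```python
# Hypothetical example: comparing binary participant vs professional app
# ratings, and correlating two sets of star ratings. All numbers are
# invented for illustration; the study's real data are not shown here.

from math import sqrt

def percent_disagreement(a, b):
    """Share of paired ratings where the two rater groups disagree."""
    disagreements = sum(1 for x, y in zip(a, b) if x != y)
    return disagreements / len(a)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

participants = [1, 0, 1, 1, 0, 0]    # 1 = positive, 0 = negative
professionals = [1, 1, 0, 1, 1, 0]

print(percent_disagreement(participants, professionals))  # 0.5
```

In the study itself, disagreement was computed over 372 paired app ratings (53.2%) and the correlation over participant versus app-store star ratings (r=0.32).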

    How to make study documents clear and relevant: the impact of patient involvement

    Background: Patient and public involvement can improve study outcomes, but little data have been collected on why this might be. We investigated the impact of the Feasibility and Support to Timely Recruitment for Research (FAST-R) service, made up of trained patients and carers who review research documents at the beginning of the research pipeline. Aims: To investigate the impact of the FAST-R service, and to provide researchers with guidelines to improve study documents. Method: A mixed-methods design assessing changes and suggestions in documents submitted to the FAST-R service from 2011 to 2020. Quantitative measures were readability, word count and jargon words before and after review, the effects over time, and whether changes were implemented. We also asked eight reviewers to blindly select a pre- or post-review participant information sheet as their preferred version. Reviewers’ comments were analysed qualitatively via thematic analysis. Results: After review, documents were longer and contained less jargon, but readability did not improve. Jargon and the number of suggested changes increased over time. Participant information sheets had the most suggested changes. Reviewers wanted clarity and better presentation, and felt that documents lacked key information such as remuneration, risks involved and data management. Six out of eight reviewers preferred the post-review participant information sheet. FAST-R reviewers provided jargon words and phrases with alternatives for researchers to use. Conclusions: Longer documents are acceptable if they are clear, with jargon explained or substituted. The highlighted barriers to true informed consent are not decreasing, although this study has suggestions for improving research document accessibility.
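Two of the quantitative measures named here, word count and jargon frequency, are straightforward to compute over a document's text. A minimal sketch, with an invented jargon list (the FAST-R reviewers supplied the real one) and invented example sentences:

```python
# Illustrative comparison of a document before and after review on
# word count and jargon count. The jargon set and texts are invented.
import re

JARGON = {"randomised", "placebo", "cohort", "adverse"}

def word_count(text):
    return len(re.findall(r"[a-zA-Z']+", text))

def jargon_count(text):
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return sum(1 for w in words if w in JARGON)

pre = "This randomised placebo-controlled cohort study records adverse events."
post = "This study compares a real treatment with a dummy one and notes side effects."

print(word_count(pre), jargon_count(pre))    # 9 4
print(word_count(post), jargon_count(post))  # 14 0
```

Note the pattern matches the study's finding: the reviewed version is longer but contains less jargon. Readability scores such as Flesch-Kincaid additionally weight sentence and syllable length, which word count alone does not capture.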

    Codeveloping and evaluating a campaign to reduce dementia misconceptions on Twitter: machine learning study

    Background: Dementia misconceptions on Twitter can have detrimental or harmful effects. Machine learning (ML) models codeveloped with carers provide a method to identify these and help in evaluating awareness campaigns. Objective: This study aimed to develop an ML model to distinguish between misconceptions and neutral tweets and to develop, deploy, and evaluate an awareness campaign to tackle dementia misconceptions. Methods: Taking 1414 tweets rated by carers from our previous work, we built 4 ML models. Using 5-fold cross-validation, we evaluated them and performed a further blind validation with carers for the best 2 ML models; from this blind validation, we selected the best model overall. We codeveloped an awareness campaign and collected pre- and post-campaign tweets (N=4880), classifying them with our model as misconceptions or not. We analyzed dementia tweets from the United Kingdom across the campaign period (N=7124) to investigate how current events influenced misconception prevalence during this time. Results: A random forest model best identified misconceptions, with an accuracy of 82% from blind validation, and found that 37% of the UK tweets (N=7124) about dementia across the campaign period were misconceptions. From this, we could track how the prevalence of misconceptions changed in response to top news stories in the United Kingdom. Misconceptions significantly rose around political topics and were highest (22/28, 79% of the dementia tweets) when there was controversy over the UK government allowing hunting to continue during the COVID-19 pandemic. After our campaign, there was no significant change in the prevalence of misconceptions. Conclusions: Through codevelopment with carers, we developed an accurate ML model to predict misconceptions in dementia tweets. Our awareness campaign was ineffective, but similar campaigns could be enhanced through ML to respond to current events that affect misconceptions in real time.
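The model-selection step described here, training a classifier on labelled tweets and scoring it with 5-fold cross-validation, can be sketched with scikit-learn. This is a hypothetical illustration: the tiny corpus, labels, and feature choices below are invented, and the study's actual features and four candidate models are not specified in this abstract:

```python
# Hypothetical sketch: classify tweets as misconception (1) or neutral (0)
# with a random forest over bag-of-words features, evaluated by 5-fold CV.
# The corpus and labels are invented for illustration.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

tweets = [
    "dementia means your life is over",               # misconception
    "all old people get dementia eventually",          # misconception
    "people with dementia cannot understand anything", # misconception
    "dementia is just normal ageing",                  # misconception
    "you can catch dementia from other people",        # misconception
    "visited my gran who lives with dementia today",   # neutral
    "new funding announced for dementia research",     # neutral
    "dementia affects memory and thinking",            # neutral
    "support services for dementia carers are vital",  # neutral
    "reading about the early signs of dementia",       # neutral
]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # unigram + bigram features
    RandomForestClassifier(n_estimators=100, random_state=0),
)

# 5-fold cross-validated accuracy, as in the model-selection step
scores = cross_val_score(model, tweets, labels, cv=5, scoring="accuracy")
print(scores.mean())
```

On a realistic corpus of 1414 rated tweets, the cross-validated scores would guide which of the candidate models goes forward to blind validation with carers.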

    Investigation of carers’ perspectives of dementia misconceptions on Twitter: focus group study

    Background: Dementia misconceptions on social media are common, with negative effects on people with the condition, their carers, and those who know them. This study codeveloped a thematic framework with carers to understand the forms these misconceptions take on Twitter. Objective: The aim of this study is to identify and analyze types of dementia conversations on Twitter using participatory methods. Methods: A total of 3 focus groups with dementia carers were held to develop a framework of dementia misconceptions based on their experiences. Dementia-related tweets were collected from Twitter’s official application programming interface using neutral and negative search terms defined by the literature and by carers (N=48,211). A sample of these tweets was selected with equal numbers of neutral and negative words (n=1497), which was validated in individual ratings by carers. We then used the framework to analyze, in detail, a sample of carer-rated negative tweets (n=863). Results: A total of 25.94% (12,507/48,211) of our tweet corpus contained negative search terms about dementia. The carers’ framework had 3 negative and 3 neutral categories. Our thematic analysis of carer-rated negative tweets found 9 themes, including the use of weaponizing language to insult politicians (469/863, 54.3%), using dehumanizing or outdated words or statements about members of the public (n=143, 16.6%), unfounded claims about the cures or causes of dementia (n=11, 1.3%), or providing armchair diagnoses of dementia (n=21, 2.4%). Conclusions: This is the first study to use participatory methods to develop a framework that identifies dementia misconceptions on Twitter. We show that misconceptions and stigmatizing language are not rare. They manifest through minimizing and underestimating language. 
Web-based campaigns aiming to reduce discrimination and stigma about dementia could target those who use negative vocabulary and reduce the misconceptions being propagated, thus improving general awareness.
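The corpus-filtering steps described in this abstract, measuring the share of tweets containing negative search terms and drawing a balanced neutral/negative sample for carer validation, can be sketched as plain Python. The term lists and tweets below are invented for illustration; the study's search terms came from the literature and from carers:

```python
# Illustrative sketch of the corpus filtering and balanced sampling steps.
# Term lists and tweets are invented; they are not the study's real terms.
import random

NEGATIVE_TERMS = {"demented", "senile", "crazy"}

def contains_term(text, terms):
    words = text.lower().split()
    return any(t in words for t in terms)

def negative_share(corpus):
    """Fraction of tweets containing at least one negative search term."""
    hits = sum(1 for t in corpus if contains_term(t, NEGATIVE_TERMS))
    return hits / len(corpus)

def balanced_sample(corpus, k, seed=0):
    """Equal-sized random samples of negative-term and neutral-only tweets."""
    rng = random.Random(seed)
    neg = [t for t in corpus if contains_term(t, NEGATIVE_TERMS)]
    neu = [t for t in corpus if not contains_term(t, NEGATIVE_TERMS)]
    return rng.sample(neg, k) + rng.sample(neu, k)

corpus = [
    "my gran has dementia and we support her",
    "he is totally demented lol",
    "new dementia research funding announced",
    "that senile old politician again",
]
print(negative_share(corpus))  # 0.5
```

Applied to the study's corpus of 48,211 tweets, the first step yields the reported 25.94% negative share, and the second yields a balanced sample like the 1497 tweets rated individually by carers.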

    Investigating mental health service user views of stigma on Twitter during COVID-19: a mixed-methods study

    Background: Mental health stigma on social media is well studied, but not from the perspective of mental health service users. Coronavirus disease 2019 (COVID-19) increased mental health discussions and may have affected stigma. Objectives: (1) to understand how service users perceive and define mental health stigma on social media; and (2) to explore how COVID-19 shaped mental health conversations and social media use. Methods: We collected 2,700 tweets related to seven mental health conditions: schizophrenia, depression, anxiety, autism, eating disorders, OCD, and addiction. Twenty-seven service users rated them as stigmatising or neutral, followed by focus group discussions. Focus group transcripts were thematically analysed. Results: Participants rated 1,101 tweets (40.8%) as stigmatising. Tweets related to schizophrenia were most frequently classed as stigmatising (411/534, 77%), and tweets related to depression or anxiety were least stigmatising (139/634, 21.9%). Whether a tweet was perceived as stigmatising depended on perceived intention and context, but some words (e.g. “psycho”) felt stigmatising irrespective of context. Discussion: The anonymity of social media seemingly increased stigma, but COVID-19 lockdowns improved mental health literacy. This is the first study to qualitatively investigate service users' views of stigma towards various mental health conditions on Twitter, and we show that stigma is common, particularly towards schizophrenia. Service user involvement is vital when designing solutions to stigma.