Identifying schizophrenia stigma on Twitter: a proof of principle model using service user supervised machine learning
Stigma has negative effects on people with mental health problems, making them less likely to seek help. We developed a proof-of-principle, service user supervised machine learning pipeline to identify stigmatising tweets reliably and to understand the prevalence of public schizophrenia stigma on Twitter. A service user group advised on the machine learning model evaluation metric (fewest false negatives) and on features for machine learning. We collected 13,313 public tweets on schizophrenia between January and May 2018. Two service user researchers manually identified stigma in 746 English tweets; 80% were used to train eight models, and 20% for testing. The two models with the fewest false negatives were compared in two service user validation exercises, and the best model was used to classify all extracted public English tweets. Tweets classed as stigmatising by service users were more negative in sentiment (t(744) = 12.02, p < 0.001 [95% CI: 0.196–0.273]). Our linear Support Vector Machine was the best-performing model, with the fewest false negatives and higher service user validation. This model identified public stigma in 47% of English tweets (n = 5,676), which were more negative in sentiment (t(12,143) = 64.38, p < 0.001 [95% CI: 0.29–0.31]). Machine learning can identify stigmatising tweets at large scale, with service user involvement. Given the prevalence of stigma, there is an urgent need for education and online campaigns to reduce it; machine learning can provide a real-time metric of their success.
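The pipeline outline is straightforward to reproduce. Below is a minimal sketch in Python with scikit-learn, assuming TF-IDF features and a hypothetical labelled-tweets file; the authors' published feature set and code may differ.

```python
# Minimal sketch of a comparable pipeline (not the authors' code):
# TF-IDF features with a linear SVM, judged by false negatives on a
# held-out 20% split, as the abstract describes. File and column
# names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import confusion_matrix

tweets = pd.read_csv("labelled_tweets.csv")  # text + 0/1 stigma label
X_train, X_test, y_train, y_test = train_test_split(
    tweets["text"], tweets["stigma"], test_size=0.2, random_state=42
)

vectoriser = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
X_train_vec = vectoriser.fit_transform(X_train)
X_test_vec = vectoriser.transform(X_test)

model = LinearSVC()
model.fit(X_train_vec, y_train)

# Service users prioritised fewest false negatives: stigmatising
# tweets wrongly classed as non-stigmatising.
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test_vec)).ravel()
print(f"false negatives on held-out tweets: {fn}")
```

In a comparison of eight models, the same false-negative count would be computed for each and the lowest taken forward to service user validation.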
Terms and conditions apply: critical issues for readability and jargon in mental health depression apps
Background
Mental health services are turning to technology to ease the resource burden, but privacy policies are hard to understand, potentially compromising informed consent for people with mental health problems. The FDA recommends a reading grade of 8.
Objective
To investigate and improve the accessibility and acceptability of mental health depression app privacy policies.
Methods
We conducted a mixed methods study using quantitative and qualitative data to improve the accessibility of app privacy policies. Service users completed assessments and focus groups to provide information on ways to improve privacy policy accessibility, including identifying and rewording jargon. This was supplemented by readability analyses comparing mental health depression apps with social media, music, and finance apps, and by examining whether the GDPR affected accessibility.
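A reading-grade check of this kind can be run with an off-the-shelf readability library. The sketch below assumes the textstat package and a hypothetical policy text file; the paper does not name the tool it used.

```python
# Minimal sketch of a readability check, assuming the textstat package
# (an assumption; the study's actual tooling is not specified).
# flesch_kincaid_grade() returns a US school reading grade; the FDA
# benchmark cited above is grade 8.
import textstat

with open("privacy_policy.txt") as f:  # hypothetical policy text file
    policy = f.read()

grade = textstat.flesch_kincaid_grade(policy)
words = textstat.lexicon_count(policy)  # policy length in words
verdict = "meets" if grade <= 8 else "exceeds"
print(f"reading grade {grade:.1f} over {words} words; {verdict} the grade-8 benchmark")
```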
Results
Service users provided a detailed framework for increasing accessibility that emphasised having critical information for consent. Quantitatively, most app privacy policies were too long and complicated to ensure informed consent (mental health apps: mean reading grade = 13.1, SD = 2.44). Their reading grades were no different from those of the other services. Only 3 mental health apps had a reading grade of 8 or less, and 99% contained service user identified jargon. Mental health app privacy policies produced for the GDPR were not more readable and were longer.
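The jargon screen reduces to matching a service-user-compiled term list against each policy. A minimal sketch, with purely illustrative terms since the study's published list is not reproduced here:

```python
# Minimal sketch of a jargon screen. The term list is illustrative
# only, not the service-user-identified list from the study.
import re

JARGON = ["indemnify", "third party", "data controller", "arbitration"]

def find_jargon(policy_text: str, terms=JARGON) -> list[str]:
    """Return the jargon terms that appear as whole words in a policy."""
    lowered = policy_text.lower()
    return [t for t in terms if re.search(r"\b" + re.escape(t) + r"\b", lowered)]

print(find_jargon("You agree to indemnify the data controller ..."))
# ['indemnify', 'data controller']
```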
Conclusions
Apps specifically aimed at people with mental health difficulties are not accessible, and even those that met the FDA's recommended reading grade contained jargon words. Developers and designers can increase accessibility by following a few rules and should check, before launching, whether the privacy policy can be understood.