
    Triaging Content Severity in Online Mental Health Forums

    Mental health forums are online communities where people discuss their problems and seek help from moderators and other users. Such forums often contain posts with severe content indicating that a user is in acute distress and at risk of self-harm. Moderators need to respond to these severe posts promptly to prevent potential self-harm, but the large volume of daily posted content makes it difficult for them to locate and respond to critical posts. We present a framework for triaging user content into four severity categories defined by indications of self-harm ideation. Our models are based on a feature-rich classification framework that includes lexical, psycholinguistic, contextual, and topic modeling features. Our approaches improve the state of the art in triaging content severity in mental health forums by large margins (up to a 17% improvement in F1 score). Using the proposed model, we analyze the mental state of users and show that, overall, long-term users of the forum demonstrate decreased severity of risk over time. Our analysis of the moderators' interactions with users further indicates that, without an automatic way to identify critical content, it is indeed challenging for moderators to respond to users in need in a timely manner. Comment: Accepted for publication in Journal of the Association for Information Science and Technology (2017).
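A minimal sketch of a four-class severity triage classifier in the spirit of the abstract. Only the lexical component is approximated here (TF-IDF plus a linear SVM); the label names, toy posts, and pipeline choices are illustrative assumptions, not the paper's actual feature set or implementation:

```python
# Sketch of a four-class severity triage classifier (illustrative only).
# The paper combines lexical, psycholinguistic, contextual, and topic features;
# this approximates just the lexical part with TF-IDF + a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

SEVERITY_LABELS = ["green", "amber", "red", "crisis"]  # hypothetical category names

# Tiny fabricated corpus standing in for forum posts.
posts = [
    "Had a good day, feeling hopeful about therapy",
    "Struggling a lot lately, everything feels heavy",
    "I cannot cope anymore and keep thinking about hurting myself",
    "I have a plan and I do not see a way out",
]
labels = ["green", "amber", "red", "crisis"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram + bigram lexical features
    LinearSVC(),
)
model.fit(posts, labels)
pred = model.predict(["feeling hopeful after a good session"])[0]
```

In a real system the vectorizer would be fit on thousands of annotated posts, and the additional psycholinguistic and topic features would be concatenated before classification.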

    Social Workers' Duty to Report Dangers via Social Media: A Systematic Review

    This systematic review aimed to evaluate social workers' obligations to report suicidal or homicidal posts on social media. Inclusion and exclusion criteria were developed, and multiple databases were searched for relevant literature. Of the literature searched, 26 articles were of use to the study. Based on the findings, there was a lack of concrete information regarding social workers' obligations and mandated reporting guidelines for internet activity. The topic has not been studied to the degree required by this study. Current statutes and regulations would need to be updated to address social media use and suicide/homicide risk. More policies need to be developed to help those with mental illnesses who are a danger to themselves or others, which would in turn help social workers provide comprehensive treatment for clients.

    Systems Engineering Approaches to Minimize the Viral Spread of Social Media Challenges

    Recently, adolescents’ and young adults’ use of social media has significantly increased. While this new landscape of cyberspace offers young internet users many benefits, it also exposes them to numerous risks. One such phenomenon receiving limited research attention is the advent and propagation of viral social media challenges. Several of these challenges entail self-harming behavior, which, combined with their viral nature, poses physical and psychological risks for participants and viewers. One example of these viral social media challenges is the Blue Whale Challenge (BWC). In the initial study, we investigated how people portray the BWC on social media and the potential harm this may pose to vulnerable populations. We first used a thematic content analysis approach, coding 60 publicly posted YouTube videos, 1,112 comments on those videos, and 150 Twitter posts that explicitly referenced the BWC. We then deductively coded the YouTube videos based on the Suicide Prevention Resource Center (SPRC) messaging guidelines. We found that social media users post about the BWC to raise awareness and discourage participation, express sorrow for the participants, criticize the participants, or describe a relevant experience. Moreover, we found that most of the videos on YouTube violate at least 50% of the SPRC safe and effective messaging guidelines. These posts might have the problematic effect of normalizing the BWC through repeated exposure, modeling, and reinforcement of self-harming and suicidal behavior, especially among vulnerable populations such as adolescents.
A second study conducted a systematic content analysis of 180 YouTube videos (~813 minutes total length), 3,607 comments on those YouTube videos, and 450 Twitter posts to explore the portrayal and social media users’ perception of three viral social media-based challenges (i.e., BWC, Tide Pod Challenge (TPC), and Amyotrophic Lateral Sclerosis (ALS) Ice Bucket Challenge (IBC)). We identified five common themes across the challenges, including: education and awareness, criticizing the participants and blaming the victims, detailed information about the participants, giving viewers a tutorial on how to participate, and understanding seemingly senseless online behavior. We found that the purpose of posting about an online challenge varies based on the inherent risk involved in the challenge itself. However, analysis of the YouTube comments showed that previous experience and exposure to online challenges appear to affect the perception of other challenges in the future. The third study investigated the beliefs that lead adolescents and young adults to participate in these activities by analyzing the ALS IBC to represent challenges with minimally harmful behaviors intended to support philanthropic endeavors and the Cinnamon Challenge (CC), to represent those involving harmful behaviors that may culminate in injury. We conducted a retrospective quantitative study with a total of 471 participants between the ages of 13 and 35 who either had participated in the ALS IBC or the CC or had never participated in any online challenge. We used binomial logistic regression models to classify those who participated in ALS IBC or CC versus those who didn’t with the beliefs from the Integrated Behavioral Model (IBM) as predictors. 
Our findings showed that both CC and ALS IBC participants had significantly greater positive emotional responses, value for the outcomes of the challenge, and expectation of the public to participate in the challenge in comparison to individuals who never participated in any challenge. In addition, only CC participants perceived positive public opinion about the challenge and perceived the challenge to be easy with no harmful consequences, in comparison to individuals who never participated in any challenge. The findings from this study were used to develop interventions based on knowledge of how the specific items making up each construct apply to social media challenges. In the last study, we showed how agent-based modeling (ABM) might be used to investigate the effect of educational intervention programs to reduce social media challenge participation at multiple levels: family, school, and community. In addition, we showed how the effect of these education-based interventions can be compared to that of social media-based policy interventions. Our model takes into account the “word of mouth” effect of these interventions, which could either decrease participation in social media challenges further than expected or unintentionally cause others to participate.
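The classification step described above (participants versus non-participants, with belief constructs as predictors) can be sketched as a binomial logistic regression. The data below are synthetic and the predictor names are assumptions; the actual study used survey responses from 471 participants scored on Integrated Behavioral Model constructs:

```python
# Sketch: binomial logistic regression with belief scores as predictors.
# Synthetic data; coefficients exponentiate to odds ratios per construct.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Hypothetical belief scores: emotional response, outcome value, perceived norms.
X = rng.normal(size=(n, 3))
# Simulate participation driven mostly by positive emotional response.
logits = 1.5 * X[:, 0] + 0.5 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])  # effect of a one-unit belief increase
```

An odds ratio above 1 for a construct indicates that stronger endorsement of that belief is associated with higher odds of having participated in the challenge.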

    Detecting Suicidality in Arabic Tweets Using Machine Learning and Deep Learning Techniques

    Social media platforms have revolutionized traditional communication techniques by enabling people globally to connect instantaneously, openly, and frequently. People use social media to share personal stories and express their opinions. Negative emotions such as thoughts of death, self-harm, and hardship are commonly expressed on social media, particularly among younger generations. As a result, using social media to detect suicidal thoughts will help provide proper intervention that will ultimately deter others from self-harm and committing suicide and stop the spread of suicidal ideation on social media. To investigate the ability to detect suicidal thoughts in Arabic tweets automatically, we developed a novel Arabic suicidal tweets dataset, examined several machine learning models, including Naïve Bayes, Support Vector Machine, K-Nearest Neighbor, Random Forest, and XGBoost, trained on word frequency and word embedding features, and investigated the ability of pre-trained deep learning models, AraBert, AraELECTRA, and AraGPT2, to identify suicidal thoughts in Arabic tweets. The results indicate that SVM and RF models trained on character n-gram features provided the best performance among the machine learning models, with 86% accuracy and an F1 score of 79%. The results of the deep learning models show that the AraBert model outperforms the other machine and deep learning models, achieving an accuracy of 91% and an F1 score of 88%, which significantly improves the detection of suicidal ideation in the Arabic tweets dataset. To the best of our knowledge, this is the first study to develop an Arabic suicidality detection dataset from Twitter and to use deep learning approaches in detecting suicidality in Arabic posts.
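A minimal sketch of the best-performing classical setup the abstract reports: character n-gram features with a linear SVM. The toy English texts, the n-gram range, and the label names are assumptions for illustration; the study itself used Arabic tweets:

```python
# Sketch: character n-gram TF-IDF + linear SVM for suicidality detection.
# Toy texts stand in for the study's Arabic tweet dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "I feel like giving up on everything",
    "Great match tonight, what a goal",
    "Nobody would miss me if I were gone",
    "Trying a new recipe this weekend",
]
labels = ["suicidal", "not_suicidal", "suicidal", "not_suicidal"]

model = make_pipeline(
    # char_wb builds character n-grams within word boundaries, which is
    # robust to spelling variation and morphologically rich languages.
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(texts, labels)
pred = model.predict(["what a great goal tonight"])[0]
```

Character n-grams are a common choice for Arabic text because they sidestep tokenization and capture shared subword patterns across inflected forms.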

    Exploring the Risk of Suicide in Real Time on Spanish Twitter: Observational Study

    Background: Social media is now a common context wherein people express their feelings in real time. These platforms are increasingly showing their potential to detect the mental health status of the population. Suicide prevention is a global health priority, and efforts toward early detection are starting to develop, although there is a need for more robust research. Objective: We aimed to explore the emotional content of Twitter posts in Spanish and their relationships with severity of the risk of suicide at the time of writing the tweet. Methods: Tweets containing a specific lexicon relating to suicide were filtered through Twitter's public application programming interface. Expert psychologists were trained to independently evaluate these tweets. Each tweet was evaluated by 3 experts. Tweets were filtered by experts according to their relevance to the risk of suicide. In the tweets, the experts evaluated: (1) the severity of the general risk of suicide and the risk of suicide at the time of writing the tweet; (2) the emotional valence and intensity of 5 basic emotions; (3) relevant personality traits; and (4) other relevant risk variables such as helplessness, desire to escape, perceived social support, and intensity of suicidal ideation. Correlation and multivariate analyses were performed. Results: Of 2509 tweets, 8.61% (n=216) were considered to indicate suicidality by most experts. Severity of the risk of suicide at the time was correlated with sadness (ρ=0.266; P<.001), joy (ρ=–0.234; P=.001), general risk (ρ=0.908; P<.001), and intensity of suicidal ideation (ρ=0.766; P<.001). The severity of risk at the time of the tweet was significantly higher in people who expressed feelings of defeat and rejection (P=.003), a desire to escape (P<.001), a lack of social support (P=.03), helplessness (P=.001), and daily recurrent thoughts (P=.007).
In the multivariate analysis, the intensity of suicide ideation was a predictor for the severity of suicidal risk at the time (β=0.311; P=.001), as well as being a predictor for fear (β=–0.009; P=.01) and emotional valence (β=0.007; P=.009). The model explained 75% of the variance. Conclusions: These findings suggest that it is possible to identify emotional content and other risk factors in suicidal tweets with a Spanish sample. Emotional analysis and, in particular, the detection of emotional variations may be key for real-time suicide prevention through social media.
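The reported ρ values are Spearman rank correlations between expert-rated severity and emotion scores. A sketch of that computation on synthetic data (the variables and effect size here are fabricated for illustration, not the study's data):

```python
# Sketch: Spearman rank correlation between rated suicide-risk severity
# and an emotion score, as in the reported rho values. Synthetic data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 100
sadness = rng.random(n)                      # expert-rated sadness intensity
severity = sadness + 0.3 * rng.random(n)     # severity positively related to sadness

rho, p = spearmanr(severity, sadness)
```

Spearman's ρ is preferred over Pearson's r here because expert ratings are ordinal and the relationship need not be linear, only monotonic.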

    J Adolesc Health

    Purpose: Rates of suicide are increasing rapidly among youth. Social media messages and online games promoting suicide are a concern for parents and clinicians. We examined the timing and location of social media posts about one alleged youth suicide game to better understand the degree to which social media data can provide earlier public health awareness. Methods: We conducted a search of all public social media posts and news articles on the Blue Whale Challenge (BWC), an alleged suicide game, from January 1, 2013, through June 30, 2017. Data were retrieved through multiple keyword searches; sources included the social media platforms Twitter, YouTube, Reddit, and Tumblr, as well as blogs, forums, and news articles. Posts were classified into three categories: individual “pro”-BWC posts (support for the game), individual “anti”-BWC posts (opposition to the game), and media reports. Timing and location of posts were assessed. Results: Overall, 95,555 social media posts and articles about the BWC were collected. In total, over one-quarter (28.3%) were “pro”-BWC. The first U.S. news article related to the BWC was published approximately 4 months after the first English-language U.S. social media post about the BWC and 9 months after the first U.S. social media post in any language. By the close of the study period, “pro”-BWC posts had spread to 127 countries. Conclusions: Novel online risks to mental health, such as pro-suicide games or messages, can spread rapidly and globally. A better understanding of social media and Web data may allow for detection of such threats earlier than is currently possible.

    Characterization of Time-variant and Time-invariant Assessment of Suicidality on Reddit using C-SSRS

    Suicide is the 10th leading cause of death in the U.S. (1999–2019). However, predicting when someone will attempt suicide has been nearly impossible. In the modern world, many individuals suffering from mental illness seek emotional support and advice on well-known and easily accessible social media platforms such as Reddit. While prior artificial intelligence research has demonstrated the ability to extract valuable information from social media on suicidal thoughts and behaviors, these efforts have not considered both severity and temporality of risk. The insights made possible by access to such data have enormous clinical potential, most dramatically envisioned as a trigger to employ timely and targeted interventions (i.e., voluntary and involuntary psychiatric hospitalization) to save lives. In this work, we address this knowledge gap by developing deep learning algorithms to assess suicide risk in terms of severity and temporality from Reddit data based on the Columbia Suicide Severity Rating Scale (C-SSRS). In particular, we employ two deep learning approaches, time-variant and time-invariant modeling, for user-level suicide risk assessment, and evaluate their performance against a clinician-adjudicated gold-standard Reddit corpus annotated based on the C-SSRS. Our results suggest that the time-variant approach outperforms the time-invariant method in the assessment of suicide-related ideations and supportive behaviors (AUC: 0.78), while the time-invariant model performed better in predicting suicide-related behaviors and suicide attempts (AUC: 0.64). The proposed approach can be integrated with clinical diagnostic interviews for improving suicide risk assessments. Comment: 24 pages, 8 tables, 6 figures; accepted by PLoS One. One of the two datasets mentioned in the manuscript has closed access. We will make it public after PLoS One produces the manuscript.
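The contrast between the two modeling approaches comes down to how a user's post history is represented before classification. A minimal sketch of the two representations, using toy post embeddings (the dimensions and pooling choice are assumptions, not the paper's architecture):

```python
# Sketch: two user-level representations of a post history.
# Time-invariant: pool all post vectors into one vector (order discarded).
# Time-variant: keep the ordered sequence for a sequential model (e.g., an RNN).
import numpy as np

rng = np.random.default_rng(2)
post_vectors = rng.normal(size=(5, 8))  # 5 posts, 8-dim embeddings (toy)

time_invariant = post_vectors.mean(axis=0)  # one fixed-size vector per user
time_variant = post_vectors                 # (timesteps, features) sequence
```

The time-variant sequence preserves how a user's risk signals evolve across posts, which is what lets a sequential model pick up escalation over time; the pooled vector cannot represent that ordering.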
