
    School Based Responses to Non-Suicidal Self Injury and Suicide: Literature Considerations When Framing a Policy Response

    Deliberate Non-Suicidal Self Injury (NSSI) and suicide present distinct but related concerns for schools. An Australian study of over 6,300 families containing children/adolescents aged 4 to 17 years found that one in 10 young people had engaged in NSSI, with three quarters of this cohort having harmed themselves in the previous twelve months (Lawrence, 2015). The same study found that within the 12 to 17 year old age group, one in 13 individuals had considered suicide in the previous 12 months, with one in 40 having made attempts (Lawrence, 2015). This article seeks to articulate key themes from the literature that demand consideration by schools seeking to construct their own framework or pastoral response, balancing the prioritization of student safety whilst also attending to the realities of staff competencies. Given the age group represented in the Lawrence (2015) study, it should not be surprising that adolescents in the school context may disclose the presence of intrusive thoughts pertaining to at-risk behaviours. Consequently, schools are well placed to deliver prevention services and, simultaneously, need to be prepared to respond to situations of NSSI and suicide attempts. Drawing on the expertise of staff from an Edmund Rice Education Australia (EREA) school located in Brisbane, this paper draws links to existing policy determinants of pastoral care within this Catholic school, whilst considering the issue of risk-to-self, with relevant themes organized according to the three action areas outlined by the Queensland Suicide Prevention Action Plan (Queensland Mental Health Commission, 2015), namely: prevention, intervention, and postvention.

    Combining mobile-health (mHealth) and artificial intelligence (AI) methods to avoid suicide attempts: the Smartcrises study protocol

    The screening of the digital footprint for clinical purposes relies on the capacity of wearable technologies to collect data and extract relevant information for patient management. Artificial intelligence (AI) techniques allow processing of real-time observational information and continuous learning from data to build understanding. We designed a system able to extract clinical sense from digital footprints, based on the smartphone's native sensors and advanced machine learning and signal processing techniques, in order to identify suicide risk. Method/design: The Smartcrisis study is a cross-national comparative study. The study goal is to determine the relationship between suicide risk and changes in sleep quality and disturbed appetite. Outpatients from the Hospital Fundación Jiménez Díaz Psychiatry Department (Madrid, Spain) and the University Hospital of Nîmes (France) will be invited to participate in the study. Two smartphone applications and a wearable armband will be used to capture the data. In the intervention group, a smartphone application (MEmind) will allow for ecological momentary assessment (EMA) data capture related to sleep, appetite and suicidal ideation. Discussion: Some concerns regarding data security might be raised. Our system complies with the highest level of security regarding patients' data. Several important ethical considerations related to the EMA method must also be considered. EMA methods entail a non-negligible time commitment on behalf of the participants, relying on daily, or sometimes more frequent, smartphone notifications. Furthermore, recording participants' daily experiences in a continuous manner is an integral part of EMA. This approach may be significantly more demanding than asking a participant to complete a retrospective questionnaire, but also more accurate in terms of symptom monitoring. Overall, we believe that Smartcrises could contribute to a paradigm shift from the traditional identification of risk factors to personalized prevention strategies tailored to the characteristics of each patient. This study was partly funded by Fundación Jiménez Díaz Hospital, Instituto de Salud Carlos III (PI16/01852), Delegación del Gobierno para el Plan Nacional de Drogas (20151073), the American Foundation for Suicide Prevention (AFSP) (LSRG-1-005-16), the Madrid Regional Government (B2017/BMD-3740 AGES-CM 2CM; Y2018/TCS-4705 PRACTICO-CM) and Structural Funds of the European Union; by MINECO/FEDER ('ADVENTURE', id. TEC2015-69868-C2-1-R) and MCIU Explora Grant 'aMBITION' (id. TEC2017-92552-EXP); and by the French Embassy in Madrid, Spain, the Fondation de l'Avenir, and the Fondation de France. The work of D. Ramírez and A. Artés-Rodríguez has been partly supported by the Ministerio de Economía of Spain under projects OTOSIS (TEC2013-41718-R), AID (TEC2014-62194-EXP) and the COMONSENS Network (TEC2015-69648-REDC), by the Ministerio de Economía of Spain jointly with the European Commission (ERDF) under projects ADVENTURE (TEC2015-69868-C2-1-R) and CAIMAN (TEC2017-86921-C2-2-R), and by the Comunidad de Madrid under project CASI-CAM-CM (S2013/ICE-2845). The work of P. Moreno-Muñoz has been supported by FPI grant BES-2016-07762.
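    To make the EMA design concrete, the sketch below is a hypothetical illustration only: the column names, 1-5 rating scales, and weekly aggregation are assumptions rather than the MEmind schema. It shows how daily self-reports of sleep quality and appetite could be rolled up per patient so that week-over-week changes can be related to suicide risk.

    import pandas as pd

    # Hypothetical EMA log: one self-report per day for a single patient.
    ema = pd.DataFrame({
        "patient_id": ["p1"] * 14,
        "date": pd.date_range("2024-01-01", periods=14, freq="D"),
        "sleep_quality": [4, 4, 3, 4, 3, 3, 4, 2, 2, 3, 2, 1, 2, 2],  # 1 = poor ... 5 = good (assumed scale)
        "appetite":      [3, 3, 3, 4, 3, 3, 3, 2, 2, 2, 1, 2, 2, 1],  # 1 = none ... 5 = normal (assumed scale)
    })

    # Weekly means per patient, then week-over-week change; a sustained drop in
    # either variable is the kind of signal the protocol proposes to relate to risk.
    weekly = (
        ema.set_index("date")
           .groupby("patient_id")[["sleep_quality", "appetite"]]
           .resample("W")
           .mean()
    )
    print(weekly.groupby("patient_id").diff())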

    Characteristics of Multi-Class Suicide Risks Tweets Through Feature Extraction and Machine Learning Techniques

    This paper presents a detailed analysis of the linguistic characteristics connected to specific levels of suicide risk, providing insight into the impact of feature extraction techniques on the effectiveness of predictive models of suicide ideation. Considerable research has addressed the detection of suicide ideation from social media posts through feature extraction and machine learning techniques, but little has addressed the multiclass classification of suicide risks or the impact of linguistic characteristics on predictability. To address this gap, this paper proposes a machine learning framework capable of multiclass classification of suicide risks from social media posts, with an extended analysis of the linguistic characteristics that contribute to suicide risk detection. A total of 552 samples of a supervised dataset of Twitter posts were manually annotated for suicide risk modeling. Feature extraction combined term frequency-inverse document frequency (TF-IDF), part-of-speech (PoS) tagging, and the Valence Aware Dictionary and sEntiment Reasoner (VADER). Training and modeling were conducted with the Random Forest technique. Testing on 138 samples, including real-time detection scenarios, yielded 86.23% accuracy, 86.71% precision, and 86.23% recall, an improvement attributable to the combination of feature extraction techniques rather than to the choice of modeling technique. An extended analysis of linguistic characteristics showed that a sentence's context is the main contributor to suicide risk classification accuracy, while grammatical tags and strongly conclusive terms contribute little.
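    A minimal sketch of the feature pipeline described above: TF-IDF vectors, PoS-tag counts, and VADER sentiment scores are concatenated and fed to a Random Forest. The toy posts, risk labels, and tag groupings below are illustrative placeholders, not the annotated 552-tweet dataset or the authors' exact configuration.

    import numpy as np
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer

    for pkg in ("vader_lexicon", "punkt", "averaged_perceptron_tagger"):
        nltk.download(pkg, quiet=True)

    # Toy stand-ins for the manually annotated tweets (labels: 0 = no risk, 1 = moderate, 2 = high).
    posts = [
        "Had a great day with friends, feeling hopeful.",
        "I am so tired of everything, nothing matters anymore.",
        "I keep thinking about ending it all tonight.",
        "Work was stressful but I managed.",
    ]
    risk = [0, 1, 2, 0]

    sia = SentimentIntensityAnalyzer()
    POS_PREFIXES = ["NN", "VB", "JJ", "RB", "PRP"]  # coarse noun/verb/adjective/adverb/pronoun families

    def handcrafted_features(text):
        """PoS-tag counts plus the four VADER polarity scores for one post."""
        tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
        pos_counts = [sum(t.startswith(p) for t in tags) for p in POS_PREFIXES]
        s = sia.polarity_scores(text)
        return pos_counts + [s["neg"], s["neu"], s["pos"], s["compound"]]

    # Concatenate the TF-IDF features with the handcrafted ones.
    tfidf = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
    X = np.hstack([tfidf.fit_transform(posts).toarray(),
                   np.array([handcrafted_features(p) for p in posts])])

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, risk)
    print(clf.predict(X))  # sanity check on the toy data; real evaluation needs a held-out split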

    Low-resource suicide ideation and depression detection with multitask learning and large language models

    In this work we explore natural language processing (NLP) methods for suicide ideation, depression, and anxiety detection in social media posts. Since annotated mental health data is scarce and difficult to come by, classical machine learning methods have traditionally been employed for this type of task due to the small size of the datasets. We evaluate the effect of multi-task learning on suicide ideation detection, using publicly available datasets for depression, anxiety, emotion, and stress classification as auxiliary tasks. We find that classification performance for suicide ideation, depression, and anxiety improves when the tasks are trained together, owing to the proximity between the mental disorders, and we observe that publicly available datasets for closely related tasks can benefit the detection of certain mental health conditions. We then perform classification experiments with ChatGPT and GPT-4 in zero-shot and few-shot settings, and find that GPT-4 obtains the best performance of all methods tested for suicide ideation detection. We further observe that ChatGPT benefits the most from few-shot learning, as it struggles to give conclusive answers when no examples are provided. Finally, an analysis of the false negatives output by GPT-4 for suicide ideation detection concludes that they are due to labeling errors rather than mistakes by the model.
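    The multi-task setup can be illustrated with a hard-parameter-sharing sketch (not the authors' code): a shared text encoder with one classification head per task, where batches from the auxiliary datasets update the shared encoder but only their own head. Task names, label counts, and dimensions below are assumptions for illustration.

    import torch
    import torch.nn as nn

    TASKS = {"suicide": 2, "depression": 2, "anxiety": 2, "emotion": 6, "stress": 2}  # assumed label counts

    class MultiTaskClassifier(nn.Module):
        def __init__(self, vocab_size=30_000, embed_dim=128, hidden=256):
            super().__init__()
            # Shared layers reused by every task.
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.encoder = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
            # One small classification head per task.
            self.heads = nn.ModuleDict({t: nn.Linear(2 * hidden, n) for t, n in TASKS.items()})

        def forward(self, token_ids, task):
            x = self.embed(token_ids)
            _, (h, _) = self.encoder(x)
            pooled = torch.cat([h[-2], h[-1]], dim=-1)  # final forward + backward states
            return self.heads[task](pooled)

    model = MultiTaskClassifier()
    loss_fn = nn.CrossEntropyLoss()
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One toy step: a batch from the suicide-ideation task updates the shared
    # encoder and its own head; batches from the auxiliary tasks would be interleaved.
    batch = torch.randint(1, 30_000, (8, 40))   # 8 fake posts, 40 token ids each
    labels = torch.randint(0, 2, (8,))
    loss = loss_fn(model(batch, task="suicide"), labels)
    loss.backward(); optim.step(); optim.zero_grad()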

    Depression and Self-Harm Risk Assessment in Online Forums

    Users suffering from mental health conditions often turn to online resources for support, including specialized online support communities or general communities such as Twitter and Reddit. In this work, we present a neural framework for supporting and studying users in both types of communities. We propose methods for identifying posts in support communities that may indicate a risk of self-harm, and demonstrate that our approach outperforms strong previously proposed methods for identifying such posts. Self-harm is closely related to depression, which makes identifying depressed users on general forums a crucial related task. We introduce a large-scale general forum dataset ("RSDD") consisting of users with self-reported depression diagnoses matched with control users. We show how our method can be applied to effectively identify depressed users from their use of language alone. We demonstrate that our method outperforms strong baselines on this general forum dataset. Comment: Expanded version of EMNLP17 paper. Added sections 6.1, 6.2, 6.4, FastText baseline, and CNN-
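    For illustration, here is a generic text-CNN post classifier of the kind used as a baseline in this line of work: a sketch under assumed dimensions, not the paper's model. It applies convolutions over word embeddings, max-pooling over time, and a binary at-risk/control output.

    import torch
    import torch.nn as nn

    class TextCNN(nn.Module):
        def __init__(self, vocab_size=50_000, embed_dim=100, n_filters=100, kernel_sizes=(3, 4, 5)):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.convs = nn.ModuleList([nn.Conv1d(embed_dim, n_filters, k) for k in kernel_sizes])
            self.out = nn.Linear(n_filters * len(kernel_sizes), 2)  # at-risk vs. control

        def forward(self, token_ids):
            x = self.embed(token_ids).transpose(1, 2)                          # (batch, embed, seq)
            pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]  # max over time
            return self.out(torch.cat(pooled, dim=1))

    logits = TextCNN()(torch.randint(1, 50_000, (4, 64)))  # 4 fake posts, 64 token ids each
    print(logits.shape)                                     # torch.Size([4, 2])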

    Toward Suicidal Ideation Detection with Lexical Network Features and Machine Learning

    In this study, we introduce a new network feature for detecting suicidal ideation from clinical texts and conduct various additional experiments to enrich the state of knowledge. We evaluate statistical features with and without stopwords, use lexical networks for feature extraction and classification, and compare the results with standard machine learning methods using a logistic classifier, a neural network, and a deep learning method. We utilize three text collections. The first two contain transcriptions of interviews conducted by experts with suicidal subjects (n=161 patients who experienced severe ideation) and control subjects (n=153). The third collection consists of interviews conducted by experts with epilepsy patients, a few of whom admitted to experiencing suicidal ideation in the past (32 suicidal and 77 control). The selected methods detect suicidal ideation with an average area under the curve (AUC) score of 95% on the merged collection with high suicidal ideation, and the trained models generalize to the third collection with an average AUC score of 69%. The results reveal that lexical networks are promising for classification and feature extraction, performing as well as the deep learning model. We also observe that the logistic classifier's performance was comparable with the deep learning method while offering explainability.
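    A minimal sketch of the lexical-network idea (an assumption-laden illustration, not the study's implementation): build a word co-occurrence graph per transcript, extract a few global graph statistics, and feed them to a logistic classifier. The toy texts and labels stand in for the clinical interview transcriptions.

    import networkx as nx
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def lexical_network_features(text, window=2):
        """Co-occurrence network over word pairs within a sliding window,
        summarised by a handful of global graph statistics."""
        words = text.lower().split()
        g = nx.Graph()
        g.add_nodes_from(words)
        for i, w in enumerate(words):
            for v in words[i + 1 : i + 1 + window]:
                if w != v:
                    g.add_edge(w, v)
        degrees = [d for _, d in g.degree()]
        return [
            g.number_of_nodes(),
            g.number_of_edges(),
            nx.density(g),
            float(np.mean(degrees)) if degrees else 0.0,
            nx.average_clustering(g) if g.number_of_nodes() > 0 else 0.0,
        ]

    # Hypothetical mini-corpus standing in for the interview transcriptions.
    texts = [
        "i feel trapped and i cannot see a way out of this pain",
        "we talked about the garden and the weather and my grandchildren",
        "everything feels pointless and i think about dying every night",
        "work keeps me busy but i enjoy weekends with my family",
    ]
    y = [1, 0, 1, 0]  # 1 = suicidal ideation reported, 0 = control (toy labels)

    X = np.array([lexical_network_features(t) for t in texts])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.predict(X))  # sanity check only; the study reports AUC on held-out data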

    Regulating Mobile Mental Health Apps

    Mobile medical apps (MMAs) are a fast-growing category of software typically installed on personal smartphones and wearable devices. A subset of MMAs are aimed at helping consumers identify mental states and/or mental illnesses. Although this is a fledgling domain, there are already enough extant mental health MMAs both to suggest a typology and to detail some of the regulatory issues they pose. As to the former, the current generation of apps includes those that facilitate self-assessment or self-help, connect patients with online support groups, connect patients with therapists, or predict mental health issues. Regulatory concerns with these apps include their quality, safety, and data protection. Unfortunately, the regulatory frameworks that apply have failed to provide coherent risk-assessment models. As a result, prudent providers will need to proceed with caution when recommending apps to patients or relying on app-generated data to guide treatment.

    Detecting Mental Distresses Using Social Behavior Analysis in the Context of COVID-19: A Survey

    Online social media provides a channel for monitoring people's social behaviors from which to infer and detect their mental distresses. During the COVID-19 pandemic, online social networks were increasingly used to express opinions, views, and moods due to the restrictions on physical activities and in-person meetings, leading to a significant amount of diverse user-generated social media content. This offers a unique opportunity to examine how COVID-19 changed global behaviors and its ramifications for mental well-being. In this article, we survey the literature on social media analysis for the detection of mental distress, with a special emphasis on studies published since the COVID-19 outbreak. We analyze relevant research and its characteristics and propose new approaches to organizing the large number of studies arising from this emerging research area, drawing new views, insights, and knowledge for interested communities. Specifically, we first classify the studies in terms of feature extraction types, language usage patterns, aesthetic preferences, and online behaviors. We then explore various methods (including machine learning and deep learning techniques) for detecting mental health problems. Building on this in-depth review, we present our findings and discuss future research directions and niche areas in detecting mental health problems using social media data. We also elaborate on the challenges of this fast-growing research area, such as the technical issues in deploying such systems at scale, as well as privacy and ethical concerns.