
    Analysing the connectivity and communication of suicidal users on Twitter

    In this paper we aim to understand the connectivity and communication characteristics of Twitter users who post content subsequently classified by human annotators as containing possible suicidal intent or thinking, commonly referred to as suicidal ideation. We achieve this understanding by analysing the characteristics of their social networks. Starting from a set of human-annotated Tweets, we retrieved the authors' followers and friends lists and identified users who retweeted the suicidal content. We subsequently built the social network graphs. Our results show a high degree of reciprocal connectivity between the authors of suicidal content when compared to other studies of Twitter users, suggesting a tightly coupled virtual community. In addition, an analysis of the retweet graph identified bridge nodes and hub nodes connecting users posting suicidal ideation with users who were not, suggesting a potential for information cascade and a risk of a possible contagion effect. This is particularly emphasised when considering the combined graph merging friendship and retweeting links.
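    The core measurements described above, reciprocal connectivity in a directed follower graph and a combined friendship/retweet graph, can be sketched with networkx. The toy edges below are invented for illustration; the study worked with followers and friends lists retrieved from Twitter.

    ```python
    # Sketch of the paper's network construction and reciprocity measurement
    # using networkx, on a hypothetical toy graph.
    import networkx as nx

    # Directed friendship graph: an edge u -> v means u follows v.
    followers = nx.DiGraph()
    followers.add_edges_from([
        ("a", "b"), ("b", "a"),   # reciprocal pair
        ("a", "c"), ("c", "a"),   # reciprocal pair
        ("d", "a"),               # one-way follow
    ])

    # Overall reciprocity: fraction of directed edges that are mutual.
    reciprocity = nx.overall_reciprocity(followers)  # 4 of 5 edges -> 0.8

    # Combined graph merging friendship and retweeting links, as the paper
    # does to emphasise potential information cascades.
    retweets = nx.DiGraph([("e", "a"), ("d", "e")])
    combined = nx.compose(followers.to_undirected(), retweets.to_undirected())
    ```

    Bridge and hub nodes could then be located on `combined` with standard centrality measures such as betweenness and degree.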

    Towards the design of a platform for abuse detection in OSNs using multimedial data analysis

    Online social networks (OSNs) are becoming increasingly popular every day. The vast amount of data created by users and their actions yields interesting opportunities, both socially and economically. Unfortunately, these online communities are prone to abuse and inappropriate behaviour such as cyberbullying. For victims, this kind of behaviour can lead to depression and other severe problems. However, due to the huge number of users and volume of data, it is impossible to manually check all content posted on the social network. We propose a pluggable architecture with reusable components, able to quickly detect harmful content. The platform uses text-, image-, audio- and video-based analysis modules to detect inappropriate content or high-risk behaviour. Domain services aggregate this data and flag user profiles if necessary. Social network moderators then need only check the validity of the flagged profiles. This paper reports on the key requirements of the platform, the architectural components and important challenges.
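    The pluggable-module design can be illustrated with a minimal sketch: analysis modules share one scoring interface, and a domain service aggregates their scores to flag profiles. The class names, module logic, and flagging threshold below are all illustrative assumptions, not the paper's implementation.

    ```python
    # Minimal sketch of a pluggable abuse-detection platform (hypothetical).
    from typing import Callable

    class AbusePlatform:
        """Aggregates scores from pluggable analysis modules and flags posts."""
        def __init__(self, threshold: float = 0.5):
            self.modules: list[Callable[[dict], float]] = []
            self.threshold = threshold

        def register(self, module: Callable[[dict], float]) -> None:
            self.modules.append(module)

        def flag(self, post: dict) -> bool:
            # Domain service: average module scores, flag if above threshold.
            if not self.modules:
                return False
            score = sum(m(post) for m in self.modules) / len(self.modules)
            return score >= self.threshold

    # Example text-analysis module: a trivial keyword check standing in for a
    # real classifier; image/audio/video modules would share this signature.
    def text_module(post: dict) -> float:
        return 1.0 if "insult" in post.get("text", "") else 0.0

    platform = AbusePlatform()
    platform.register(text_module)
    flagged = platform.flag({"text": "an insult here"})
    ```

    New modality-specific detectors can then be added by registering another callable, without touching the aggregation logic.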

    Multi-class machine classification of suicide-related communication on Twitter

    The World Wide Web, and online social networks in particular, have increased connectivity between people such that information can spread to millions of people in a matter of minutes. This form of online collective contagion has provided many benefits to society, such as providing reassurance and emergency management in the immediate aftermath of natural disasters. However, it also poses a potential risk to vulnerable Web users who receive this information and could subsequently come to harm. One example would be the spread of suicidal ideation in online social networks, about which concerns have been raised. In this paper we report the results of a number of machine classifiers built with the aim of classifying text relating to suicide on Twitter. The classifier distinguishes between the more worrying content, such as suicidal ideation, and other suicide-related topics such as reporting of a suicide, memorial, campaigning and support. It also aims to identify flippant references to suicide. We built a set of baseline classifiers using lexical, structural, emotive and psychological features extracted from Twitter posts. We then improved on the baseline classifiers by building an ensemble classifier using the Rotation Forest algorithm and a maximum-probability voting decision method based on the outcomes of the base classifiers. This achieved an overall F-measure of 0.728 (across 7 classes, including suicidal ideation) and 0.69 for the suicidal ideation class. We summarise the results by reflecting on the most significant predictive principal components of the suicidal ideation class to provide insight into the language used on Twitter to express suicidal ideation. Finally, we perform a 12-month case study of suicide-related posts in which we further evaluate the classification approach, showing sustained classification performance and providing anonymised insights into the trends and demographic profile of Twitter users posting content of this type.
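    Rotation Forest has no scikit-learn implementation, so the sketch below approximates the ensemble idea with soft voting, which picks the class with the highest averaged predicted probability across base classifiers, matching the maximum-probability decision rule. The base estimators and synthetic data are stand-ins, not the paper's setup.

    ```python
    # Approximation of a maximum-probability voting ensemble with scikit-learn.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic multi-class data standing in for tweet feature vectors.
    X, y = make_classification(n_samples=400, n_classes=3, n_informative=6,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Soft voting averages predict_proba over the base classifiers and takes
    # the class with maximum probability.
    ensemble = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("dt", DecisionTreeClassifier(random_state=0)),
            ("rf", RandomForestClassifier(random_state=0)),
        ],
        voting="soft",
    )
    ensemble.fit(X_tr, y_tr)
    macro_f1 = f1_score(y_te, ensemble.predict(X_te), average="macro")
    ```

    The paper's reported F-measures come from its own feature set and Rotation Forest base learners, so numbers from this sketch are not comparable.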

    Detection of Suicidal Ideation on Twitter using Machine Learning & Ensemble Approaches

    Suicidal ideation is one of the most severe mental health issues faced by people all over the world. There are various risk factors that can lead to suicide; the most common and critical among them are depression, anxiety, social isolation and hopelessness. Early detection of these risk factors can help in preventing or reducing the number of suicides. Online social networking platforms like Twitter, Reddit and Facebook are becoming a new way for people to express themselves freely without worrying about social stigma.
    This paper presents a methodology and experimentation using social media as a tool to analyse suicidal ideation, thus helping to reduce the chances of falling victim to this unfortunate mental disorder. The data is collected from Twitter, one of the popular social networking sites (SNS). The Tweets are then pre-processed and manually annotated. Finally, various machine learning and ensemble methods are used to automatically distinguish suicidal and non-suicidal tweets. This experimental study will help researchers to know and understand how SNS are used by people to express their distress-related feelings and emotions. The study further confirmed that it is possible to analyse and differentiate these tweets using human coding and then replicate the accuracy by machine classification. However, the power of prediction for detecting genuine suicidality is not yet confirmed, and this study does not directly communicate or intervene with people exhibiting suicidal behaviour.
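    A pre-processing and classification pipeline of the kind described above can be sketched as follows. The cleaning rules, the Naive Bayes choice, and the tiny labelled set are all illustrative assumptions; the paper compares several machine learning and ensemble methods on a manually annotated corpus.

    ```python
    # Sketch: tweet cleaning + bag-of-words Naive Bayes classification.
    import re

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def preprocess(tweet: str) -> str:
        tweet = re.sub(r"https?://\S+", "", tweet)  # strip URLs
        tweet = re.sub(r"[@#]\w+", "", tweet)       # strip mentions/hashtags
        return tweet.lower().strip()

    # Invented examples standing in for the manually annotated Tweets.
    tweets = [
        "I cannot take this pain anymore @friend",
        "I want to disappear forever http://t.co/x",
        "lovely walk in the park today #sunny",
        "excited for the concert tonight!",
    ]
    labels = ["suicidal", "suicidal", "non-suicidal", "non-suicidal"]

    clf = make_pipeline(CountVectorizer(preprocessor=preprocess), MultinomialNB())
    clf.fit(tweets, labels)
    pred = clf.predict(["this pain is too much"])[0]
    ```

    A real system would train on thousands of annotated tweets and validate against held-out human coding, as the study does.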

    An Automated Tool to Detect Suicidal Susceptibility from Social Media Posts

    According to the World Health Organization (WHO), approximately 1.4 million individuals died by suicide in 2022. This means that one person dies by suicide every 20 seconds. Globally, suicide ranks as the 10th leading cause of death, while it ranks second for young people aged 15-29. In 2022, it was estimated that about 10.5 million suicide attempts occurred. The WHO suggests that alongside each completed suicide, there are many individuals who make attempts. Today, social media is a place where people share their feelings, such as happiness, sadness, anger, and love. This helps us understand how they are thinking or what they might do. This study takes advantage of this opportunity and focuses on developing an automated tool to detect whether someone may be thinking about harming themselves. It is built on the Suicidal-Electra model. We collected datasets of social media posts, processed them, and used them to train and fine-tune the model. Evaluating the refined model on a testing dataset, we consistently observed strong results: the model demonstrated an accuracy of 93% and an F1 score of 0.93. Additionally, we developed an API enabling seamless integration with third-party platforms, enhancing its potential for deployment to address the growing concern of rising suicide rates.
    Comment: 8 pages, 10 figures, 1 table. Submitted to Peer
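    The reported metrics follow directly from a confusion matrix; the sketch below shows the arithmetic with invented counts chosen so that both accuracy and F1 come out at 0.93, matching the figures the abstract reports (the paper's actual counts are not given here).

    ```python
    # Accuracy and F1 from a binary confusion matrix (hypothetical counts).
    tp, fp, fn, tn = 93, 7, 7, 93

    accuracy = (tp + tn) / (tp + fp + fn + tn)          # 186 / 200 = 0.93
    precision = tp / (tp + fp)                          # 93 / 100  = 0.93
    recall = tp / (tp + fn)                             # 93 / 100  = 0.93
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean = 0.93
    ```

    Because precision and recall are equal in this illustration, F1 equals both; in general F1 penalises an imbalance between them.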

    Detecting Suicidality in Arabic Tweets Using Machine Learning and Deep Learning Techniques

    Social media platforms have revolutionized traditional communication by enabling people globally to connect instantaneously, openly, and frequently. People use social media to share personal stories and express their opinions. Negative emotions such as thoughts of death, self-harm, and hardship are commonly expressed on social media, particularly among younger generations. As a result, using social media to detect suicidal thoughts can help provide proper intervention that will ultimately deter others from self-harm and suicide and stop the spread of suicidal ideation on social media. To investigate the ability to automatically detect suicidal thoughts in Arabic tweets, we developed a novel Arabic suicidal-tweets dataset; examined several machine learning models, including Naïve Bayes, Support Vector Machine, K-Nearest Neighbor, Random Forest, and XGBoost, trained on word frequency and word embedding features; and investigated the ability of the pre-trained deep learning models AraBert, AraELECTRA, and AraGPT2 to identify suicidal thoughts in Arabic tweets. The results indicate that the SVM and RF models trained on character n-gram features provided the best performance among the machine learning models, with 86% accuracy and an F1 score of 79%. The results of the deep learning models show that AraBert outperforms the other machine and deep learning models, achieving an accuracy of 91% and an F1 score of 88%, which significantly improves the detection of suicidal ideation in the Arabic tweets dataset. To the best of our knowledge, this is the first study to develop an Arabic suicidality detection dataset from Twitter and to use deep learning approaches to detect suicidality in Arabic posts.
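    The best-performing machine-learning configuration, an SVM over character n-gram features, can be sketched with scikit-learn. The vectorizer settings, the tiny Arabic examples, and their labels are assumptions for illustration only.

    ```python
    # Sketch: character n-gram TF-IDF features + linear SVM.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Invented Arabic examples; 1 = suicidal, 0 = not suicidal.
    texts = ["أريد أن أموت", "لا أستطيع الاستمرار",
             "يوم جميل مع العائلة", "أحب عملي الجديد"]
    labels = [1, 1, 0, 0]

    model = make_pipeline(
        # Character n-grams (within word boundaries) cope well with Arabic's
        # rich morphology compared with whole-word features.
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LinearSVC(),
    )
    model.fit(texts, labels)
    pred = int(model.predict(["لا أريد الاستمرار"])[0])
    ```

    The transformer models the paper favours (e.g. AraBert) would replace this feature pipeline with fine-tuned contextual embeddings.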

    Crisis

    Background: The dissemination of positive messages about mental health is a key goal of organizations and individuals. Aims: Our aim was to examine factors that predict increased dissemination of such messages. Method: We analyzed 10,998 positive messages authored on Twitter and studied factors associated with messages that are shared (re-tweeted) using logistic regression. Characteristics of the account, message, linguistic style, sentiment, and topic were examined. Results: Less than one third of positive messages (31.7%) were shared at least once. In adjusted models, accounts that posted a greater number of messages were less likely to have any single message shared. Messages about military-related topics were 60% more likely to be shared (adjusted odds ratio [AOR] = 1.6, 95% CI [1.1, 2.1]), as were messages containing achievement-related keywords (AOR = 1.6, 95% CI [1.3, 1.9]). Conversely, positive messages explicitly addressing eating/food, appearance, and sad affective states were less likely to be shared. Multiple other message characteristics influenced sharing. Limitations: Only messages on a single platform and over a focused period of time were analyzed. Conclusion: A knowledge of factors affecting dissemination of positive mental health messages may aid organizations and individuals seeking to promote such messages online.
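    An adjusted odds ratio of the kind reported above falls out of a logistic regression by exponentiating a fitted coefficient, which gives the odds ratio for that predictor while adjusting for the other covariates in the model. The simulated data below (a binary "military topic" indicator plus one covariate, with a true AOR near 1.6) is an invented illustration, not the study's data.

    ```python
    # Sketch: recovering an adjusted odds ratio from logistic regression.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    military = rng.integers(0, 2, n)      # binary topic indicator
    covariate = rng.normal(0, 1, n)       # another predictor to adjust for

    # Simulate sharing with log-odds 0.47 for military topics (AOR ~= 1.6).
    logits = -1.0 + 0.47 * military + 0.3 * covariate
    shared = rng.random(n) < 1 / (1 + np.exp(-logits))

    X = np.column_stack([military, covariate])
    model = LogisticRegression(C=1e6).fit(X, shared)  # near-unregularised fit
    aor_military = float(np.exp(model.coef_[0][0]))   # exp(coef) = AOR
    ```

    With enough data the estimate converges on the simulated effect; the study additionally reports 95% confidence intervals, which this sketch omits.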

    Disrupting networks of hate: Characterising hateful networks and removing critical nodes

    Hateful individuals and groups have increasingly been using the Internet to express their ideas, spread their beliefs, and recruit new members. Understanding the network characteristics of these hateful groups could help understand individuals' exposure to hate and derive intervention strategies to mitigate the dangers of such networks by disrupting communications. This article analyses two hateful followers networks and three hateful retweet networks of Twitter users who post content subsequently classified by human annotators as containing hateful content. Our analysis shows similar connectivity characteristics between the hateful followers networks and likewise between the hateful retweet networks. The study shows that the hateful networks exhibit higher connectivity characteristics when compared to other "risky" networks, which can be seen as a risk in terms of the likelihood of exposure to, and propagation of, online hate. Three network performance metrics are used to quantify hateful content exposure and contagion: giant component (GC) size, density and average shortest path. In order to efficiently identify nodes whose removal reduces the flow of hate in a network, we propose a range of structured node-removal strategies and test their effectiveness. Results show that removing users with a high degree is most effective in reducing the hateful followers network connectivity (GC size and density), and therefore reducing the risk of exposure to cyberhate and stemming its propagation.
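    The three metrics and the degree-based removal strategy can be sketched with networkx. A scale-free synthetic graph stands in for the real follower networks, so the specific numbers are illustrative only.

    ```python
    # Sketch: GC size, density, average shortest path, and hub removal.
    import networkx as nx

    G = nx.barabasi_albert_graph(100, 2, seed=1)  # hub-heavy toy network

    def metrics(g):
        # Giant component = largest connected component.
        gc = g.subgraph(max(nx.connected_components(g), key=len))
        return len(gc), nx.density(g), nx.average_shortest_path_length(gc)

    gc_before, density_before, asp_before = metrics(G)

    # Degree-based strategy: remove the 10 highest-degree nodes, reported
    # above as the most effective disruption of follower-network connectivity.
    hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10]
    G.remove_nodes_from(node for node, _ in hubs)

    gc_after, density_after, asp_after = metrics(G)
    ```

    Comparing the before/after triples quantifies how much the removal shrinks the giant component and lengthens paths, i.e. how much it reduces potential exposure and contagion.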