
    Cyberbullying: a resource for parents of elementary school-aged children

    In recent years, the digital revolution, and in particular the widespread use of devices with internet access, has enabled a new form of bullying to emerge. The typical age at which youth begin using internet-enabled technology and social media continues to trend downward; consequently, elementary school-aged children are increasingly subjected to cyberbullying. This clinical resource is therefore a text-based pamphlet for parents of elementary-aged children that serves as an educational tool on cyberbullying prevention strategies. Development of the pamphlet began with a literature review gathered through searches of online academic databases, including PsycINFO, PsycARTICLES, and Google Scholar. Additionally, several websites and an assortment of books were reviewed to identify a representative list of parent resources related to cyberbullying. The pamphlet provides empirically supported information about cyberbullying as well as prevention strategies parents can use with their children. The target audience for this pamphlet is parents and guardians of elementary school-aged children between the ages of 5 and 10. The following content areas were included in the pamphlet: (a) What is cyberbullying? (e.g., definition, types of cyberbullying, cyberbully roles, adverse impacts for victims); (b) Prevention strategies (some illustrated through vignettes); (c) Recognizing/detecting cyberbullying (e.g., warning signs, how to talk to your child about cyberbullying and encourage disclosure); (d) What to do next? (e.g., how to intervene at school and/or with the parents of the cyberbully); (e) Resources (e.g., websites, apps, books, and community programs focused on cyberbullying). A formal evaluation of the completed text-based pamphlet is beyond the scope of this project.

    Media use during adolescence: the recommendations of the Italian Pediatric Society.

    BACKGROUND: The use of media devices, such as smartphones and tablets, is currently increasing, especially among the youngest. Adolescents spend more and more time on their smartphones consulting social media, mainly Facebook, Instagram, and Twitter. Adolescents often feel the need to use a media device as a means of constructing a social identity and expressing themselves. For some children, smartphone ownership starts even sooner, as young as 7 years of age, according to internet safety experts. MATERIAL AND METHODS: We analyzed the evidence on media use and its consequences in adolescence. RESULTS: According to the literature, smartphone and tablet use may negatively influence the psychophysical development of the adolescent, including learning, sleep, and sight. Moreover, obesity, distraction, addiction, cyberbullying, and the Hikikomori phenomenon are described in adolescents who use media devices too frequently. The Italian Pediatric Society provides action-oriented recommendations for families and clinicians to avoid negative outcomes. CONCLUSIONS: Both parents and clinicians should be aware of the widespread phenomenon of media device use among adolescents and try to prevent psychophysical consequences in the youngest.

    Detecting Aggressiveness in Tweets: A Hybrid Model for Detecting Cyberbullying in the Spanish Language

    In recent years, the use of social networks has increased exponentially, which has led to a significant increase in cyberbullying. Currently, in the field of Computer Science, research has been conducted on how to detect aggressiveness in texts, which is a prelude to detecting cyberbullying. In this field, most work has been done for English-language texts, mainly using Machine Learning (ML) approaches, Lexicon approaches to a lesser extent, and very few works using hybrid approaches. In hybrid approaches, Lexicons and Machine Learning algorithms are used together; for example, the number of bad words in a sentence is counted using a Lexicon of bad words and serves as an input feature for classification algorithms. This research aims to contribute towards detecting aggressiveness in Spanish-language texts by creating different models that combine the Lexicon and ML approaches. Twenty-two models that combine techniques and algorithms from both approaches are proposed, and for their application, certain hyperparameters are adjusted on the training datasets of the corpora to obtain the best results on the test datasets. Three Spanish-language corpora are used in the evaluation: Chilean, Mexican, and Chilean-Mexican corpora. The results indicate that hybrid models obtain the best results in all 3 corpora, outperforming the implemented models that do not use Lexicons. This shows that mixing approaches improves aggressiveness detection. Finally, a web application is developed that gives applicability to each model by classifying tweets, allowing the performance of the models to be evaluated on external corpora and feedback to be collected on each model's predictions for future research. In addition, an API is available that can be integrated into technological tools for parental control, online plugins for writing analysis in social networks, and educational tools, among others.
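The hybrid idea described above, in which a lexicon-based count is fed to a classifier as one feature among others, can be sketched as follows. The lexicon entries, feature names, and example tweet are illustrative placeholders, not the resources or corpora used in the paper.

```python
# Sketch of the hybrid Lexicon + ML feature extraction: a count of lexicon
# hits is combined with simple surface features before classification.
# ABUSIVE_LEXICON and the sample tweet are hypothetical placeholders.

ABUSIVE_LEXICON = {"idiota", "estupido", "tonto"}  # illustrative Spanish terms

def hybrid_features(tweet: str) -> dict:
    """Combine a lexicon-based count with surface features for one tweet."""
    tokens = tweet.lower().split()
    return {
        # Lexicon component: how many tokens match the bad-word lexicon.
        "lexicon_hits": sum(t.strip("!?.,") in ABUSIVE_LEXICON for t in tokens),
        # Surface components fed to the ML classifier alongside it.
        "n_tokens": len(tokens),
        "n_exclamations": tweet.count("!"),
        "all_caps_ratio": sum(t.isupper() for t in tweet.split()) / max(len(tokens), 1),
    }

feats = hybrid_features("Eres un IDIOTA total!!!")
```

A feature vector built this way would then be passed to whichever classification algorithm is being evaluated.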

    A Systematic Literature Review on Cyberbullying in Social Media: Taxonomy, Detection Approaches, Datasets, And Future Research Directions

    In the area of Natural Language Processing, sentiment analysis, also called opinion mining, aims to extract human thoughts, beliefs, and perceptions from unstructured texts. In the light of social media's rapid growth and the influx of individual comments, reviews, and feedback, it has evolved into an attractive, challenging research area. Toxic textual content is one of the most common problems on social media. Anonymity and concealment of identity are common on the Internet for people from a wide range of cultures and beliefs. Freedom of speech, anonymity, and inadequate social media regulation make toxic online environments and cyberbullying significant issues, which require a system of automatic detection and prevention. Diverse research is taking place based on different approaches and languages, but a comprehensive analysis examining them from all angles is lacking. This systematic literature review is therefore conducted with the aim of surveying the research and studies done to date by the research community on the classification of cyberbullying based on the textual modality. It states the definition, taxonomy, properties, outcomes of cyberbullying, and roles in cyberbullying, along with other forms of bullying and different offensive behavior in social media. This article also presents the latest popular benchmark datasets on cyberbullying, along with their number of classes (binary/multiple), reviews the state-of-the-art methods to detect cyberbullying and abusive content on social media, and discusses the factors that drive offenders to indulge in offensive activity, preventive actions to avoid online toxicity, and various cyber laws in different countries. Finally, we identify and discuss the challenges, solutions, and future research directions that serve as a reference to overcome cyberbullying in social media.

    Toxicité et sentiment : comment l'étude des sentiments peut aider la détection de toxicité

    Automatic toxicity detection of online content is a major research field nowadays. Moderators cannot manually filter all the messages that are posted every day, and users constantly find new ways to circumvent classic filters. In this master's thesis, I explore the benefits of sentiment detection for three major challenges of automatic toxicity detection: standard toxicity detection, making filters harder to circumvent, and predicting conversations at high risk of becoming toxic. The first two challenges are studied in the first article. Our main intuition is that it is harder for a malicious user to hide the toxic sentiment of their message than to change a few toxic keywords. To test this hypothesis, a sentiment detection tool is built and used to measure the correlation between sentiment and toxicity. Next, the sentiment scores are used as features to train a toxicity detection model, and the model is tested in both a classic and a subversive context, the latter simulating alterations made by a user trying to fool a toxicity filter. The conclusion of those tests is that sentiment information helps toxicity detection, especially in the presence of subversion. The third challenge is the subject of our second paper, whose objective is to validate whether the sentiments of the first messages of a conversation can help predict if it will derail into toxicity. The same sentiment detection tool is used, in addition to other features developed in previous related work. Our results show that sentiment helps improve that task as well.
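The core intuition of the thesis, that obfuscating a toxic keyword does not hide the sentiment carried by the rest of the message, can be illustrated with a minimal sketch. The word lists and messages below are invented for illustration and are not the thesis's actual sentiment tool or data.

```python
# A keyword filter breaks when a user obfuscates a toxic word, while a
# sentiment cue derived from the rest of the message survives. Both word
# lists are tiny illustrative assumptions, not real resources.

TOXIC_KEYWORDS = {"stupid", "hate"}
NEGATIVE_SENTIMENT = {"awful", "terrible", "worst", "disgusting"}

def keyword_flag(message: str) -> bool:
    """Classic filter: flag a message containing a known toxic keyword."""
    return any(w in TOXIC_KEYWORDS for w in message.lower().split())

def sentiment_score(message: str) -> int:
    """Count negative-sentiment cues; higher means more negative."""
    return sum(w in NEGATIVE_SENTIMENT for w in message.lower().split())

original = "you are stupid and your work is terrible"
subverted = "you are st*pid and your work is terrible"  # keyword obfuscated

keyword_catches = (keyword_flag(original), keyword_flag(subverted))
sentiment_scores = (sentiment_score(original), sentiment_score(subverted))
```

The keyword filter misses the subverted message, while the sentiment score is identical for both, which is why sentiment features make a filter harder to circumvent.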

    Automatic Detection of Cyberbullying in Social Media Text

    While social media offer great communication opportunities, they also increase the vulnerability of young people to threatening situations online. Recent studies report that cyberbullying constitutes a growing problem among youngsters. Successful prevention depends on the adequate detection of potentially harmful messages, and the information overload on the Web requires intelligent systems to identify potential risks automatically. The focus of this paper is on automatic cyberbullying detection in social media text by modelling posts written by bullies, victims, and bystanders of online bullying. We describe the collection and fine-grained annotation of a training corpus for English and Dutch and perform a series of binary classification experiments to determine the feasibility of automatic cyberbullying detection. We make use of linear support vector machines exploiting a rich feature set and investigate which information sources contribute the most to this particular task. Experiments on a holdout test set reveal promising results for the detection of cyberbullying-related posts. After optimisation of the hyperparameters, the classifier yields an F1-score of 64% for English and 61% for Dutch, and considerably outperforms baseline systems based on keywords and word unigrams.
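As a rough sketch of the binary-classification setup described above, the following toy example trains a linear classifier on word-unigram features. A perceptron stands in here for the paper's linear SVM, and the four training posts are invented, not drawn from the annotated corpus.

```python
# Toy binary classifier over word-unigram features. The training data,
# epochs, and classifier choice are illustrative assumptions only.

def unigrams(text: str) -> set:
    return set(text.lower().split())

train = [
    ("nobody likes you just leave", 1),   # bullying-related
    ("you are worthless and ugly", 1),
    ("see you at practice tomorrow", 0),  # harmless
    ("great game last night", 0),
]

# Perceptron training: weights over the unigram vocabulary.
weights, bias = {}, 0.0
for _ in range(10):                       # a few epochs over the tiny set
    for text, label in train:
        feats = unigrams(text)
        score = bias + sum(weights.get(w, 0.0) for w in feats)
        pred = 1 if score > 0 else 0
        if pred != label:                 # update only on mistakes
            delta = label - pred          # +1 or -1
            bias += delta
            for w in feats:
                weights[w] = weights.get(w, 0.0) + delta

def predict(text: str) -> int:
    """1 = cyberbullying-related, 0 = harmless."""
    score = bias + sum(weights.get(w, 0.0) for w in unigrams(text))
    return 1 if score > 0 else 0
```

The paper's richer feature set (beyond unigrams) and a proper SVM with tuned hyperparameters are what lift performance over this kind of unigram baseline.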

    Artificial Intelligence and Machine Learning in Cybersecurity: Applications, Challenges, and Opportunities for MIS Academics

    The availability of massive amounts of data, fast computers, and superior machine learning (ML) algorithms has spurred interest in artificial intelligence (AI). It is no surprise, then, that we observe an increase in the application of AI in cybersecurity. Our survey of AI applications in cybersecurity shows that most present applications are in the areas of malware identification and classification, intrusion detection, and cybercrime prevention. We should, however, be aware that AI-enabled cybersecurity is not without its drawbacks. Challenges to AI solutions include a shortage of good-quality data to train machine learning models, the potential for exploits via adversarial AI/ML, and limited human expertise in AI. However, the rewards in terms of increased accuracy of cyberattack predictions, faster response to cyberattacks, and improved cybersecurity make it worthwhile to overcome these challenges. We present a summary of the current research on the application of AI and ML to improve cybersecurity, challenges that need to be overcome, and research opportunities for academics in management information systems.

    Study of aggressive behavior on social media

    Recently, the expression of aggression on social networks has increased considerably, causing many adverse effects such as mental health problems and other controversies. We therefore perform the first user-level analysis of aggressive behavior on Twitter, a microblogging social media site that places no restriction on aggressive behavior. Using the proposed pipeline, we study users' aggressive behavior. The pipeline consists of three stages: data collection, aggression detection, and user profiling. In this study, we analyzed in detail how users' aggressive behavior depends on their aggressive feeds and events. Further, our analysis revealed that user engagement is higher on aggressive posts.
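The three-stage pipeline described above can be sketched as follows. The tweets, the keyword-based detector, and the profile fields are all invented for illustration and do not reflect the study's actual data or models.

```python
# Illustrative three-stage pipeline: (1) data collection, (2) aggression
# detection, (3) user profiling. The cue list and mock tweets are assumptions.

AGGRESSIVE_CUES = {"hate", "destroy", "shut up"}

def detect_aggression(text: str) -> bool:
    """Stage 2: flag a post containing an aggressive cue (toy detector)."""
    t = text.lower()
    return any(cue in t for cue in AGGRESSIVE_CUES)

def profile_users(tweets: list) -> dict:
    """Stage 3: aggregate per-user aggression rate and engagement."""
    profiles = {}
    for tw in tweets:
        p = profiles.setdefault(tw["user"], {"total": 0, "aggressive": 0, "likes": 0})
        p["total"] += 1
        p["aggressive"] += detect_aggression(tw["text"])
        p["likes"] += tw["likes"]
    for p in profiles.values():
        p["aggression_rate"] = p["aggressive"] / p["total"]
    return profiles

collected = [  # Stage 1: mock collected data with engagement counts
    {"user": "a", "text": "I hate this, shut up", "likes": 12},
    {"user": "a", "text": "nice weather today", "likes": 1},
    {"user": "b", "text": "congrats on the launch", "likes": 3},
]
profiles = profile_users(collected)
```

Comparing engagement on aggressive versus non-aggressive posts in such profiles is the kind of aggregation behind the study's finding that aggressive posts attract higher engagement.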