5 research outputs found

    Impact Of Content Features For Automatic Online Abuse Detection

    Online communities have gained considerable importance in recent years due to the increasing number of people connected to the Internet. Moderating user content in online communities is mainly performed manually, and reducing the workload through automatic methods is of great financial interest for community maintainers. The industry often assists moderators with basic approaches such as bad-word filtering and regular-expression matching. In this article, we consider the task of automatically determining whether a message is abusive. This task is complex, since messages are written in a non-standardized way, with spelling errors, abbreviations, and community-specific codes. First, we evaluate our proposed system using standard features of online messages. Then, we evaluate the impact of adding pre-processing strategies, as well as original features developed specifically for the community of an online in-browser strategy game. Finally, we analyze the usefulness of this wide range of features using feature selection. This work can lead to two possible applications: 1) automatically flagging potentially abusive messages to draw the moderator's attention to a narrow subset of messages; and 2) fully automating the moderation process by deciding whether a message is abusive without any human intervention.
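    The pipeline this abstract describes (message features, pre-processing, feature selection, binary classification) can be illustrated with a minimal Python sketch. This is not the authors' system: the bad-word list, the feature set (TF-IDF n-grams), the classifier, and the value of k are placeholder assumptions chosen only to show the general shape of a bad-word baseline versus a feature-selected classifier.

        # Illustrative sketch only -- not the paper's actual system.
        import re
        from sklearn.pipeline import Pipeline
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.linear_model import LogisticRegression

        # Industry-style baseline: flag messages matching a fixed
        # bad-word pattern (placeholder words).
        BAD_WORDS = re.compile(r"\b(badword1|badword2)\b", re.IGNORECASE)

        def baseline_flag(message: str) -> bool:
            """Return True if the message matches the bad-word pattern."""
            return bool(BAD_WORDS.search(message))

        # Feature-based alternative: TF-IDF n-gram features, chi-squared
        # feature selection to keep the most discriminative ones, then a
        # linear classifier. In practice k is tuned on held-out data.
        pipeline = Pipeline([
            ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
            ("select", SelectKBest(chi2, k=10)),
            ("clf", LogisticRegression(max_iter=1000)),
        ])

        # Toy data; a real system would train on moderated community messages.
        messages = ["you are an idiot", "nice move, well played",
                    "shut up loser", "good game everyone"]
        labels = [1, 0, 1, 0]  # 1 = abusive, 0 = acceptable

        pipeline.fit(messages, labels)
        print(pipeline.predict(["what a stupid play"]))

    Thresholding the classifier's predicted probability rather than taking its hard label is what separates the two applications above: a low threshold flags candidates for a moderator's review, while a high threshold supports fully automatic decisions.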

    Detection of abusive messages in an on-line community

    Moderating user content in online communities is mainly performed manually, and reducing the workload through automatic methods is of great interest. The industry mainly uses basic approaches such as bad-word filtering. In this article, we consider the task of automatically determining whether a message is abusive. This task is complex because messages are written in non-standardized natural language. We propose an original automatic moderation method, applied to French, that combines traditional tools with a newly proposed context-based feature modeling how users behave when reacting to a message. The results obtained during this preliminary study show the potential of the proposed method, both for fully automatic processing and for decision support.
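    The context-based feature is described only at a high level in this abstract; one plausible reading, modeling how the community reacts in the minutes after a message is posted, can be sketched as follows. The signals chosen here (reply burst size, distinct responders, explicit reports) and the time window are assumptions for illustration, not the paper's actual descriptor.

        # Hypothetical reaction-based feature -- the paper does not specify
        # its exact form; the signals and the window are assumptions.
        from dataclasses import dataclass

        @dataclass
        class Message:
            author: str
            timestamp: float        # seconds since epoch
            reported: bool = False  # whether another user flagged it

        def reaction_features(thread: list[Message], idx: int,
                              window: float = 300.0) -> dict:
            """Describe how users reacted to thread[idx] within `window`
            seconds: abusive messages often trigger bursts of replies and
            explicit reports, which a classifier can exploit alongside
            content features."""
            target = thread[idx]
            reactions = [m for m in thread[idx + 1:]
                         if 0 <= m.timestamp - target.timestamp <= window]
            return {
                "n_reactions": len(reactions),
                "n_responders": len({m.author for m in reactions}),
                "n_reports": sum(m.reported for m in reactions),
            }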