5 research outputs found

    Deep Random Forest and AraBert for Hate Speech Detection from Arabic Tweets

    Nowadays, hate speech detection from Arabic tweets attracts the attention of many researchers. Numerous systems and techniques have been proposed to address this classification challenge. Nonetheless, three major limitations persist: the use of deep learning models with an excess of hyperparameters, the reliance on hand-crafted features, and the requirement for a huge amount of training data to achieve satisfactory performance. In this study, we propose Contextual Deep Random Forest (CDRF), a hate speech detection approach that combines contextual embedding with a Deep Random Forest. The experimental findings show that the Arabic contextual embedding model is highly effective for hate speech detection, outperforming static embedding models. Additionally, we show that the proposed CDRF significantly enhances the performance of Arabic hate speech classification.
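    The abstract describes the pipeline only at a high level. Below is a minimal sketch of the idea in Python, assuming a Hugging Face AraBERT checkpoint, mean pooling, and a gcForest-style cascade of scikit-learn random forests; the checkpoint name, pooling strategy, cascade depth, and forest sizes are illustrative assumptions, not the authors' configuration.

    import numpy as np
    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.ensemble import RandomForestClassifier

    # Assumed checkpoint; the paper's exact AraBERT variant is not given here.
    tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv2")
    encoder = AutoModel.from_pretrained("aubmindlab/bert-base-arabertv2")

    def embed(texts):
        """Mean-pooled contextual embeddings for a list of tweets."""
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = encoder(**batch).last_hidden_state        # (B, T, H)
        mask = batch["attention_mask"].unsqueeze(-1)           # (B, T, 1)
        return ((hidden * mask).sum(1) / mask.sum(1)).numpy()  # (B, H)

    def cascade_fit_predict(X_train, y_train, X_test, levels=3):
        """Deep Random Forest cascade: each level appends the forests' class
        probabilities to the original features. A full gcForest would use
        out-of-fold probabilities; in-sample ones are used here for brevity."""
        aug_train, aug_test = X_train, X_test
        for _ in range(levels):
            forests = [RandomForestClassifier(n_estimators=100, random_state=s).fit(aug_train, y_train)
                       for s in (0, 1)]
            probs_train = [f.predict_proba(aug_train) for f in forests]
            probs_test = [f.predict_proba(aug_test) for f in forests]
            aug_train = np.hstack([X_train] + probs_train)
            aug_test = np.hstack([X_test] + probs_test)
        # Average the last level's class probabilities and take the argmax.
        return np.mean(probs_test, axis=0).argmax(axis=1)

    # Usage: preds = cascade_fit_predict(embed(train_texts), y_train, embed(test_texts))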

    Toxic Text in Personas: An Experiment on User Perceptions

    When algorithms create personas from social media data, the personas can become noxious by automatically including toxic comments. To investigate how users perceive such personas, we conducted a 2 × 2 user experiment with 496 participants that showed participants toxic and non-toxic versions of data-driven personas. We found that participants gave higher credibility, likability, empathy, similarity, and willingness-to-use scores to non-toxic personas. Gender also affected toxicity perceptions: female toxic data-driven personas scored lower in likability, empathy, and similarity than their male counterparts. Female participants gave higher perception scores to non-toxic personas and lower scores to toxic personas than male participants. We discuss the implications of our research for designing data-driven personas.
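    The abstract does not state the analysis method; the sketch below shows how a 2 × 2 design like this (persona toxicity × persona gender) is commonly analyzed with a two-way ANOVA, using hypothetical file and column names rather than the authors' actual data or procedure.

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical per-participant ratings: one row per persona judgment, with
    # columns toxicity ("toxic"/"non-toxic"), persona_gender, and likability.
    df = pd.read_csv("ratings.csv")

    # Two-way ANOVA: main effects of toxicity and persona gender, plus their
    # interaction, on the likability score.
    model = smf.ols("likability ~ C(toxicity) * C(persona_gender)", data=df).fit()
    print(anova_lm(model, typ=2))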
