
    Semantic-Based Classification of Toxic Comments Using Ensemble Learning

    Social media is expanding rapidly, and its anonymity strongly supports free speech. Hate speech directed at any person or group because of their ethnicity, clan, religion, national or cultural heritage, sex, disability, sexual orientation, or other characteristics violates their rights. It can incite violence or hate crimes and cause social unrest by undermining peace, trust, and human rights. Identifying toxic remarks in social media conversations is a critical but difficult task, with several challenges ranging from choosing a suitable social media dataset to selecting a high-performance classifier. People today share messages not only in person but also in online settings such as social networking sites and online communities; as a result, every social media site, app, and digital community needs an identification and prevention system. Detecting toxic social media comments has proven critical for content screening: the identification component of such a system must notice harmful online behaviour and alert the prevention component to take appropriate action. The purpose of this research was to assess each text and detect various kinds of toxicity, such as profanity, threats, name-calling, and identity-based hatred. Jigsaw's Wikipedia comment collection is used for this purpose.
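    The abstract describes classifying comments into toxicity categories with an ensemble of classifiers. A minimal sketch of the majority-vote ensemble idea is shown below; the keyword lists and rule-based base "classifiers" are illustrative stand-ins, not the paper's actual trained models or features.

    ```python
    import re

    def keyword_clf(keywords):
        """Build a toy base classifier that flags a comment (1) if any keyword appears."""
        def predict(text):
            tokens = set(re.findall(r"[a-z']+", text.lower()))
            return 1 if tokens & keywords else 0
        return predict

    # Three toy base classifiers, each sensitive to a different toxicity type
    # (insults, threats, identity-based hostility). Real systems would use
    # trained models over learned features instead of keyword lists.
    base_classifiers = [
        keyword_clf({"idiot", "stupid"}),
        keyword_clf({"kill", "hurt"}),
        keyword_clf({"hate", "despise"}),
    ]

    def ensemble_predict(text):
        """Label a comment toxic (1) only when a strict majority of base models agree."""
        votes = sum(clf(text) for clf in base_classifiers)
        return 1 if votes > len(base_classifiers) / 2 else 0
    ```

    The ensemble only fires when at least two of the three base classifiers agree, which is the usual motivation for voting ensembles: individual weak detectors produce false positives, but their agreement is a stronger signal.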
