3 research outputs found

    Verifying information with multimedia content on Twitter: A comparative study of automated approaches

    An increasing number of posts on social media are used to disseminate news and are accompanied by multimedia content. Such content may often be misleading or digitally manipulated. More often than not, such pieces of content reach the front pages of major news outlets, having a detrimental effect on their credibility. To avoid such effects, there is a profound need for automated methods that can help debunk and verify online content in a very short time. To this end, we present a comparative study of three such methods tailored to Twitter, a major social media platform used for news sharing. These include: a) a method that uses textual patterns to extract claims about whether a tweet is fake or real, along with attribution statements about the source of the content; b) a method that exploits the insight that tweets on the same topic should also be similar in terms of credibility; and c) a method that uses a semi-supervised learning scheme leveraging the decisions of two independent credibility classifiers. We perform a comprehensive comparative evaluation of these approaches on the datasets released by the Verifying Multimedia Use (VMU) task, organized in the context of the 2015 and 2016 MediaEval benchmarks. In addition to comparatively evaluating the three presented methods, we devise and evaluate a combined method based on their outputs, which outperforms all three of them. We discuss these findings and provide insights to guide future generations of verification tools for media professionals.
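    The abstract describes the third method only at a high level, as a semi-supervised scheme that leverages the decisions of two independent credibility classifiers. A minimal sketch of one plausible reading is given below: the two classifiers pseudo-label unlabeled tweets and only the tweets on which they agree are kept for retraining. The function name agreement_pseudo_labeling, the classifier choices, and the feature representation are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical agreement-based pseudo-labeling sketch (assumed reading of the
# paper's semi-supervised scheme): two independently trained credibility
# classifiers label unlabeled tweets, and only tweets where both agree are
# returned as pseudo-labeled examples. Inputs are NumPy feature matrices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression


def agreement_pseudo_labeling(X_labeled, y_labeled, X_unlabeled):
    # Train two independent classifiers on the labeled tweets.
    clf_a = RandomForestClassifier(n_estimators=100, random_state=0)
    clf_b = LogisticRegression(max_iter=1000)
    clf_a.fit(X_labeled, y_labeled)
    clf_b.fit(X_labeled, y_labeled)

    # Predict credibility labels (e.g., fake/real) for the unlabeled tweets.
    pred_a = clf_a.predict(X_unlabeled)
    pred_b = clf_b.predict(X_unlabeled)

    # Keep only tweets where both classifiers agree; these become pseudo-labels
    # that can be added to the training set in a subsequent round.
    agree = pred_a == pred_b
    return X_unlabeled[agree], pred_a[agree]


if __name__ == "__main__":
    # Tiny synthetic example to show the call pattern.
    rng = np.random.default_rng(0)
    X_lab, y_lab = rng.normal(size=(50, 8)), rng.integers(0, 2, size=50)
    X_unlab = rng.normal(size=(20, 8))
    X_pseudo, y_pseudo = agreement_pseudo_labeling(X_lab, y_lab, X_unlab)
    print(f"{len(y_pseudo)} of {len(X_unlab)} unlabeled tweets pseudo-labeled")
```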
