    Warning Signs in Communicating the Machine Learning Detection Results of Misinformation with Individuals

    With the prevalence of misinformation online, researchers have focused on developing machine learning algorithms to detect fake news. However, users' perception of machine learning outcomes and their related behaviors have been largely ignored. This paper aims to bridge that gap by studying how to communicate machine learning detection results to users and aid their decisions in handling misinformation. An online experiment was conducted to evaluate the effect of a proposed machine learning warning sign against a control condition, examining participants' detection and sharing of news. The data showed that the warning sign had no significant effect on participants' trust in the fake news. However, participants' uncertainty about the authenticity of the news decreased in the presence of the machine learning warning sign. We also found that social media experience affected users' trust in the fake news, and that age and social media experience affected users' sharing decisions. These results indicate that many factors affecting people's trust in news are worth studying. Moreover, a warning sign communicating machine learning detection results differs from ordinary warnings and calls for more detailed research and design. These findings hold important implications for the design of machine learning warnings.