
    Multimodal Fake News Detection with Textual, Visual and Semantic Information

    Recent years have seen rapid growth in the amount of fake news posted online. Fake news detection is very challenging, since fake news items are usually created to contain a mixture of false and real information, together with manipulated images, which confuses readers. In this paper, we propose a multimodal system that aims to differentiate between fake and real posts. Our system is based on a neural network and combines textual, visual and semantic information. The textual information is extracted from the content of the post, the visual information from the image associated with the post, and the semantic information refers to the similarity between the image and the text of the post. We conduct our experiments on three standard real-world collections and show the importance of these features for detecting fake news.

    Anastasia Giachanou is supported by the SNSF Early Postdoc Mobility grant under the project Early Fake News Detection on Social Media, Switzerland (P2TIP2 181441). Guobiao Zhang is funded by the China Scholarship Council (CSC) from the Ministry of Education of P.R. China. The work of Paolo Rosso is partially funded by the Spanish MICINN under the research project MISMIS-FAKEnHATE on Misinformation and Miscommunication in social media: FAKE news and HATE speech (PGC2018-096212-B-C31).

    Giachanou, A.; Zhang, G.; Rosso, P. (2020). Multimodal Fake News Detection with Textual, Visual and Semantic Information. Springer, pp. 30-38. https://doi.org/10.1007/978-3-030-58323-1_3
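    The three-way fusion described above can be pictured with a short sketch. This is a hypothetical PyTorch reconstruction based only on the abstract, not the authors' code: the feature dimensions, the projection layers and the use of cosine similarity as the semantic signal are all assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultimodalFakeNewsClassifier(nn.Module):
        """Hypothetical sketch: fuse textual, visual and semantic signals."""

        def __init__(self, text_dim=300, image_dim=2048, hidden=128):
            super().__init__()
            self.text_proj = nn.Linear(text_dim, hidden)    # post text features
            self.image_proj = nn.Linear(image_dim, hidden)  # CNN image features
            # +1 input for the semantic feature (image-text similarity)
            self.classifier = nn.Linear(2 * hidden + 1, 2)  # fake vs. real

        def forward(self, text_emb, image_emb):
            t = F.relu(self.text_proj(text_emb))
            v = F.relu(self.image_proj(image_emb))
            # semantic information: cosine similarity between the two modalities
            sim = F.cosine_similarity(t, v, dim=-1).unsqueeze(-1)
            return self.classifier(torch.cat([t, v, sim], dim=-1))

    In the paper the textual input comes from the post content and the visual input from the associated image; in this sketch both are assumed to arrive as precomputed feature vectors, e.g. from a pretrained text and image encoder.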

    Seasonal variation in collective mood via Twitter content and medical purchases

    The analysis of sentiment contained in vast amounts of Twitter messages has reliably shown seasonal patterns of variation in multiple studies, a finding that can have great importance for the understanding of seasonal affective disorders, particularly if related to known seasonal variations in certain hormones. An important question, however, is that of directly linking the signals coming from Twitter with other sources of evidence about average mood changes. Specifically, we compare Twitter signals relating to anxiety, sadness, anger and fatigue with purchases of items related to anxiety, stress and fatigue at a major UK health and beauty retailer. Results show that all of these signals are highly correlated and strongly seasonal, being under-expressed in the summer and over-expressed in the other seasons, with interesting differences and similarities across them. Anxiety signals, extracted both from Twitter and from health product purchases, peak in spring and autumn and also correlate with the purchase of stress remedies, while Twitter sadness peaks in the winter, along with Twitter anger and purchases of remedies for fatigue. Surprisingly, purchases of remedies for fatigue do not match the Twitter fatigue signal, suggesting that the names we give to these indicators are perhaps only approximate indications of what they actually measure. This study contributes both to the clarification of the mood signals contained in social media and, more generally, to our understanding of seasonal cycles in collective mood.
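    As an illustration of the kind of comparison involved, the sketch below correlates a weekly Twitter mood signal with a weekly purchase signal. Both series are synthetic stand-ins sharing an annual seasonal component; the actual study used the real Twitter and retailer data.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(42)
    weeks = np.arange(104)                   # two years of weekly data
    season = np.cos(2 * np.pi * weeks / 52)  # high in winter, low in summer

    # synthetic stand-ins for the two observed signals
    twitter_anxiety = season + 0.3 * rng.normal(size=weeks.size)
    anxiety_purchases = season + 0.3 * rng.normal(size=weeks.size)

    r, p = pearsonr(twitter_anxiety, anxiety_purchases)
    print(f"Pearson r = {r:.2f} (p = {p:.1e})")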

    SentiBench - a benchmark comparison of state-of-the-practice sentiment analysis methods

    In the last few years, thousands of scientific papers have investigated sentiment analysis, several startups that measure opinions on real data have emerged, and a number of innovative products related to this theme have been developed. There are multiple methods for measuring sentiment, including lexicon-based and supervised machine learning methods. Despite the vast interest in the theme and the wide popularity of some methods, it is unclear which one is better at identifying the polarity (i.e., positive or negative) of a message. Accordingly, there is a strong need for a thorough apples-to-apples comparison of sentiment analysis methods, as they are used in practice, across multiple datasets originating from different data sources. Such a comparison is key to understanding the potential limitations, advantages and disadvantages of popular methods. This article aims at filling this gap by presenting a benchmark comparison of twenty-four popular sentiment analysis methods (which we call the state-of-the-practice methods). Our evaluation is based on a benchmark of eighteen labeled datasets, covering messages posted on social networks, movie and product reviews, as well as opinions and comments in news articles. Our results highlight the extent to which the prediction performance of these methods varies across datasets. Aiming to boost the development of this research area, we release the methods' codes and the datasets used in this article, deploying them in a benchmark system that provides an open API for accessing and comparing sentence-level sentiment analysis methods.
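    A minimal apples-to-apples, sentence-level comparison in the spirit of this benchmark might look like the sketch below. VADER and TextBlob merely stand in for the twenty-four benchmarked methods, and the tiny labeled list is illustrative rather than one of the eighteen datasets.

    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer
    from textblob import TextBlob

    nltk.download("vader_lexicon", quiet=True)

    # toy labeled sentences: 1 = positive, -1 = negative
    labeled = [("I love this movie", 1), ("Worst product ever", -1),
               ("Great news for everyone", 1), ("This is a disaster", -1)]

    vader = SentimentIntensityAnalyzer()
    methods = {
        "VADER": lambda s: 1 if vader.polarity_scores(s)["compound"] >= 0 else -1,
        "TextBlob": lambda s: 1 if TextBlob(s).sentiment.polarity >= 0 else -1,
    }

    # same sentences, same metric, every method: an apples-to-apples comparison
    for name, predict in methods.items():
        acc = sum(predict(s) == y for s, y in labeled) / len(labeled)
        print(f"{name}: accuracy {acc:.2f}")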

    Computational personality recognition in social media

    A variety of approaches have recently been proposed to automatically infer users' personality from their user-generated content in social media. Approaches differ in terms of the machine learning algorithms and feature sets used, the type of digital footprint utilized, and the social media environment from which the data are collected. In this paper, we perform a comparative analysis of state-of-the-art computational personality recognition methods on a varied set of social media ground truth data from Facebook, Twitter and YouTube. We answer three questions: (1) Should personality prediction be treated as a multi-label prediction task (i.e., all personality traits of a given user are predicted at once), or should each trait be identified separately? (2) Which predictive features work well across different online environments? (3) What is the decay in accuracy when porting models trained in one social media environment to another?
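    Question (1) can be made concrete with a small sketch. This is illustrative only, assuming scikit-learn, a placeholder feature matrix X and binarized Big Five trait labels Y; the paper's actual features and learners differ.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multioutput import ClassifierChain

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))                 # user-level feature vectors
    Y = (rng.random((200, 5)) > 0.5).astype(int)   # Big Five traits, binarized

    # (a) each trait identified separately: one independent classifier per trait
    per_trait = [LogisticRegression().fit(X, Y[:, t]) for t in range(5)]

    # (b) multi-label prediction: a chain feeds earlier trait predictions into
    # later ones, so correlations between traits can be exploited
    joint = ClassifierChain(LogisticRegression()).fit(X, Y)

    print(per_trait[0].predict(X[:3]))  # trait-by-trait predictions
    print(joint.predict(X[:3]))         # all five traits at once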

    Judgment of the Humanness of an Interlocutor Is in the Eye of the Beholder

    Despite tremendous advances in artificial language synthesis, no machine has so far succeeded in deceiving a human. Most research has focused on analyzing the behavior of "good" machines. We here choose the opposite strategy, analyzing the behavior of "bad" humans, i.e., humans perceived as machines. The Loebner Prize in Artificial Intelligence features humans and artificial agents trying to convince judges of their humanness via computer-mediated communication. Using this setting as a model, we investigated whether the linguistic behavior of human subjects perceived as non-human would enable us to identify some of the core parameters involved in the judgment of an agent's humanness. We analyzed descriptive and semantic aspects of dialogues in which subjects succeeded or failed to convince judges of their humanness. Using cognitive and emotional dimensions in a global behavioral characterization, we demonstrate important differences in the patterns of behavioral expressiveness of the judges depending on whether they perceived their interlocutor as human or machine. Furthermore, the indicators of interest displayed by the judges were predictive of the final judgment of humanness. Thus, we show that the judgment of an interlocutor's humanness during a social interaction depends not only on the interlocutor's behavior, but also on the judge. Our results thus demonstrate that the judgment of humanness is in the eye of the beholder.

    Dynamics of Investor Communication in Equity Crowdfunding

    In crowdfunding, start-ups can voluntarily communicate with their investors by posting updates. We investigate whether start-ups strategically use updates, which were previously shown to increase investments. To this end, we use hand-collected data on 751 updates and 39,036 investment decisions from the two major German equity crowdfunding portals, Seedmatch and Companisto. We find evidence of strategic communication behavior by start-ups during an equity crowdfunding campaign. During the funding period, start-ups post updates containing linguistic devices that enhance group identity and group cohesion. Furthermore, the probability of an update during the funding period increases with stronger competition from other contemporary crowdfunding campaigns.