Deep Multimodal Image-Repurposing Detection
Nefarious actors on social media and other platforms often spread rumors and
falsehoods through images whose metadata (e.g., captions) has been modified to
provide visual substantiation of the rumor or falsehood. This type of
modification is referred to as image repurposing: an unmanipulated image is
published along with incorrect or manipulated metadata to serve the actor's
ulterior motives. We present the Multimodal Entity Image Repurposing (MEIR)
dataset, a substantially more challenging dataset than those previously
available to support research into image repurposing detection. The
new dataset includes location, person, and organization manipulations on
real-world data sourced from Flickr. We also present a novel, end-to-end, deep
multimodal learning model for assessing the integrity of an image by combining
information extracted from the image with related information from a knowledge
base. The proposed method is compared against state-of-the-art techniques on
existing datasets as well as MEIR, where it outperforms existing methods across
the board, with an AUC improvement of up to 0.23.
Comment: To be published at ACM Multimedia 2018 (oral)
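The core idea of repurposing detection, checking a caption's claims against related reference content, can be illustrated with a toy heuristic. The actual MEIR method is an end-to-end deep network over learned representations; the entity-overlap score below, along with the example entities and captions, is purely an illustrative assumption.

```python
# Toy sketch of metadata-consistency checking against a "knowledge base" of
# related reference captions. The real MEIR model is an end-to-end deep
# network; a simple named-entity overlap score illustrates the idea.
# All entity sets below are hypothetical examples.

def entity_inconsistency(caption_entities, reference_entities):
    """Fraction of caption entities not supported by any reference caption."""
    if not caption_entities:
        return 0.0
    supported = sum(1 for e in caption_entities if e in reference_entities)
    return 1.0 - supported / len(caption_entities)

# Caption claims the photo shows Paris; related KB captions mention London.
caption = {"London Marathon", "Paris"}              # entities in the posted caption
kb = {"London Marathon", "London", "Tower Bridge"}  # entities from related posts

score = entity_inconsistency(caption, kb)
print(score)  # 0.5 -> half the caption's entities lack KB support
```

A deep model replaces the hard set-membership test with learned embeddings, so near-synonymous entities ("UK" vs. "United Kingdom") still count as consistent.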
Web Video Verification using Contextual Cues
As news agencies and the public increasingly rely on User-Generated Content, content verification is vital for news producers and consumers alike. We present a novel approach for verifying Web videos by analyzing their online context. It is based on supervised learning on contextual features: one feature set is based on an existing approach for tweet verification adapted to video comments. The other is based on video metadata, such as the video description, likes/dislikes, and uploader information.
We evaluate both feature sets on a dataset of real and fake videos from YouTube and demonstrate their effectiveness (F-scores: 0.82 and 0.79). We then explore their complementarity and show that, under an optimal fusion scheme, the combined classifier would reach an F-score of 0.90. Finally, we study the classifier's performance over time, as more comments accumulate, emulating a real-time verification setting.
Improving Generalization for Multimodal Fake News Detection
The increasing proliferation of misinformation and its alarming impact have
motivated both industry and academia to develop approaches for fake news
detection. However, state-of-the-art approaches are usually trained on datasets
of smaller size or with a limited set of specific topics. As a consequence,
these models lack generalization capabilities and are not applicable to
real-world data. In this paper, we propose three models that adopt and
fine-tune state-of-the-art multimodal transformers for multimodal fake news
detection. We conduct an in-depth analysis, manipulating the input data to
explore model performance in realistic use cases on social media. Our study
across multiple models demonstrates that these systems suffer significant
performance drops against manipulated data. To reduce the bias and improve
model generalization, we suggest training data augmentation to conduct more
meaningful experiments for fake news detection on social media. The proposed
data augmentation techniques enable models to generalize better and yield
improved state-of-the-art results.
Comment: This paper has been accepted for ICMR 202
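The abstract proposes training data augmentation without specifying the operations. Two common text-side augmentations, random word deletion and word-order swaps, can serve as a minimal sketch of the idea; they are assumptions here, not the paper's confirmed techniques.

```python
import random

# Illustrative text-side augmentation for reducing unimodal shortcuts:
# random word deletion and word-order swaps applied to training captions.
# These two specific operations are common choices and serve only as an
# example of the augmentation the abstract proposes in general terms.

def random_deletion(tokens, p=0.2, rng=None):
    """Drop each token with probability p; never return an empty caption."""
    rng = rng or random.Random(0)
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [rng.choice(tokens)]

def random_swap(tokens, n_swaps=1, rng=None):
    """Swap n_swaps random token pairs to perturb word order."""
    rng = rng or random.Random(0)
    tokens = list(tokens)
    for _ in range(n_swaps):
        i, j = rng.randrange(len(tokens)), rng.randrange(len(tokens))
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

caption = "breaking shark swims down flooded highway".split()
augmented = random_deletion(caption, p=0.2, rng=random.Random(42))
```

Training on such perturbed captions discourages the classifier from memorizing exact surface phrasings, which is one plausible route to the generalization gains the abstract reports.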
Visual and Textual Analysis for Image Trustworthiness Assessment within Online News
The majority of news published online include one or more images or videos, which make the news easier to consume and therefore more attractive to large audiences. As a consequence, news with catchy multimedia content can spread and go viral extremely quickly. Unfortunately, the availability and sophistication of photo-editing software are erasing the line between pristine and manipulated content. Given that images have the power to bias and influence the opinion and behavior of readers, the need for automatic techniques to assess the authenticity of images is evident. This paper aims at detecting images published within online news that have either been maliciously modified or that do not accurately represent the event the news is reporting. The proposed approach combines image forensic algorithms for detecting image tampering with textual analysis, which serves as a verifier for images that are misaligned with the textual content. Furthermore, textual analysis can act as a complementary source of information, supporting image forensics techniques when they falsely detect or miss image tampering due to heavy image post-processing. The devised method is tested on three datasets. The performance on the first two shows promising results, with F1-scores generally higher than 75%. The third dataset has an exploratory intent: although it shows that the methodology is not ready for completely unsupervised scenarios, it makes it possible to investigate problems and controversial cases that might arise in real-world settings.
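The combination logic, a forensics tampering score backed up by a text-image alignment check, can be sketched as follows. The cosine-similarity alignment, the weighting, and the example embeddings are illustrative stand-ins, not the paper's actual components.

```python
import math

# Hedged sketch of the fusion logic: an image-forensics tampering score is
# combined with a text-image alignment score, with the textual signal acting
# as a verifier when forensics is unreliable (e.g., heavy post-processing).

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def trust_score(forensics_tamper_prob, img_vec, txt_vec,
                forensics_reliable=True):
    """Lower tampering probability and higher image-text alignment -> more trust."""
    alignment = cosine(img_vec, txt_vec)
    if forensics_reliable:
        return 0.5 * (1 - forensics_tamper_prob) + 0.5 * max(alignment, 0.0)
    return max(alignment, 0.0)  # fall back to the textual verifier alone

# Hypothetical embeddings of the news image and the article text.
img = [0.2, 0.7, 0.1]
txt = [0.25, 0.65, 0.05]
score = trust_score(0.1, img, txt)
```

The fallback branch captures the abstract's point: when post-processing defeats the forensic detector, the misalignment signal still provides evidence on its own.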
Cross-Lingual Cross-Platform Rumor Verification Pivoting on Multimedia Content
With the increasing popularity of smart devices, rumors with multimedia
content become more and more common on social networks. The multimedia
information usually makes rumors look more convincing. Therefore, finding an
automatic approach to verify rumors with multimedia content is a pressing task.
Previous rumor verification research only utilizes multimedia as input
features. We propose not to use the multimedia content but to find external
information on other news platforms pivoting on it. We introduce a new
feature set, cross-lingual cross-platform features, which leverage the semantic
similarity between the rumors and the external information. When implemented,
machine learning methods utilizing such features achieved the state-of-the-art
rumor verification results.
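The shape of such cross-platform features can be sketched with a deliberately simple similarity measure. A real cross-lingual system would compare texts in a shared multilingual embedding space; the token-overlap (Jaccard) stand-in, the feature names, and the example texts below are assumptions made for illustration.

```python
# Sketch of a cross-platform similarity feature: compare a rumor's text with
# evidence retrieved from another platform. Jaccard token overlap is a toy
# stand-in for cross-lingual semantic similarity; it only shows where the
# feature plugs into a downstream verifier.

def jaccard(a_tokens, b_tokens):
    """Token-set overlap in [0, 1]."""
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

def cross_platform_features(rumor, evidence_docs):
    """Max and mean similarity between the rumor and retrieved evidence."""
    sims = [jaccard(rumor.split(), doc.split()) for doc in evidence_docs]
    if not sims:
        return {"max_sim": 0.0, "mean_sim": 0.0}
    return {"max_sim": max(sims), "mean_sim": sum(sims) / len(sims)}

rumor = "giant alligator spotted in city subway"
evidence = ["alligator photo traced to 2014 zoo escape",
            "no reports of alligator in subway says city"]
feats = cross_platform_features(rumor, evidence)
```

The resulting scalar features then feed a standard classifier alongside other signals, which matches the abstract's framing of machine learning methods "utilizing such features."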
Multimodal Fake News Detection with Textual, Visual and Semantic Information
Recent years have seen a rapid growth in the amount of fake news posted online. Fake news detection is very challenging, since fake posts are usually created to contain a mixture of false and real information, together with manipulated images, which confuses readers. In this paper, we propose a multimodal system that aims to differentiate between fake and real posts. Our system is based on a neural network and combines textual, visual, and semantic information. The textual information is extracted from the content of the post, the visual information from the image associated with the post, and the semantic information refers to the similarity between the image and the text of the post. We conduct our experiments on three standard real-world collections and show the importance of those features in detecting fake news.
Giachanou, A.; Zhang, G.; Rosso, P. (2020). Multimodal Fake News Detection with Textual, Visual and Semantic Information. Springer, pp. 30-38. https://doi.org/10.1007/978-3-030-58323-1_3
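The three-branch combination described in this abstract can be sketched minimally: concatenate textual, visual, and semantic (image-text similarity) features and score them. The paper uses a neural network over learned representations; the hand-set weights and tiny feature vectors here are purely illustrative assumptions.

```python
import math

# Minimal sketch of the three-branch idea: concatenate textual, visual, and
# semantic (image-text similarity) features, then score with one linear unit.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fake_post_score(text_feats, visual_feats, semantic_sim, weights, bias):
    """Probability-like fakeness score in (0, 1) from concatenated features."""
    fused = text_feats + visual_feats + [semantic_sim]  # concatenation
    z = sum(w * f for w, f in zip(weights, fused)) + bias
    return sigmoid(z)

# Hypothetical features: [sentiment, exclamation density], [blur, edge noise];
# weights are hand-set for the example, not learned.
score = fake_post_score(text_feats=[0.8, 0.6], visual_feats=[0.3, 0.9],
                        semantic_sim=0.1,   # image barely matches the text
                        weights=[0.5, 0.7, 0.4, 0.6, -1.5], bias=-0.2)
```

The negative weight on `semantic_sim` encodes the abstract's intuition: low image-text similarity is evidence of a fake post, so it should push the score upward.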