The spread of misinformation or fake-news is a global concern that undermines
progress on issues such as protecting democracy and public health. Past
research aiming to combat its spread has largely focused on identifying its
semantic content and the media outlets that publish such news. In contrast, we
aim to identify individuals who are more likely to share fake-news by studying
the language of actors in the fake-news ecosystem (such as fake-news sharers,
fact-check sharers, and random Twitter users) and creating a linguistic profile
of each.
of them. Fake-news sharers and fact-check sharers use significantly more
high-arousal negative emotions in their language, but fake-news sharers express
more existentially-based needs than other actors. Incorporating
psycholinguistic cues as inferred from their tweets into a model of
socio-demographic predictors considerably improves classification accuracy of
fake-news sharers. The finding that fake-news sharers differ in important ways
from other actors in the fake-news ecosystem (such as in their existential
needs), but are also similar to them in other ways (such as in their anger
levels), highlights the importance of studying the entire fake-news ecosystem
to increase accuracy in identification and prediction. Our approach can help
mitigate fake-news sharing by enabling platforms to pre-emptively screen
potential fake-news sharers' posts.

Comment: 34 pages, 6 figures and 6 tables