Exploring audience perceptions of, and preferences for, data-driven ‘quantitative’ journalism
Although data-driven ‘quantitative’ journalism has increased in volume and visibility, little is known about how it is perceived and evaluated by audiences. This study helps fill this research gap by analysing the characteristics of quantitative journalism that a diverse group of 31 news consumers pay attention to and, within those characteristics, where their preferences might lie. In eight group interviews, participants read and discussed articles chosen to represent the diversity that exists in the forms and production of data-driven journalism. Our analysis reveals 28 perception criteria that we group into four major categories: antecedents of perception, emotional and cognitive impacts, article composition, and news and editorial values. Several criteria have not been used in prior research on the perception of quantitative journalism. Our criteria have obvious application in future research on how audiences perceive different types of quantitative journalism, including that produced with the help of automation. The criteria will also be of interest to researchers studying audience perceptions and evaluations of news in general. For journalists and others communicating with numbers, our findings indicate what audiences might want from data-driven journalism, including that it be constructive and concise, provide analysis, have a human angle, and include visual elements.
Audience reception of news articles made with various levels of automation—and none: Comparing cognitive & emotional impacts
Our knowledge about audience perceptions of manually authored news articles and automated news articles is limited. Although over a dozen studies have been carried out, their findings are inconsistent and limited by methodological shortcomings. For example, the experimental stimuli used in some have made it difficult to isolate the effects of the actual authorship (automated or manual). Our study attempts to overcome previous studies’ shortcomings to better evaluate audiences’ relative evaluations of news articles produced with varying degrees of automation, and with none. We conducted a 3 (article source: manually written, automated, post-edited) × 12 (story topics) between-subjects online survey experiment using a sample (N = 4,734) representative of UK online news consumers by age and gender. Each of the 36 treatment groups read a data-driven news article that was either: (1) manually written by a journalist, (2) automated using a data-driven template, or (3) automated and then post-edited by a journalist. The articles’ authorship was not declared. To minimise confounding variables, the articles in each of the 12 story sets shared the same data source, story angle, and geographical focus. Respondents’ perceptions were measured using criteria developed in a qualitative group interview study with news consumers. The results show that respondents found manually written articles to be significantly more comprehensible, both overall and in relation to the numbers they contained, than automated and post-edited articles. Authorship did not have any statistically significant effect on overall liking of the articles, on the positive or negative feelings (valence) the articles provoked in respondents, or on the strength of those feelings (arousal).