Algorithmic Accountability Reporting: On the Investigation of Black Boxes
How can we characterize the power that various algorithms may exert on us? And how can we better understand when algorithms might be wronging us? What should be the role of journalists in holding that power to account? In this report I discuss what algorithms are and how they encode power. I then describe the idea of algorithmic accountability, first examining how algorithms problematize and sometimes stand in tension with transparency. Next, I describe how reverse engineering can provide an alternative way to characterize algorithmic power by delineating a conceptual model that captures different investigative scenarios based on reverse engineering algorithms’ input-output relationships. I then provide a number of illustrative cases and methodological details on how algorithmic accountability reporting might be realized in practice. I conclude with a discussion about broader issues of human resources, legality, ethics, and transparency.
ChatGPT and the AI Act
It is not easy being a tech regulator these days. The European institutions are working hard towards finalising the AI Act in autumn, and then generative AI systems like ChatGPT come along! In this essay, we comment on the European AI Act, arguing that its current risk-based approach is too limited to address ChatGPT & Co.
Auditing News Curation Systems: A Case Study Examining Algorithmic and Editorial Logic in Apple News
This work presents an audit study of Apple News as a sociotechnical news
curation system that exercises gatekeeping power in the media. We examine the
mechanisms behind Apple News as well as the content presented in the app,
outlining the social, political, and economic implications of both aspects. We
focus on the Trending Stories section, which is algorithmically curated, and
the Top Stories section, which is human-curated. Results from a crowdsourced
audit showed minimal content personalization in the Trending Stories section,
and a sock-puppet audit showed no location-based content adaptation. Finally,
we perform an extended two-month data collection to compare the human-curated
Top Stories section with the algorithmically curated Trending Stories section.
Within these two sections, human curation outperformed algorithmic curation in
several measures of source diversity, concentration, and evenness. Furthermore,
algorithmic curation featured more "soft news" about celebrities and
entertainment, while editorial curation featured more news about policy and
international events. To our knowledge, this study provides the first
data-backed characterization of Apple News in the United States.
Comment: Preprint, to appear in Proceedings of the Fourteenth International AAAI Conference on Web and Social Media (ICWSM 2020).
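The source diversity, concentration, and evenness comparisons described in this abstract can be made concrete with standard distributional metrics. The sketch below is illustrative only: it assumes Shannon diversity, Pielou's evenness, and the Herfindahl-Hirschman index (HHI) as the measures, and the outlet names and counts are hypothetical; the study's exact metrics may differ.

```python
import math
from collections import Counter

def source_metrics(articles):
    """Compute illustrative diversity metrics over a list of source names."""
    counts = Counter(articles)
    total = sum(counts.values())
    shares = [c / total for c in counts.values()]
    # Shannon diversity: higher means articles are spread over more varied sources
    shannon = -sum(p * math.log(p) for p in shares)
    # Pielou's evenness: Shannon normalized by its maximum (log of source count)
    evenness = shannon / math.log(len(counts)) if len(counts) > 1 else 0.0
    # Herfindahl-Hirschman index: concentration; higher means fewer sources dominate
    hhi = sum(p * p for p in shares)
    return {"shannon": shannon, "evenness": evenness, "hhi": hhi}

# Hypothetical sections: a curated mix of outlets vs. one dominated by a few
human = ["NYT", "WaPo", "BBC", "Reuters", "AP", "NYT"]
algo = ["TMZ", "TMZ", "TMZ", "People", "TMZ", "People"]
print(source_metrics(human))
print(source_metrics(algo))
```

A more even spread of outlets yields higher evenness and lower HHI, which is the direction in which the abstract reports human curation outperforming algorithmic curation.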
Negotiated Autonomy: The Role of Social Media Algorithms in Editorial Decision Making
Social media platforms have increasingly become an important way for news organizations to distribute content to their audiences. As news organizations relinquish control over distribution, they may feel the need to optimize their content to align with platform logics to ensure economic sustainability. However, the opaque and often proprietary nature of platform algorithms makes it hard for news organizations to truly know what kinds of content are preferred and will perform well. Invoking the concept of algorithmic ‘folk theories,’ this article presents a study of in-depth, semi-structured interviews with 18 U.S.-based news journalists and editors to understand how they make sense of social media algorithms, and to what extent this influences editorial decision making. Our findings suggest that while journalists’ understandings of platform algorithms create new considerations for gatekeeping practices, the extent to which they influence those practices is often negotiated against traditional journalistic conceptions of newsworthiness and journalistic autonomy.
Understanding Practices around Computational News Discovery Tools in the Domain of Science Journalism
Science and technology journalists today face challenges in finding
newsworthy leads due to increased workloads, reduced resources, and expanding
scientific publishing ecosystems. Given this context, we explore computational
methods to aid these journalists' news discovery in terms of time-efficiency
and agency. In particular, we prototyped three computational information
subsidies into an interactive tool that we used as a probe to better understand
how such a tool may offer utility or more broadly shape the practices of
professional science journalists. Our findings highlight central considerations
around science journalists' agency, context, and responsibilities that such
tools can influence and could account for in design. Based on this, we suggest
design opportunities for greater and longer-term user agency; incorporating
contextual, personal and collaborative notions of newsworthiness; and
leveraging flexible interfaces and generative models. Overall, our findings
contribute a richer view of the sociotechnical system around computational news
discovery tools, and suggest ways to improve such tools to better support the
practices of science journalists.
Comment: To be published in CSCW 202
Anticipating Impacts: Using Large-Scale Scenario Writing to Explore Diverse Implications of Generative AI in the News Environment
The tremendous rise of generative AI has reached every part of society -
including the news environment. There are many concerns about the individual
and societal impact of the increasing use of generative AI, including issues
such as disinformation and misinformation, discrimination, and the promotion of
social tensions. However, research on anticipating the impact of generative AI
is still in its infancy and mostly limited to the views of technology
developers and/or researchers. In this paper, we aim to broaden the perspective
and capture the expectations of three stakeholder groups (news consumers;
technology developers; content creators) about the potential negative impacts
of generative AI, as well as mitigation strategies to address these.
Methodologically, we apply scenario writing and use participatory foresight in
the context of a survey (n=119) to delve into cognitively diverse imaginations
of the future. We qualitatively analyze the scenarios using thematic analysis
to systematically map potential impacts of generative AI on the news
environment, potential mitigation strategies, and the role of stakeholders in
causing and mitigating these impacts. In addition, we measure respondents'
opinions on a specific mitigation strategy, namely transparency obligations as
suggested in Article 52 of the draft EU AI Act. We compare the results across
different stakeholder groups and elaborate on the (non-) presence of different
expected impacts across these groups. We conclude by discussing the usefulness
of scenario writing and participatory foresight as a toolbox for generative AI
impact assessment.
Storia: Summarizing Social Media Content based on Narrative Theory using Crowdsourcing
People from all over the world use social media to share thoughts and
opinions about events, and understanding what people say through these channels
has been of increasing interest to researchers, journalists, and marketers
alike. However, while automatically generated summaries enable people to
consume large amounts of data efficiently, they do not provide the context
needed for a viewer to fully understand an event. Narrative structure can
provide templates for the order and manner in which this data is presented to
create stories that are oriented around narrative elements rather than
summaries made up of facts. In this paper, we use narrative theory as a
framework for identifying the links between social media content. To do this,
we designed crowdsourcing tasks to generate summaries of events based on
commonly used narrative templates. In a controlled study, for certain types of
events, people were more emotionally engaged with stories created with
narrative structure and were also more likely to recommend them to others
compared to summaries created without narrative structure.
Optimizing Content with A/B Headline Testing: Changing Newsroom Practices
Audience analytics are an increasingly essential part of the modern newsroom as publishers seek to maximize the reach and commercial potential of their content. On top of a wealth of audience data collected, algorithmic approaches can then be applied with an eye towards predicting and optimizing the performance of content based on historical patterns. This work focuses specifically on content optimization practices surrounding the use of A/B headline testing in newsrooms. Using such approaches, digital newsrooms might audience-test as many as a dozen headlines per article, collecting data that allows an optimization algorithm to converge on the headline that is best with respect to some metric, such as the click-through rate. This article presents the results of an interview study which illuminate the ways in which A/B testing algorithms are changing workflow and headline writing practices, as well as the social dynamics shaping this process and its implementation within US newsrooms.
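The convergence process this abstract describes is typically implemented as a multi-armed bandit over headline variants. The abstract does not name a specific algorithm, so the sketch below assumes a simple epsilon-greedy strategy; the headline texts, click-through rates, and parameter values are all hypothetical.

```python
import random

def run_headline_test(headlines, true_ctrs, impressions=10000, epsilon=0.1, seed=42):
    """Epsilon-greedy bandit: occasionally show a random headline (explore),
    otherwise show the headline with the best observed click-through rate (exploit).
    true_ctrs simulates reader behavior and would be unknown in a real newsroom."""
    rng = random.Random(seed)
    shows = [0] * len(headlines)
    clicks = [0] * len(headlines)
    for _ in range(impressions):
        if rng.random() < epsilon or not any(shows):
            arm = rng.randrange(len(headlines))  # explore a random variant
        else:
            arm = max(range(len(headlines)),     # exploit the best so far
                      key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)
        shows[arm] += 1
        if rng.random() < true_ctrs[arm]:        # simulated click
            clicks[arm] += 1
    best = max(range(len(headlines)),
               key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)
    return headlines[best], shows

headlines = ["Plain headline", "Question headline?", "Curiosity-gap headline"]
winner, shows = run_headline_test(headlines, true_ctrs=[0.02, 0.05, 0.03])
```

With enough impressions, the bandit concentrates traffic on the variant with the highest observed click-through rate, which is how a newsroom can test a dozen headlines without permanently splitting its audience across them.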