Search Bias Quantification: Investigating Political Bias in Social Media and Web Search
Users frequently turn to Web search systems and online social media to learn about ongoing events and public opinion on personalities. Prior studies have shown that the top-ranked results returned by these search engines can shape user opinion about the topic (e.g., event or person) being searched. In the case of polarizing topics like politics, where multiple competing perspectives exist, the political bias in the top search results can play a significant role in shaping public opinion towards (or away from) certain perspectives. Given the considerable impact that search bias can have on the user, we propose a generalizable search bias quantification framework that not only measures the political bias in the ranked list output by a search system but also decouples the bias introduced by its two sources: the input data and the ranking system. We apply our framework to study the political bias in searches related to the 2016 US Presidential primaries in Twitter social media search and find that both the input data and the ranking system matter in determining the final search output bias seen by users. Finally, we use the framework to compare the relative bias of two popular search systems, Twitter social media search and Google web search, for queries related to politicians and political events. We end by discussing some potential solutions for signaling the bias in search results to make users more aware of it.
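The abstract's decoupling of output bias into input-data bias and ranking bias can be illustrated with a small sketch. The paper's actual metrics are not given here, so this assumes a simple rank-discounted bias score over hypothetical per-result political lean values in [-1, 1]; the function names and the 1/log2 discount are illustrative choices, not the authors' method.

```python
import math

def ranked_bias(stances):
    """Rank-weighted bias of a result list: items near the top count more.
    `stances` are per-result political lean scores in [-1, 1], top rank first."""
    weights = [1 / math.log2(rank + 2) for rank in range(len(stances))]
    return sum(w * s for w, s in zip(weights, stances)) / sum(weights)

def decompose_bias(input_stances, ranked_stances):
    """Split output bias into input bias (what the candidate pool contains)
    and ranking bias (what the ranker adds on top of the pool)."""
    input_bias = sum(input_stances) / len(input_stances)  # unweighted pool average
    output_bias = ranked_bias(ranked_stances)
    return input_bias, output_bias, output_bias - input_bias
```

With a balanced input pool but a ranker that places one side's content first, the ranking-bias term is nonzero even though the input bias is zero, which is the decoupling the abstract describes.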
On Measuring Bias in Online Information
Bias in online information has recently become a pressing issue, with search
engines, social networks and recommendation services being accused of
exhibiting some form of bias. In this vision paper, we make the case for a
systematic approach towards measuring bias. To this end, we discuss formal
measures for quantifying the various types of bias, we outline the system
components necessary for realizing them, and we highlight the related research
challenges and open problems.
Comment: 6 pages, 1 figure
Measuring Online Social Bubbles
Social media have quickly become a prevalent channel to access information,
spread ideas, and influence opinions. However, it has been suggested that
social and algorithmic filtering may cause exposure to less diverse points of
view, and even foster polarization and misinformation. Here we explore and
validate this hypothesis quantitatively for the first time, at the collective
and individual levels, by mining three massive datasets of web traffic, search
logs, and Twitter posts. Our analysis shows that collectively, people access
information from a significantly narrower spectrum of sources through social
media and email, compared to search. The significance of this finding for
individual exposure is revealed by investigating the relationship between the
diversity of information sources experienced by users at the collective and
individual level. There is a strong correlation between collective and
individual diversity, supporting the notion that when we use social media we
find ourselves inside "social bubbles". Our results could lead to a deeper
understanding of how technology biases our exposure to new information.
Quantifying Biases in Online Information Exposure
Our consumption of online information is mediated by filtering, ranking, and
recommendation algorithms that introduce unintentional biases as they attempt
to deliver relevant and engaging content. It has been suggested that our
reliance on online technologies such as search engines and social media may
limit exposure to diverse points of view and make us vulnerable to manipulation
by disinformation. In this paper, we mine a massive dataset of Web traffic to
quantify two kinds of bias: (i) homogeneity bias, which is the tendency to
consume content from a narrow set of information sources, and (ii) popularity
bias, which is the selective exposure to content from top sites. Our analysis
reveals different bias levels across several widely used Web platforms. Search
exposes users to a diverse set of sources, while social media traffic tends to
exhibit high popularity and homogeneity bias. When we focus our analysis on
traffic to news sites, we find higher levels of popularity bias, with smaller
differences across applications. Overall, our results quantify the extent to
which our choices of online systems confine us inside "social bubbles."
Comment: 25 pages, 10 figures, to appear in the Journal of the Association for Information Science and Technology (JASIST)
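The two biases named in the abstract above have natural quantitative readings. As a hedged sketch (the paper's exact definitions are not reproduced here), homogeneity bias can be approximated as one minus the normalized Shannon entropy of a user's source distribution, and popularity bias as the fraction of traffic going to a fixed set of top sites; both function names and formulas are illustrative assumptions.

```python
import math
from collections import Counter

def homogeneity_bias(visits):
    """1 minus the normalized Shannon entropy of the visited-source distribution:
    0 = traffic spread evenly over many sources, 1 = all traffic to one source."""
    counts = Counter(visits)
    total = sum(counts.values())
    if len(counts) < 2:
        return 1.0  # a single source is maximally homogeneous
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return 1 - entropy / math.log2(len(counts))

def popularity_bias(visits, top_sites):
    """Fraction of visits that go to sites in the given `top_sites` set."""
    return sum(1 for v in visits if v in top_sites) / len(visits)
```

Under these definitions, a social-media-heavy traffic log concentrated on a few dominant domains would score high on both measures, matching the pattern the abstract reports.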
We are the Change that we Seek: Information Interactions During a Change of Viewpoint
There has been considerable hype about filter bubbles and echo chambers influencing the views of information consumers. The fear is that these technologies are undermining democracy by swaying opinion and creating an uninformed, polarised populace. The literature in this space is mostly techno-centric, addressing the impact of technology. In contrast, our work is the first research in the information interaction field to examine changing viewpoints from a human-centric perspective. It provides a new understanding of view change and how we might support informed, autonomous view change behaviour. We interviewed 18 participants about a self-identified change of view, and the information touchpoints they engaged with along the way. In this paper we present the information types and sources that informed changes of viewpoint, and the ways in which our participants interacted with that information. We describe our findings in the context of the techno-centric literature and suggest principles for designing digital information environments that support user autonomy and reflection in viewpoint formation.
Auditing News Curation Systems: A Case Study Examining Algorithmic and Editorial Logic in Apple News
This work presents an audit study of Apple News as a sociotechnical news
curation system that exercises gatekeeping power in the media. We examine the
mechanisms behind Apple News as well as the content presented in the app,
outlining the social, political, and economic implications of both aspects. We
focus on the Trending Stories section, which is algorithmically curated, and
the Top Stories section, which is human-curated. Results from a crowdsourced
audit showed minimal content personalization in the Trending Stories section,
and a sock-puppet audit showed no location-based content adaptation. Finally,
we perform an extended two-month data collection to compare the human-curated
Top Stories section with the algorithmically curated Trending Stories section.
Within these two sections, human curation outperformed algorithmic curation in
several measures of source diversity, concentration, and evenness. Furthermore,
algorithmic curation featured more "soft news" about celebrities and
entertainment, while editorial curation featured more news about policy and
international events. To our knowledge, this study provides the first
data-backed characterization of Apple News in the United States.
Comment: Preprint, to appear in Proceedings of the Fourteenth International AAAI Conference on Web and Social Media (ICWSM 2020)
Measuring the Importance of User-Generated Content to Search Engines
Search engines are some of the most popular and profitable intelligent
technologies in existence. Recent research, however, has suggested that search
engines may be surprisingly dependent on user-created content like Wikipedia
articles to address user information needs. In this paper, we perform a
rigorous audit of the extent to which Google leverages Wikipedia and other
user-generated content to respond to queries. Analyzing results for six types
of important queries (e.g. most popular, trending, expensive advertising), we
observe that Wikipedia appears in over 80% of results pages for some query
types and is by far the most prevalent individual content source across all
query types. More generally, our results provide empirical information to
inform a nascent but rapidly-growing debate surrounding a highly-consequential
question: Do users provide enough value to intelligent technologies that they
should receive more of the economic benefits from intelligent technologies?
Comment: This version includes a bibliography entry that was missing from the first version of the text due to a processing error. This is a preprint of a paper accepted at ICWSM 2019. Please cite that version instead.
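The audit's headline statistic, the share of results pages on which a given source appears, can be sketched in a few lines. This is an illustrative helper, not the authors' audit pipeline; the function name and the substring-based domain match are assumptions.

```python
def domain_prevalence(result_pages, domain):
    """Fraction of result pages in which `domain` appears at least once.
    `result_pages` is a list of pages, each a list of result URLs."""
    hits = sum(any(domain in url for url in page) for page in result_pages)
    return hits / len(result_pages)
```

Applied to scraped results pages for each query type, a value above 0.8 for "wikipedia.org" would correspond to the "over 80% of results pages" finding the abstract reports.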