Controversy Analysis and Detection
Seeking information on a controversial topic is often a complex task. Alerting users about controversial search results can encourage critical literacy, promote healthy civic discourse, and counteract the filter bubble effect, and would therefore be a useful feature in a search engine or browser extension. Additionally, presenting information about the different stances or sides of a debate can help users navigate the landscape of search results beyond a simple list of 10 links. This thesis has made strides in the emerging niche of controversy detection and analysis. The body of work in this thesis revolves around two themes: computational models of controversy, and controversies occurring in neighborhoods of topics. Our broad contributions are: (1) presenting a theoretical framework for modeling controversy as contention among populations; (2) constructing the first automated approach to detecting controversy on the web, using a KNN classifier that maps web pages to similar Wikipedia articles; and (3) proposing a novel controversy detection method for Wikipedia that employs a stacked model using a combination of link structure and similarity. We conclude this work by discussing the challenging technical, societal, and ethical implications of this emerging research area and proposing avenues for future work.
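Contribution (2) describes a KNN classifier that maps a web page to its most similar Wikipedia articles and infers controversy from them. A minimal toy sketch of that idea follows; the corpus, labels, TF-IDF weighting, and similarity-weighted vote below are illustrative assumptions, not the thesis's actual model.

```python
import math
from collections import Counter

def tfidf_vectors(tokenized_docs):
    """Smoothed TF-IDF vectors (term -> weight dicts) for a small corpus."""
    n = len(tokenized_docs)
    df = Counter()
    for doc in tokenized_docs:
        df.update(set(doc))
    vectors = []
    for doc in tokenized_docs:
        tf = Counter(doc)
        vectors.append({
            t: (tf[t] / len(doc)) * (math.log((1 + n) / (1 + df[t])) + 1)
            for t in tf
        })
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_controversy(query_page, articles, labels, k=3):
    """Similarity-weighted k-NN vote: True if the query page's nearest
    labeled neighbors are predominantly controversial."""
    docs = [a.lower().split() for a in articles] + [query_page.lower().split()]
    vecs = tfidf_vectors(docs)
    query_vec = vecs[-1]
    # Rank labeled articles by similarity to the query page.
    ranked = sorted(
        ((cosine(vecs[i], query_vec), labels[i]) for i in range(len(articles))),
        reverse=True,
    )
    top = ranked[:k]
    total = sum(sim for sim, _ in top)
    if total == 0:
        return False  # nothing similar: abstain toward non-controversial
    return sum(sim * label for sim, label in top) / total > 0.5

# Toy stand-in for labeled Wikipedia articles (1 = controversial, 0 = not).
articles = [
    "abortion debate rights controversy protest",
    "climate change denial controversy debate",
    "photosynthesis plants sunlight energy",
    "geometry triangle angles theorem",
]
labels = [1, 1, 0, 0]

print(knn_controversy("abortion rights debate protest law", articles, labels))
print(knn_controversy("plants sunlight energy biology", articles, labels))
```

The similarity-weighted vote (rather than a plain majority among the top k) keeps zero-similarity neighbors from distorting the prediction on such a tiny corpus.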
Current and Near-Term AI as a Potential Existential Risk Factor
There is a substantial and ever-growing corpus of evidence and literature exploring the impacts of artificial intelligence (AI) technologies on society, politics, and humanity as a whole. A separate, parallel body of work has explored existential risks to humanity, including but not limited to that stemming from unaligned Artificial General Intelligence (AGI). In this paper, we problematise the notion that current and near-term artificial intelligence technologies have the potential to contribute to existential risk by acting as intermediate risk factors, and that this potential is not limited to the unaligned AGI scenario. We propose the hypothesis that certain already-documented effects of AI can act as existential risk factors, magnifying the likelihood of previously identified sources of existential risk. Moreover, future developments in the coming decade hold the potential to significantly exacerbate these risk factors, even in the absence of artificial general intelligence. Our main contribution is a (non-exhaustive) exposition of potential AI risk factors and the causal relationships between them, focusing on how AI can affect power dynamics and information security. This exposition demonstrates that there exist causal pathways from AI systems to existential risks that do not presuppose hypothetical future AI capabilities.
Information ecosystem threats in minoritized communities: challenges, open problems and research directions
Journalists, fact-checkers, academics, and community media are overwhelmed in their attempts to support communities suffering from gender-, race-, and ethnicity-targeted information ecosystem threats, including but not limited to misinformation, hate speech, weaponized controversy, and online-to-offline harassment. Yet, for a plethora of reasons, minoritized groups are underserved by current approaches to combating such threats. In this panel, we will present and discuss the challenges and open problems facing such communities and the researchers hoping to serve them. We will also discuss the current state of the art and the most promising future directions, both within IR specifically and across Computer Science more broadly, as well as directions requiring transdisciplinary and cross-sectoral collaborations. The panel will attract both IR practitioners and researchers, and will include at least one panelist from outside of IR with unique expertise in this space.