University of Illinois at Chicago: Journals@UIC
    5,060 research outputs found

    TIKTOK’S AI HYPE - CREATORS’ ROLE IN SHAPING (PUBLIC) AI IMAGINARIES

    No full text
    Artificial Intelligence (AI), often hailed as a transformative force, has become an ambivalent buzzword, simultaneously promising utopian possibilities and fueling dystopian anxieties. Social media platforms have emerged as pivotal spaces where the public narrative about AI takes shape, especially through content creators, who significantly influence our collective vision of a future with AI. This paper therefore inquires into the role of creators in shaping public imaginaries of AI through their AI content. It takes TikTok as an entry point for investigating how creators shape ongoing discourses around AI through short video content. To understand the role of creators within this discourse, a hashtag network analysis is paired with a critical discourse analysis of creators’ AI content. The preliminary results show three dominant genres of AI content: 1) AI tool output, especially visual content, 2) listicles on AI tools for different tasks, and 3) educational and critical AI content. Considering the creator types behind the content, a large share is produced by content farms, followed by tech TikTokers, while media outlets and commentary TikTokers dominate the third genre. Overall, four types of AI imaginaries are foregrounded. AI mystification envisions AI as fast-paced and inherently life-changing. Similarly, futuristic AI content frames AI as inevitable. By contrast, a strong AI pragmatism prevails in the ongoing tool discourse, while critical and educational content counteracts these imaginaries with a strong AI realism highlighting the complex and nuanced aspects of AI.
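    The abstract names hashtag network analysis as a method but not its mechanics. As a rough illustration only, the sketch below shows how a hashtag co-occurrence network might be built with networkx, assuming each TikTok video is reduced to the set of hashtags it carries; the sample data, field names, and choice of degree centrality are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch of a hashtag co-occurrence network. Each video is
# represented only by its set of hashtags; the data here is invented.
from itertools import combinations
import networkx as nx

videos = [
    {"id": "v1", "hashtags": {"ai", "chatgpt", "aitools"}},
    {"id": "v2", "hashtags": {"ai", "aiart", "midjourney"}},
    {"id": "v3", "hashtags": {"aitools", "productivity", "ai"}},
]

G = nx.Graph()
for video in videos:
    # Every pair of hashtags co-occurring in one video becomes an edge;
    # repeated co-occurrence increments the edge weight.
    for a, b in combinations(sorted(video["hashtags"]), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Degree centrality as a rough proxy for a hashtag's position in the discourse.
central = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])
print(central[:5])
```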

    TRUST ISSUES AND RESPONSIBILITIES: SOCIAL IMAGINARIES, RISK, AND USER LABOUR IN DIGITAL BANKING APPS

    No full text
    This paper draws upon conceptual frameworks of platformisation (van Dijck, Poell, and de Waal, 2018), media convergence (Jensen, 2022), trust in digital banking (Mezei and Verteș-Olteanu, 2020; van Esterik-Plasmeijer and van Raaij, 2017), and social imaginaries (James, 2019; Mansell, 2012; Gillespie, 2018). It views digital banking apps as platforms that enable personalised interactions (Poell, Nieborg, and van Dijck, 2019), and aims to investigate the datafication (van Dijck, 2014; Sadowski, 2019) and platformisation of banking. This approach underscores the transformation of service dynamics and the challenges brought by digital banking concerning public accessibility and social inclusion (Swartz, 2020). We ask: a) What are the dominant imaginaries of payment reflected by contemporary financial services? and b) How do the design and affordances of digital payment services affect trust, responsibility, and user labour? This paper employs a modified walkthrough method (Light, Burgess, and Duguay, 2018), including a detailed content analysis of the Terms and Conditions (T&Cs) documents required for initial access to seven digital banking apps in Ireland. The sampled banking apps are Bank of Ireland (BOI), N26, An Post Money, Revolut IE, Chase UK, Starling Bank UK, and Klarna. The modified walkthroughs highlight a significant convergence between the finance and media industries. Our analysis identified three dominant social imaginaries of payment, each leading to different designs for digital banking apps: a) the Institutional Imaginary, b) the Transactional Imaginary, and c) the Digital Imaginary.

    GPT4 V THE OVERSIGHT BOARD: USING LARGE LANGUAGE MODELS FOR CONTENT MODERATION

    No full text
    Large-scale automated content moderation on major social media platforms continues to be highly controversial. Moderation and curation are central to the value propositions that platforms provide, but companies have struggled to convincingly demonstrate that their automated systems are fair and effective. For a long time, the limitations of automated content classifiers in dealing with borderline cases have seemed intractable. With the recent expansion in the capabilities and availability of large language models, however, there is reason to suspect that more nuanced automated assessment of content in context may now be possible. In this paper, we set out to understand how the emergence of generative AI tools might transform industrial content moderation practices. We investigate whether the current generation of pre-trained foundation models may expand the established boundaries of the types of tasks considered amenable to automation in content moderation. This paper presents the results of a pilot study into the potential use of GPT4 for content moderation. We use the hate speech decisions of Meta’s Oversight Board as examples of covert hate speech and counterspeech that have proven difficult for existing automated tools. Our preliminary results suggest that, given a generic prompt and Meta’s hate speech policies, GPT4 can approximate the decisions and accompanying explanations of the Oversight Board in almost all current cases. We interrogate several clear challenges and limitations, including in particular the sensitivity to variations in prompting, the options for validating answers, and the generalisability to examples with unseen content.
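    The pilot's exact prompt and evaluation setup are not reproduced in the abstract. As a hedged sketch of the general approach, assuming the current OpenAI Python client, the snippet below asks GPT-4 to apply a policy excerpt to a single post; the prompt wording, output format, and policy text are placeholders rather than the study's materials.

```python
# Minimal sketch of prompting GPT-4 to apply a content policy to a post,
# in the spirit of the pilot study described above. The prompt wording,
# policy excerpt, and output format are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY_EXCERPT = "…relevant excerpt of the platform's hate speech policy…"

def moderate(post_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You are a content moderator. Apply the policy below "
                         "and answer REMOVE or KEEP, then explain briefly.\n\n"
                         + POLICY_EXCERPT)},
            {"role": "user", "content": post_text},
        ],
        temperature=0,  # reduce variability across runs for comparability
    )
    return response.choices[0].message.content

print(moderate("…post under review…"))
```

    A setup like this makes the abstract's noted limitation concrete: small changes to the system prompt or policy excerpt can flip REMOVE/KEEP decisions, which is why prompt sensitivity and answer validation are flagged as open challenges.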

    POLICY AT ODDS - DIGITAL INDIA VERSUS INTERNET SHUTDOWNS

    No full text
    The government of India, in 2014, launched a flagship programme called Digital India, with a vision to transform India into a ‘digitally empowered society and knowledge economy’. A similar intention is reflected in another initiative, BharatNet. The USO Fund was established with the fundamental objective of providing access to telegraph services, including mobile services, broadband connectivity and ICT infrastructure creation in rural and remote areas. However, India has also been notorious for shutting down the internet: in Access Now’s report (2021), India has consistently ranked number one in the total number of hours spent under internet shutdowns. In India, shutdowns have occurred during citizen protests such as the anti-CAA protests in late 2020 and early 2021, and the Farmers’ Protest in 2021. These two government actions stand at odds: one digitises governance and provides connectivity to all citizens, digitally empowering them and allowing participation in the networked economy; the other disrupts these very connections when citizens use networks to express dissent. In this paper I take a closer look at the policy documents and the building of information infrastructure written into them, while also studying how infrastructure gets suspended during dissenting movements led by citizens. I use the case of the two protests to understand this disruption. Using newspaper analysis and interviews, I examine how infrastructure is denied and disrupted when citizens use digital networks in ways unintended by the government.

    AUTOMODERATOR AS AN EXAMPLE OF COMMUNITY DRIVEN PRODUCT DESIGN

    No full text
    Rushes to adopt the latest technologies in the field of community moderation are generally inequitable for volunteer communities. The closed-door nature of product development at the majority of tech companies means that the logic underlying the creation of new features is opaque. What does this mean for those who want to equitably employ newer technologies in service of volunteer moderators? We present the development and deployment of the Wikimedia Foundation’s Automoderator product as a contemporary alternative to such product development processes. We focus on the collaborative process undertaken between the Moderator Tools product team at the Wikimedia Foundation and volunteer moderator communities to design and build Automoderator. Automoderator is an automated anti-vandalism tool that uses a language-agnostic ML model to predict the probability of an edit being reverted. The product team integrated volunteer feedback and direction on a continuous basis. This included using existing community-created tools to guide Automoderator’s direction, creating and disseminating a spreadsheet-based testing tool, soliciting user feedback on a central project page, and integrating an extension that allows communities to control Automoderator’s behaviour directly. We conclude by discussing the limitations and trade-offs of this approach to product development.
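    The abstract describes Automoderator's core logic: a model scores each edit's probability of being reverted, and a community-configured threshold decides whether the tool acts. A minimal sketch of that threshold gating is shown below; `score_revert_risk` is a hypothetical stand-in for the real language-agnostic model, and the threshold value and edit fields are invented, not Wikimedia's actual API or configuration.

```python
# Illustrative sketch of threshold-gated automated reverting, as the
# abstract describes. `score_revert_risk` is a placeholder, not the
# real revert-risk model.
from dataclasses import dataclass

@dataclass
class Edit:
    revision_id: int
    user_is_new: bool

def score_revert_risk(edit: Edit) -> float:
    """Hypothetical stand-in for the revert-risk model's prediction."""
    return 0.97 if edit.user_is_new else 0.12

# Communities tune this threshold to trade false positives against coverage.
COMMUNITY_THRESHOLD = 0.95

def should_revert(edit: Edit) -> bool:
    return score_revert_risk(edit) >= COMMUNITY_THRESHOLD

print(should_revert(Edit(revision_id=123456, user_is_new=True)))   # True
print(should_revert(Edit(revision_id=123457, user_is_new=False)))  # False
```

    Exposing the threshold as a per-community setting is the concrete mechanism by which, as the abstract notes, communities control Automoderator's behaviour directly.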

    AMBIVALENT AFFECTIVE LABOUR, DATAFICATION OF QING AND DANMEI WRITERS IN THE CULTURAL INDUSTRY

    No full text
    Danmei 耽美 culture, which features male-male romance and/or erotica, emerged in mainland China in the late 1990s and has been flourishing since the 2010s across East and Southeast Asia. The dynamic Chinese danmei culture has received significant academic attention in recent years, either mapping out its resistant potential against heteronormativity or highlighting the escapist route it offers for expressing women participants’ desires. Danmei culture has evolved into a transmedia landscape and, at the same time, an ever-expanding cultural industry exploited by the logic of capital. Danmei writers, as affective labourers living within this cultural industry, have rarely been considered in existing danmei studies. By exploring the datafication of qing (情, affects and desires) through in-depth interviews with contracted danmei writers on Jinjiang, I examine the distinct character of danmei writers as ambivalent affective labourers. For danmei writers, the datafication and monetisation of qing leads to increasingly formulaic writing. By selecting, appropriating and combining elements in the database of qing, danmei writers are able to swiftly generate a male homoerotic love story that efficiently and effectively invokes the affects and desires of readers for better monetisation. Pleasures and pains are both involved in doing this ambivalent affective labour, which further consolidates its precariousness. However, affects and desires per se cannot be fully manipulated – transformative momentum is embedded in the water-like qing all the time.

    WRAP YOUR HEAD AROUND IT: BRAZILIAN USERS’ ALGORITHMIC IMAGINARIES OF SPOTIFY WRAPPED

    No full text
    ‘Spotify Wrapped’ is a promotional initiative offered by the music platform consisting of a summary of each user’s yearly listening habits. Although Spotify is generally classified as a streaming service, initiatives such as Wrapped have a clear component of sociability (Hagen & Lüders, 2017) – in this case, not only because they are based on the harvesting of users’ behavioural data but also because they are created to be shared on platforms such as Instagram and Twitter/X. Indeed, Spotify Wrapped has acquired its own role in digital popular culture, inciting anticipation and excitement from users worldwide and becoming an ‘algorithmic event’ (Annabell & Vindum Rasmussen, 2023) in and of itself. In this paper, we scrutinise how this algorithmic event is perceived and understood by Brazilian users, whilst also identifying and unpacking the platform affordances and algorithmic imaginaries (Bucher, 2017) that inform those interpretations and their associated performances of taste and identity (Airoldi, 2019; Prey, 2018). We explore in particular how users negotiate the tensions between algorithmic personalisation and individuation and the possibilities for shared experience to emerge during this event. Through a mixed-method approach, we argue that the ‘eventness’ (Frosh and Pinchevski, 2018) of Spotify Wrapped is distributed, clustered but sparsely connected, and marked by fleeting, fluid and ephemeral feelings of shared experience and recognition rather than by enduring communities, which in turn reflects and extends previous theorisations of affective publics (Papacharissi, 2014) and social media liveness (Lupinacci, 2021).

    Detection of LLM-powered bots using image classification

    No full text
    In the rapidly changing landscape of online social interactions, the presence of automated accounts, or bots, has long posed a significant challenge to maintaining platforms where the information posted is authentic and reliable. The emergence of large language models (LLMs) may exacerbate this problem, as researchers have recently found families of automated accounts that use generative artificial intelligence to produce their posts. This paper focuses on this new type of bot and, in particular, on detecting them. Using a new detection technique that relies on image classification to distinguish between human and automated accounts, we demonstrate remarkable efficiency in identifying bots whose posts are generated by large language models. Our research improves on the results of previous work on the detection of bot accounts powered by generative artificial intelligence.
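    The abstract does not specify how accounts are rendered as images for classification. Purely as a sketch of the general technique, assuming accounts are somehow rasterised into fixed-size RGB images, the snippet below defines a small binary CNN in PyTorch; the rendering step, architecture, and input size are assumptions, not the paper's method.

```python
# Hedged sketch: a binary CNN classifier (human vs. LLM-powered bot)
# over rendered "account images". Architecture and 64x64 input size
# are illustrative assumptions.
import torch
import torch.nn as nn

class BotClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),  # 64x64 input halved twice -> 16x16
        )

    def forward(self, x):
        return self.head(self.features(x))

model = BotClassifier()
dummy = torch.randn(1, 3, 64, 64)  # one rendered "account image"
logits = model(dummy)
print(logits.softmax(dim=1))  # [p(human), p(bot)] for the dummy input
```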

    Algorithmic issue publics: A framing analysis of Alternative für Deutschland’s Facebook page

    No full text
    In light of recent electoral outcomes in Germany, among other countries, understanding how content visibility intersects with user engagement on digital media platforms is crucial for contemporary democracies. Building on the extensive literature in this field, this paper empirically examines Alternative für Deutschland’s (AfD) Facebook communication between August 2023 and March 2024, using a framing analysis and user engagement metrics. Three central communication frames are identified: 1) Socio-economic frame: The promise of change, 2) Against-the-elite frame: Challenging the status quo, and 3) National security frame: Migration as a threat. While the content of the themes may be less surprising, their trajectories may be explained by taking into account how users engage with what we term algorithmic issue publics. In conclusion, we call for further research that explores the intersection of framing practices, user engagement, and platform logics, and more specifically, their relation to automated content creation.
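    As a purely illustrative sketch of pairing frame labels with engagement metrics, the snippet below groups hypothetical coded posts by frame and averages their reactions and shares with pandas; the labels and numbers are invented for demonstration and do not reproduce the paper's data or coding scheme.

```python
# Illustrative sketch: average engagement per communication frame.
# All values below are invented for demonstration.
import pandas as pd

posts = pd.DataFrame({
    "month": ["2023-08", "2023-08", "2023-09", "2023-09"],
    "frame": ["socio-economic", "against the elite",
              "national security", "socio-economic"],
    "reactions": [1200, 3400, 5100, 900],
    "shares": [210, 640, 1300, 150],
})

# Average engagement per frame: a rough proxy for which frames users
# amplify, and hence which trajectories platform ranking may favour.
engagement = (posts
              .groupby("frame")[["reactions", "shares"]]
              .mean()
              .sort_values("reactions", ascending=False))
print(engagement)
```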

    From autocomplete to ChatGPT: Assessing responsibility in a new era of automated information retrieval

    No full text
    With the rise of generative AI, information retrieval systems are evolving, shifting from search engines that retrieve and suggest links to content to platforms that generate answers using AI. This presents new challenges for assigning responsibility for the information these systems deliver to the public. This paper examines the legal and ethical questions of accountability that arise in the automation of information retrieval, using disputes over Google’s Autocomplete feature as an early and instructive example of automated prediction in search. We review three defamation cases brought against Google in Australia, Hong Kong, and Germany and explore how courts have grappled with questions of responsibility for harm when algorithmic systems are implicated in the production and dissemination of information. We argue that these cases offer valuable insights for the governance of contemporary AI-driven information retrieval systems, particularly those using large language models (LLMs). We consider responsibility for harm prevention at both the individual and organisational levels and assess the epistemic responsibility of Google as a provider of socio-technical infrastructures that shape public knowledge.

    235 full texts
    5,060 metadata records
    Updated in last 30 days.
    University of Illinois at Chicago: Journals@UIC is based in the United States.