6 research outputs found

    Even Turing Should Sometimes Not Be Able To Tell: Mimicking Humanoid Usage Behavior for Exploratory Studies of Online Services

    Online services such as social networks, online shops, and search engines deliver different content to users depending on their location, browsing history, or client device. Since these services have a major influence on opinion forming, understanding their behavior from a social science perspective is of the greatest importance. In addition, technical aspects of services such as security or privacy are becoming more and more relevant for users, providers, and researchers. Due to the lack of essential data sets, automatic black box testing of online services is currently the only way for researchers to investigate these services in a methodical and reproducible manner. However, automatic black box testing of online services is difficult, since many of them try to detect and block automated requests to prevent bots from accessing them. In this paper, we introduce a testing tool that allows researchers to create and automatically run experiments for exploratory studies of online services. The testing tool performs programmed user interactions in such a manner that it can hardly be distinguished from a human user. To evaluate our tool, we conducted, among other things, a large-scale research study on Risk-based Authentication (RBA), which required human-like behavior from the client. In these experiments, we were able to circumvent the bot detection of the investigated online services. As this demonstrates the potential of the presented testing tool, it remains the responsibility of its users to balance the conflicting interests between researchers and service providers, as well as to check whether their research programs remain undetected.
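The abstract does not include the tool's implementation. As an illustration of the "programmed user interactions that can hardly be distinguished from a human user" it describes, a minimal sketch of human-like pacing is shown below; all names and parameter values are hypothetical, not taken from the paper:

```python
import random
import time

def human_delay(mean=1.2, sd=0.4, floor=0.2):
    """Sample a pause length (seconds) from a truncated normal
    distribution, roughly mimicking a human user's irregular timing."""
    return max(floor, random.gauss(mean, sd))

def type_like_human(send_key, text, base=0.08, jitter=0.05):
    """Emit characters one at a time with per-keystroke jitter,
    instead of injecting the whole string at once as a bot would."""
    for ch in text:
        send_key(ch)  # e.g. a browser-automation send-keys call
        time.sleep(base + random.uniform(0, jitter))
```

In a real experiment, `send_key` would be bound to a browser-automation driver; randomizing both inter-action and inter-keystroke delays is one common heuristic for evading timing-based bot detection.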

    Programmatic Dreams: Technographic Inquiry into Censorship of Chinese Chatbots

    This project explores the recent censorship of two Chinese artificial intelligence (AI) chatbots on Tencent’s popular WeChat messaging platform. Specifically, I am advancing a technographic approach in ways that give agency to bots as not just computing units but as interlocutors and informants. I seek to understand these chatbots through their intended design—by chatting with them. I argue that this methodological inquiry of chatbots can potentially point to fissures and deficiencies within the Chinese censorship machine that allow for spaces of subversion. AI chatbot development in China presents a rich site of study because it embodies the extremes of surveillance and censorship. This is all the more important as China has elevated disruptive technologies like AI and big data to a critical part of state security and a key component of fulfilling the “Chinese Dream of National Rejuvenation.” Whether it is the implementation of a national “social credit” system or the ubiquitous use of facial recognition systems, many Western fears about data security and state control have already been realized in China. Yet this also implies that China is at the frontlines of potential points of resistance and fissures against the party–state–corporate machine. In pursuing this inquiry, I not only seek to raise questions dealing with the limits of our humanity in the light of our AI-driven futures but also present methodological concerns related to human–machine interfacing in conceptualizing new modes of resistance.

    More of the Same – On Spotify Radio


    Can online music platforms be fair? An interdisciplinary research manifesto

    In this article we present a manifesto for research into the complex interplay between social media, music streaming services, and their algorithms, which are reshaping the European music industry—a sector that has transitioned from ownership to access-based models. Our focus is to assess whether the current digital economy supports fair and sustainable development for cultural and creative industries. The manifesto is designed to pave the way for a comprehensive analysis. We begin with the context of our research by briefly examining the de-materialization of the music industry and the critical role of proprietary algorithms in organizing and ranking creative works. We then scrutinize the notion of 'fairness' within digital markets, a concept that is attracting increasing policy interest in the EU. We believe that, for 'fairness' to be effective, the main inquiry around this concept, especially as regards the remuneration of content creators, must necessarily be interdisciplinary. This presupposes collaboration across complementary fields to address gaps and inconsistencies in understanding how these platforms influence music creation and consumption and whether these environments and technologies should be regulated. We outline how interdisciplinary expertise (political science, law, economics, and computer science) can enhance the current understanding of 'fairness' within Europe's cultural policies and help address policy challenges. The article details how our research plan will unfold across various disciplinary hubs, culminating in the integration of their findings to produce the 'key exploitable results' of a Horizon Europe project (Fair MusE) that aims to explore challenges and opportunities of today's digital music landscape.

    SpotiBot : Turing Testing Spotify

    Even if digitized and born-digital audiovisual material today amounts to a steadily increasing body of data to work with and research, such media modalities are still relatively poorly represented in the field of DH. Streaming media is a case in point, and the purpose of this article is to provide some findings from an ongoing audio (and music) research project that deals with experiments, interventions, and the reverse engineering of Spotify’s algorithms, aggregation procedures, and valuation strategies. One such research experiment, the SpotiBot intervention, was set up at Humlab, Umeå University. Via multiple bots running in parallel, our idea was to examine whether it is possible to provoke—or even undermine—the Spotify business model (based on the so-called “30 second royalty rule”). Essentially, the experiment resembled a Turing test, where we asked ourselves what happens when—not if—streaming bots approximate human listener behavior in such a way that it becomes impossible to distinguish between a human and a machine. Implemented in the Python programming language, and using a web UI testing framework, our so-called SpotiBot engine automated the Spotify web client by simulating user interaction within the web interface. The SpotiBot engine was instructed to play a single track repeatedly (both self-produced music and Abba’s “Dancing Queen”), for both less and more than 30 seconds, and with a fixed repetition scheme running from 100 to n times (simultaneously, with different Spotify Free ‘bot accounts’). Our bots also logged all results. In short, our bots demonstrated the ability (at least sometimes) to continuously play tracks, indicating that the Spotify business model can be tampered with. Using a single virtual machine—hidden behind only one proxy IP—the results of the intervention hence indicate that it is possible to automatically play tracks for thousands of repetitions that exceed the royalty rule.
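The abstract's "30 second royalty rule" means a play reportedly counts as a royalty-bearing stream only once 30 seconds have elapsed. A minimal sketch of the repetition-scheme arithmetic the experiment varied (play durations just under and over the threshold, repeated n times) is given below; this is an illustration only, not the authors' SpotiBot code, and the threshold constant is the paper's stated rule:

```python
ROYALTY_THRESHOLD_S = 30  # the "30 second royalty rule" from the abstract

def counted_streams(play_seconds, repetitions):
    """Return how many of `repetitions` identical plays of
    `play_seconds` each would clear the 30-second threshold
    and thus register as countable streams."""
    return repetitions if play_seconds >= ROYALTY_THRESHOLD_S else 0

def run_schedule(durations, repetitions):
    """Simulate a fixed repetition scheme: for each play duration,
    report how many plays would count under the royalty rule."""
    return {d: counted_streams(d, repetitions) for d in durations}
```

Under this model, a bot looping a 31-second play 1,000 times would register 1,000 countable streams, while 29-second plays would register none — which is the asymmetry the intervention probed.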
