22 research outputs found

    From Shadow Profiles to Contact Tracing: Qualitative Research into Consent and Privacy

    For many privacy scholars, consent is on life support, if not dead. In July 2020, we held six focus groups in Australia to test this claim by gauging attitudes to consent and privacy, with a spotlight on smartphones. These focus groups included discussion of four case studies: ‘shadow profiles’, eavesdropping by companies on smartphone users, non-consensual government surveillance of its citizens, and contact tracing apps developed to combat COVID-19. Our participants expressed concerns about these practices and said they valued individual consent and saw it as a key element of privacy protection. However, they saw the limits of individual consent, saying that the law and the design of digital services also have key roles to play. Building on these findings, we argue for a blend of good law, good design and an appreciation that individual consent is still valued and must be fixed rather than discarded, ideally in ways that are also collective. In other words, consent is dead; long live consent.

    #ArsonEmergency and Australia's "Black Summer": Polarisation and misinformation on social media

    During the summer of 2019-20, while Australia suffered unprecedented bushfires across the country, false narratives regarding arson and limited backburning spread quickly on Twitter, particularly under the hashtag #ArsonEmergency. Misinformation and bot- and troll-like behaviour were detected and reported by social media researchers, and the news soon reached mainstream media. This paper examines the communication and behaviour of two polarised online communities before and after news of the misinformation became public knowledge. Specifically, the Supporter community actively engaged with others to spread the hashtag, drawing on a variety of news sources pushing the arson narrative, while the Opposer community engaged less, retweeted more, and used URLs chiefly to link to mainstream sources, debunking the narratives and exposing the anomalous behaviour. This influenced the content of the broader discussion. Bot analysis revealed that the active accounts were predominantly human, but behavioural and content analysis suggests Supporters engaged in trolling, though both communities used aggressive language.
    Comment: 16 pages, 8 images, presented at the 2nd Multidisciplinary International Symposium on Disinformation in Open Online Media (MISDOOM 2020), Leiden, The Netherlands. Published in: van Duijn M., Preuss M., Spaiser V., Takes F., Verberne S. (eds) Disinformation in Open Online Media. MISDOOM 2020. Lecture Notes in Computer Science, vol 12259. Springer, Cham. https://doi.org/10.1007/978-3-030-61841-4_1
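
    As a rough illustration of the kind of community-level comparison this abstract reports (per-community retweet ratios and most-linked news domains), here is a minimal Python sketch. The record schema, field names, and example URLs are our own hypothetical stand-ins, not the paper's actual data or pipeline.

```python
from collections import Counter
from urllib.parse import urlparse

def community_profile(tweets):
    """Per-community retweet ratio and most-linked domains: the kind of
    descriptive comparison the abstract reports. The record fields are a
    hypothetical schema of ours, not the paper's data format."""
    stats = {}
    for t in tweets:
        c = stats.setdefault(t["community"],
                             {"n": 0, "retweets": 0, "domains": Counter()})
        c["n"] += 1
        c["retweets"] += t["is_retweet"]  # bool counts as 0/1
        c["domains"].update(urlparse(u).netloc for u in t["urls"])
    return {name: {"retweet_ratio": c["retweets"] / c["n"],
                   "top_domains": c["domains"].most_common(3)}
            for name, c in stats.items()}

# Two hypothetical records showing the expected shape of the input.
tweets = [
    {"community": "Supporter", "is_retweet": False,
     "urls": ["https://example-tabloid.com/arson-wave"]},
    {"community": "Opposer", "is_retweet": True,
     "urls": ["https://www.abc.net.au/news/bushfire-science"]},
]
print(community_profile(tweets))
```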

    Web Routineness and Limits of Predictability: Investigating Demographic and Behavioral Differences Using Web Tracking Data

    Understanding human activities and movements on the Web is not only important for computational social scientists but can also offer valuable guidance for the design of online systems for recommendations, caching, advertising, and personalization. In this work, we demonstrate that people tend to follow routines on the Web, and that these repetitive patterns of web visits increase the achievable predictability of their browsing behavior. We present an information-theoretic framework for measuring the uncertainty and theoretical limits of predictability of human mobility on the Web. We systematically assess the impact of different design decisions on the measurement. We apply the framework to a web tracking dataset of German internet users. Our empirical results highlight that individuals' routines on the Web make their browsing behavior predictable to 85% on average, though the value varies across individuals. We observe that these differences in users' predictability can be explained to some extent by their demographic and behavioral attributes.
    Comment: 12 pages, 8 figures. To be published in the proceedings of the International AAAI Conference on Web and Social Media (ICWSM) 202
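
    The 85% figure above is a theoretical ceiling on predictability, not the accuracy of any concrete predictor. The sketch below shows how such a ceiling is commonly computed, assuming the standard approach from the human-mobility literature that this framework adapts: a Lempel-Ziv estimate of the entropy rate of the visit sequence, followed by solving Fano's inequality for the maximum predictability. Function names and the toy visit history are ours.

```python
import math

def _contains(history, pattern):
    """True if `pattern` occurs as a contiguous run inside `history`."""
    k = len(pattern)
    return any(history[j:j + k] == pattern for j in range(len(history) - k + 1))

def lz_entropy_rate(seq):
    """Lempel-Ziv entropy-rate estimate in bits per visit (the estimator
    popularised for mobility by Song et al. 2010): for each position i,
    find the shortest substring starting at i that never occurred before i."""
    n = len(seq)
    total = 0
    for i in range(n):
        k = 1
        while i + k <= n and _contains(seq[:i], seq[i:i + k]):
            k += 1
        total += k
    return (n / total) * math.log2(n)

def max_predictability(entropy_rate, n_states):
    """Upper bound on predictability from Fano's inequality,
    S = H_b(p) + (1 - p) * log2(N - 1), solved for p by bisection on the
    decreasing branch (valid when S < log2(N - 1), i.e. N >= 3 here)."""
    def fano(p):
        h_b = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return h_b + (1 - p) * math.log2(n_states - 1)
    lo, hi = 1e-6, 1 - 1e-6
    for _ in range(80):
        mid = (lo + hi) / 2
        if fano(mid) > entropy_rate:
            lo = mid  # bound still above the entropy rate: p must be larger
        else:
            hi = mid
    return (lo + hi) / 2

# A highly routine toy history: the bound should come out close to 1.
visits = ["news.de", "mail.de", "work.de", "news.de"] * 25
s = lz_entropy_rate(visits)
pi_max = max_predictability(s, len(set(visits)))
print(f"entropy rate ~ {s:.2f} bits/visit, predictability bound ~ {pi_max:.2f}")
```

    On the highly routine toy history the bound comes out close to 1; less repetitive sequences push the entropy rate up and the bound down.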

    Diverse Misinformation: Impacts of Human Biases on Detection of Deepfakes on Networks

    Social media platforms often assume that users can self-correct against misinformation. However, social media users are not equally susceptible to all misinformation, as their biases influence what types of misinformation might thrive and who might be at risk. We call "diverse misinformation" the complex relationships between human biases and the demographics represented in misinformation. To investigate how users' biases impact their susceptibility and their ability to correct each other, we analyze the classification of deepfakes as a type of diverse misinformation. We chose deepfakes as a case study for three reasons: 1) their classification as misinformation is more objective; 2) we can control the demographics of the personas presented; and 3) deepfakes are a real-world concern with associated harms that must be better understood. Our paper presents an observational survey (N = 2,016) in which participants are exposed to videos and asked questions about their attributes, not knowing that some might be deepfakes. Our analysis investigates the extent to which different users are duped and which perceived demographics of deepfake personas tend to mislead. We find that accuracy varies by demographics, and that participants are generally better at classifying videos that match them. We extrapolate from these results to understand the potential population-level impacts of these biases using a mathematical model of the interplay between diverse misinformation and crowd correction. Our model suggests that diverse contacts might provide "herd correction", where friends can protect each other. Altogether, human biases and the attributes of misinformation matter greatly, but having a diverse social group may help reduce susceptibility to misinformation.
    Comment: Supplementary appendix available upon request for the time being
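
    To make the "herd correction" idea concrete, here is a deliberately simplified toy simulation in Python. It is our own sketch under strong assumptions (binary demographics, a single deepfake persona, one round of majority voting among neighbours), not the authors' actual mathematical model; all parameter names and values are illustrative.

```python
import random

def herd_correction(n=300, homophily=0.9, p_match=0.8, p_mismatch=0.6,
                    k=6, seed=0):
    """Toy simulation in the spirit of the abstract's model. Agents carry a
    binary demographic; the deepfake persona has demographic 0. Each agent
    first classifies the video correctly with probability p_match if their
    demographic matches the persona's, p_mismatch otherwise (reflecting the
    finding that people classify matching videos better), then adopts the
    majority judgment of themselves plus their neighbours. `homophily` sets
    the fraction of edges drawn within the same demographic, i.e. how
    (un)diverse each agent's contacts are."""
    rng = random.Random(seed)
    demo = [rng.randint(0, 1) for _ in range(n)]
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        while len(nbrs[i]) < k:
            want_same = rng.random() < homophily
            pool = [j for j in range(n)
                    if j != i and (demo[j] == demo[i]) == want_same]
            j = rng.choice(pool)
            nbrs[i].add(j)
            nbrs[j].add(i)
    persona = 0
    correct = [rng.random() < (p_match if demo[i] == persona else p_mismatch)
               for i in range(n)]
    # one round of crowd correction: majority vote of self + neighbours
    revised = [sum([correct[i]] + [correct[j] for j in nbrs[i]])
               > (1 + len(nbrs[i])) / 2 for i in range(n)]
    # report the more susceptible group: agents who mismatch the persona
    sus = [i for i in range(n) if demo[i] != persona]
    return (sum(correct[i] for i in sus) / len(sus),
            sum(revised[i] for i in sus) / len(sus))

for h in (0.9, 0.5, 0.1):
    before, after = herd_correction(homophily=h)
    print(f"homophily {h}: susceptible-group accuracy {before:.2f} -> {after:.2f}")
```

    In this toy setup, lowering homophily (making contacts more diverse) lifts the post-correction accuracy of the group that mismatches the persona the most, which is the "friends can protect each other" effect the abstract describes.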

    “It wouldn't happen to me”: Privacy concerns and perspectives following the Cambridge Analytica scandal

    In March 2018, news of the Facebook-Cambridge Analytica scandal made headlines around the world. By inappropriately collecting data from approximately 87 million users’ Facebook profiles, the data analytics company, Cambridge Analytica, created psychographically tailored advertisements that allegedly aimed to influence people's voting preferences in the 2016 US presidential election. In the aftermath of this incident, we conducted a series of semi-structured interviews with 30 participants based at a UK university, discussing their understanding of online privacy and how they managed it in the wake of the scandal. We analysed this data using an inductive (i.e. ‘bottom-up’) thematic analysis approach. Contrary to many opinions reported in the news, the respondents in our sample did not delete their accounts, frantically change their privacy settings, or even express that much concern. Moreover, individuals often considered themselves immune to psychographically tailored advertisements and lacked understanding of how automated approaches and algorithms work in relation to their (and their networks’) personal data. We discuss our findings in relation to wider related research (e.g. crisis fatigue, networked privacy, Protection Motivation Theory) and suggest directions for future research.