
    Digital Ethnography Redux: Interpreting Drone Cultures and Microtargeting in an era of Digital Transformation

    [EN] This paper affirms and demonstrates the application of digital ethnography methodologies to two digitally transformative phenomena that are fundamentally enmeshed in the public sphere: personal drones and microtargeting. We review recent methodological studies on digital ethnography that can be delineated into three forms: research that is online or remote by necessity because of physical distance between researcher and participants; research that uses natively digital tools to study phenomena (Rogers 2013; Fish 2019); and research focused on digital cultures (Markham 2020). Our application of digital ethnography is further informed by qualitative ethnographic research undertaken by Horst, Pink, Postill and Hjorth (Horst et al., 2016), and by Manovich’s work on the application of digital ethnography to examine automation and big data (Manovich & Arielli, 2022). Beesley (forthcoming) utilises longitudinal visual ethnography as a lens to understand consumer drone cultures and disentangle the multiple narratives surrounding these disruptive technologies. Mount (2020) utilised digital ethnography to review two decades of microtargeting activities, employed by Strategic Communication Laboratories and Cambridge Analytica, to influence electoral behaviour. This methodological research will be combined with our conceptual swarm hermeneutics framework (Mount & Beesley, 2022) to develop scenario-based simulations that will further evaluate interpretive schemas and behaviours.
    Mount, G.; Beesley, D. (2022). Digital Ethnography Redux: Interpreting Drone Cultures and Microtargeting in an era of Digital Transformation. In: 4th International Conference on Advanced Research Methods and Analytics (CARMA 2022). Editorial Universitat Politècnica de València. 181-188. https://doi.org/10.4995/CARMA2022.2022.1508318118

    Politische Maschinen: Maschinelles Lernen für das Verständnis von sozialen Maschinen [Political Machines: Machine Learning for Understanding Social Machines]

    This thesis investigates human-algorithm interactions in sociotechnological ecosystems. Specifically, it applies machine learning and statistical methods to uncover political dimensions of algorithmic influence in social media platforms and automated decision-making systems. Based on the results, the study discusses the legal, political and ethical consequences of algorithmic implementations.

    Information Fiduciaries and Political Microtargeting: A Legal Framework for Regulating Political Advertising on Digital Platforms

    Digital technologies have taken individualized advertising to an unprecedented level. But the convenience and efficiency of such highly tailored content comes at a high price: unbridled access to our personal data. The rise of sophisticated data-driven practices, otherwise known as “Big Data,” enables large datasets to be analyzed in ways that reveal useful patterns about human behavior. Thanks to these novel analytical techniques, businesses can cater to individual consumer needs better than ever before. Yet the opportunities presented by Big Data pose new ethical challenges. Significant scholarly research has examined algorithmic discrimination and consumer manipulation, as well as the ways that data-driven practices undermine our democratic system by dramatically altering the news ecosystem. Current scholarship has especially focused on the ways illegitimate foreign and domestic operatives exploit the advertising tools of digital platforms to spread fake and divisive messages to those most susceptible to influence. However, more scholarly attention should be devoted to how these digital technologies are exploited by legitimate political actors, such as politicians and campaigns, to win elections. By combining data-driven voter research with personalized advertising, political actors engage in political microtargeting, directing communications at niche audiences. Political microtargeting fits within a broader conversation about data-privacy regulation, as individuals lack sufficient control over how digital companies handle their personal data. The First Amendment currently limits data-privacy reform, so any meaningful changes must reconcile data privacy with the First Amendment. Professor Jack Balkin has argued that online service providers should be defined as “information fiduciaries,” or businesses that, because of their relationship with another, have taken on special duties with respect to the information they obtain in the course of the relationship. Because online service providers receive sensitive information from their end users, Professor Balkin argues they should be subject to additional regulation. Treating online service providers as information fiduciaries provides a viable means to reconcile the First Amendment with data-privacy regulation: the First Amendment has not prevented the state or federal government from regulating how certain professionals, such as doctors and lawyers, interact with their clients and use their personal information, because these professionals share a fiduciary relationship with their clients. Therefore, consistent with the First Amendment, the government should also be able to subject online service providers to reasonable restrictions on their handling of end-user data. This Note expands Professor Balkin’s information-fiduciary framework by arguing that federal legislation should place fiduciary duties on online service providers. In doing so, it responds to scholarly critiques of Professor Balkin’s theory, particularly the criticism that he failed to show how information fiduciaries might function in practice. Using political microtargeting on Facebook as an example, this Note spells out the ways that fiduciary duties might be enforced. This Note argues that holding Facebook and other digital platforms that engage in political advertising to an information-fiduciary standard would ameliorate some of the adverse effects of political microtargeting and promote electoral integrity in the digital age.

    A CONSERVATION MARKETING TOOLKIT: SYSTEMATIC LITERATURE MAPPING, MICROTARGETING CONSERVATION EASEMENTS, AND CONSERVATION CORRIDOR PRIORITIZATION

    In a changing world with limited resources for conservation efforts, conservationists, wildlife managers, and land managers must look for creative ways to realize conservation goals. A new wave of conservationists is investigating how other disciplines, namely psychology and marketing, might improve our ability to understand and change conservation-related human behavior. In this thesis, I review existing applications of “conservation marketing” and apply a subset to advance two specific conservation challenges. In Chapter 1, I present a systematic mapping of the conservation marketing literature to understand the lay of the land in how conservationists have already applied marketing techniques to conservation, and where the gaps and opportunities seem ripe for future research. In Chapter 2, I employ one specific marketing technique, microtargeting, to help advance efforts to secure conservation easements on private land. Using a suite of modeling and analysis techniques to estimate landowners’ willingness to participate in a conservation easement, I was able to nearly double the predictive power of easement targeting relative to random selection; a brief sketch of this style of propensity-based targeting follows below. In Chapter 3, I apply these willingness scores to advance a contemporary conservation issue: conservation corridor prioritization. Specifically, I use the easement propensity scores derived from Chapter 2’s model results to evaluate three proposed conservation corridors for grizzly bear (Ursus arctos horribilis) migration between two isolated habitats in Western Montana. With this study, I hope to enhance the ways conservationists understand and use marketing techniques to achieve conservation goals more efficiently and effectively.
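
    The microtargeting approach described in Chapter 2 can be illustrated with a brief, hypothetical sketch: a propensity model is fit on landowner attributes to predict willingness to enter an easement, and the resulting scores are used to rank outreach targets against a random baseline. The features, synthetic data, and logistic-regression choice below are illustrative assumptions, not the thesis’s actual model.

```python
# Illustrative sketch of propensity-based microtargeting for easement outreach.
# Feature names, data, and model choice are hypothetical, not the thesis's model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical landowner attributes: parcel acreage, years of ownership,
# and a survey-based conservation attitude score.
X = np.column_stack([
    rng.lognormal(4.0, 1.0, n),   # parcel acreage
    rng.integers(1, 40, n),       # years of ownership
    rng.normal(0.0, 1.0, n),      # attitude score
])
# Hypothetical "would participate in an easement" outcome, loosely tied to attitude.
y = (X[:, 2] + 0.3 * np.log(X[:, 0]) + rng.normal(0, 1, n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
propensity = model.predict_proba(X_test)[:, 1]   # willingness score per landowner

# Contact the top 20% of landowners by score and compare with random outreach.
k = len(propensity) // 5
top_idx = np.argsort(propensity)[::-1][:k]
rand_idx = rng.choice(len(propensity), size=k, replace=False)

print("hit rate, targeted:", y_test[top_idx].mean())
print("hit rate, random:  ", y_test[rand_idx].mean())
```

    Comparing the hit rate of the top-scored group against a random sample is one simple way to express the kind of gain over random selection that the abstract reports.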

    This Is Your Brain on Facebook: An Analytical Approach to Social Media and Autonomy

    This paper examines how data mining and microtargeting on Facebook undermine its users’ autonomy, that is, their capacity to make their own free and deliberative choices. Facebook profiles a user by creating a psychological model of their preferences (beliefs and values). It then frames the content that appears in a user’s News Feed on the basis of this model and thus shapes their preferences. This shaping of preferences has implications for the user’s autonomy. To show this I will apply John Christman’s conception of autonomy, which focuses on how external covert influence can interfere with a user’s independent process of preference formation. I will prove that Facebook’s model of microtargeting users with curated and restricted options of content can make them adapt their preferences. Adaptation of preferences due to external manipulation outside of a user’s control undermines their autonomy. In November 2016, Donald Trump won the United States presidential election, partially with the help of Cambridge Analytica (a British consultancy firm) and Facebook’s use of computational methods to assess large data sets about voters, create psychological profiles, and target them with political ads. The effective microtargeting strategy relied on restricting the content a user could consider when forming their preferences. If a user cannot form their preferences with complete control over what options to consider, then that undermines their ability to reason autonomously.
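
    The profiling-and-framing mechanism the paper critiques can be sketched in a few lines: the platform infers a preference vector for a user and ranks candidate items by predicted affinity, so the options that reach the user have already been filtered by the model. The topics, vectors, and scoring rule below are a hypothetical illustration of the general mechanism, not Facebook’s actual system.

```python
# Minimal sketch of preference-based feed curation: items are ranked by a
# modeled user preference vector, so options the model scores low are never shown.
# All topics, vectors, and posts here are hypothetical.
import numpy as np

# Each vector dimension corresponds to one topic, in this order.
topics = ["local_news", "sports", "partisan_politics", "science", "entertainment"]

# Inferred preference weights for one user (e.g., learned from past engagement).
user_pref = np.array([0.1, 0.8, 0.9, 0.05, 0.6])

# Candidate posts, each represented by its topic mix.
posts = {
    "city council report":   np.array([0.9, 0.0, 0.1, 0.0, 0.0]),
    "campaign attack ad":    np.array([0.0, 0.0, 1.0, 0.0, 0.0]),
    "game highlights":       np.array([0.0, 1.0, 0.0, 0.0, 0.3]),
    "climate study summary": np.array([0.1, 0.0, 0.1, 0.9, 0.0]),
}

# Score = predicted affinity; only the top-ranked items reach the user's feed.
scores = {title: float(vec @ user_pref) for title, vec in posts.items()}
feed = sorted(scores, key=scores.get, reverse=True)[:2]

print("shown:", feed)
print("never surfaced:", [t for t in posts if t not in feed])
```

    The structural point is that whatever reaches the user has already passed through the preference model; this restriction of options is the interference with independent preference formation that the paper argues undermines autonomy.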

    EVALUATING ARTIFICIAL INTELLIGENCE FOR OPERATIONS IN THE INFORMATION ENVIRONMENT

    Recent advances in artificial intelligence (AI) portend a future of accelerated information cycles and intensified technology diffusion. As AI applications become increasingly prevalent and complex, Special Operations Forces (SOF) face the challenge of discerning which tools most effectively address operational needs and generate an advantage in the information environment. Yet, SOF currently lack an end user–focused evaluation framework that could assist information practitioners in determining the operational value of an AI tool. This thesis proposes a practitioner’s evaluation framework (PEF) to address the question of how SOF should evaluate AI technologies to conduct operations in the information environment (OIE). The PEF evaluates AI technologies through the perspective of the information practitioner who is familiar with the mission, the operational requirements, and OIE processes but has limited to no technical knowledge of AI. The PEF consists of a four-phased approach—prepare, design, conduct, recommend—that assesses nine evaluation domains: mission/task alignment; data; system/model performance; user experience; sustainability; scalability; affordability; ethical, legal, and policy considerations; and vendor assessment. By evaluating AI through a more structured, methodical approach, the PEF enables SOF to identify, assess, and prioritize AI-enabled tools for OIE.
    Outstanding Thesis. Major, United States Army. Approved for public release; distribution is unlimited.
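
    As a rough illustration of how a practitioner might operationalize the PEF’s nine evaluation domains, the sketch below scores a candidate tool on each domain and weights the domains by mission priority. The domain names follow the abstract; the 0-5 scale, scores, weights, and example tool are invented for illustration and are not part of the thesis.

```python
# Hypothetical worked example of scoring an AI tool across the PEF's nine
# evaluation domains. Domain names come from the abstract; the 0-5 scale,
# weights, and scores are illustrative assumptions only.
from dataclasses import dataclass

DOMAINS = [
    "mission/task alignment", "data", "system/model performance",
    "user experience", "sustainability", "scalability", "affordability",
    "ethical, legal, and policy considerations", "vendor assessment",
]

@dataclass
class ToolEvaluation:
    name: str
    scores: dict    # domain -> practitioner score on a 0-5 scale
    weights: dict   # domain -> relative mission priority

    def weighted_score(self) -> float:
        # Weighted average across all nine domains.
        total_weight = sum(self.weights[d] for d in DOMAINS)
        return sum(self.scores[d] * self.weights[d] for d in DOMAINS) / total_weight

# Example: a hypothetical narrative-analysis tool under evaluation.
evaluation = ToolEvaluation(
    name="example narrative-analysis tool",
    scores={d: s for d, s in zip(DOMAINS, [4, 3, 4, 2, 3, 3, 2, 4, 3])},
    weights={d: w for d, w in zip(DOMAINS, [3, 2, 2, 1, 1, 1, 1, 3, 1])},
)

print(f"{evaluation.name}: weighted score {evaluation.weighted_score():.2f} / 5")
```

    A scheme like this would let a practitioner compare candidate tools side by side and surface which domains drive the differences.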

    Ad Tech & the Future of Legal Ethics

    Privacy scholars have extensively studied online behavioral advertising, which uses Big Data to target individuals based on their characteristics and behaviors. This literature identifies several new risks presented by online behavioral advertising and theorizes about how consumer protection law should respond. A new wave of this scholarship contemplates applying fiduciary duties to information-collecting entities like Facebook and Google. Meanwhile, lawyers—quintessential fiduciaries—already use online behavioral advertising to find clients. For example, a medical malpractice firm directs its advertising to Facebook users who are near nursing homes with bad reviews. And, in 2020, New York became the first jurisdiction to approve lawyers’ use of retargeting, one form of online behavioral advertising. But the professional responsibility scholarship has not yet considered these developments. The Article describes the rise of online behavioral advertising and lawyers’ nascent use. It draws on modern privacy scholarship to explain how this advertising method can lead to privacy invasions and manipulation. It then explores the specific case of lawyer advertising. And it critiques the existing regulations, which do not prohibit tactics involving privacy invasions or manipulation even though they undermine client autonomy—a key concern for the law of lawyer marketing. In addition to this descriptive and doctrinal work, the Article makes two other contributions. First, the examination of online behavioral advertising helps explain why the legal profession struggles to integrate new technological innovations more generally. AI tools and similar products are driven by informational capitalism’s focus on exploiting knowledge advantages, its speed, and its scale. But these features all are in tension with traditional aspects of the fiduciary relationship between lawyers and their clients. Second, as privacy scholars begin to think about how the duty of loyalty might provide a principle to limit abuses of Big Data in other contexts, the Article proposes that lawyers—who already have this duty—make good subjects for a case study.