44 research outputs found

    Cracking Open the Black Box of Genetic Ancestry Testing

    Get PDF
    Stormfront, a well-known online forum for white nationalists, is a place for discussions about race, nation, and biology. We analyzed how members shared and discussed genetic ancestry tests (GATs), revealing a complicated network of boundary maintenance, identity formation and justification, and biosociality within this online community. Using a selection of seventy Stormfront threads discussing GAT results, this study employs primarily digital ethnographic methods to investigate how white nationalists navigate questions of self and community online. Drawing on scientific concepts, genetic data, and multiple databases, white nationalists exploit the ambiguity of genetics and the black-boxing of testing companies' algorithms to redefine white identity while remaining committed to biologically informed conceptions of race. This research raises important questions about the role of scientific data in racial formations.

    An Empirical Analysis of Racial Categories in the Algorithmic Fairness Literature

    Full text link
    Recent work in algorithmic fairness has highlighted the challenge of defining racial categories for the purposes of anti-discrimination. These challenges are not new but have previously fallen to the state, which enacts race through government statistics, policies, and evidentiary standards in anti-discrimination law. Drawing on the history of state race-making, we examine how longstanding questions about the nature of race and discrimination appear within the algorithmic fairness literature. Through a content analysis of 60 papers published at FAccT between 2018 and 2020, we analyze how race is conceptualized and formalized in algorithmic fairness frameworks. We note that differing notions of race are adopted inconsistently, at times even within a single analysis. We also explore the institutional influences and values associated with these choices. While we find that categories used in algorithmic fairness work often echo legal frameworks, we demonstrate that values from academic computer science play an equally important role in the construction of racial categories. Finally, we examine the reasoning behind different operationalizations of race, finding that few papers explicitly describe their choices and even fewer justify them. We argue that the construction of racial categories is a value-laden process with significant social and political consequences for the project of algorithmic fairness. The widespread lack of justification around the operationalization of race reflects institutional norms that allow these political decisions to remain obscured within the backstage of knowledge production.
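    The following is a minimal, hypothetical sketch (not drawn from the paper, which is a content analysis rather than a toolkit) of why the operationalization of race matters for a fairness computation: the same toy data yields different group comparisons under a multi-category coding versus a collapsed binary coding. All column names, categories, and values are invented for illustration.

        import pandas as pd

        # Toy decision data with a self-reported race field (all values invented).
        df = pd.DataFrame({
            "race": ["White", "Black", "Asian", "Black", "White", "Hispanic"],
            "selected": [1, 0, 1, 1, 1, 0],
        })

        def selection_rates(frame, group_col):
            # Per-group selection rate, the quantity compared under demographic parity.
            return frame.groupby(group_col)["selected"].mean()

        # Operationalization A: keep the reported multi-category coding.
        print(selection_rates(df, "race"))

        # Operationalization B: collapse to a binary protected/reference coding,
        # a common but rarely justified choice in the literature surveyed above.
        df["race_binary"] = df["race"].map(lambda r: r if r == "White" else "non-White")
        print(selection_rates(df, "race_binary"))
        # The two codings can support different conclusions about the same system,
        # which is why the construction of categories is itself a value-laden choice.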

    Jupyter notebooks as discovery mechanisms for open science: Citation practices in the astronomy community

    Full text link
    Citing data and software is a means to give scholarly credit and to facilitate access to research objects. Citation principles encourage authors to provide full descriptions of objects, with stable links, in their papers. As Jupyter notebooks aggregate data, software, and other objects, they may facilitate or hinder citation, credit, and access to data and software. We report on a study of references to Jupyter notebooks in astronomy over a 5-year period (2014-2018). References increased rapidly, but fewer than half of the references led to Jupyter notebooks that could be located and opened. Jupyter notebooks appear better suited to supporting the research process than to providing access to research objects. We recommend that authors cite individual data and software objects, and that they stabilize any notebooks cited in publications. Publishers should increase the number of citations allowed in papers and employ descriptive, metadata-rich citation styles that facilitate credit and discovery.
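    As an illustration of the link-rot finding (fewer than half of the references could be located and opened), a minimal link-resolution check might look like the sketch below. The URLs are placeholders, and this is not the study's actual pipeline.

        import requests

        # Placeholder URLs standing in for notebook links extracted from papers.
        cited_notebook_urls = [
            "https://github.com/example-org/example-repo/blob/master/analysis.ipynb",
            "https://example.org/missing/notebook.ipynb",
        ]

        def resolves(url, timeout=10):
            # True if the URL answers with a non-error HTTP status.
            try:
                response = requests.head(url, allow_redirects=True, timeout=timeout)
                return response.status_code < 400
            except requests.RequestException:
                return False

        reachable = {url: resolves(url) for url in cited_notebook_urls}
        print(f"{sum(reachable.values())}/{len(reachable)} cited notebooks still resolve")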

    The conundrum of police officer-involved homicides: Counter-data in Los Angeles County

    Get PDF
    This paper draws from critical data studies and related fields to investigate police officer-involved homicide data for Los Angeles County. We frame police officer-involved homicide data as a rhetorical tool that can reify certain assumptions about the world and extend regimes of power. We highlight the possibility that this type of sensitive civic data can be investigated and employed within local communities through creative practice. Community involvement with data can create a countervailing force to powerful dominant narratives and supplement activist projects that hold local officials accountable for their actions. Our analysis examines four Los Angeles County police officer-involved homicide data sets. First, we provide accounts of the semantics, granularity, scale and transparency of this local data. Then, we describe a “counter-data action,” an event that invited members of the community to identify the limits and challenges present in police officer-involved homicide data and to propose new methods for deriving meaning from these indicators and statistics.
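    One concrete way a counter-data action can surface gaps between official and community-collected records is a simple coverage comparison, sketched below. The file names and columns are hypothetical; the paper examines the four datasets' semantics, granularity, scale, and transparency rather than prescribing this code.

        import pandas as pd

        # Hypothetical inputs: an official dataset and a community-collected one.
        official = pd.read_csv("official_oih_records.csv")
        community = pd.read_csv("community_oih_records.csv")

        # Normalize the matching fields so formatting differences do not
        # masquerade as substantive disagreements between the datasets.
        for frame in (official, community):
            frame["name"] = frame["name"].str.strip().str.lower()
            frame["date"] = pd.to_datetime(frame["date"], errors="coerce")

        merged = community.merge(official, on=["name", "date"], how="left",
                                 indicator=True, suffixes=("_community", "_official"))

        # Incidents documented by the community but absent from the official
        # records are candidates for counter-data scrutiny.
        missing = merged[merged["_merge"] == "left_only"]
        print(f"{len(missing)} community-documented incidents lack an official match")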

    Police Officer-Involved Homicide Database Project

    Get PDF
    Our project explores un- and under-reported incidents of law enforcement-involved homicides, both justified and unjustified, through analysis of extant federal and local databases pertaining to police officer-involved homicides, combined with mining and analysis of social media data and participatory action research methods to fill gaps in existing government and local records. The social media information can be used in concert with other publicly available government databases to create a clearer picture of the lived realities of communities encountering police homicides in the United States. We have chosen Los Angeles County as the first community to study.

    Managing Evidence in Food Safety and Nutrition

    Get PDF
    Evidence ('data') is at the heart of EFSA's 2020 Strategy and is addressed in three of its operational objectives: (1) adopt an open data approach, (2) improve data interoperability to facilitate data exchange, and (3) migrate towards structured scientific data. As the generation and availability of data have increased exponentially in the last decade, potentially providing a much larger evidence base for risk assessments, it is envisaged that the acquisition and management of evidence to support future food safety risk assessments will be a dominant feature of EFSA's future strategy. During the breakout session on 'Managing evidence' of EFSA's third Scientific Conference 'Science, Food, Society', current challenges and future developments in evidence management applied to food safety risk assessment were discussed, accounting for the increased volume of evidence available as well as the increased IT capabilities to access and analyse it. This paper reports on presentations given and discussions held during the session, which were centred around the following three main topics: (1) (big) data availability and (big) data connection, (2) problem formulation and (3) evidence integration.
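    As a hedged illustration of what 'migrating towards structured scientific data' can mean in practice, the sketch below maps a provider's free-form export onto a shared, typed record so that datasets from different sources can be exchanged and combined. The schema and field names are invented and do not reflect EFSA's actual data standards.

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class OccurrenceRecord:
            substance: str            # analyte or hazard measured
            matrix: str               # food category sampled
            result_mg_per_kg: float   # harmonized unit across providers
            sampling_date: date
            country: str

        def from_lab_export(row: dict) -> OccurrenceRecord:
            # Map one provider's export format onto the shared record
            # (1 ppm by mass equals 1 mg/kg, so only the label changes here).
            return OccurrenceRecord(
                substance=row["analyte"],
                matrix=row["food_item"],
                result_mg_per_kg=float(row["result_ppm"]),
                sampling_date=date.fromisoformat(row["sampled_on"]),
                country=row["iso_country"],
            )

        print(from_lab_export({
            "analyte": "aflatoxin B1",
            "food_item": "pistachios",
            "result_ppm": "0.004",
            "sampled_on": "2018-06-12",
            "iso_country": "IT",
        }))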