40 research outputs found

    Designing Human-Centered Algorithms for the Public Sector: A Case Study of the U.S. Child-Welfare System

    The U.S. Child Welfare System (CWS) is increasingly seeking to emulate private-sector business models centered on efficiency, cost reduction, and innovation through the adoption of algorithms. These data-driven systems purportedly improve decision-making; however, the public sector poses its own set of challenges with respect to the technical, theoretical, cultural, and societal implications of algorithmic decision-making. To address these challenges, my dissertation comprises four studies that examine: 1) how caseworkers interact with algorithms in their day-to-day discretionary work; 2) the impact of algorithmic decision-making on the nature of practice, organization, and street-level decision-making; 3) how casenotes can help unpack patterns of invisible labor and contextualize decision-making processes; and 4) how casenotes can help uncover deeper systemic constraints and risk factors that are hard to quantify but directly impact families and street-level decision-making. My goal for this research is to investigate systemic disparities and to design and develop algorithmic systems that are grounded in the theory of practice and improve the quality of human discretionary work. These studies have provided actionable steps for human-centered algorithm design in the public sector.

    Building Bridges: Generative Artworks to Explore AI Ethics

    In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society. Across academia, industry, and government bodies, a variety of endeavours are being pursued towards enhancing AI ethics. A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests. These different perspectives are often not understood, due in part to communication gaps. For example, AI researchers who design and develop AI models are not necessarily aware of the instability induced in consumers' lives by the compounded effects of AI decisions. Educating different stakeholders about their roles and responsibilities in the broader context becomes necessary. In this position paper, we outline some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools for surfacing different perspectives. We hope to spark interdisciplinary discussions about computational creativity broadly as a tool for enhancing AI ethics.

    A computational approach to analyzing and detecting trans-exclusionary radical feminists (TERFs) on Twitter

    Within the realm of abusive content detection for social media, little research has been conducted on the transphobic hate group known as trans-exclusionary radical feminists (TERFs). The community engages in harmful behaviors such as targeted harassment of transgender people on Twitter, and perpetuates transphobic rhetoric such as denial of trans existence under the guise of feminism. This thesis analyzes the network of the TERF community on Twitter by identifying several sub-communities and modeling the topics of their tweets. We also introduce TERFSPOT, a classifier for predicting whether a Twitter user is a TERF or not, based on a combination of network and textual features. The contributions of this work are twofold: we conduct the first large-scale computational analysis of the TERF hate group on Twitter, and demonstrate a classifier with 90% accuracy for identifying TERFs.
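    The abstract describes combining network and textual features in one classifier. A minimal sketch of that general idea is below; the feature definitions, weights, and lexicon are invented for illustration and are not TERFSPOT's actual features or method.

    ```python
    # Hypothetical sketch: fusing a network signal and a textual signal into
    # a single score, in the spirit of (but not identical to) TERFSPOT.

    def network_feature(followed, flagged):
        """Fraction of followed accounts that are already flagged."""
        if not followed:
            return 0.0
        return sum(1 for u in followed if u in flagged) / len(followed)

    def text_feature(tweets, lexicon):
        """Average number of lexicon-term hits per tweet."""
        if not tweets:
            return 0.0
        hits = sum(tok in lexicon for t in tweets for tok in t.lower().split())
        return hits / len(tweets)

    def score(followed, tweets, flagged, lexicon, w_net=0.6, w_text=0.4):
        """Linear combination of the two feature families (illustrative weights)."""
        return w_net * network_feature(followed, flagged) + w_text * text_feature(tweets, lexicon)

    # Toy data: one network feature hit out of two, one lexicon hit in two tweets.
    flagged = {"user_a", "user_b"}
    lexicon = {"keyword1", "keyword2"}
    s = score(["user_a", "user_c"], ["keyword1 appears here", "nothing"], flagged, lexicon)
    ```

    In practice such features would feed a trained model rather than hand-set weights, but the fusion of per-user graph statistics with per-tweet text statistics is the core idea.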

    Responsible AI Research Needs Impact Statements Too

    All types of research, development, and policy work can have unintended, adverse consequences; work in responsible artificial intelligence (RAI), ethical AI, or ethics in AI is no exception.

    POTATO: The Portable Text Annotation Tool

    We present POTATO, the Portable text annotation tool, a free, fully open-source annotation system that 1) supports labeling many types of text and multimodal data; 2) offers easy-to-configure features to maximize the productivity of both deployers and annotators (convenient templates for common ML/NLP tasks, active learning, keypress shortcuts, keyword highlights, tooltips); and 3) supports a high degree of customization (editable UI, inserting pre-screening questions, attention and qualification tests). Experiments over two annotation tasks suggest that POTATO improves labeling speed through its specially-designed productivity features, especially for long documents and complex tasks. POTATO is available at https://github.com/davidjurgens/potato and will continue to be updated. Comment: EMNLP 2022 DEM
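    Among the productivity features listed is active learning. A common form is uncertainty sampling: surface the item the current model is least sure about so the annotator labels it next. The sketch below illustrates that general idea only; it is not POTATO's internal implementation, and the data is invented.

    ```python
    # Illustrative uncertainty sampling: pick the unlabeled item whose
    # model probability is closest to 0.5 (maximal uncertainty for a
    # binary classifier).

    def uncertainty_gap(prob):
        """Distance from total uncertainty (0.5); smaller = less certain."""
        return abs(prob - 0.5)

    def next_to_annotate(unlabeled):
        """Choose the (text, probability) pair the model is least sure about."""
        return min(unlabeled, key=lambda item: uncertainty_gap(item[1]))

    # Toy pool of (document, current model probability) pairs.
    pool = [("doc a", 0.95), ("doc b", 0.52), ("doc c", 0.10)]
    chosen = next_to_annotate(pool)
    ```

    Routing annotation effort this way tends to yield a better model per label than annotating in a fixed order, which is why annotation tools expose it as an option.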

    Riveter: Measuring Power and Social Dynamics Between Entities

    Riveter provides a complete, easy-to-use pipeline for analyzing verb connotations associated with entities in text corpora. We prepopulate the package with connotation frames of sentiment, power, and agency, which have demonstrated usefulness for capturing social phenomena, such as gender bias, in a broad range of corpora. For decades, lexical frameworks have been foundational tools in computational social science, digital humanities, and natural language processing, facilitating multifaceted analysis of text corpora. But working with verb-centric lexica specifically requires natural language processing skills, reducing their accessibility to other researchers. By organizing the language processing pipeline, providing complete lexicon scores and visualizations for all entities in a corpus, and providing functionality for users to target specific research questions, Riveter greatly improves the accessibility of verb lexica and can facilitate a broad range of future research.
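    The core mechanic of a verb-centric connotation lexicon is that each verb assigns scores (e.g., power) to its subject and object, and those scores accumulate per entity across a corpus. The toy sketch below shows that mechanic only; the lexicon entries, scores, and function names are invented for the example and are not Riveter's actual data or API.

    ```python
    # Toy connotation-frame scoring: each verb maps to power scores for its
    # subject and object; entity scores accumulate over (subj, verb, obj)
    # triples extracted from a corpus.

    LEXICON = {
        "commands": {"subject": +1.0, "object": -1.0},  # subject holds power
        "obeys":    {"subject": -1.0, "object": +1.0},  # object holds power
    }

    def score_entities(triples):
        """Accumulate per-entity power scores from (subject, verb, object) triples."""
        scores = {}
        for subj, verb, obj in triples:
            frame = LEXICON.get(verb)
            if frame is None:
                continue  # verb not covered by the lexicon
            scores[subj] = scores.get(subj, 0.0) + frame["subject"]
            scores[obj] = scores.get(obj, 0.0) + frame["object"]
        return scores

    triples = [("captain", "commands", "crew"), ("crew", "obeys", "captain")]
    scores = score_entities(triples)
    ```

    The "NLP skills" barrier the abstract mentions lies in producing the (subject, verb, object) triples in the first place, which requires parsing and coreference resolution; packaging that pipeline is what makes such lexica accessible.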

    Leveraging Digital Intelligence for Community Well-Being

    The world of information is mediated by digital technologies, and the growing influence of Artificial Intelligence (AI) on society, through its involvement in everyday life, is likely to present issues with lasting consequences. In the context of improving community well-being using AI, the knowledge, insights, and analysis required for activating such improvement necessitate a frame of reference. This frame needs to take into account how well-being is understood within the current paradigm of technological innovation as a driver of economic growth. The evaluation of well-being, often defined as an individual's cognitive and affective assessment of life, takes into account emotional reaction to events based on how satisfaction and fulfillment are discerned. It is a dynamic concept that involves subjective, social, and psychological dimensions, along with a state of being where human needs are met and one can act meaningfully, thus highlighting a relational element underlying social and community well-being. Transitions from a predominantly industrial society towards one that is information-led demand a strategic social design for AI. This article evaluates how well-being is understood within the current paradigm to offer a framework for leveraging AI for community well-being.

    Algorithmic discrimination at work

    The potential for algorithms to discriminate is now well-documented, and algorithmic management tools are no exception. Scholars have been quick to point to gaps in the equality law framework, but existing European law is remarkably robust. Where gaps do exist, they largely predate algorithmic decision-making. Careful judicial reasoning can resolve what appear to be novel legal issues, and policymakers should seek to reinforce European equality law rather than reform it. This article disentangles some of the knottiest questions on the application of the prohibition on direct and indirect discrimination to algorithmic management, from how the law should deal with arguments that algorithms are 'more accurate' or 'less biased' than human decision-makers, to the attribution of liability in the employment context. By identifying possible routes for judicial resolution, the article demonstrates the adaptable nature of existing legal obligations. The duty to make reasonable accommodations in the disability context is also examined, and options for combining top-level and individualised adjustments are explored. The article concludes by turning to enforceability. Algorithmic discrimination gives rise to a concerning paradox: on the one hand, automating previously human decision-making processes can render discriminatory criteria more traceable and outcomes more quantifiable; on the other hand, algorithmic decision-making processes are rarely transparent, and scholars consistently point to algorithmic opacity as the key barrier to litigation and enforcement action. Judicial and legislative routes to greater transparency are explored.