
    Investigating Algorithm Review Boards for Organizational Responsible Artificial Intelligence Governance

    Organizations including companies, nonprofits, governments, and academic institutions are increasingly developing, deploying, and utilizing artificial intelligence (AI) tools. Responsible AI (RAI) governance approaches at organizations have emerged as important mechanisms to address potential AI risks and harms. In this work, we interviewed 17 technical contributors across organization types (Academic, Government, Industry, Nonprofit) and sectors (Finance, Health, Tech, Other) about their experiences with internal RAI governance. Our findings illuminated the variety of organizational definitions of RAI and accompanying internal governance approaches. We summarized the first detailed findings on algorithm review boards (ARBs) and similar review committees in practice, including their membership, scope, and measures of success. We confirmed known robust model governance in finance sectors and revealed extensive algorithm and AI governance with ARB-like review boards in health sectors. Our findings contradict the idea that Institutional Review Boards alone are sufficient for algorithm governance and posit that ARBs are among the more impactful internal RAI governance approaches. Our results suggest that integration with existing internal regulatory approaches and leadership buy-in are among the most important attributes for success, and that financial tensions are the greatest challenge to effective organizational RAI. We make a variety of suggestions for how organizational partners can learn from these findings when building their own internal RAI frameworks. We outline future directions for developing and measuring the effectiveness of ARBs and other internal RAI governance approaches.

    Reproducibility: A primer on semantics and implications for research

    Science is allegedly in the midst of a reproducibility crisis, but questions of reproducibility and related principles date back nearly 80 years. Numerous controversies have arisen, especially since 2010, in a wide array of disciplines that stem from the failure to reproduce studies or their findings: biology, biomedical and preclinical research, business and organizational studies, computational sciences, drug discovery, economics, education, epidemiology and statistics, genetics, immunology, policy research, political science, psychology, and sociology. This monograph defines terms and constructs related to reproducible research, weighs key considerations and challenges in reproducing or replicating studies, and discusses transparency in publications that can support reproducible research goals. It attempts to clarify reproducible research, with its attendant (and confusing or even conflicting) lexicon, and aims to provide useful background, definitions, and practical guidance for all readers. Among its conclusions: First, researchers must become better educated about these issues, particularly the differences between the concepts and terms. The main benefit is being able to communicate clearly within their own fields and, more importantly, across multiple disciplines. In addition, scientists need to embrace these concepts as part of their responsibilities as good stewards of research funding and as providers of credible information for policy decision making across many areas of public concern. Finally, although focusing on transparency and documentation is essential, ultimately the goal is achieving the most rigorous, high-quality science possible given limitations on time, funding, or other resources.

    New science on the Open Science Grid

    The Open Science Grid (OSG) includes work to enable new science, new scientists, and new modalities in support of computationally based research. Significant sociological and organizational changes are frequently required in the transformation from existing practices to new ones. OSG leverages its deliverables to the large-scale physics experiment member communities to benefit new communities at all scales through activities in education, engagement, and the distributed facility. This paper gives both a brief general description and specific examples of new science enabled on the OSG. More information is available at the OSG web site: www.opensciencegrid.org.

    The Open Science Grid: Status and Architecture. The Open Science Grid Executive Board on behalf of the OSG Consortium: Ruth Pordes, Don Petravick (Fermi National Accelerator Laboratory)

    The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. The OSG project[1] is funded by the National Science Foundation and the Department of Energy Scientific Discovery through Advanced Computing program. The OSG project provides specific activities for the operation and evolution of the common infrastructure. The US ATLAS and US CMS collaborations contribute to and depend on OSG as the US infrastructure contributing to the Worldwide LHC Computing Grid, on which the LHC experiments distribute and analyze their data. Other stakeholders include the STAR RHIC experiment, the Laser Interferometer Gravitational-Wave Observatory (LIGO), the Dark Energy Survey (DES), and several Fermilab Tevatron experiments (CDF, D0, MiniBooNE, etc.). The OSG implementation architecture brings a pragmatic approach to enabling vertically integrated, community-specific distributed systems over a common horizontal set of shared resources and services. More information can be found at the OSG web site: www.opensciencegrid.org.