174 research outputs found

    Telling Stories: On Culturally Responsive Artificial Intelligence

    No full text
    Deceptively simple in form, these original stories introduce and legitimate perspectives on AI spanning five continents. Individually and together, they open the reader to a deeper conversation about cultural responsiveness at a time of rapid, often unilateral technological change.

    Algorithmic Governance and Democratic Discourse

    No full text
    Edward W. Felten (Center for Information Technology Policy, Princeton), moderator; Ryan Calo (U. of Washington Law), Hannah Bloch-Wehba (Texas A&M Law), Cary Coglianese (Penn Law), and Jennifer Raso (U. of Alberta Law), panelists

    Modeling Through

    No full text
    Theorists of justice have long imagined a decision-maker capable of acting wisely in every circumstance. Policymakers seldom live up to this ideal. They face well-understood limits, including an inability to anticipate the societal impacts of state intervention along a range of dimensions and values. Policymakers cannot see around corners or address societal problems at their roots. When it comes to regulation and policy-setting, policymakers are often forced, in the memorable words of political economist Charles Lindblom, to “muddle through” as best they can. Powerful new affordances, from supercomputing to artificial intelligence, have arisen in the decades since Lindblom’s 1959 article that stand to enhance policymaking. Computer-aided modeling holds promise in delivering on the broader goals of forecasting and systems analysis developed in the 1970s, arming policymakers with the means to anticipate the impacts of state intervention along several lines—to model, instead of muddle. A few policymakers have already dipped a toe into these waters; others are being told that the water is warm. The prospect that economic, physical, and even social forces could be modeled by machines confronts policymakers with a paradox. Society may expect policymakers to avail themselves of techniques already usefully deployed in other sectors, especially where statutes or executive orders require the agency to anticipate the impact of new rules on particular values. At the same time, “modeling through” holds novel perils that policymakers may be ill-equipped to address. Concerns include privacy, brittleness, and automation bias, of which law and technology scholars are keenly aware. They also include the extension and deepening of the quantifying turn in governance, a process that obscures normative judgments and recognizes only that which the machines can see. The water may be warm, but there are sharks in it. These tensions are not new. And there is danger in hewing to the status quo. (We should still pursue renewable energy even though wind turbines as presently configured waste energy and kill wildlife.) As modeling through gains traction, however, policymakers, constituents, and academic critics must remain vigilant. This being early days, American society is uniquely positioned to shape the transition from muddling to modeling.

    Why Govern Broken Tools?

    Get PDF
    In Assessing the Governance of Digital Contact Tracing in Response to COVID-19: Results of a Multi-National Study, Brian Hutler et al. ably compare two approaches to the governance of digital contact tracing (DCT). In this brief essay, I want to examine to what extent governance actually played a meaningful role in the failure of DCT. If DCT failed primarily for other reasons, then the authors’ normative suggestion to pursue “a new governance approach … for designing and implementing DCT technology going forward” may be misplaced.

    Artificial Intelligence and the Carousel of Soft Law

    No full text
    The impulse of so many organizations across nearly every sector of society to promulgate principles in response to the ascendance of artificial intelligence is understandable and predictable. There is some utility in public commitments to universal values in the context of AI, and common principles can lay a foundation for societal change. But ultimately what is missing is not knowledge about the content of ethics as much as political will. If, as both detractors and proponents claim, AI constitutes the transformative technology of our time, then one of the aspects of society that must transform is the law and legal institutions.

    The Automated Administrative State: A Crisis of Legitimacy

    Get PDF
    The legitimacy of the administrative state is premised on our faith in agency expertise. Despite their extra-constitutional structure, administrative agencies have been on firm footing for a long time in deference to their critical role in governing a complex, evolving society. They are delegated enormous power because they respond expertly and nimbly to evolving conditions. In recent decades, state and federal agencies have embraced a novel mode of operation: automation. Agencies rely more and more on software and algorithms in carrying out their delegated responsibilities. The automated administrative state, however, is demonstrably riddled with concerns. Legal challenges regarding the denial of benefits and rights, from travel to disability, have revealed a pernicious pattern of bizarre and unintelligible outcomes. Scholarship to date has explored the pitfalls of automation with a particular frame, asking how we might ensure that automation honors existing legal commitments such as due process. Missing from the conversation are broader, structural critiques of the legitimacy of agencies that automate. Automation abdicates the expertise and nimbleness that justify the administrative state, undermining the very case for the existence and authority of agencies. Yet the answer is not to deny agencies access to technology that other twenty-first century institutions rely upon. This Article points toward a positive vision of the administrative state that adopts tools only when they enhance, rather than undermine, the underpinnings of agency legitimacy.

    How Do You Solve a Problem Like Misinformation?

    No full text
    Understanding key distinctions between misinformation/disinformation, speech/action, and mistaken belief/conviction provides an opportunity to expand research and policy toward more constructive online communication.