
    Artificial Intelligence: Too Fragile to Fight?

    The article of record may be found at https://www.usni.org/magazines/proceedings/2022/february/artificial-intelligence-too-fragile-fight
    Information Warfare Essay Contest - First Prize
    Artificial intelligence (AI) has become the technical focal point for advancing naval and Department of Defense (DoD) capabilities. Secretary of the Navy Carlos Del Toro listed AI first among his priorities for innovating U.S. naval forces. Chief of Naval Operations Admiral Michael Gilday listed it as his top priority during his Senate confirmation hearing. This focus is appropriate: AI offers many promising breakthroughs in battlefield capability and agility in decision making. Yet the proposed advances come with substantial risk: automation, including AI, has persistent, critical vulnerabilities that must be thoroughly understood and adequately addressed if defense applications are to remain resilient and effective.
    Booz Allen Hamilton

    Understanding, Assessing, and Mitigating Safety Risks in Artificial Intelligence Systems

    Prepared for: Naval Air Warfare Development Center (NAVAIR)
    Traditional software safety techniques rely on validating software against a deductively defined specification of how the software should behave in particular situations. In the case of AI systems, specifications are often implicit or inductively defined. Data-driven methods are subject to sampling error, since practical datasets cannot provide exhaustive coverage of all possible events in a real physical environment. Traditional software verification and validation approaches may not apply directly to these novel systems, complicating the operation of systems safety analysis (such as implemented in MIL-STD 882). However, AI offers advanced capabilities, and it is desirable to ensure the safety of systems that rely on those capabilities. When AI technology is deployed in a weapon system, robot, or planning system, unwanted events are possible. Several techniques can support the evaluation process for understanding the nature and likelihood of unwanted events in AI systems and making risk decisions about naval employment. This research considers the state of the art, evaluating which techniques are most likely to be employable, usable, and correct. Techniques include software analysis, simulation environments, and mathematical determinations.
    Naval Air Warfare Development Center
    Naval Postgraduate School, Naval Research Program (PE 0605853N/2098)
    Approved for public release. Distribution is unlimited.
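    The abstract points to systems safety analysis in the style of MIL-STD 882, which assigns each identified hazard a risk level from a severity category and a probability level. A minimal sketch of that lookup follows; the category names follow MIL-STD-882E, but the cell assignments and the example hazard are illustrative approximations, not material from this research.

```python
# Sketch of a MIL-STD-882-style risk assessment matrix.
# Category names follow MIL-STD-882E; the cell assignments approximate the
# standard's matrix and should be checked against it. The hazard below is
# hypothetical, invented for illustration.

SEVERITY = ["Catastrophic", "Critical", "Marginal", "Negligible"]
PROBABILITY = ["Frequent", "Probable", "Occasional", "Remote", "Improbable"]

# RISK[severity_index][probability_index] -> risk level
RISK = [
    ["High",    "High",    "High",   "Serious", "Medium"],  # Catastrophic
    ["High",    "High",    "Serious","Medium",  "Medium"],  # Critical
    ["Serious", "Serious", "Medium", "Medium",  "Low"],     # Marginal
    ["Medium",  "Medium",  "Medium", "Low",     "Low"],     # Negligible
]

def assess(hazard: str, severity: str, probability: str) -> str:
    """Look up the risk level for a hazard, e.g. an unwanted AI behaviour."""
    level = RISK[SEVERITY.index(severity)][PROBABILITY.index(probability)]
    print(f"{hazard}: severity={severity}, probability={probability} -> {level}")
    return level

if __name__ == "__main__":
    # Hypothetical hazard for an ML perception component
    assess("misclassified obstacle in autonomous navigation", "Critical", "Remote")
```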

    Accountability in Computer Systems

    The article of record as published may be found at http://dx.doi.org/10.1093/oxfordhb/9780190067397.013.10
    This chapter addresses the relationship between AI systems and the concept of accountability. To understand accountability in the context of AI systems, one must begin by examining the various ways the term is used and the variety of concepts to which it is meant to refer. Accountability is often associated with transparency, the principle that systems and processes should be accessible to those affected through an understanding of their structure or function. For a computer system, this often means disclosure about the system’s existence, nature, and scope; scrutiny of its underlying data and reasoning approaches; and connection of the operative rules implemented by the system to the governing norms of its context. Transparency is a useful tool in the governance of computer systems, but only insofar as it serves accountability. There are other mechanisms available for building computer systems that support accountability of their creators and operators. Ultimately, accountability requires establishing answerability relationships that serve the interests of those affected by AI systems.

    This Thing Called Fairness: Disciplinary Confusion Realizing a Value in Technology

    The explosion in the use of software in important sociotechnical systems has renewed focus on the study of the way technical constructs reflect policies, norms, and human values. This effort requires the engagement of scholars and practitioners from many disciplines. And yet, these disciplines often conceptualize the operative values very differently while referring to them using the same vocabulary. The resulting conflation of ideas confuses discussions about values in technology at disciplinary boundaries. In the service of improving this situation, this paper examines the value of shared vocabularies, analytics, and other tools that facilitate conversations about values in light of these discipline-specific conceptualizations; discusses the role such tools play in furthering research and practice; outlines different conceptions of "fairness" deployed in discussions about computer systems; and provides an analytic tool for interdisciplinary discussions and collaborations around the concept of fairness. We use a case study of risk assessments in criminal justice applications both to motivate our effort, describing how conflation of different concepts under the banner of "fairness" led to unproductive confusion, and to illustrate the value of the fairness analytic by demonstrating how the rigorous analysis it enables can assist in identifying key areas of theoretical, political, and practical misunderstanding or disagreement, and, where desired, support alignment or collaboration in the absence of consensus.
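    To see how the same word can name different tests, consider a toy example: two widely cited formalizations of fairness evaluated on the same hypothetical predictions. The data and metric choices below are invented for illustration and are not the fairness analytic developed in the paper.

```python
# Toy records: (group, true_label, predicted_label). Entirely made up.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 0),
]

def selection_rate(group: str) -> float:
    """Fraction of the group predicted positive (a 'demographic parity' reading)."""
    rows = [r for r in records if r[0] == group]
    return sum(pred for _, _, pred in rows) / len(rows)

def true_positive_rate(group: str) -> float:
    """Fraction of qualified members predicted positive (an 'equal opportunity' reading)."""
    rows = [r for r in records if r[0] == group and r[1] == 1]
    return sum(pred for _, _, pred in rows) / len(rows)

for g in ("A", "B"):
    print(f"group {g}: selection rate = {selection_rate(g):.2f}, "
          f"TPR = {true_positive_rate(g):.2f}")

# Both groups are selected at the same rate, so a demographic-parity notion of
# fairness is satisfied; yet qualified members of group B are selected half as
# often as those of group A, so an equal-opportunity notion is violated. Two
# people can call the same system fair and unfair without disagreeing about
# any of the numbers.
```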

    System Safety Engineering for Social and Ethical ML Risks: A Case Study

    Governments, industry, and academia have undertaken efforts to identify and mitigate harms in ML-driven systems, with a particular focus on the social and ethical risks of ML components in complex sociotechnical systems. However, existing approaches are largely disjointed, ad hoc, and of unknown effectiveness. Systems safety engineering is a well-established discipline with a track record of identifying and managing risks in many complex sociotechnical domains. We adopt the natural hypothesis that tools from this domain could serve to enhance risk analyses of ML in its context of use. To test this hypothesis, we apply a "best of breed" systems safety analysis, Systems Theoretic Process Analysis (STPA), to a specific high-consequence system with an important ML-driven component, namely the Prescription Drug Monitoring Programs (PDMPs) operated by many U.S. states, several of which rely on an ML-derived risk score. We focus in particular on how this analysis can extend to identifying social and ethical risks and developing concrete design-level controls to mitigate them.
    Comment: 14 pages, 5 figures, 3 tables. Accepted to the 36th Conference on Neural Information Processing Systems, Workshop on ML Safety (NeurIPS 2022).
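    One step of STPA is enumerating unsafe control actions, i.e., ways a control action can contribute to a hazard in a given context. The sketch below illustrates that bookkeeping for a hypothetical prescriber informed by an ML-derived risk score; the controller, contexts, and hazards are invented for illustration and are not the PDMP analysis reported in the paper.

```python
from dataclasses import dataclass

# STPA's standard categories of unsafe control action.
UCA_TYPES = [
    "not provided when needed",
    "provided when unsafe",
    "provided too early / too late / out of order",
    "stopped too soon / applied too long",
]

@dataclass
class UnsafeControlAction:
    controller: str
    control_action: str
    uca_type: str
    context: str
    linked_hazard: str

# Hypothetical entries for a prescriber who consults an ML-derived risk score.
ucas = [
    UnsafeControlAction(
        controller="prescriber",
        control_action="deny prescription",
        uca_type=UCA_TYPES[1],
        context="risk score inflated by proxy features correlated with demographics",
        linked_hazard="patient with legitimate need loses access to medication",
    ),
    UnsafeControlAction(
        controller="prescriber",
        control_action="deny prescription",
        uca_type=UCA_TYPES[0],
        context="score under-reports risk because the patient is absent from the database",
        linked_hazard="unsafe prescribing proceeds unchecked",
    ),
]

for u in ucas:
    print(f"[{u.uca_type}] {u.controller} -> {u.control_action}: "
          f"{u.context} ({u.linked_hazard})")
```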

    Mixcoin: Anonymity for Bitcoin with Accountable Mixes (Full version)

    We propose Mixcoin, a protocol to facilitate anonymous payments in Bitcoin and similar cryptocurrencies. We build on the emergent phenomenon of currency mixes, adding an accountability mechanism to expose theft. We demonstrate that the incentives of mixes and clients can be aligned to ensure that rational mixes will not steal. Our scheme is efficient and fully compatible with Bitcoin. Against a passive attacker, our scheme provides an anonymity set of all other users mixing coins contemporaneously. This is an interesting new property with no clear analog in better-studied communication mixes. Against active attackers, our scheme offers similar anonymity to traditional communication mixes.
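    The accountability mechanism turns on a signed warranty: the mix commits to the terms of the exchange before any coins move, so a cheated client can publish the warranty as proof of theft. A minimal sketch of that idea follows; the field names loosely echo the paper's parameters, the values are hypothetical, and an HMAC stands in for the mix's public-key signature purely to keep the example self-contained.

```python
import hashlib
import hmac
import json

MIX_KEY = b"demo-only-mix-signing-key"  # placeholder for the mix's signing key

def sign_warranty(terms: dict) -> str:
    """Mix-side: sign the agreed terms before any coins are sent."""
    blob = json.dumps(terms, sort_keys=True).encode()
    return hmac.new(MIX_KEY, blob, hashlib.sha256).hexdigest()

def verify_warranty(terms: dict, signature: str) -> bool:
    """Anyone can check that the mix committed to exactly these terms."""
    return hmac.compare_digest(sign_warranty(terms), signature)

# Client proposes terms; the mix signs them before the client pays.
terms = {
    "value": 0.1,                            # chunk size to be mixed (BTC)
    "t1": 700000,                            # block height by which the client must pay
    "t2": 700144,                            # block height by which the mix must pay out
    "out_address": "client-fresh-address",   # hypothetical output address
    "fee_rate": 0.01,                        # mixing fee rate
    "nonce": "example-nonce",                # ties the warranty to this exchange
}
warranty = sign_warranty(terms)

# If the mix never pays out by t2, the client publishes (terms, warranty);
# the signature shows the mix agreed to these terms and then broke them.
assert verify_warranty(terms, warranty)
print("warranty verifies; publishing it would expose a cheating mix")
```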

    Accountable Algorithms

    Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for IRS audit, grant or deny immigration visas, and more. The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decisionmakers and often fail when applied to computers instead. For example, how do you judge the intent of a piece of software? Because automated decision systems can return potentially incorrect, unjustified, or unfair results, additional approaches are needed to make such systems accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness. We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the issues analyzing code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it discloses private information or permits tax cheats or terrorists to game the systems determining audits or security screening. The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities—subtler and more flexible than total transparency—to design decisionmaking algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of automated decisions, but also—in certain cases—the governance of decisionmaking in general. The implicit (or explicit) biases of human decisionmakers can be difficult to find and root out, but we can peer into the “brain” of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterward. The technological tools introduced in this Article apply widely. They can be used in designing decisionmaking processes from both the private and public sectors, and they can be tailored to verify different characteristics as desired by decisionmakers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also engender policy discussions and closer looks at legal standards. As such, these tools have far-reaching implications throughout law and society. Part I of this Article provides an accessible and concise introduction to foundational computer science techniques that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decisions or the processes by which the decisions were reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department’s diversity visa lottery. 
In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. We also show how automated decisionmaking may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. And lastly, in Part IV, we propose an agenda to further synergistic collaboration between computer science, law, and policy to advance the design of automated decision processes for accountability.
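    A small sketch of the "declare beforehand, verify afterward" idea behind procedural regularity: the decision maker commits to its rule and random seed before a lottery runs, then reveals them so anyone can recompute the outcome. The rule, seed, and applicant IDs below are hypothetical, and only the commitment step is shown; the article's toolkit also includes zero-knowledge proofs that let the rule or inputs stay secret, which this sketch omits.

```python
import hashlib
import hmac

def commit(data: bytes) -> str:
    """Hash-based commitment: published before any decisions are made."""
    return hashlib.sha256(data).hexdigest()

# --- Before any decisions: publish a commitment to the rule and the seed ---
seed = b"secret-lottery-seed"  # hypothetical secret randomness
rule = b"select the 2 applicants with the lowest HMAC-SHA256(seed, applicant_id)"
published_commitment = commit(seed + b"||" + rule)

# --- Run the announced rule -------------------------------------------------
applicants = ["A-1001", "A-1002", "A-1003", "A-1004", "A-1005"]

def score(applicant_id: str) -> str:
    return hmac.new(seed, applicant_id.encode(), hashlib.sha256).hexdigest()

selected = sorted(applicants, key=score)[:2]

# --- Afterward: reveal seed and rule; anyone can verify the outcome ---------
assert commit(seed + b"||" + rule) == published_commitment  # reveal matches commitment
assert sorted(applicants, key=score)[:2] == selected        # same rule reproduces same result
print("selected:", selected)
```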