259,176 research outputs found

    Contracting for Algorithmic Accountability

    As local, state, and federal governments increase their reliance on artificial intelligence (AI) decision-making tools designed and operated by private contractors, public concerns over the accountability and transparency of such AI tools increase as well. But current calls to respond to these concerns by banning governments from using AI would only deny society the benefits that prudent use of such technology can provide. In this Article, we argue that government agencies should pursue a more nuanced and effective approach to governing governmental use of AI by structuring their procurement contracts for AI tools and services in ways that promote responsible use of algorithms. By contracting for algorithmic accountability, government agencies can act immediately, without any need for new legislation, to reassure the public that machine-learning algorithms will be deployed responsibly. Furthermore, unlike the adoption of legislation, a contracting approach to AI governance can be tailored to meet the needs of specific agencies and particular uses. Contracting can also provide a means for government to foster improved deployment of AI in the private sector, as vendors that serve government agencies may shift their practices more generally to foster responsible AI practices with their private-sector clients. As a result, we argue that government procurement officers and agency officials should consider several key governance issues in their contract negotiations with AI vendors. Perhaps the most fundamental issue relates to vendors’ claims to trade secret protection—an issue that we show can be readily addressed during the procurement process. Government contracts can be designed to balance legitimate protection of proprietary information with the vital public need for transparency about the design and operation of algorithmic systems used by government agencies. We further urge consideration in government contracting of other key governance issues, including data privacy and security, the use of algorithmic impact statements or audits, and the role for public participation in the development of AI systems. In an era of increasing governmental reliance on artificial intelligence, public contracting can serve as an important and tractable governance strategy to promote the responsible use of algorithmic tools.

    The Needed Executive Actions to Address the Challenges of Artificial Intelligence

    While various forms of artificial intelligence tools and applications have been in development for many years, it is the recent deployment of large language models (LLMs, also referred to here as "advanced AI"), such as OpenAI's ChatGPT, that has sparked both global interest and concern. Although advanced AI has recently captured public attention, other forms of AI—already in use in government and industry—also raise concerns due to their potential to inflict harm. The policy issues and recommendations below apply to currently available automated systems—with special consideration of LLM-based AI applications—and with an eye to other forms of advanced AI on the horizon. President Joe Biden should address the challenges and opportunities of AI with an immediate executive order to implement the Blueprint for an AI Bill of Rights and establish other safeguards to ensure automated systems deliver on their promise to improve lives, expand opportunity, and spur discovery.

    Building Bridges: Generative Artworks to Explore AI Ethics

    In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society. Across academia, industry, and government bodies, a variety of endeavours are being pursued towards enhancing AI ethics. A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests. These different perspectives are often not understood, due in part to communication gaps. For example, AI researchers who design and develop AI models are not necessarily aware of the instability induced in consumers' lives by the compounded effects of AI decisions. Educating different stakeholders about their roles and responsibilities in the broader context becomes necessary. In this position paper, we outline some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools for surfacing different perspectives. We hope to spark interdisciplinary discussions about computational creativity broadly as a tool for enhancing AI ethics.

    The Threat of Offensive AI to Organizations

    AI has provided us with the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, positive tools can also be used for negative purposes. In particular, cyber adversaries can use AI to enhance their attacks and expand their campaigns. Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations. For example, how does an AI-capable adversary impact the cyber kill chain? Does AI benefit the attacker more than the defender? What are the most significant AI threats facing organizations today, and what will be their impact on the future? In this study, we explore the threat of offensive AI to organizations. First, we present the background and discuss how AI changes the adversary’s methods, strategies, goals, and overall attack model. Then, through a literature review, we identify 32 offensive AI capabilities which adversaries can use to enhance their attacks. Finally, through a panel survey spanning industry, government, and academia, we rank the AI threats and provide insights on the adversaries.

    xxAI - Beyond Explainable AI

    This is an open access book. Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNN), have achieved better predictive performance, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable and understandable for humans. Explainable AI is receiving huge interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed. After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.

    Artificial intelligence for smart patient care: transforming future of nursing practice

    Artificial intelligence (AI) has been described as “the new electricity” because it continually transforms today’s world by affecting our way of living in many different spheres. Extensive government programs in most countries, and the enhanced technology investments that accompany them, are set to rapidly advance AI. Consequently, healthcare teams will be greatly affected by intelligent tools and systems being launched into healthcare and patient homecare settings. AI encompasses a variety of functions under an umbrella of terms such as machine learning (ML), deep learning, computer vision, natural language processing (NLP), and automated speech recognition (ASR) technologies. Each of these, used individually or in combination, has the potential to add intelligence to applications. Understanding AI in the medical field is crucial for nurses. Utilization of AI in nursing will accelerate innovation and speed up decision-making, saving nurses’ time and improving patient outcomes and satisfaction with the nursing care provided. Of utmost importance when partnering with AI is the requirement that AI be safe and effective. A major concern for AI practitioners at present is managing bias. To realize the full potential of AI, stakeholders (AI developers and users) need to be confident about two aspects: (1) the reliability and validity of the datasets used and (2) the transparency of AI-based systems. The issues encompassing AI are novel yet complex, and there is still much to be learnt about them. Nursing experience, knowledge, and skills will transition into new ways of thinking and processing information. This will give nurses new roles, such as information integrators, data managers, informatics specialists, health coaches, and, above all, deliverers of compassionate care: not replaced by AI technologies but supported by them.

    “Sorry I Didn’t Hear You.” The Ethics of Voice Computing and AI in High Risk Mental Health Populations

    This article examines the ethical and policy implications of using voice computing and artificial intelligence to screen for mental health conditions in low-income and minority populations. Mental health conditions are unequally distributed among these groups, a disparity further exacerbated by increased barriers to psychiatric care. Advancements in voice computing and artificial intelligence promise increased screening and more sensitive diagnostic assessments. Machine learning algorithms have the capacity to identify vocal features that can screen for depression. However, in order to screen for mental health pathology, computer algorithms must first be able to account for the fundamental differences in vocal characteristics between low-income minority populations and other groups. While researchers have envisioned this technology as a beneficent tool, it could be repurposed to scale up discrimination or exploitation. Studies on the use of big data and predictive analytics demonstrate that low-income minority populations already face significant discrimination. This article urges researchers developing AI tools for vulnerable populations to consider the full ethical, legal, and social impact of their work. Without a national, coherent framework of legal regulations and ethical guidelines to protect vulnerable populations, it will be difficult to limit AI applications to solely beneficial uses. Without such protections, vulnerable populations will rightfully be wary of participating in such studies, which will in turn undermine the robustness of such tools. Thus, for research involving AI tools like voice computing, it is in the research community's interest to demand more guidance and regulatory oversight from the federal government.