707 research outputs found

    Machines Like Me: A Proposal on the Admissibility of Artificially Intelligent Expert Testimony

    With the rapidly expanding sophistication of artificial intelligence systems, and their growing reliability and cost-effectiveness for solving problems, the current trend of admitting testimony based on artificially intelligent (AI) systems is only likely to grow. In that context, it is imperative to ask what rules of evidence judges today should apply to such evidence. To answer that question, we provide an in-depth review of expert systems, machine learning systems, and neural networks. Based on that analysis, we contend that evidence from only certain types of AI systems meets the requirements for admissibility, while evidence from other systems does not. The line between admissible and inadmissible AI evidence is a function of the opaqueness of the AI system’s underlying computational methodology and the court’s ability to assess that methodology. The admission of AI evidence also requires us to navigate pitfalls, including the difficulty of explaining AI systems’ methodology and issues concerning the right to confront witnesses. Based on our analysis, we offer several policy proposals that would address weaknesses or lack of clarity in the current system. First, in light of the long-standing concern that jurors would allow expertise to overcome their own assessment of the evidence and blindly agree with the “infallible” result of advanced-computing AI, we propose that jury instruction commissions, judicial panels, circuits, or other parties who draft instructions consider adopting a cautionary instruction for AI-based evidence. Such an instruction should remind jurors that AI-based evidence is only one part of the analysis, that the opinions it generates are only as good as the underlying analytical methodology, and that the decision to accept or reject the evidence, in whole or in part, ultimately remains with the jury alone. Second, because the admission of AI-based evidence depends largely on the computational methodology underlying the analysis, we propose that, for AI evidence to be admissible, the underlying methodology must be transparent, because judicial assessment of AI technology relies on the ability to understand how it functions.
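    To make the opacity distinction concrete, here is a minimal, invented sketch (not from the article) of a rule-based expert system whose entire inference chain a court can enumerate, in contrast to a learned model whose “methodology” is distributed across opaque weights. The forensic rules below are hypothetical placeholders.

    # Illustrative sketch only: a rule-based expert system is auditable
    # because every inference step is an explicit, human-readable rule.
    # The glass-fragment rules here are invented for illustration.

    RULES = [
        # (condition, conclusion) pairs an examiner could read aloud in court
        (lambda s: s["refractive_index_match"] and s["elemental_match"],
         "fragments consistent with a common source"),
        (lambda s: not s["refractive_index_match"],
         "fragments inconsistent with a common source"),
    ]

    def expert_system_opinion(sample: dict) -> list[str]:
        """Return every conclusion whose condition fires; the rule list
        itself is the full, inspectable methodology."""
        return [conclusion for condition, conclusion in RULES if condition(sample)]

    print(expert_system_opinion(
        {"refractive_index_match": True, "elemental_match": True}
    ))
    # A neural network answering the same question offers no comparable
    # rule trace: its reasoning lives in learned weights, which is the
    # opacity problem the authors tie to admissibility.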

    The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice

    Artificial intelligence (“AI”) is increasingly used to make important decisions that affect individuals and society. As governments and corporations use AI more pervasively, one of the most troubling trends is that developers so often design it to be a “black box.” Designers create AI models too complex for people to understand, or they conceal how AI functions. Policymakers and the public increasingly sound alarms about black box AI. A particularly pressing area of concern has been criminal cases, in which a person’s life, liberty, and public safety can be at stake. In the United States and globally, despite concerns that technology may deepen pre-existing racial disparities and overreliance on incarceration, black box AI has proliferated in areas such as DNA mixture interpretation, facial recognition, recidivism risk assessment, and predictive policing. Despite constitutional criminal procedure protections, judges have often embraced claims that AI should remain undisclosed in court. Both champions and critics of AI, however, mistakenly assume that we inevitably face a trade-off: black box AI may be incomprehensible, but it performs more accurately. But that is not so. In this Article, we question the basis for this assumption, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error-prone, and it may reflect preexisting racial and socioeconomic disparities. Unless AI is interpretable, decisionmakers like lawyers and judges who must use it will not be able to detect those underlying errors, much less understand what the AI recommendation means. Debunking the black box performance myth has implications for constitutional criminal procedure rights and legislative policy. Judges and lawmakers have been reluctant to impair the perceived effectiveness of black box AI by requiring disclosures to the defense. Absent some compelling—or even credible—government interest in keeping AI a black box, and given the substantial constitutional rights and public safety interests at stake, we argue that the burden rests on the government to justify any departure from the norm that all lawyers, judges, and jurors can fully understand AI. If AI is to be used at all in settings like the criminal system—and we do not suggest that it necessarily should—the presumption should be in favor of glass box AI, absent strong evidence to the contrary. We conclude by calling for national and local regulation to safeguard, in all criminal cases, the right to glass box AI.
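    A hedged sketch of the accuracy comparison the cited computer-science literature runs: an interpretable model and a black-box model trained on the same tabular data. The dataset below is synthetic, not a real criminal-justice dataset, and the feature names are placeholders.

    # Sketch of a glass-box vs. black-box comparison on the same data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    glass_box = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    black_box = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    print("glass box:", accuracy_score(y_te, glass_box.predict(X_te)))
    print("black box:", accuracy_score(y_te, black_box.predict(X_te)))

    # Unlike the forest's hundreds of trees, every glass-box weight can
    # be read, reported, and challenged by the defense:
    for name, w in zip([f"feature_{i}" for i in range(8)], glass_box.coef_[0]):
        print(name, round(w, 3))

    On noisy, biased tabular data of the kind the Article describes, the interpretable model often matches the black box, which is the empirical point behind the glass-box presumption.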

    Technological Tethereds: Potential Impact of Untrustworthy Artificial Intelligence in Criminal Justice Risk Assessment Instruments

    Issues of racial inequality and violence are front and center today, as are issues surrounding artificial intelligence (“AI”). This Article, written by a law professor who is also a computer scientist, takes a deep dive into understanding how and why hacked and rogue AI creates unlawful and unfair outcomes, particularly for persons of color. Black Americans are disproportionately represented in the criminal justice system, and their stories are obfuscated. The seemingly endless back-to-back murders of George Floyd, Breonna Taylor, Ahmaud Arbery, and, heartbreakingly, countless others have finally shaken the United States from its slumber and onto a journey towards intentional criminal justice reform. Myths about Black crime and criminals are embedded in the data collected by AI and do not tell the truth about race and crime. However, the number of Black people harmed by hacked and rogue AI will dwarf all historical records, and the gravity of harm is incomprehensible. The lack of technical transparency and legal accountability leaves wrongfully convicted defendants without legal remedies if they are unlawfully detained based on a cyberattack, faulty or hacked data, or rogue AI. Scholars and engineers acknowledge that the artificial intelligence that is giving recommendations to law enforcement, prosecutors, judges, and parole boards lacks the common sense of an eighteen-month-old child. This Article reviews the ways AI is used in the legal system and the courts’ response to this use. It outlines the design schemes of proprietary risk assessment instruments used in the criminal justice system, outlines potential legal theories for victims, and provides recommendations for legal and technical remedies for victims of hacked data in criminal justice risk assessment instruments. It concludes that, with proper oversight, AI can increase fairness in the criminal justice system, but that without such oversight, AI-based products will further exacerbate the extinguishment of liberty interests enshrined in the Constitution. According to anti-lynching advocate Ida B. Wells-Barnett, “The way to right wrongs is to turn the light of truth upon them.” Thus, transparency is vital to safeguarding equity through AI design and must be the first step. The Article seeks ways to provide that transparency, for the benefit of all of America, but particularly persons of color, who are far more likely to be impacted by AI deficiencies. It also suggests legal reforms that will help plaintiffs recover when AI goes rogue.

    South American Expert Roundtable: increasing adaptive governance capacity for coping with unintended side effects of digital transformation

    This paper presents the main messages of a South American expert roundtable (ERT) on the unintended side effects (“unseens”) of digital transformation. The input of the ERT comprised 39 propositions from 20 experts representing 11 different perspectives. The two-day ERT discussed the main drivers and challenges as well as vulnerabilities or unseens, and provided suggestions regarding: (i) the mechanisms underlying major unseens; (ii) possible ways in which rebound effects of digital transformation may become the subject of overarching research in three main categories of impact: development factors, society, and individuals; and (iii) a set of potential action domains for transdisciplinary follow-up processes, including a case study in Brazil. A content analysis of the propositions and related mechanisms provided insights into the genesis of unseens by identifying 15 interrelated causal mechanisms related to critical issues and concerns. Additionally, a cluster analysis (CLA) was applied to structure the challenges and critical developments in South America; a sketch of this kind of analysis follows below. The discussion elaborated the genesis, dynamics, and impacts of (groups of) unseens such as the digital divide (which affects most countries that are not included in the development of digital business, management, and production tools) or the challenge of restructuring small- and medium-sized enterprises (whose services are substituted by digital devices). We identify issues and effects specific to most South American countries, such as a lack of governmental structure, challenging geographical conditions (e.g., for inclusion in high-performance transmission infrastructure), or the limited digital readiness of wide parts of society. One scientific contribution of the paper lies in the presented methodology, which provides insights into the phenomena, the causal chains underlying “wanted/positive” and “unwanted/negative” effects, and the processes and mechanisms of societal changes caused by digitalization.
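    The paper does not publish its exact clustering pipeline, so the following is only an assumed illustration of how propositions can be grouped: embed the texts and cluster them hierarchically. The proposition strings are invented placeholders, not the experts' actual input.

    # Assumed illustration of a cluster analysis over expert propositions;
    # the real ERT pipeline and proposition texts are not reproduced here.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import AgglomerativeClustering

    propositions = [
        "digital divide excludes regions from platform economies",
        "SME services are substituted by digital devices",
        "governmental structures lag behind digitalization",
        "high-performance transmission networks bypass rural areas",
    ]

    X = TfidfVectorizer().fit_transform(propositions).toarray()
    labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
    for text, label in zip(propositions, labels):
        print(label, text)  # co-clustered propositions suggest shared mechanisms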

    Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods

    Automated decisions are increasingly part of everyday life, but how can the public scrutinize, understand, and govern them? To begin to explore this, Omidyar Network has, in partnership with Upturn, published Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods. The report is based on an extensive review of computer and social science literature, a broad array of real-world attempts to study automated systems, and dozens of conversations with global digital rights advocates, regulators, technologists, and industry representatives. It maps out the landscape of public scrutiny of automated decision-making, both in terms of what civil society was or was not doing in this nascent sector and what laws and regulations were or were not in place to help regulate it. Our aim in exploring this is three-fold: 1) we hope it will help civil society actors consider how much they have to gain in empowering the public to effectively scrutinize, understand, and help govern automated decisions; 2) we think it can start laying a policy framework for this governance, adding to the growing literature on the social and economic impact of such decisions; and 3) we are optimistic that the report's findings and analysis will inform other funders' decisions in this important and growing field.

    COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements

    Warning: This paper contains content that may be offensive or upsetting. Understanding the harms and offensiveness of statements requires reasoning about the social and situational context in which they are made. For example, the utterance "your English is very good" may implicitly signal an insult when uttered by a white man to a non-white colleague, but uttered by an ESL teacher to their student it would be interpreted as a genuine compliment. Such contextual factors have been largely ignored by previous approaches to toxic language detection. We introduce COBRA frames, the first context-aware formalism for explaining the intents, reactions, and harms of offensive or biased statements grounded in their social and situational context. We create COBRACORPUS, a dataset of 33k potentially offensive statements paired with machine-generated contexts and free-text explanations of offensiveness, implied biases, speaker intents, and listener reactions. To study the contextual dynamics of offensiveness, we train models to generate COBRA explanations, with and without access to the context. We find that explanations by context-agnostic models are significantly worse than those by context-aware ones, especially in situations where the context inverts the statement's offensiveness (a 29% accuracy drop). Our work highlights the importance and feasibility of contextualized NLP by modeling social factors. (Accepted to Findings of ACL 2023.)
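    For readers unfamiliar with the formalism, here is a minimal sketch of what a COBRA-style frame could look like as a data structure. The field names paraphrase the dimensions the abstract lists; they are not the authors' exact COBRACORPUS schema.

    # Minimal sketch of a COBRA-style frame; field names are paraphrased
    # from the abstract, not taken from the released corpus.
    from dataclasses import dataclass

    @dataclass
    class CobraFrame:
        statement: str
        context: str           # the social/situational grounding
        speaker_intent: str
        listener_reaction: str
        implied_bias: str
        offensiveness: str     # free-text explanation

    compliment = CobraFrame(
        statement="your English is very good",
        context="ESL teacher speaking to their student",
        speaker_intent="praise the student's progress",
        listener_reaction="encouraged",
        implied_bias="none",
        offensiveness="genuine compliment in this context",
    )

    insult = CobraFrame(
        statement="your English is very good",
        context="white man speaking to a non-white colleague",
        speaker_intent="unclear, possibly othering",
        listener_reaction="alienated",
        implied_bias="assumes the colleague is a non-native outsider",
        offensiveness="implicit insult; context inverts the surface reading",
    )
    # Same statement, two frames: exactly the contextual inversion on which
    # the paper reports context-agnostic models losing 29% accuracy.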

    Behind the scenes of emerging technologies: opportunities, challenges, and solution approaches along a socio-technical continuum

    Digitalization is a socio-technical phenomenon that shapes our lives as individuals, economies, and societies. The perceived complexity of technologies continues to increase, and technology convergence makes a clear separation between technologies impossible. A good example of this is the Internet of Things (IoT) with its embedded Artificial Intelligence (AI). Furthermore, a separation of the social and the technical component has become all but impossible, of which there is increasing awareness in the Information Systems (IS) community. Overall, emerging technologies such as AI or IoT are becoming less understandable and transparent, which is evident, for instance, when AI is described in terms of a black box. This opacity undermines humans' trust in emerging technologies, which, however, is crucial for both their usage and spread, especially as emerging technologies start to perform tasks that bear high risks for humans, such as autonomous driving. Critical perspectives on emerging technologies are often discussed in terms of ethics, including such aspects as the responsibility for decisions made by algorithms, limited data privacy, and the moral values that are encoded in technology. In sum, the varied opportunities that come with digitalization are accompanied by significant challenges. Research on the negative ramifications of AI is crucial if we are to foster a human-centered technological development that is not simply driven by opportunities but by utility for humanity. As the IS community is positioned at the intersection of the technological and the social context, it plays a central role in finding answers to the question of how the advantages can outweigh the challenges that come with emerging technologies. Challenges are examined under the label "dark side of IS," a research area which receives considerably less attention in the existing literature than the positive aspects (Gimpel & Schmied, 2019). With its focus on challenges, this dissertation aims to counterbalance this. Since the remit of IS research is the entire information system, rather than merely the technology, humanistic and instrumental goals ought to be considered in equal measure. This dissertation follows calls for a healthy distribution of research along the so-called socio-technical continuum (Sarker et al., 2019), broadening its focus to include the social as well as the technical, rather than looking at one or the other. With that in mind, this dissertation aims to advance knowledge on IS with regard to opportunities, and in particular challenges, of two emerging technologies, IoT and AI, along the socio-technical continuum. It provides novel insights for individuals to better understand opportunities, but in particular possible negative side effects; it guides organizations on how to address these challenges; and it suggests not only the necessity of further research along the socio-technical continuum but also several ideas on where to take this future research. Chapter 2 contributes to research on the opportunities and challenges of IoT. Section 2.1 identifies and structures opportunities that IoT devices provide for retail commerce customers: by conducting a structured literature review, affordances are identified, and by examining a sample of 337 IoT devices, completeness and parsimony are validated. Section 2.2 takes a close look at the ethical challenges posed by IoT, also known as IoT ethics. Based on a structured literature review, it first identifies and structures IoT ethics, then provides detailed guidance for further research in this important and yet under-appreciated field of study. Together, these two research articles underline that IoT has the potential to radically transform our lives, but they also illustrate the urgent need for further research on possible ethical issues that are associated with IoT's specific features. Chapter 3 contributes to research on AI along the socio-technical continuum. Section 3.1 examines the algorithms underlying AI: through a structured literature review and semi-structured interviews analyzed with a qualitative content analysis, this section identifies, structures, and communicates concerns about algorithmic decision-making, in order to improve offerings and services on that basis. Section 3.2 takes a deep dive into the concept of moral agency in AI to discuss whether responsibility in human-computer interaction can be better grasped with the concept of agency. In Section 3.3, data from an online experiment with a self-developed AI system is used to examine the role of a user's domain-specific expertise in trusting and following suggestions from AI decision support systems (see the sketch after this abstract). Finally, Section 3.4 draws on design science research to present a framework for ethical software development that considers ethical issues from the beginning of the design and development process. By looking at the multiple facets of this topic, these four research articles are intended to guide practitioners in deciding which challenges to consider during product development. With a view to subsequent steps, they also offer first ideas on how these challenges could be addressed. Furthermore, the articles offer a basis for further, solution-oriented research on AI's challenges and encourage users to form their own informed opinions.
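    The abstract does not say how Section 3.3 quantifies "following suggestions." One standard measure in advice-taking experiments is weight of advice (WOA), sketched below purely as an assumption about how such following could be measured, not as the dissertation's actual method.

    # Weight of advice (WOA), a common measure in advice-taking studies;
    # whether Section 3.3 uses this exact measure is an assumption.

    def weight_of_advice(initial: float, advice: float, final: float) -> float:
        """WOA = (final - initial) / (advice - initial).
        0 means the advice was ignored; 1 means it was fully adopted."""
        if advice == initial:
            raise ValueError("advice equals initial estimate; WOA undefined")
        return (final - initial) / (advice - initial)

    # A novice shifting most of the way toward the AI suggestion:
    print(weight_of_advice(initial=100.0, advice=140.0, final=132.0))  # 0.8
    # A domain expert barely moving:
    print(weight_of_advice(initial=100.0, advice=140.0, final=104.0))  # 0.1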

    The future is now: liberal democracies and the challenge of artificial intelligence

    Current technological developments, such as Artificial Intelligence, Big Data, and Machine Learning, have a significant impact on multiple, if not all, dimensions of our lives. They are deployed, used, and fed by people in a social, cultural, and political context. They can also be used to change and influence that same context, namely the governmental system, the social dimension, and individuals' rights and liberties in a political regime. This dissertation analyzes how these technologies are being deployed in different dimensions of liberal democratic societies. The analysis delves into the dynamics of Big Tech data collection, the challenges posed by algorithmic decision-making tools to democratic institutions, and the profound and tacit impact of surveillance technology. It focuses on the major challenges these technological deployments may represent, separately and as a whole, from a human rights and ethical perspective as well as an institutional one. The dissertation aims to contribute to a better understanding of the impact of diverse uses of AI and of policy options in the socio-political dimension, instigating awareness of and reflection on the paths being built. This reassessment is particularly focused on the possibility of AI serving as a strengthener and enhancer of liberal democracies, their values, and their institutional structures.

    A Taxonomy of Machine Learning-Based Fraud Detection Systems

    As fundamental changes in information systems drive digitalization, today's heavy reliance on computers significantly increases the risk of fraud. Existing literature promotes machine learning as a potential solution approach for the problem of fraud detection, as it is able to detect patterns in large datasets efficiently. However, there is a lack of clarity and awareness about which components and functionalities of machine learning-based fraud detection systems exist and how these systems can be classified consistently. We draw on 54 relevant machine learning-based fraud detection systems identified in the literature to address this research gap and develop a taxonomic scheme. By deriving three archetypes of machine learning-based fraud detection systems, the taxonomy paves the way for research and practice to understand and advance fraud detection knowledge to combat fraud and abuse.
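    As a purely illustrative instance of the class of systems the taxonomy covers, here is a minimal supervised fraud classifier on synthetic, imbalanced transaction-like data. The paper's actual archetypes and any feature semantics are not represented here.

    # Illustrative sketch of one ML-based fraud detection setup (a
    # supervised classifier); data is synthetic, not from the paper.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_score, recall_score

    # Synthetic stand-in for labeled transactions (1 = fraud), heavily imbalanced
    X, y = make_classification(n_samples=5000, n_features=10,
                               weights=[0.97], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)

    # On imbalanced fraud data, precision/recall matter more than accuracy.
    print("precision:", precision_score(y_te, pred, zero_division=0))
    print("recall:   ", recall_score(y_te, pred))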