
    Artificial intelligence as law: Presidential address to the seventeenth international conference on artificial intelligence and law

    Information technology is so ubiquitous and AI's progress so inspiring that legal professionals, too, experience its benefits and have high expectations. At the same time, the powers of AI have been rising so strongly that it is no longer obvious that AI applications (whether in the law or elsewhere) help promote a good society; in fact, they are sometimes harmful. Hence many argue that safeguards are needed for AI to be trustworthy, social, responsible, humane, and ethical. In short: AI should be good for us. But how can proper safeguards for AI be established? One strong answer is readily available: consider the problems and solutions studied in AI & Law. AI & Law has worked on the design of social, explainable, responsible AI aligned with human values for decades already; AI & Law addresses the hardest problems across the breadth of AI (in reasoning, knowledge, learning, and language); and AI & Law inspires new solutions (argumentation, schemes and norms, rules and cases, interpretation). It is argued that the study of AI as Law supports the development of an AI that is good for us, making AI & Law more relevant than ever.

    AI, Equity, and the IP Gap

    Artificial intelligence (AI) has helped determine vaccine recipients, prioritize emergency room admissions, and ascertain individual hires, sometimes doing so inequitably. As we emerge from the pandemic, technological progress and efficiency demands continue to press all areas of the law, including intellectual property (IP) law, toward incorporating more AI into legal practice. This may be good when AI promotes economic and social justice in the IP system. However, AI may amplify inequity as biased developers create biased algorithms with biased inputs or rely on biased proxies. This Article argues that policymakers need to take a thoughtful and concerted approach to grafting AI into IP law and practice if social justice principles of access, inclusion, and empowerment are to flow from their union. It explores what it looks like to obtain AI justice in the IP context and focuses on two areas where IP law impedes equitable AI-related outcomes. The first involves the civil rights concerns that stem from trade secrets blocking access and deflecting accountability in biased algorithms or data. The second concerns the patent and copyright doctrine biases perpetuating historical inequity in AI-augmented processes. The Article also addresses how equity by design should look and provides a roadmap for implementing equity audits to mitigate bias. Finally, it briefly examines how AI would assist with adjudicating equitable IP law doctrines, which also tests the outer limits of what bounded AI processes can do.

    Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans

    We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda embedding legal knowledge and reasoning in AI. Just as parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), when law is leveraged as an expression of how humans communicate their goals and of what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power, and thus not a perfect aggregation of citizen preferences, if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning. Forthcoming in Northwestern Journal of Technology and Intellectual Property, Volume 2.

    Artificially Intelligent Boards and the Future of Delaware Corporate Law

    This article argues that the prospects for Artificial Intelligence (AI) to impact corporate law are at once over- and under-stated, focusing on the law of Delaware, the predominant jurisdiction of incorporation for US public companies. Claims that AI systems might displace human directors not only exaggerate AI's foreseeable technological potential but also ignore doctrinal and institutional impediments intrinsic to Delaware's competitive model, notably its heavy reliance on nuanced applications of the fiduciary duty of loyalty by a true court of equity. At the same time, however, there are discrete AI applications that might not merely be accommodated by Delaware corporate law but perhaps eventually required. This would appear most likely in the oversight context, where loyalty has been interpreted to require a good-faith effort to adopt a reasonable compliance monitoring system, an approach driven by an implicit cost-benefit analysis that could lean decisively in favour of AI-based approaches in the foreseeable future.

    Artificially Intelligent Boards and the Future of Delaware Corporate Law

    The prospects for Artificial Intelligence (AI) to impact the development of Delaware corporate law are at once over- and under-stated. As a general matter, claims to the effect that AI systems might ultimately displace human directors not only exaggerate the foreseeable technological potential of these systems, but also tend to ignore doctrinal and institutional impediments intrinsic to Delaware's competitive model, notably its heavy reliance on nuanced and context-specific applications of the fiduciary duty of loyalty by a true court of equity. At the same time, however, there are specific applications of AI systems that might not merely be accommodated by Delaware corporate law, but perhaps eventually required. Such an outcome would appear most plausible in the oversight context, where fiduciary loyalty has been interpreted to require a good-faith effort to adopt a reasonable compliance monitoring system, an approach driven by an implicit cost-benefit analysis that could lean decisively in favor of AI-based approaches in the foreseeable future. This article discusses the prospects for AI to impact Delaware corporate law in both general and specific respects and evaluates their significance. Section II describes the current state of the technology and argues that AI systems are unlikely to develop to the point that they could displace the full range of functions performed by human boards in the foreseeable future. Section III then argues that even if the technology were to achieve more impressive results in the near term than I anticipate, acceptance of non-human directors would likely be blunted by doctrinal and institutional structures that place equity at the very heart of Delaware corporate law. Section IV, however, suggests that there are nevertheless discrete areas within Delaware corporate law where reliance by human directors upon AI systems for assistance in board decision-making might not merely be accommodated, but eventually required. This appears particularly plausible in the oversight context, where fiduciary loyalty has become intrinsically linked with adoption of compliance monitoring systems that are themselves increasingly likely to incorporate AI technologies. Section V briefly concludes.

    “You’ve Got a Friend in Me”: Helping Students Help AI

    ChatGPT and its family of generative tools may seem new, but the process that ChatGPT imitates is as old as Egyptian papyri: the end-user still had to adapt the form text to each person's unique situation. Similarly, modern attorneys may use AI to adapt legal documents to their clients' needs. But they must also learn how to spot problems in AI-generated documents: omissions, wrongful additions, inaccurate law, legalese, and poor typography. They need to instruct ChatGPT or other generative AI to continue revising until the document reflects best practices. In short, our students, as future attorneys, need to know how to help AI be helpful. A student who hasn't learned how to approach drafting or redrafting a good legal document will be at AI's mercy rather than being able to use AI tools to create good documents. A clueless student using AI is really no better off than generations of lawyers who have blindly recycled old forms that are full of problems. Specifically, Charles and Cooney's presentation will cover these topics: a brief history of legal forms (positive and negative); survey results of how other disciplines are using AI; a real-to-life positive approach to working with AI to generate legal documents; experience from incorporating AI into teaching Research & Writing, Advocacy, and Drafting; and how to arm students with knowledge about what makes a document sound, navigable, and enforceable.

    The Need for Good Old Fashioned AI and Law

    Challenge 6: Ethical, legal, economic, and social implications

    In six decades of history, AI has become a mature and strategic discipline, successfully embedded in mainstream ICT and powering innumerable online applications and platforms. Several official documents stating specific AI policies have been produced by international organisations (such as the OECD), regional bodies (the EU), several countries (the US, China, Spain, Germany, the UK, Sweden, Brazil, Mexico...), as well as major AI-powered firms (Google, Facebook, Amazon). These examples demonstrate public interest in and awareness of the economic and societal value of AI, and the urgency of discussing the ethical, legal, economic, and social implications of deploying AI systems on a massive scale. There is widespread agreement about the relevance of addressing the ethical aspects of AI, the urgency of demonstrating that AI is used for the common good, and the need for better training, education, and regulation to foster responsible research and innovation in AI. This chapter is organised around four main areas: ethics, law, economics, and society (ELES). These areas shape the development of AI research and innovation, which, in turn, influences these four areas of human activity. This interplay opens questions and demands new methods, objectives, and ways to design future technologies. This chapter identifies the main impacts and salient challenges in each of these four areas. Peer reviewed.