    Legal knowledge-based systems: new directions in system design

    This thesis examines and critiques the concept of 'legal knowledge-based' systems. Work on legal knowledge-based systems is dominated by work in 'artificial intelligence and law', which seeks to automate the application of law and the solution of legal problems. Automation, however, has proved elusive. In contrast to such automation, this thesis proposes the creation of legal knowledge-based systems based on the concept of augmentation of legal work. Focusing on systems that augment legal work opens new possibilities for system creation and use. To inform how systems might augment legal work, this thesis examines philosophy, psychology and legal theory for the information they provide on how processes of legal reasoning operate. It is argued that, in contrast to the conceptions of law adopted in artificial intelligence and law, 'sensemaking' provides a useful perspective with which to create systems. It is argued that visualisation, and particularly diagramming, is an important and under-considered element of reasoning, and that systems supporting the diagramming of processes of legal reasoning would provide useful support for legal work. This thesis reviews techniques for diagramming aspects of sensemaking, in particular standard methods for diagramming arguments and methods for diagramming reasoning. These techniques are applied in the diagramming of legal judgments. A review is conducted of systems built to support the construction of diagrams of argument and reasoning. Drawing on these examinations, this thesis highlights the necessity of appropriate representations for supporting reasoning. The literature on diagramming for reasoning support provides little discussion of appropriate representations, so this thesis examines theories of representation for the insight they can provide into the design of appropriate representations. It is concluded that, while the theories of representation examined do not determine what amounts to a good representation, guidelines for the design and choice of representations can be distilled. These guidelines cannot map the class of legal knowledge-based systems that augment legal sensemaking; they can, however, be used to explore this class and to inform the construction of such systems.
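    Argument-diagramming methods of the kind the thesis reviews are commonly built on a node-and-link model: claims or premises as nodes, and relations such as support between them. The short Python sketch below illustrates such a structure for a legal judgment; the class names, the "supports" relation, and the example claims are illustrative assumptions, not drawn from the thesis itself.

    # Minimal node-and-link sketch of an argument diagram of the kind used
    # for diagramming legal judgments. All names are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        text: str
        source: str = ""  # e.g. a paragraph of the judgment being diagrammed

    @dataclass
    class ArgumentDiagram:
        claims: list = field(default_factory=list)
        links: list = field(default_factory=list)  # (premise_idx, conclusion_idx, relation)

        def add_claim(self, claim):
            self.claims.append(claim)
            return len(self.claims) - 1

        def link(self, premise, conclusion, relation="supports"):
            self.links.append((premise, conclusion, relation))

    # One inference step from a hypothetical judgment.
    diagram = ArgumentDiagram()
    duty = diagram.add_claim(Claim("The defendant owed the claimant a duty of care."))
    liability = diagram.add_claim(Claim("The defendant is liable in negligence."))
    diagram.link(duty, liability, "supports")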

    Highly Automated Vehicles & Discrimination against Low-Income Persons

    Law reform in the United States often reflects a structural bias that advances narrow business interests without addressing broader public interest concerns. This bias may appear through the omission of protective language in laws or regulations addressing a subject matter area, such as permitting the testing of highly automated vehicles (HAVs) on public roads while omitting a requirement for a reasonable level of insurance as a condition of obtaining a testing permit. This Article explores certain social and economic justice implications of laws and regulations governing the design, testing, manufacture, and deployment of HAVs that might advance a business interest without taking account of the public interest. This Article contrasts the steps that might be taken to ensure the economic well-being of low-income persons with the current state of HAV regulation. This Article recommends steps to correct some of this bias.

    Administrative Performance of “No-Fault” Compensation for Medical Injury

    No-fault is the leading alternative to traditional liability systems for resolving medically caused injuries, and policy interest in such reform reflects numerous concerns with the traditional tort system as it operates in the medical field through malpractice insurance. The administrative experience of the Florida and Virginia no-fault programs is examined.

    Can Algorithms Promote Fair Use?

    In the past few years, advances in big data, machine learning and artificial intelligence have generated many questions in the intellectual property field. One question that has attracted growing attention concerns whether algorithms can be better deployed to promote fair use in copyright law. The debate on the feasibility of developing automated fair use systems is not new; it can be traced back more than a decade. Nevertheless, recent technological advances have invited policymakers and commentators to revisit this earlier debate. As part of the Symposium on Intelligent Entertainment: Algorithmic Generation and Regulation of Creative Works, this Article examines whether algorithms can be better deployed to promote fair use in copyright law. It begins by explaining why policymakers and commentators have remained skeptical about such deployment. This Article then builds the case for greater algorithmic deployment to promote fair use. It concludes by identifying areas to which policymakers and commentators should pay greater attention if automated fair use systems are to be developed.
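    To make the object of this debate concrete, an automated fair use system would ultimately have to map the four statutory factors of 17 U.S.C. § 107 onto something computable. The sketch below is one hedged illustration of that idea, a weighted factor score; the numeric scale, the equal weights, and the example inputs are assumptions made for illustration only, not a model proposed by the Article.

    # Illustrative factor-weighing sketch for an automated fair use system.
    # The four factor names follow 17 U.S.C. § 107; the scoring scheme is
    # an assumption, not a proposed or endorsed model.
    FACTORS = [
        "purpose and character of the use",
        "nature of the copyrighted work",
        "amount and substantiality of the portion used",
        "effect of the use upon the potential market",
    ]

    def fair_use_score(factor_scores, weights=None):
        """factor_scores maps each factor to a value in [-1, 1]; positive
        values favor fair use, negative values weigh against it."""
        weights = weights or {f: 0.25 for f in FACTORS}
        return sum(weights[f] * factor_scores.get(f, 0.0) for f in FACTORS)

    # Hypothetical use: a transformative, non-commercial use of a short excerpt.
    scores = {
        "purpose and character of the use": 0.8,
        "nature of the copyrighted work": 0.2,
        "amount and substantiality of the portion used": 0.5,
        "effect of the use upon the potential market": 0.4,
    }
    print(fair_use_score(scores))  # a positive score favors fair use in this sketch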

    The Right to Contest AI

    Artificial intelligence (AI) is increasingly used to make important decisions, from university admissions selections to loan determinations to the distribution of COVID-19 vaccines. These uses of AI raise a host of concerns about discrimination, accuracy, fairness, and accountability. In the United States, recent proposals for regulating AI focus largely on ex ante and systemic governance. This Article argues instead—or really, in addition—for an individual right to contest AI decisions, modeled on due process but adapted for the digital age. The European Union, in fact, recognizes such a right, and a growing number of institutions around the world now call for its establishment. This Article argues that despite considerable differences between the United States and other countries, establishing the right to contest AI decisions here would be in keeping with a long tradition of due process theory. This Article then fills a gap in the literature, establishing a theoretical scaffolding for discussing what a right to contest should look like in practice. This Article establishes four contestation archetypes that should serve as the bases of discussions of contestation both for the right to contest AI and in other policy contexts. The contestation archetypes vary along two axes: from contestation rules to standards and from emphasizing procedure to establishing substantive rights. This Article then discusses four processes that illustrate these archetypes in practice, including the first in-depth consideration of the GDPR’s right to contestation for a U.S. audience. Finally, this Article integrates findings from these investigations to develop normative and practical guidance for establishing a right to contest AI.
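    The Article's organizing device, as described above, is a pair of axes: contestation rules versus standards, and an emphasis on procedure versus substantive rights. The short sketch below simply enumerates the four resulting combinations; because the abstract does not name the archetypes, they are identified here only by their position on the two axes, and the labels are paraphrases of the abstract rather than the Article's own terms.

    # The two axes described in the abstract; crossing them yields the four
    # archetype positions. Labels are paraphrases, not the Article's names.
    from enum import Enum
    from itertools import product

    class Form(Enum):
        RULE = "contestation rule"
        STANDARD = "contestation standard"

    class Emphasis(Enum):
        PROCEDURE = "emphasizes procedure"
        SUBSTANCE = "establishes substantive rights"

    for form, emphasis in product(Form, Emphasis):
        print(f"{form.value} / {emphasis.value}")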

    The Advocate, Vol. 22, No. 2, 1992

    The Reasonableness Machine

    Automation might someday allow for the inexpensive creation of highly contextualized and effective laws. If that ever comes to pass, however, it will not be on a blank slate. Proponents will face the question of how to computerize bedrock aspects of our existing law, some of which are legal standards—norms that use evaluative, even moral, criteria. Conventional wisdom says that standards are difficult to translate into computer code because they do not present clear operational mechanisms to follow. If that wisdom holds, one could reasonably doubt that legal automation will ever get off the ground. Conventional wisdom, however, fails to account for the interpretive freedom that standards provide. Their murkiness makes them a fertile ground for the growth of competing explanations of their legal meaning. Some of those readings might be more rule-like than others. Proponents of automation will likely be drawn to those rule-like interpretations, so long as they are compatible enough with existing law. This complex dynamic between computer-friendliness and legal interpretation makes it troublesome for legislators to identify the variable and fixed costs of automation. This Article aims to shed light on this relationship by focusing our attention on a quintessential legal standard at the center of our legal system—the Reasonably Prudent Person Test. Here, I explain how automation proponents might be tempted by fringe, formulaic interpretations of the test, such as Averageness, because they bring comparatively low innovation costs. With time, however, technological advancement will likely drive down innovation costs, and mainstream interpretations, like Conventionalism, could find favor again. Regardless of the interpretation that proponents favor, though, an unavoidable fixed cost looms: by replacing the jurors who apply the test with a machine, they will eliminate a long-valued avenue for participatory and deliberative democracy.
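    The pull of a formulaic reading such as Averageness is easiest to see in code: it reduces the Reasonably Prudent Person Test to a comparison against observed average conduct. The sketch below is a hedged illustration of that reduction only; the single-variable framing, the data, and the tolerance are assumptions made for illustration, not anything the Article specifies.

    # Illustrative reduction of the reasonableness inquiry under an
    # Averageness-style reading: compare the defendant's conduct to the
    # average of observed conduct in comparable situations. All inputs
    # and the tolerance are hypothetical.
    from statistics import mean

    def reasonable_under_averageness(defendant_speed_mph, observed_speeds_mph, tolerance=0.10):
        """Deems conduct reasonable if it does not exceed average observed
        conduct by more than the stated tolerance."""
        average = mean(observed_speeds_mph)
        return defendant_speed_mph <= average * (1 + tolerance)

    # Hypothetical data: speeds driven on the same road in similar conditions.
    print(reasonable_under_averageness(41, [35, 38, 40, 42, 37]))  # True under this sketch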