Latinas in the Legal Academy: Progress and Promise
The 2022 Inaugural Graciela Olivárez Latinas in the Legal Academy ("GO LILA") Workshop convened seventy-four outstanding and powerful Latina law professors and professional legal educators (collectively, "Latinas in the legal academy," or "LILAs") to document and celebrate our individual and collective journeys and to grow stronger together. In this essay, we, four of the Latina law professors who helped to co-found the GO LILA Workshop, share what we learned about and from each other. We invite other LILAs to join our community and share their stories and journeys. We hope that the data and lessons that we share can inspire other Latinas to join the legal academy. We encourage law schools to honor the transformation that our presence and contributions have brought to legal education and scholarship and to join us in considering how our path forward can be even more impactful and sustaining.
AI, on the Law of the Elephant: Toward Understanding Artificial Intelligence
Machine learning and other artificial intelligence (AI) systems are changing our world in profound, exponentially rapid, and likely irreversible ways. Although AI may be harnessed for great good, it is capable of and is doing great harm at scale to people, communities, societies, and democratic institutions. The dearth of AI governance leaves unchecked AI's potentially existential risks. Whether sounding urgent alarm or merely jumping on the bandwagon, law scholars, law students, and lawyers at bar are contributing volumes of AI policy and legislative proposals, commentaries, doctrinal theories, and calls to corporate and international organizations for ethical AI leadership. Unfortunately, erroneous, incomplete, and overly simplistic treatments of AI technology undermine the utility of a significant portion of that literature. Moreover, many of those treatments are piecemeal, and those gaps produce barriers to the proper legal understanding of AI.
Profound concerns exist about AI and the actual and potential crises of societal, democratic, and individual harm that it causes or may cause in the future. On the whole, the legal community is not currently equal to the task of addressing those concerns, lacking sufficient AI knowledge and technological competence despite ethical mandates for diligence and competence. As a result, law and policy debates and subsequent actions may be fundamentally flawed or produce devastating unintended consequences because they relied upon erroneous, uninformed, or misconceived understandings of AI technologies, inputs, and processes. Like the blind men encountering the elephant in the ancient Jain parable, even the wise may conceive of only a fraction of the AI creature, and some more or less blindly.
Now more than ever, lawyers need to be able to see around critically important corners. The general lack of understanding about AI technology robs the legal profession of that foresight. This state of affairs also raises significant ethical concerns. Worse, it undermines lawyers' power, authority, and legitimacy to bring forward truly valid, meaningful ideas and solutions to prevent AI from becoming humanity's apex predator.
This Article responds with several descriptive and theoretical contributions. As to its descriptive contributions, it aims to correct and augment the record about AI, particularly machine learning and its underlying technologies and processes. It endeavors to present a concise, accessible, yet sufficiently comprehensive single-source foundational explanation. The Article draws extensively from the scientific and technical literatures and undertakes an important interdisciplinary translational process by which to map the AI technical lexicon to legal terms of art and constructions in patent and other cases. Because their understanding is foundational, the Article drills down on three principal AI inputs: data, including data curation; statistical models; and algorithms. It then engages in illustrative issue-spotting within these AI factual frames, sketching out some of the many legal implications associated with those vital understandings.
Toward its theoretical contributions, the Article presents two conceptual sortings of AI and introduces a systems- and process-engineering-inspired taxonomy of AI. First, it categorizes AI by the degree of human involvement in, and, conversely, the degree of AI autonomy in, AI-mediated decision-making. Second, it conceptualizes AI as being static or dynamic. Those distinctions are vital to AI's potential for harm, meaningful accountability, and, ultimately, the proper prioritization of AI governance efforts. Third, the Article briefly introduces a taxonomy that conceptualizes AI as a human-machine enterprise made up of a series of processes. By perceiving "the whole of the AI elephant," the role of human decision-making and its limits may be understood, and the human-machine enterprise that is AI and its constituent processes may be deconstructed, comprehended, and framed for subsequent scholarship, doctrinal and procedural analyses, and law and policy developments. With these, the Article hopes to help inform and empower lawyers to improve the security, justness, and well-being of people in the increasingly algorithmic world.
Featured Speaker, Artificial Intelligence: A Framework for Legal Understanding
To enrich participants' experience in the Artificial Intelligence and the Law Symposium, Professor Loza de Siles offers a cornerstone lecture and discussion in three parts toward understanding artificial intelligence ("AI") and framing conceptions about AI in the law. First, she introduces key AI technical terms and maps those to legal terms of art and constructions. (citation omitted). Second, Professor Loza de Siles presents two legal taxonomies by which to categorize AI types and uses and then sketches out some of the legal implications associated with these distinctions. Third, she offers a holistic taxonomy of AI systems and uses that is informed by concepts in systems and process engineering and product marketing. Using this conceptualization, Professor Loza de Siles suggests that AI becomes a process to be deconstructed, comprehended, and framed for legal analysis and doctrine and policy development.
Soft Law for Unbiased and Nondiscriminatory Artificial Intelligence
The task undertaken here is to consider and illustrate whether soft law can be an effective and credible means by which to ensure that artificial intelligence (AI) and its uses are unbiased and nondiscriminatory.
This is a pressing need. The current fact is that many AI systems are designed, intended, or otherwise made to operate in biased and discriminatory ways. The most demonstrable bias and discrimination flaw in AI today is that systemic racism, gender bias, and other corrosive societal ills are baked into the data on which AI operates. Bias in, bias out. That logic is inescapable.