Machines Like Me: A Proposal on the Admissibility of Artificially Intelligent Expert Testimony
As artificial intelligence (AI) systems grow rapidly in sophistication, reliability, and cost-effectiveness, the current trend of admitting testimony based on AI systems is only likely to grow. In that context, it is imperative to ask what rules of evidence judges today should apply to such evidence. To answer that question, we provide an in-depth review of expert systems, machine learning systems, and neural networks. Based on that analysis, we contend that evidence from only certain types of AI systems meets the requirements for admissibility, while evidence from other systems does not. The line between admissible and inadmissible AI evidence is a function of the opaqueness of the AI system’s underlying computational methodology and the court’s ability to assess that methodology. The admission of AI evidence also requires navigating pitfalls, including the difficulty of explaining AI systems’ methodology and issues concerning the right to confront witnesses. Based on our analysis, we offer several policy proposals that would address weaknesses or lack of clarity in the current system. First, in light of the long-standing concern that jurors would allow expertise to overcome their own assessment of the evidence and blindly accept the “infallible” result of advanced-computing AI, we propose that jury instruction commissions, judicial panels, circuits, and other parties who draft instructions consider adopting a cautionary instruction for AI-based evidence. Such an instruction should remind jurors that AI-based evidence is only one part of the analysis, that the opinions it generates are only as good as the underlying analytical methodology, and that the decision to accept or reject the evidence, in whole or in part, ultimately remains with the jury alone. Second, because the admissibility of AI-based evidence depends largely on the computational methodology underlying the analysis, we propose that, for AI evidence to be admissible, the underlying methodology must be transparent, because the judicial assessment of AI technology relies on the ability to understand how it functions.
Onerous Disabilities And Burdens: An Empirical Study Of The Bar Examination’s Disparate Impact On Applicants From Communities Of Color
This Article provides the results of the most comprehensive and detailed analysis to date of the correlation between bar passage and race and ethnicity. It provides the first proof of racially disparate outcomes of the bar exam, for both first-time and ultimate bar passage, across jurisdictions and within law schools. Using data from 63 public law schools, we found that first-time bar examinees from Communities of Color underperform White examinees by, on average, 13.41 percentage points. While the gap narrows for ultimate bar passage, there is still a difference, on average, of 9.09 percentage points. The validity of these results is supported through our use of t-test statistical analysis and a regression analysis. Under the Civil Rights Act, a relative difference of 20% would be evidence of adverse impact creating a cause of action. Because White examinees pass the first time at about an 85% rate, a gap of 17 percentage points (20% of 85%) meets the 20% requirement, a threshold that Black examinees, unfortunately, exceed and that Asian examinees nearly reach. Historically, this kind of difference in the bar examination was attributed to differences in the entering credentials of the various races, implying that examinees from Communities of Color are less well qualified than White examinees. Our results demonstrate that this explanation is incorrect. Because our dataset is an intra-school (within the same school) dataset, we compare the bar results of White examinees with those of examinees from Communities of Color who have similar entering credentials and receive the same legal education. If differing credentials caused the differing pass rates, race should not be correlated with bar passage in that context. Yet, as we show, those differences in bar passage remain. It is time to act. Bar examiners must re-examine the bar exam and determine how race is impeding its ability to properly measure an examinee’s competence. This need to act is all the more vital given the coming changes to the 2026 bar examination.
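The adverse-impact arithmetic is simple enough to sketch. The Python snippet below (illustrative only; the 85% figure is the approximate first-time pass rate cited above) shows how the 20% relative threshold translates into a 17-percentage-point gap:

# Sketch of the adverse-impact arithmetic described above. The White
# first-time pass rate is the approximate figure cited in the text.
white_pass_rate = 0.85

# Under the four-fifths (80%) rule, adverse impact is indicated when a
# group's pass rate falls below 80% of the highest group's rate.
threshold = 0.80 * white_pass_rate              # 0.68

# Equivalently, a gap of 20% of 85%, i.e., 17 percentage points,
# meets the 20% requirement.
max_allowed_gap = white_pass_rate - threshold   # 0.17

print(f"Adverse-impact threshold: {threshold:.0%}")
print(f"Gap meeting the 20% requirement: {max_allowed_gap:.0%} points")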
Examining the Bar Exam: An Empirical Analysis of Racial Bias in the Uniform Bar Examination
The legal profession is among the least diverse in the United States. Given continuing issues of systemic racism, the central position that the justice system occupies in society, and the vital role that lawyers play in that system, it is incumbent upon legal professionals to identify and remedy the causes of this lack of diversity. This Article seeks to understand how the bar examination, the final hurdle to entering the profession, contributes to this dearth of diversity. Using publicly available data, we analyze whether the ethnic makeup of a law school’s entering class correlates with the school’s first-time bar passage rates on the Uniform Bar Examination (UBE). We find that higher proportions of Black and Hispanic students in a law school’s entering class are associated with lower first-time bar passage rates for that school in its reported UBE jurisdictions three years later. This effect persists after controlling for other potentially causal factors such as undergraduate grade-point average (UGPA), Law School Admission Test (LSAT) score, geographic region, and law school tier. Moreover, the results are statistically significant at the p < 0.01 level (results this extreme would arise by chance less than 1% of the time if no true relationship existed). Because these are school-level results, they may not fully account for relevant factors identifiable only in student-level data. As a result, we argue that a follow-up study using data on individual students is necessary to fully understand why the UBE produces racially and ethnically disparate results.
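A minimal sketch of the kind of school-level regression described above, in Python with statsmodels; the data file and column names are hypothetical, and the Article’s actual model specification may differ:

# Sketch of a school-level OLS regression like the one described above.
# The CSV file and column names are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

schools = pd.read_csv("school_level_data.csv")  # hypothetical dataset

# First-time UBE pass rate regressed on entering-class composition,
# controlling for median credentials, region, and school tier.
model = smf.ols(
    "pass_rate ~ pct_black + pct_hispanic + median_ugpa"
    " + median_lsat + C(region) + C(tier)",
    data=schools,
).fit()

print(model.summary())  # coefficients, with p-values assessed at the 0.01 level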
The Kids Are Definitely Not All Right: An Empirical Study Establishing a Statistically Significant Negative Relationship Between Receiving Accommodations in Law School and Passing the Bar Exam
Using data gathered from sixty public law schools for the years 2019, 2020, and 2021, this Article demonstrates that there is a statistically significant negative correlation between the percentage of students in a school who receive accommodations and the school’s first-time bar passage rate. In other words, this study shows that as the percentage of accommodated students in a law school increased, its bar passage rate decreased. This Article establishes a prima facie case that something is wrong with the accommodation-granting process and argues that state boards of bar examiners should provide more data and transparency on examinee accommodations.
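A minimal sketch of the school-level correlation test described above, in Python; the data file and column names are hypothetical:

# Sketch of the correlation analysis described above. File and column
# names are assumptions for illustration, not the study's actual data.
import pandas as pd
from scipy import stats

df = pd.read_csv("accommodations_2019_2021.csv")  # hypothetical file

# Correlation between the share of accommodated students at each school
# and that school's first-time bar passage rate.
r, p = stats.pearsonr(df["pct_accommodated"], df["first_time_pass_rate"])
print(f"r = {r:.3f}, p = {p:.4f}")  # a negative r with a small p matches the finding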
Exploratory pilot testing of the psychometric properties of the person engagement index instrument among older, community-dwelling adults
The objective of this paper was to evaluate the psychometric properties of the Person Engagement Index with community-dwelling older adults and to determine the factors that affect this population’s capacity to engage in healthcare. This non-experimental pilot evaluation of the psychometrics of the Person Engagement Index was performed in a convenience sample of 100 community-dwelling older adults. Exploratory factor analysis was conducted using dimension reduction to determine the underlying structure of a person’s capacity to engage in healthcare. Results indicated good internal consistency, with Cronbach’s alpha = .882 for the overall scale. Exploratory factor analysis with varimax rotation yielded a five-factor solution. Four of the five subscales exceeded the Cronbach’s alpha > .70 threshold for internal consistency. Cronbach’s alpha results for the five domains were: Knowledge of Healthcare Status = .886, Proactive Approach to Healthcare = .780, Motivation to Manage Healthcare = .742, Psychosocial Support for Healthcare = .658, and Technology Use in Healthcare = .796. Results suggest that the Person Engagement Index is a valid and reliable instrument for measuring a person’s capacity to engage in healthcare among community-dwelling older adults. Testing in different settings, with other populations, and over time is warranted to further explore the instrument’s reliability and validity for different subgroups and its sensitivity to changes in health status that may affect a person’s capacity to engage in care.
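For readers reproducing this style of analysis, the sketch below computes Cronbach’s alpha and fits a five-factor varimax-rotated exploratory factor analysis using Python’s factor_analyzer package; the item-level data file and column contents are hypothetical, as the instrument’s items are not reproduced here:

# Sketch of the analyses described above: Cronbach's alpha and a
# five-factor EFA with varimax rotation. Data file is hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("pei_responses.csv")  # hypothetical item-level responses

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Standard formula: alpha = k/(k-1) * (1 - sum(item variances)/total variance)."""
    k = df.shape[1]
    item_var = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(f"Overall alpha: {cronbach_alpha(items):.3f}")

fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(items)
print(fa.loadings_)  # item loadings on the five extracted factors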
Expanding CubeSat Capabilities with a Low Cost Transceiver
CubeSats have developed rapidly over the past decade with the advent of a containerized deployer system and ever-increasing launch opportunities. These satellites have moved from an educational tool for teaching students about the engineering challenges of satellite design to systems conducting cutting-edge Earth, space, and solar science. Early variants of the CubeSat had limited functionality and lacked sophisticated attitude control, deployable solar arrays, and propulsion. This is no longer the case: as CubeSats mature, such systems are becoming commercially available. The result is a small satellite with sufficient power and pointing capability to support a high-rate communication system. Communications systems have matured along with other CubeSat subsystems. Originally derived from amateur radio systems, CubeSat radios have generally operated in the VHF and UHF bands at data rates below 10 kbps. More recently, higher-rate UHF systems have been developed; however, these systems require a large collecting area on the ground to close the communications link at 3 Mbps. Efforts to develop systems with similar throughput at S-band (2-4 GHz) and C-band (4-8 GHz) have also recently evolved. In this paper we outline an effort to develop a high-rate CubeSat communication system that is compatible with the NASA Near Earth Network and can be accommodated by a CubeSat. The system will include a 200 kbps S-band receiver and a 12.5 Mbps X-band transmitter. This paper focuses on our design approach and initial results for the 12.5 Mbps X-band transmitter.
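To illustrate what “closing the link” at 12.5 Mbps involves, the Python sketch below computes a rough X-band downlink budget; every parameter value is an assumption for illustration, not a design figure from the paper:

# Rough X-band downlink budget sketch for a 12.5 Mbps transmitter.
# All parameter values are illustrative assumptions.
import math

FREQ_HZ = 8.0e9          # X-band carrier (assumed)
RATE_BPS = 12.5e6        # downlink data rate from the text
TX_PWR_DBW = 0.0         # 1 W transmitter (assumed)
TX_GAIN_DBI = 8.0        # CubeSat patch antenna (assumed)
RX_GAIN_DBI = 50.0       # ground station dish (assumed)
SLANT_RANGE_M = 2.0e6    # worst-case LEO slant range (assumed)
SYSTEM_TEMP_K = 250.0    # receive system noise temperature (assumed)
BOLTZMANN_DBW = -228.6   # 10*log10(Boltzmann constant), dBW/K/Hz

# Free-space path loss: 20*log10(4*pi*d/lambda)
wavelength = 3.0e8 / FREQ_HZ
fspl_db = 20 * math.log10(4 * math.pi * SLANT_RANGE_M / wavelength)

# Eb/N0 = EIRP - FSPL + G/T - 10*log10(k) - 10*log10(rate)
eb_n0_db = (TX_PWR_DBW + TX_GAIN_DBI - fspl_db + RX_GAIN_DBI
            - 10 * math.log10(SYSTEM_TEMP_K)
            - BOLTZMANN_DBW
            - 10 * math.log10(RATE_BPS))

print(f"FSPL: {fspl_db:.1f} dB, Eb/N0: {eb_n0_db:.1f} dB")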
Incorporating New Technologies Into Toxicity Testing and Risk Assessment: Moving From 21st Century Vision to a Data-Driven Framework
Based on existing data and previous work, a series of studies is proposed as a pragmatic early step in transforming toxicity testing. These studies were assembled into a data-driven framework that invokes successive tiers of testing with margin of exposure (MOE) as the primary metric. The first tier of the framework integrates data from high-throughput in vitro assays, in vitro-to-in vivo extrapolation (IVIVE) pharmacokinetic modeling, and exposure modeling. The in vitro assays are used to separate chemicals based on their relative selectivity in interacting with biological targets and to identify the concentration at which these interactions occur. The IVIVE modeling converts in vitro concentrations into external dose for calculation of the point of departure (POD) and comparison to human exposure estimates to yield an MOE. The second tier involves short-term in vivo studies, expanded pharmacokinetic evaluations, and refined human exposure estimates. The results from the second-tier studies provide more accurate estimates of the POD and the MOE. The third tier contains the traditional animal studies currently used to assess chemical safety. In each tier, the POD for selective chemicals is based primarily on endpoints associated with a proposed mode of action, whereas the POD for nonselective chemicals is based on potential biological perturbation. Based on the MOE, a significant percentage of chemicals evaluated in the first two tiers could be eliminated from further testing. The framework provides a risk-based and animal-sparing approach to evaluating chemical safety, drawing broadly from previous experience but incorporating technological advances to increase efficiency.
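A minimal sketch of the Tier 1 MOE screen described above, in Python; the POD and exposure values and the decision cutoff are hypothetical, not values from the framework:

# Sketch of the margin-of-exposure (MOE) screen driving the tiered
# framework described above. All numbers are illustrative assumptions.

def margin_of_exposure(pod_mg_kg_day: float, exposure_mg_kg_day: float) -> float:
    """MOE = point of departure / estimated human exposure."""
    return pod_mg_kg_day / exposure_mg_kg_day

# Hypothetical Tier 1 outputs: POD from IVIVE-adjusted in vitro assays,
# exposure from exposure modeling, both in mg/kg/day.
chemicals = {
    "chem_A": (10.0, 0.001),
    "chem_B": (5.0, 0.5),
}

CUTOFF = 100.0  # illustrative screening threshold
for name, (pod, exposure) in chemicals.items():
    moe = margin_of_exposure(pod, exposure)
    verdict = "no further testing" if moe >= CUTOFF else "advance to next tier"
    print(f"{name}: MOE = {moe:,.0f} -> {verdict}")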