The Intuitive Appeal of Explainable Machines
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model's development, not just explanations of the model itself.
Unfair Artificial Intelligence: How FTC Intervention Can Overcome the Limitations of Discrimination Law
Clock division as a power saving strategy in a system constrained by high transmission frequency and low data rate
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 63). Systems are often restricted to have higher transmission frequency than required by their data rates. Possible constraints include channel attenuation, power requirements, and backward compatibility. As a result, these systems have unused bandwidth, leading to inefficient use of power. In this thesis, I propose to slow the internal operating frequency of a cochlear implant receiver in order to reduce the internal power consumption by more than a factor of ten. I have created a new data encoding scheme, called "N-π Shift Encoding", which makes clock division a viable solution. This clock division technique can be applied to other similarly constrained systems. by Andrew D. Selbst. M.Eng.
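As a rough sketch of the power argument (the relation below is the standard CMOS scaling rule, not a result stated in the thesis abstract): dynamic switching power scales approximately as P_dyn ≈ α·C·V²·f, so dividing the internal clock frequency f by ten reduces dynamic power by roughly a factor of ten, assuming switching activity α, load capacitance C, and supply voltage V are held constant.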
Deconstructing Design Decisions: Why Courts Must Interrogate Machine Learning and Other Technologies
Big Data's Disparate Impact
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny historically disadvantaged and vulnerable groups full participation in society. Worse still, because the resulting discrimination is almost always an unintentional emergent property of the algorithm's use rather than a conscious choice by its programmers, it can be unusually hard to identify the source of the problem or to explain it to a court.
This Essay examines these concerns through the lens of American antidiscrimination law—more particularly, through Title VII's prohibition of discrimination in employment. In the absence of a demonstrable intent to discriminate, the best doctrinal hope for data mining's victims would seem to lie in disparate impact doctrine. Case law and the Equal Employment Opportunity Commission's Uniform Guidelines, though, hold that a practice can be justified as a business necessity when its outcomes are predictive of future employment outcomes, and data mining is specifically designed to find such statistical correlations. Unless there is a reasonably practical way to demonstrate that these discoveries are spurious, Title VII would appear to bless its use, even though the correlations it discovers will often reflect historic patterns of prejudice, others' discrimination against members of protected groups, or flaws in the underlying data.
Addressing the sources of this unintentional discrimination and remedying the corresponding deficiencies in the law will be difficult technically, difficult legally, and difficult politically. There are a number of practical limits to what can be accomplished computationally. For example, when discrimination occurs because the data being mined is itself a result of past intentional discrimination, there is frequently no obvious method to adjust historical data to rid it of this taint. Corrective measures that alter the results of the data mining after it is complete would tread on legally and politically disputed terrain. These challenges for reform throw into stark relief the tension between the two major theories underlying antidiscrimination law: anticlassification and antisubordination. Finding a solution to big data's disparate impact will require more than best efforts to stamp out prejudice and bias; it will require a wholesale reexamination of the meanings of discrimination and fairness.
Towards a Critical Race Methodology in Algorithmic Fairness
We examine the way race and racial categories are adopted in algorithmic fairness frameworks. Current methodologies fail to adequately account for the socially constructed nature of race, instead adopting a conceptualization of race as a fixed attribute. Treating race as an attribute, rather than a structural, institutional, and relational phenomenon, can serve to minimize the structural aspects of algorithmic unfairness. In this work, we focus on the history of racial categories and turn to critical race theory and sociological work on race and ethnicity to ground conceptualizations of race for fairness research, drawing on lessons from public health, biomedical research, and social survey research. We argue that algorithmic fairness researchers need to take into account the multidimensionality of race, take seriously the processes of conceptualizing and operationalizing race, focus on social processes which produce racial inequality, and consider perspectives of those most affected by sociotechnical systems.
Comment: Conference on Fairness, Accountability, and Transparency (FAT* '20), January 27-30, 2020, Barcelona, Spain
