18 research outputs found

    The Intuitive Appeal of Explainable Machines

    Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.
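    To make the Article's central distinction concrete, the sketch below (not from the Article) produces a complete description of a model's rules, which addresses inscrutability, while saying nothing about why the rules are what they are. The feature names and data are hypothetical, and scikit-learn is assumed only for brevity.

    ```python
    # Minimal sketch (not from the Article): describing a model's rules is not
    # the same as explaining why the rules are what they are.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Hypothetical credit-style feature names attached to synthetic data.
    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    feature_names = ["income", "tenure_months", "num_inquiries", "utilization"]

    model = LogisticRegression().fit(X, y)

    # A "sensible description of the rules": every learned weight is visible.
    for name, weight in zip(feature_names, model.coef_[0]):
        print(f"{name}: weight = {weight:+.2f}")

    # This answers "what are the rules?" but not "why are the rules what they
    # are?", the question the Article says intuition usually fills in.
    ```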

    Clock division as a power saving strategy in a system constrained by high transmission frequency and low data rate

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 63). Systems are often restricted to have higher transmission frequency than required by their data rates. Possible constraints include channel attenuation, power requirements, and backward compatibility. As a result these systems have unused bandwidth, leading to inefficient use of power. In this thesis, I propose to slow the internal operating frequency of a cochlear implant receiver in order to reduce the internal power consumption by more than a factor of ten. I have created a new data encoding scheme, called "N-π Shift Encoding", which makes clock division a viable solution. This clock division technique can be applied to other similarly constrained systems. by Andrew D. Selbst. M.Eng.
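    As a rough illustration of the power argument in this abstract, the sketch below applies the standard dynamic CMOS switching-power relation (P ~ alpha * C * Vdd^2 * f): running internal logic at one tenth of the transmission frequency cuts switching power by roughly ten. The divider ratio echoes the abstract's factor of ten; every numeric value (frequency, capacitance, supply voltage, activity factor) is a hypothetical placeholder, not a figure from the thesis.

    ```python
    # Back-of-the-envelope sketch (not from the thesis): dynamic CMOS switching
    # power scales roughly as alpha * C * Vdd^2 * f, so an internal clock at
    # f / N draws about N times less switching power. All values hypothetical.

    def dynamic_power_watts(activity: float, capacitance_farads: float,
                            vdd_volts: float, freq_hz: float) -> float:
        """Approximate dynamic switching power: alpha * C * Vdd^2 * f."""
        return activity * capacitance_farads * vdd_volts ** 2 * freq_hz

    TX_FREQ_HZ = 10e6   # hypothetical external transmission frequency
    DIVIDE_BY = 10      # divider ratio echoing the abstract's "factor of ten"

    p_full = dynamic_power_watts(0.1, 1e-10, 1.8, TX_FREQ_HZ)
    p_divided = dynamic_power_watts(0.1, 1e-10, 1.8, TX_FREQ_HZ / DIVIDE_BY)

    print(f"internal logic at full rate: {p_full * 1e6:.1f} uW")
    print(f"internal logic at 1/{DIVIDE_BY} rate: {p_divided * 1e6:.1f} uW "
          f"(~{p_full / p_divided:.0f}x less switching power)")
    ```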

    Towards a Critical Race Methodology in Algorithmic Fairness

    We examine the way race and racial categories are adopted in algorithmic fairness frameworks. Current methodologies fail to adequately account for the socially constructed nature of race, instead adopting a conceptualization of race as a fixed attribute. Treating race as an attribute, rather than a structural, institutional, and relational phenomenon, can serve to minimize the structural aspects of algorithmic unfairness. In this work, we focus on the history of racial categories and turn to critical race theory and sociological work on race and ethnicity to ground conceptualizations of race for fairness research, drawing on lessons from public health, biomedical research, and social survey research. We argue that algorithmic fairness researchers need to take into account the multidimensionality of race, take seriously the processes of conceptualizing and operationalizing race, focus on social processes which produce racial inequality, and consider perspectives of those most affected by sociotechnical systems. Comment: Conference on Fairness, Accountability, and Transparency (FAT* '20), January 27-30, 2020, Barcelona, Spain.

    Disparate Impact in Big Data Policing

    Data-driven decision systems are taking over. No institution in society seems immune from the enthusiasm that automated decision-making generates, including, and perhaps especially, the police. Police departments are increasingly deploying data mining techniques to predict, prevent, and investigate crime. But all data mining systems have the potential for adverse impacts on vulnerable communities, and predictive policing is no different. Determining individuals' threat levels by reference to commercial and social data can improperly link dark skin to higher threat levels or to greater suspicion of having committed a particular crime. Crime mapping based on historical data can lead to more arrests for nuisance crimes in neighborhoods primarily populated by people of color. These effects are an artifact of the technology itself, and will likely occur even assuming good faith on the part of the police departments using it. Meanwhile, predictive policing is sold in part as a neutral method to counteract unconscious biases when it is not simply sold to cash-strapped departments as a more cost-efficient way to do policing. The degree to which predictive policing systems have these discriminatory results is unclear to the public and to the police themselves, largely because there is no incentive in place for a department focused solely on crime control to spend resources asking the question. This is a problem for which existing law does not provide a solution. Finding that neither the typical constitutional modes of police regulation nor a hypothetical anti-discrimination law would provide a solution, this Article turns toward a new regulatory proposal centered on algorithmic impact statements. Modeled on the environmental impact statements of the National Environmental Policy Act, algorithmic impact statements would require police departments to evaluate the efficacy and potential discriminatory effects of all available choices for predictive policing technologies. The regulation would also allow the public to weigh in through a notice-and-comment process. Such a regulation would fill the knowledge gap that makes future policy discussions about the costs and benefits of predictive policing all but impossible. Being primarily procedural, it would not necessarily curtail a department determined to discriminate, but by forcing departments to consider the question and allowing society to understand the scope of the problem, it is a first step towards solving the problem and determining whether further intervention is required.

    The Fallacy of AI Functionality

    Deployed AI systems often do not work. They can be constructed haphazardly, deployed indiscriminately, and promoted deceptively. However, despite this reality, scholars, the press, and policymakers pay too little attention to functionality. This leads to technical and policy solutions focused on "ethical" or value-aligned deployments, often skipping over the prior question of whether a given system functions, or provides any benefits at all. To describe the harms of various types of functionality failures, we analyze a set of case studies to create a taxonomy of known AI functionality issues. We then point to policy and organizational responses that are often overlooked and become more readily available once functionality is drawn into focus. We argue that functionality is a meaningful AI policy challenge, operating as a necessary first step towards protecting affected communities from algorithmic harm.