5,328 research outputs found

    The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning

    The nascent field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last several years, three formal definitions of fairness have gained prominence: (1) anti-classification, meaning that protected attributes---like race, gender, and their proxies---are not explicitly used to make decisions; (2) classification parity, meaning that common measures of predictive performance (e.g., false positive and false negative rates) are equal across groups defined by the protected attributes; and (3) calibration, meaning that conditional on risk estimates, outcomes are independent of protected attributes. Here we show that all three of these fairness definitions suffer from significant statistical limitations. Requiring anti-classification or classification parity can, perversely, harm the very groups they were designed to protect; and calibration, though generally desirable, provides little guarantee that decisions are equitable. In contrast to these formal fairness criteria, we argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce. Such a strategy, while not universally applicable, often aligns well with policy objectives; notably, this strategy will typically violate both anti-classification and classification parity. In practice, it requires significant effort to construct suitable risk estimates. One must carefully define and measure the targets of prediction to avoid retrenching biases in the data. But, importantly, one cannot generally address these difficulties by requiring that algorithms satisfy popular mathematical formalizations of fairness. By highlighting these challenges in the foundation of fair machine learning, we hope to help researchers and practitioners productively advance the area.
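    The criteria above have direct empirical counterparts. Below is a minimal sketch, in Python, of how two of them might be checked for a binary decision rule; the function names, the 0/1 label encoding, and the decile binning are illustrative assumptions, not code from the paper.

    ```python
    # Hedged sketch: measuring classification parity (via false positive
    # rates) and calibration for a binary decision rule. Assumes y_true
    # and y_pred are 0/1 NumPy arrays and risk is a score in [0, 1].
    import numpy as np

    def false_positive_rate(y_true, y_pred):
        """Share of true negatives that receive a positive decision."""
        negatives = y_true == 0
        return y_pred[negatives].mean() if negatives.any() else np.nan

    def classification_parity_gap(y_true, y_pred, group):
        """Spread in false positive rates across protected groups;
        classification parity (as defined above) implies a gap of 0."""
        rates = [false_positive_rate(y_true[group == g], y_pred[group == g])
                 for g in np.unique(group)]
        return max(rates) - min(rates)

    def calibration_curves(y_true, risk, group, bins=10):
        """Observed outcome rate per risk bin, per group; calibration
        holds when these curves coincide across groups."""
        edges = np.linspace(0.0, 1.0, bins + 1)
        curves = {}
        for g in np.unique(group):
            m = group == g
            idx = np.clip(np.digitize(risk[m], edges) - 1, 0, bins - 1)
            curves[g] = {b: y_true[m][idx == b].mean()
                         for b in range(bins) if (idx == b).any()}
        return curves
    ```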

    The Heterogeneity of Implicit Bias

    The term 'implicit bias' has very swiftly been incorporated into philosophical discourse. Our aim in this paper is to scrutinise the phenomena that fall under the rubric of implicit bias. The term is often used in a rather broad sense, to capture a range of implicit social cognitions, and this is useful for some purposes. However, we here articulate some of the important differences between phenomena identified as instances of implicit bias. We caution against ignoring these differences: it is likely they have considerable significance, not least for the sorts of normative recommendations being made concerning how to mitigate the bad effects of implicit bias.

    A Better Approach to Resolving Variable Selection Uncertainty in Meta Analysis for Benefits Transfer

    Because original high-quality non-market valuation studies can be expensive, perhaps prohibitively so, benefits transfer (BT) approaches are often used for valuing, e.g., the outputs of multifunctional agriculture. Here we focus on the use of BT functions, a preferred method, and address an under-appreciated problem – variable selection uncertainty – and demonstrate a conceptually superior method of resolving it. We show that the standard method of value-function BT, using the full estimated model, may generate BT values that are too sensitive to insignificant variables, whereas models reduced by backward elimination pay no attention to insignificant variables that may in fact have some influence on values. Rather than searching for the best single model for BT, Bayesian model averaging (BMA) is attentive to all of the variables that are a priori relevant, but uses posterior model probabilities to give systematically lower weight to less significant variables. We estimate a full value model for wetlands in the US, and then calculate BT values from the full model, a reduced model, and by BMA. Variable selection uncertainty is exemplified by regional variables for wetland location. Predicted values from the full model are quite sensitive to region; reduced models pay no attention to regional variables; and the BMA predictions are attentive to region but give it relatively low weight. However, the suite of insignificant RHS variables, taken together, has a non-trivial influence on BT values. BMA predicted values, like values from reduced models, have much narrower confidence intervals than values calculated from the full model.
    Keywords: Research Methods/Statistical Methods
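    To make the averaging step concrete, here is a minimal sketch of BMA prediction under an assumption of my own (not necessarily the authors' estimator): posterior model probabilities approximated by BIC weights, enumerated over all subsets of the uncertain regressors.

    ```python
    # Hedged BMA sketch: OLS over every subset of candidate regressors,
    # with predictions averaged using exp(-BIC/2) weights as a standard
    # approximation to posterior model probabilities. Illustrative only.
    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def _design(df, cols):
        """Intercept column plus the selected regressors."""
        const = pd.DataFrame({"const": np.ones(len(df))}, index=df.index)
        return pd.concat([const, df[cols]], axis=1)

    def bma_predict(X, y, X_new, candidate_cols):
        """Average OLS predictions over all 2^k candidate models."""
        preds, bics = [], []
        for k in range(len(candidate_cols) + 1):
            for subset in itertools.combinations(candidate_cols, k):
                cols = list(subset)
                fit = sm.OLS(y, _design(X, cols)).fit()
                preds.append(fit.predict(_design(X_new, cols)))
                bics.append(fit.bic)
        bics = np.asarray(bics)
        w = np.exp(-(bics - bics.min()) / 2.0)  # shift for stability
        w /= w.sum()
        return sum(wi * p for wi, p in zip(w, preds))
    ```

    Because a weakly supported regressor appears only in low-weight models, it still moves the transferred value, but with systematically reduced influence, which is the behavior the abstract describes.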

    Using Biomedical Technologies to Inform Economic Modeling: Challenges and Opportunities for Improving Analysis of Environmental Policies

    Advances in biomedical technology have irrevocably jarred open the black box of human decision making, offering social scientists the potential to validate, reject, refine and redefine the individual models of resource allocation that form the foundation of modern economics. In this paper we (1) provide a comprehensive overview of the biomedical methods that may be harnessed by economists and other social scientists to better understand the economic decision making process; (2) review research that utilizes these biomedical methods to illuminate fundamental aspects of the decision making process; and (3) summarize evidence from this literature concerning the basic tenets of neoclassical utility that are often invoked for positive welfare analysis of environmental policies. We conclude by raising questions about the future path of policy related research and the role biomedical technologies will play in defining that path.
    Keywords: neuroeconomics, neuroscience, brain imaging, genetics, welfare economics, utility theory, biology, decision making, preferences, Institutional and Behavioral Economics, Research Methods/Statistical Methods. JEL codes: D01, D03, D6, D87

    Why Simpler Computer Simulation Models Can Be Epistemically Better for Informing Decisions

    For computer simulation models to usefully inform climate risk management, uncertainties in model projections must be explored and characterized. Because doing so requires running the model many times…
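    The computational point is easy to illustrate. Below is a minimal sketch under assumptions of my own (a toy one-line "simple model" and invented input distributions): a cheap model can be run thousands of times, so the spread of its projections can actually be characterized.

    ```python
    # Hedged sketch: Monte Carlo uncertainty characterization, feasible
    # only because the "model" is cheap enough to run many times.
    import numpy as np

    rng = np.random.default_rng(42)
    N = 10_000  # affordable for a simple model, not for an expensive one

    def simple_model(sensitivity, forcing):
        """Toy stand-in: equilibrium warming implied by a radiative
        forcing, scaled by climate sensitivity per CO2 doubling
        (~3.7 W/m^2)."""
        return sensitivity * forcing / 3.7

    # Assumed (illustrative) input uncertainties.
    sensitivity = rng.uniform(1.5, 4.5, N)
    forcing = rng.normal(3.7, 0.4, N)
    warming = simple_model(sensitivity, forcing)

    lo, hi = np.percentile(warming, [5, 95])
    print(f"5th-95th percentile projected warming: {lo:.2f}-{hi:.2f} K")
    ```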

    Policy and planning for large infrastructure projects: problems, causes, cures

    This paper focuses on problems and their causes and cures in policy and planning for large infrastructure projects. First, it identifies as the main problem in major infrastructure development pervasive misinformation about the costs, benefits, and risks involved. A consequence of misinformation is massive cost overruns, benefit shortfalls, and waste. Second, the paper explores the causes of misinformation and finds that political-economic explanations best account for the available evidence: planners and promoters deliberately misrepresent costs, benefits, and risks in order to increase the likelihood that it is their projects, and not the competition's, that gain approval and funding. This results in the "survival of the unfittest," where often it is not the best projects that are built, but the most misrepresented ones. Finally, the paper presents measures for reforming policy and planning for large infrastructure projects, with a focus on better planning methods and changed governance structures, the latter being more important.
    Keywords: ICT Policy and Strategies, Economic Theory & Research, Science Education, Scientific Research & Science Parks, Poverty Monitoring & Analysis