
    Complementary network-based approaches for exploring genetic structure and functional connectivity in two vulnerable, endemic ground squirrels

    The persistence of small populations is influenced by genetic structure and functional connectivity. We used two network-based approaches to understand the persistence of the northern Idaho ground squirrel (Urocitellus brunneus) and the southern Idaho ground squirrel (U. endemicus), two congeners of conservation concern. These graph theoretic approaches are conventionally applied to social or transportation networks, but here are used to study population persistence and connectivity. Population graph analyses revealed that local extinction rapidly reduced connectivity for the southern species, while connectivity for the northern species could be maintained following local extinction. Results from gravity models complemented those of population graph analyses, and indicated that potential vegetation productivity and topography drove connectivity in the northern species. For the southern species, development (roads) and small-scale topography reduced connectivity, while greater potential vegetation productivity increased connectivity. Taken together, the results of the two network-based methods (population graph analyses and gravity models) suggest the need for increased conservation action for the southern species, and that management efforts have been effective at maintaining habitat quality throughout the current range of the northern species. To prevent further declines, we encourage the continuation of management efforts for the northern species, whereas conservation of the southern species requires active management and additional measures to curtail habitat fragmentation. Our combination of population graph analyses and gravity models can inform conservation strategies of other species exhibiting patchy distributions.
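The population-graph extinction analysis described in this abstract can be illustrated with a small sketch: treat populations as nodes and genetic-covariance links as edges, simulate a local extinction by removing a node, and test whether the remaining network stays connected. The graph below is a hypothetical toy, not the squirrels' actual network.

```python
from collections import deque

def is_connected(adj, removed=frozenset()):
    """BFS over the graph, skipping 'removed' nodes; True if the rest is one component."""
    nodes = [n for n in adj if n not in removed]
    if not nodes:
        return True
    seen = {nodes[0]}
    queue = deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in removed and v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(nodes)

# Hypothetical population graph: nodes are populations, edges are genetic links.
graph = {
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"},
    "D": {"C", "E"}, "E": {"D"},
}

# Which single local extinctions would fragment the network?
cut_nodes = [n for n in graph if not is_connected(graph, removed={n})]
print(cut_nodes)  # ['C', 'D'] -- losing either splits the remaining populations
```

A species whose graph has many such cut nodes (like the southern squirrel in the study) loses connectivity quickly under local extinction; one with redundant pathways does not.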

    Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search

    We present a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems. We first propose complementary measures to quantify bias with respect to protected attributes such as gender and age. We then present algorithms for computing fairness-aware re-ranking of results. For a given search or recommendation task, our algorithms seek to achieve a desired distribution of top ranked results with respect to one or more protected attributes. We show that such a framework can be tailored to achieve fairness criteria such as equality of opportunity and demographic parity depending on the choice of the desired distribution. We evaluate the proposed algorithms via extensive simulations over different parameter choices, and study the effect of fairness-aware ranking on both bias and utility measures. We finally present the online A/B testing results from applying our framework towards representative ranking in LinkedIn Talent Search, and discuss the lessons learned in practice. Our approach resulted in tremendous improvement in the fairness metrics (nearly three fold increase in the number of search queries with representative results) without affecting the business metrics, which paved the way for deployment to 100% of LinkedIn Recruiter users worldwide. Ours is the first large-scale deployed framework for ensuring fairness in the hiring domain, with the potential positive impact for more than 630M LinkedIn members. Comment: This paper has been accepted for publication at ACM KDD 201
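The core re-ranking idea, enforcing a desired distribution over a protected attribute in every top-ranked prefix, can be sketched as follows. This is a simplified greedy illustration, not the paper's production algorithm; the candidate data and target shares are hypothetical.

```python
import math

def fair_rerank(candidates, target):
    """Greedy fairness-aware re-ranking (illustrative sketch).

    candidates: (id, score, attr) tuples; target: attr value -> desired share.
    At each rank k, any attribute value represented fewer than floor(target * k)
    times gets priority; otherwise the best-scoring remaining candidate is placed.
    """
    remaining = sorted(candidates, key=lambda c: -c[1])
    counts = {a: 0 for a in target}
    ranking = []
    while remaining:
        k = len(ranking) + 1
        needy = [a for a in target if counts[a] < math.floor(target[a] * k)]
        pick = next((c for c in remaining if c[2] in needy), remaining[0])
        remaining.remove(pick)
        counts[pick[2]] += 1
        ranking.append(pick)
    return ranking

# Hypothetical candidates: a pure score ordering would place all "M" first.
cands = [("a", 9, "M"), ("b", 8, "M"), ("c", 7, "M"), ("d", 6, "F"), ("e", 5, "F")]
print([c[0] for c in fair_rerank(cands, {"M": 0.5, "F": 0.5})])  # ['a', 'd', 'b', 'e', 'c']
```

Choosing the target distribution is what lets one scheme express different criteria: matching the qualified pool approximates equality of opportunity, while matching population shares approximates demographic parity.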

    Fair Inputs and Fair Outputs: The Incompatibility of Fairness in Privacy and Accuracy

    Fairness concerns about algorithmic decision-making systems have been mainly focused on the outputs (e.g., the accuracy of a classifier across individuals or groups). However, one may additionally be concerned with fairness in the inputs. In this paper, we propose and formulate two properties regarding the inputs of (features used by) a classifier. In particular, we claim that fair privacy (whether individuals are all asked to reveal the same information) and need-to-know (whether users are only asked for the minimal information required for the task at hand) are desirable properties of a decision system. We explore the interaction between these properties and fairness in the outputs (fair prediction accuracy). We show that for an optimal classifier these three properties are in general incompatible, and we explain what common properties of data make them incompatible. Finally we provide an algorithm to verify if the trade-off between the three properties exists in a given dataset, and use the algorithm to show that this trade-off is common in real data.

    Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

    Automated data-driven decision making systems are increasingly being used to assist, or even replace humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy. Comment: To appear in Proceedings of the 26th International World Wide Web Conference (WWW), 2017. Code available at: https://github.com/mbilalzafar/fair-classificatio
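Disparate mistreatment, as defined above, is a property of error rates rather than of decisions themselves. A minimal sketch of the measurement side follows; the labels and predictions are toy values, and the paper's actual contribution, folding such constraints into training as convex-concave constraints, is not shown here.

```python
def mistreatment_gaps(y_true, y_pred, group):
    """False-positive-rate and false-negative-rate gaps between two groups:
    the quantities a disparate-mistreatment constraint drives toward zero."""
    def rates(g):
        idx = [i for i, z in enumerate(group) if z == g]
        neg = [i for i in idx if y_true[i] == 0]  # true negatives + false positives
        pos = [i for i in idx if y_true[i] == 1]  # true positives + false negatives
        fpr = sum(y_pred[i] for i in neg) / len(neg)
        fnr = sum(1 - y_pred[i] for i in pos) / len(pos)
        return fpr, fnr
    (fpr_a, fnr_a), (fpr_b, fnr_b) = (rates(g) for g in sorted(set(group)))
    return abs(fpr_a - fpr_b), abs(fnr_a - fnr_b)

# Toy labels and predictions for two groups "a" and "b".
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(mistreatment_gaps(y_true, y_pred, group))  # (0.5, 0.5)
```

Here the classifier is equally accurate on both groups overall, yet group "a" absorbs all the false negatives and group "b" all the false positives: exactly the asymmetry the paper's constraints target.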

    Towards Guidelines for Assessing Qualities of Machine Learning Systems

    Nowadays, systems containing components based on machine learning (ML) methods are becoming more widespread. In order to ensure the intended behavior of a software system, there are standards that define necessary quality aspects of the system and its components (such as ISO/IEC 25010). Due to the different nature of ML, we have to adjust quality aspects or add additional ones (such as trustworthiness) and be very precise about which aspect is really relevant for which object of interest (such as completeness of training data), and how to objectively assess adherence to quality requirements. In this article, we present the construction of a quality model (i.e., evaluation objects, quality aspects, and metrics) for an ML system based on an industrial use case. This quality model enables practitioners to specify and assess quality requirements for such kinds of ML systems objectively. In the future, we want to learn how the term quality differs between different types of ML systems and come up with general guidelines for specifying and assessing qualities of ML systems. Comment: Has been accepted at the 13th International Conference on the Quality of Information and Communications Technology QUATIC 2020 (https://2020.quatic.org/). QUATIC 2020 proceedings will be included in a volume of Springer CCIS Series (Communications in Computer and Information Science).

    Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality

    As virtually all aspects of our lives are increasingly impacted by algorithmic decision making systems, it is incumbent upon us as a society to ensure such systems do not become instruments of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. We consider the problem of determining whether the decisions made by such systems are discriminatory, through the lens of causal models. We introduce two definitions of group fairness grounded in causality: fair on average causal effect (FACE), and fair on average causal effect on the treated (FACT). We use the Rubin-Neyman potential outcomes framework for the analysis of cause-effect relationships to robustly estimate FACE and FACT. We demonstrate the effectiveness of our proposed approach on synthetic data. Our analyses of two real-world data sets, the Adult income data set from the UCI repository (with gender as the protected attribute), and the NYC Stop and Frisk data set (with race as the protected attribute), show that the evidence of discrimination obtained by FACE and FACT, or lack thereof, is often in agreement with the findings from other studies. We further show that FACT, being somewhat more nuanced compared to FACE, can yield findings of discrimination that differ from those obtained using FACE. Comment: 7 pages, 2 figures, 2 tables. To appear in Proceedings of the International Conference on World Wide Web (WWW), 201
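Under the strong, purely illustrative assumption of no confounding (the paper itself relies on the Rubin-Neyman potential-outcomes machinery precisely to avoid this shortcut), a FACE-style quantity reduces to a difference in mean outcomes across the protected attribute. The data below are toy values.

```python
def face_estimate(outcomes, attr):
    """Naive difference-in-means estimate of the average causal effect of a
    binary protected attribute on the decision outcome (assumes no confounding,
    so it is only a sketch of the quantity FACE formalizes causally)."""
    y1 = [y for y, a in zip(outcomes, attr) if a == 1]
    y0 = [y for y, a in zip(outcomes, attr) if a == 0]
    return sum(y1) / len(y1) - sum(y0) / len(y0)

# Toy data: outcome 1 = favourable decision, attr = protected group membership.
out = [1, 1, 0, 1, 0, 0, 0, 1]
grp = [1, 1, 1, 1, 0, 0, 0, 0]
print(face_estimate(out, grp))  # 0.5
```

A value far from zero flags a possible average causal effect of the protected attribute; the causal framing matters because the same gap computed on confounded observational data can be entirely spurious.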

    Accountable Algorithms

    Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for IRS audit, grant or deny immigration visas, and more. The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decisionmakers and often fail when applied to computers instead. For example, how do you judge the intent of a piece of software? Because automated decision systems can return potentially incorrect, unjustified, or unfair results, additional approaches are needed to make such systems accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness. We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the issues analyzing code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it discloses private information or permits tax cheats or terrorists to game the systems determining audits or security screening. The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities—subtler and more flexible than total transparency—to design decisionmaking algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of automated decisions, but also—in certain cases—the governance of decisionmaking in general. 
The implicit (or explicit) biases of human decisionmakers can be difficult to find and root out, but we can peer into the “brain” of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterward. The technological tools introduced in this Article apply widely. They can be used in designing decisionmaking processes from both the private and public sectors, and they can be tailored to verify different characteristics as desired by decisionmakers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also engender policy discussions and closer looks at legal standards. As such, these tools have far-reaching implications throughout law and society. Part I of this Article provides an accessible and concise introduction to foundational computer science techniques that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decisions or the processes by which the decisions were reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department’s diversity visa lottery. In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. 
We also show how automated decisionmaking may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. And lastly, in Part IV, we propose an agenda to further synergistic collaboration between computer science, law, and policy to advance the design of automated decision processes for accountability.
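The procedural-regularity idea, announcing a decision rule up front so its consistent application can be verified afterward, can be loosely illustrated with a hash commitment. The Article's actual toolkit is richer (cryptographic commitments paired with zero-knowledge proofs, so the rule can stay secret while compliance is proven); the rule and salt below are hypothetical.

```python
import hashlib

def commit(rule_source: str, salt: bytes) -> str:
    """Binding commitment to a decision rule: publish the digest before deciding."""
    return hashlib.sha256(salt + rule_source.encode()).hexdigest()

# Hypothetical decision rule, fixed before the process runs.
rule = "def decide(income, debt):\n    return income - debt > 10_000\n"
salt = b"hypothetical-public-salt"

digest = commit(rule, salt)  # published in advance of any decision

# Later, an auditor shown the rule and salt checks it against the pre-announced
# commitment, evidencing that decisions were made under the announced rule.
assert commit(rule, salt) == digest
print(len(digest))  # 64 hex characters for SHA-256
```

Because SHA-256 is collision-resistant, the decisionmaker cannot later substitute a different rule without the digest mismatch exposing the swap; this is the verification-without-disclosure flavor the Article argues can outperform raw source-code transparency.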

    Estimated clinical outcomes and cost-effectiveness associated with provision of addiction treatment in US primary care clinics

    IMPORTANCE: US primary care practitioners (PCPs) are the largest clinical workforce, but few provide addiction care. Primary care is a practical place to expand addiction services, including buprenorphine and harm reduction kits, yet the clinical outcomes and health care sector costs are unknown. OBJECTIVE: To estimate the long-term clinical outcomes, costs, and cost-effectiveness of integrated buprenorphine and harm reduction kits in primary care for people who inject opioids. DESIGN, SETTING, AND PARTICIPANTS: In this modeling study, the Reducing Infections Related to Drug Use Cost-Effectiveness (REDUCE) microsimulation model, which tracks serious injection-related infections, overdose, hospitalization, and death, was used to examine the following treatment strategies: (1) PCP services with external referral to addiction care (status quo), (2) PCP services plus onsite buprenorphine prescribing with referral to offsite harm reduction kits (BUP), and (3) PCP services plus onsite buprenorphine prescribing and harm reduction kits (BUP plus HR). Model inputs were derived from clinical trials and observational cohorts, and costs were discounted annually at 3%. The cost-effectiveness was evaluated over a lifetime from the modified health care sector perspective, and sensitivity analyses were performed to address uncertainty. Model simulation began January 1, 2021, and ran for the entire lifetime of the cohort. MAIN OUTCOMES AND MEASURES: Life-years (LYs), hospitalizations, mortality from sequelae (overdose, severe skin and soft tissue infections, and endocarditis), costs, and incremental cost-effectiveness ratios (ICERs). RESULTS: The simulated cohort included 2.25 million people and reflected the age and gender of US persons who inject opioids. 
Status quo resulted in 6.56 discounted LYs at a discounted cost of $203 500 per person (95% credible interval, $203 000-$222 000). Each strategy extended discounted life expectancy: BUP by 0.16 years and BUP plus HR by 0.17 years. Compared with status quo, BUP plus HR reduced sequelae-related mortality by 33%. The mean discounted lifetime cost per person of BUP and BUP plus HR were more than that of the status quo strategy. The dominating strategy was BUP plus HR. Compared with status quo, BUP plus HR was cost-effective (ICER, $34 400 per LY). During a 5-year time horizon, BUP plus HR cost an individual PCP practice approximately $13 000. CONCLUSIONS AND RELEVANCE: This modeling study of integrated addiction service in primary care found improved clinical outcomes and modestly increased costs. The integration of addiction service into primary care practices should be a health care system priority.
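The ICER arithmetic behind figures like the $34 400 per life-year reported above is incremental cost divided by incremental effect. The comparator cost below is hypothetical, back-computed only to reproduce the reported ratio from the abstract's status-quo values.

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Status-quo cost and life-years are taken from the abstract; the BUP plus HR
# cost is a hypothetical value chosen to illustrate the reported ~$34 400/LY.
status_quo_cost, status_quo_lys = 203_500, 6.56
bup_hr_cost, bup_hr_lys = 203_500 + 0.17 * 34_400, 6.56 + 0.17

print(round(icer(bup_hr_cost, status_quo_cost, bup_hr_lys, status_quo_lys)))  # 34400
```

A strategy "dominates" another when it is both cheaper and more effective, which is why the abstract can call BUP plus HR dominating relative to BUP while still quoting an ICER against status quo.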

    Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach

    Explanations--a form of post-hoc interpretability--play an instrumental role in making systems accessible as AI continues to proliferate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems. In particular, we advocate for a reflective sociotechnical approach. We illustrate HCXAI through a case study of an explanation system for non-technical end-users that shows how technical advancements and the understanding of human factors co-evolve. Building on the case study, we lay out open research questions pertaining to further refining our understanding of "who" the human is and extending beyond 1-to-1 human-computer interactions. Finally, we propose that a reflective HCXAI paradigm--mediated through the perspective of Critical Technical Practice and supplemented with strategies from HCI, such as value-sensitive design and participatory design--not only helps us understand our intellectual blind spots, but it can also open up new design and research spaces. Comment: In Proceedings of HCI International 2020: 22nd International Conference On Human-Computer Interaction

    Simulated cost-effectiveness and long-term clinical outcomes of addiction care and antibiotic therapy strategies for patients with injection drug use-associated infective endocarditis

    Importance: Emerging evidence supports the use of outpatient parenteral antimicrobial therapy (OPAT) and, in many cases, partial oral antibiotic therapy for the treatment of injection drug use-associated infective endocarditis (IDU-IE); however, long-term outcomes and cost-effectiveness remain unknown. Objective: To compare the added value of inpatient addiction care services and the cost-effectiveness and clinical outcomes of alternative antibiotic treatment strategies for patients with IDU-IE. Design, Setting, and Participants: This decision analytical modeling study used a validated microsimulation model to compare antibiotic treatment strategies for patients with IDU-IE. Model inputs were derived from clinical trials and observational cohort studies. The model included all patients with injection opioid drug use (N = 5 million) in the US who were eligible to receive OPAT either in the home or at a postacute care facility. Costs were annually discounted at 3%. Cost-effectiveness was evaluated from a health care sector perspective over a lifetime starting in 2020. Probabilistic sensitivity, scenario, and threshold analyses were performed to address uncertainty. Interventions: The model simulated 4 treatment strategies: (1) 4 to 6 weeks of inpatient intravenous (IV) antibiotic therapy along with opioid detoxification (usual care strategy), (2) 4 to 6 weeks of inpatient IV antibiotic therapy along with inpatient addiction care services that offered medication for opioid use disorder (usual care/addiction care strategy), (3) 3 weeks of inpatient IV antibiotic therapy along with addiction care services followed by OPAT (OPAT strategy), and (4) 3 weeks of inpatient IV antibiotic therapy along with addiction care services followed by partial oral antibiotic therapy (partial oral antibiotic strategy). 
Main Outcomes and Measures: Mean percentage of patients completing treatment for IDU-IE, deaths associated with IDU-IE, life expectancy (measured in life-years [LYs]), mean cost per person, and incremental cost-effectiveness ratios (ICERs). Results: All modeled scenarios were initialized with 5 million individuals (mean age, 42 years; range, 18-64 years; 70% male) who had a history of injection opioid drug use. The usual care strategy resulted in 18.63 LYs at a cost of $416 570 per person, with 77.6% of hospitalized patients completing treatment. Life expectancy was extended by each alternative strategy. The partial oral antibiotic strategy yielded the highest treatment completion rate (80.3%) compared with the OPAT strategy (78.8%) and the usual care/addiction care strategy (77.6%). The OPAT strategy was the least expensive at $412 150 per person. Compared with the OPAT strategy, the partial oral antibiotic strategy had an ICER of $163 370 per LY. Increasing IDU-IE treatment uptake and decreasing treatment discontinuation made the partial oral antibiotic strategy more cost-effective compared with the OPAT strategy. When assuming that all patients with IDU-IE were eligible to receive partial oral antibiotic therapy, the strategy was cost saving and resulted in 0.0247 additional discounted LYs. When treatment discontinuation was decreased from 3.30% to 2.65% per week, the partial oral antibiotic strategy was cost-effective compared with OPAT at the $100 000 per LY threshold.
Conclusions and Relevance: In this decision analytical modeling study, incorporation of OPAT or partial oral antibiotic approaches along with addiction care services for the treatment of patients with IDU-IE was associated with increases in the number of people completing treatment, decreases in mortality, and savings in cost compared with the usual care strategy of providing inpatient IV antibiotic therapy alone.