
    Study of bias and perceptions of merit in the high-tech labor market

    Thesis: S.M., Massachusetts Institute of Technology, Department of Comparative Media Studies, 2015. Cataloged from PDF version of thesis. Includes bibliographical references (pages 177-183).

    In recent years, a significant amount of resources and attention has been directed at increasing the diversity of the high-tech workforce in the United States. Generally speaking, the underrepresentation of minorities and women in tech has been understood as an "educational pipeline problem": for a variety of reasons, these groups lack the social supports and resources needed to develop marketable technical literacies. In this thesis I complicate the educational pipeline narrative by taking a close look at the perspectives and practices of three different groups. First, I explore widespread assumptions and recruitment practices found in the tech industry, based on interviews I conducted with over a dozen leaders and founders of tech companies. I found that widespread notions of what merit looks like (in terms of prior work experience and educational pedigree) have given rise to insular hiring practices in tech. Second, I offer an in-depth examination of the risks and opportunities related to an emerging set of practices termed "algorithmic recruitment," which combines machine learning with big data sets in order to evaluate technical talent. Finally, I analyze the strategies adopted by a non-profit called CODE2040 in order to facilitate structural changes in how tech recruits talent to include a more diverse set of qualified applicants. I conclude by offering a more robust conceptualization of diversity and its value in the tech sector, as well as some specific ways to increase tech's diversity in the future.

    by Chelsea Barabas. S.M.

    Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment

    Actuarial risk assessments might be unduly perceived as a neutral way to counteract implicit bias and increase the fairness of decisions made at almost every juncture of the criminal justice system, from pretrial release to sentencing, parole and probation. In recent times these assessments have come under increased scrutiny, as critics claim that the statistical techniques underlying them might reproduce existing patterns of discrimination and historical biases that are reflected in the data. Much of this debate is centered around competing notions of fairness and predictive accuracy, resting on the contested use of variables that act as "proxies" for characteristics legally protected against discrimination, such as race and gender. We argue that a core ethical debate surrounding the use of regression in risk assessments is not simply one of bias or accuracy. Rather, it is one of purpose. If machine learning is operationalized merely in the service of predicting individual future crime, then it becomes difficult to break cycles of criminalization that are driven by the iatrogenic effects of the criminal justice system itself. We posit that machine learning should not be used for prediction, but rather to surface covariates that are fed into a causal model for understanding the social, structural and psychological drivers of crime. We propose shifting the application of machine learning and causal inference away from predicting risk scores and toward risk mitigation.

    Legal and ethical issues in the use of telepresence robots: best practices and toolkit
