14 research outputs found

    Is public accountability possible in algorithmic policymaking? The case for a public watchdog

    Although algorithms have become an increasingly important tool for policymakers, little is known about how they are used in practice and how they work, even amongst the experts tasked with using them. Drawing on research into the use of algorithmic models in the UK and Dutch governments, Daan Kolkman argues that the inherent complexity of algorithms makes attempts at transparency difficult, and that a dedicated watchdog is required to achieve public accountability for the role algorithms play in society.

    F**k the algorithm?: what the world can learn from the UK’s A-level grading fiasco

    The A-level grading fiasco in the UK led to public outrage over algorithmic bias. Algorithmic bias is a well-established problem that data professionals have sought to address by making their algorithms more explainable. However, Dr Daan Kolkman argues that the emergence of a “critical audience” during the A-level grading fiasco offers a model for a more effective means of countering bias and intellectual lock-in in the development of algorithms.

    Is firm growth random? A machine learning perspective

    This study contributes to the firm growth debate by applying machine learning. We compare a prominent machine learning technique – random forest analysis (RFA) – to traditional regression in terms of goodness-of-fit on a dataset of 168,055 firms from Belgium and the Netherlands. For each of these firms, we have one to six years of historical data covering demographic and financial information. The data show high variation in firm growth rates, which is difficult to capture with traditional linear regression (R2 in the range of 0.05–0.06). RFA fares three to four times better, achieving a much higher goodness-of-fit (R2 of 0.16–0.23). This indicates that firm growth may be less random than traditional regression analysis suggests. Given the modest selection of variables in our dataset, this demonstrates that machine learning can be of value to firm growth research.
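    The comparison reported above can be illustrated with a minimal sketch. The snippet below is not the study's code: the dataset is synthetic, and the predictor names (firm_age, employees, total_assets, solvency_ratio) and the growth signal are illustrative assumptions. It only shows how out-of-sample R2 for a random forest and a linear regression might be compared.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5_000  # placeholder sample size; the study covers 168,055 firms

        # Hypothetical demographic and financial predictors
        X = pd.DataFrame({
            "firm_age": rng.integers(1, 50, n),
            "employees": rng.lognormal(2.0, 1.0, n),
            "total_assets": rng.lognormal(12.0, 1.5, n),
            "solvency_ratio": rng.uniform(0.0, 1.0, n),
        })
        # Synthetic growth rate: a weak, partly non-linear signal plus heavy noise
        y = (0.02 * np.log1p(X["total_assets"])
             - 0.01 * X["firm_age"]
             + 0.5 * (X["solvency_ratio"] > 0.7)
             + rng.normal(0.0, 1.0, n))

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Fit both models and compare goodness-of-fit on held-out firms
        ols = LinearRegression().fit(X_train, y_train)
        rfa = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

        print("OLS R2:", r2_score(y_test, ols.predict(X_test)))
        print("RFA R2:", r2_score(y_test, rfa.predict(X_test)))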

    The (in)credibility of algorithmic models to non-experts

    The rapid development and dissemination of data analysis techniques permit the creation of ever more intricate algorithmic models. Such models are simultaneously the vehicle and outcome of quantification practices, and they embody a worldview with associated norms and values. A set of specialist skills is required to create, use, or interpret algorithmic models. The mechanics of an algorithmic model may be hard to comprehend for experts and can be virtually incomprehensible to non-experts. This is of consequence because such black-boxing can introduce power asymmetries and may obscure bias. This paper explores the practices through which experts and non-experts determine the credibility of algorithmic models. It concludes that (1) transparency to experts and non-experts alike is at best problematic and at worst unattainable; (2) authoritative models may come to dictate what types of policies are considered feasible; and (3) several of the advantages attributed to the use of quantifications do not hold in policy-making contexts.

    The usefulness of algorithmic models in policy making

    Governments increasingly use algorithmic models to inform their policy-making process. Many suggest that employing such quantifications will lead to more efficient, more effective, or otherwise better-quality policy making. Yet it remains unclear to what extent these benefits materialize and, if so, how they are brought about. This paper draws on the sociology and policy science literature to study how algorithmic models, a particular type of quantification, are used in policy analysis. It presents the outcomes of 38 unstructured interviews with data scientists, policy analysts, and policy makers who work with algorithmic models in government. Based on an in-depth analysis of these interviews, I conclude that the usefulness of algorithmic models in policy analysis is best understood in terms of the commensurability of these quantifications. These broad communicative and organizational benefits, however, can only be brought about if algorithmic models are handled with care; otherwise, they may propagate bias, exclude particular social groups, and entrench existing worldviews.

    Justitia ex machina: The impact of an AI system on legal decision-making and discretionary authority

    Governments increasingly use algorithms to inform or supplant decision-making. Artificial intelligence (AI) systems in particular are considered objective, consistent, and efficient decision-makers, but they have also been shown to be fallible. Furthermore, the adoption of AI in government is fraught with challenges that are only partly understood and rarely studied in practice. In this paper, we draw on science and technology studies and human-computer interaction and report on a critical case study of the development and use of an AI system for processing traffic violation appeals at a Dutch court. Whereas much empirical work on algorithms in practice is primarily observational in nature, we employ a canonical action research approach and actively participate in the development of the AI system. We draw on data collected in the form of interviews, observations, documents, and a user experiment. Based on this material we provide: (1) an in-depth empirical account of the tensions between street-level bureaucrats, screen-level bureaucrats, and street-level algorithms; (2) an analysis of the differences between decisions made by, with, and without the AI system, which finds that use of the AI system affects the decisions made by legal experts; and (3) a confirmation of earlier work finding that AI systems are best applied in support of legal decision-making, together with a demonstration of how the decision-making process for traffic violation cases may mitigate some of the risks of algorithmic decision-making.

    Towards estimating happiness using social sensing: Perspectives on organizational social network analysis

    Social sensing provides many opportunities for observing human behavior using objective (sensor) measurements. This paper describes an approach for analyzing organizational social networks that capture face-to-face contacts between individuals. Furthermore, we outline perspectives and scenarios for an extended analysis aimed at estimating happiness in the context of organizational social networks.
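    As an illustration of the kind of contact-network analysis described above, the sketch below builds a weighted graph from a hypothetical wearable-sensor contact log. The data, the (person_a, person_b, minutes) tuple format, and the use of weighted degree as a social-embeddedness indicator are assumptions for illustration, not the paper's method.

        import networkx as nx

        # Hypothetical face-to-face contact log: (person_a, person_b, minutes together)
        contacts = [
            ("alice", "bob", 35),
            ("alice", "carol", 10),
            ("bob", "carol", 20),
            ("carol", "dave", 5),
        ]

        # Build an undirected contact graph, accumulating contact time as edge weight
        G = nx.Graph()
        for a, b, minutes in contacts:
            if G.has_edge(a, b):
                G[a][b]["weight"] += minutes
            else:
                G.add_edge(a, b, weight=minutes)

        # Weighted degree (total contact minutes per person) as one possible
        # indicator of social embeddedness that could feed a happiness estimate
        strength = dict(G.degree(weight="weight"))
        print(strength)  # {'alice': 45, 'bob': 55, 'carol': 35, 'dave': 5}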