
    Algorithmic Debugging of Real-World Haskell Programs: Deriving Dependencies from the Cost Centre Stack

    Existing algorithmic debuggers for Haskell require a transformation of all modules in a program, even libraries that the user does not want to debug and which may use language features not supported by the debugger. This is a pity, because a promising approach to debugging is therefore not applicable to many real-world programs. We use the cost centre stack from the Glasgow Haskell Compiler profiling environment together with runtime value observations as provided by the Haskell Object Observation Debugger (HOOD) to collect enough information for algorithmic debugging. Program annotations are needed in suspected modules only. With this technique algorithmic debugging is applicable to a much larger set of Haskell programs. This demonstrates that for functional languages in general a simple stack trace extension is useful to support tasks such as profiling and debugging.
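
    To make the combination concrete, here is a minimal, hypothetical sketch (not code from the paper) of the two ingredients the abstract pairs: a GHC cost-centre annotation (an SCC pragma, meaningful when the program is compiled with profiling enabled) and an observe call from HOOD's Debug.Hood.Observe module, which records the values flowing through a suspected function. The example function and names are illustrative assumptions.

        import Debug.Hood.Observe  -- from the "hood" package on Hackage

        -- A suspected function: the SCC pragma adds a cost centre to GHC's
        -- profiling stack, and "observe" records the arguments and results
        -- the function is actually applied to at runtime.
        insertSorted :: Int -> [Int] -> [Int]
        insertSorted x ys =
          {-# SCC "insertSorted" #-} observe "insertSorted" go x ys
          where
            go v []     = [v]
            go v (z:zs) | v <= z    = v : z : zs
                        | otherwise = z : go v zs

        -- runO runs the program and prints the collected observations.
        main :: IO ()
        main = runO (print (foldr insertSorted [] [3, 1, 2]))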

    Coordinating views for data visualisation and algorithmic profiling

    A number of researchers have designed visualisation systems that consist of multiple components, through which data and interaction commands flow. Such multistage (hybrid) models can be used to reduce algorithmic complexity, and to open up intermediate stages of algorithms for inspection and steering. In this paper, we present work on aiding the developer and the user of such algorithms through the application of interactive visualisation techniques. We present a set of tools designed to profile the performance of other visualisation components, and provide further functionality for the exploration of high dimensional data sets. Case studies are provided, illustrating the application of the profiling modules to a number of data sets. Through this work we are exploring ways in which techniques traditionally used to prepare for visualisation runs, and to retrospectively analyse them, can find new uses within the context of a multi-component visualisation system.
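
    The paper's own profiling tools are not described in the abstract, but the underlying idea is generic: a component in a data-flow pipeline can be wrapped so that its cost is recorded as data passes through it. A minimal, hypothetical Haskell sketch of that idea (the Component type and profiled wrapper are assumptions, not the paper's API):

        import Control.Exception (evaluate)
        import Data.Time.Clock (diffUTCTime, getCurrentTime)

        -- A pipeline component, modelled as a function run in IO.
        type Component a b = a -> IO b

        -- Wrap a component so each invocation reports its wall-clock time;
        -- the wrapped component is a drop-in replacement for the original.
        profiled :: String -> Component a b -> Component a b
        profiled name comp input = do
          start  <- getCurrentTime
          output <- comp input
          end    <- getCurrentTime
          putStrLn (name ++ " took " ++ show (diffUTCTime end start))
          return output

        main :: IO ()
        main = do
          -- evaluate forces the result so the timing covers the work done.
          let sumStage = profiled "sum stage" (\xs -> evaluate (sum xs))
          total <- sumStage ([1 .. 1000000] :: [Int])
          print total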

    Regulating algorithmic discrimination through adjudication: the Court of Justice of the European Union on discrimination in algorithmic profiling based on PNR data

    This article considers the Court of Justice of the European Union's assessment and regulation of risks of discrimination in the context of algorithmic profiling based on Passenger Name Record (PNR) data. On June 21, 2022, the Court delivered a landmark judgment in Ligue des Droits Humains pertaining to discrimination and algorithmic profiling in a border security context. The CJEU identifies and seeks to regulate several risks of discrimination in relation to the automated processing of PNR data, the manual review of the results of this processing, and the resulting decisions taken by competent authorities. It examined whether the PNR Directive, which lays down the legal basis for such profiling, was compatible with the fundamental right to privacy, the right to data protection, and the right to non-discrimination. In its judgment, the CJEU seems to assess various risks of discrimination insufficiently. In particular, it overlooks risks relating to data quality and representativeness, automation bias, and practical difficulties in identifying discrimination. The judges also seem to prescribe safeguards against discrimination without guidance as to how to ensure their uniform and effective implementation. Such shortcomings can be observed in relation to ensuring the non-discriminatory nature of law enforcement databases, preventing indirectly discriminatory profiling practices based on collected PNR data, and configuring effective human-in-the-loop and transparency safeguards. This landmark judgment represents an important step in addressing algorithmic discrimination through CJEU adjudication. However, the CJEU's inability to sufficiently address the risks of discrimination in the context of algorithmic profiling based on the PNR Directive raises a broader concern: whether the CJEU is adequately equipped to combat algorithmic discrimination in the broader realm of European border security, where algorithmic profiling is becoming increasingly commonplace.

    Risk scores for long-term unemployment and the assignment to job search counseling

    This paper analyses how risk profiling is used to assign unemployed job seekers to job search counseling in Flanders, Belgium. We compare algorithmic selection to self-selection and selection by job search counselors. We discuss practical challenges for the implementation of risk profiling and highlight avenues for further research. We find that algorithmic assignment is used for only a small fraction of the sample and that job search counselors appear to have valuable private information on job seekers' reemployment prospects beyond what is captured by the algorithmic risk score.

    The Algorithmically Formed "We": Belonging to a Community in the Age of Digital Media

    This essay examines affordances of algorithmic profiling interfaces and their potential for forming collectives. These interfaces form groups through a logic of inclusion and exclusion analogous to linguistic we-discourses, resulting in an “algorithmic we-interpellation.” Aided by algorithmic profiling, social media interfaces are able to interpellate large groups to stand behind a cause and even to mobilize them to topple a government. However, and despite their promise to the contrary, such interfaces struggle to facilitate the kind of “shared foundation” that would be necessary for collectives in which belonging means, not inclusion into a collected set, but being part of a shared process.

    Visualisation techniques for users and designers of layout algorithms

    Visualisation systems consisting of a set of components through which data and interaction commands flow have been explored by a number of researchers. Such hybrid and multistage algorithms can be used to reduce overall computation time, and to provide views of the data that show intermediate results and the outputs of complementary algorithms. In this paper we present work on expanding the range and variety of such components, with two new techniques for analysing and controlling the performance of visualisation processes. While the techniques presented are quite different, they are unified within HIVE: a visualisation system based upon a data-flow model and visual programming. Embodied within this system is a framework for weaving together our visualisation components to better afford insight into data and also deepen understanding of the process of the data's visualisation. We describe the new components and offer short case studies of their application. We demonstrate that both analysts and visualisation designers can benefit from a rich set of components and integrated tools for profiling performance.
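
    As a hypothetical illustration of the weaving idea (the names below are assumptions, not HIVE's API): when components share a common shape, an inspection stage can be spliced between any two stages of a data-flow pipeline without changing the stages themselves.

        import Control.Monad ((>=>))

        -- A pipeline component, modelled as a function run in IO.
        type Component a b = a -> IO b

        -- A pass-through stage: exposes intermediate data for inspection
        -- (printed here, standing in for a coordinated view) and forwards
        -- the data unchanged to the next component.
        inspect :: Show a => String -> Component a a
        inspect label x = do
          putStrLn (label ++ ": " ++ show x)
          return x

        -- Components compose with Kleisli composition (>=>), so an
        -- inspection stage can be woven between any two pipeline stages.
        pipeline :: Component [Int] Int
        pipeline = filterStage >=> inspect "after filter" >=> sumStage
          where
            filterStage = return . filter even
            sumStage    = return . sum

        main :: IO ()
        main = pipeline [1 .. 10] >>= print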

    European Union regulations on algorithmic decision-making and a "right to explanation"

    We summarize the potential impact that the European Union's new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which "significantly affect" users. The law will also effectively create a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation. (Presented at the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY.)

    Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric” explanations (SCEs), focussing on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations), in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (“right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.

    Taste and the algorithm

    Today, a substantial part of our everyday interaction with art and aesthetic artefacts occurs through digital media, and our preferences and choices are systematically tracked and analyzed by algorithms in ways that are far from transparent. Our consumption is constantly documented, and we are then fed tailored information in return. We are therefore witnessing the emergence of a complex interrelation between our aesthetic choices, their digital elaboration, the production of content, and the dynamics of creative processes. All are involved in a process of mutual influence, partially steered by the invisible guiding hand of algorithms. With regard to this topic, this paper introduces some key issues concerning the role of algorithms in aesthetic domains, such as taste detection and formation and cultural consumption and production, and shows how aesthetics can contribute to the ongoing debate about the impact of today’s “algorithmic culture”.