    Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs), which focus on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from the outside rather than taking it apart (pedagogical versus decompositional explanations), which dodge developers’ worries about intellectual property or trade secret disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best a distraction, and at worst may nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (the “right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
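
    To make the distinction concrete: a pedagogical, subject-centric explanation queries an opaque model only from the outside, sampling inputs around one data subject's record and fitting a small, readable surrogate to the black box's answers in that local region. The following Python sketch is purely illustrative and not from the paper; the helper name `subject_centric_explanation`, the stand-in random-forest "black box", and the sampling parameters are all assumptions chosen for the example.

    ```python
    # Illustrative sketch of a pedagogical, subject-centric explanation:
    # the opaque model is never taken apart, only queried from outside.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Stand-in "black box" (any opaque scorer would do here).
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    def subject_centric_explanation(model, subject, n_samples=500, scale=0.5):
        """Fit a shallow tree to the model's behaviour near one query point."""
        rng = np.random.default_rng(0)
        # Perturb the subject's features to probe the local decision surface.
        local_X = subject + rng.normal(0.0, scale, size=(n_samples, subject.shape[0]))
        local_y = model.predict(local_X)
        # The surrogate learns the black box's local logic from its outputs alone.
        return DecisionTreeClassifier(max_depth=3).fit(local_X, local_y)

    subject = X[0]
    surrogate = subject_centric_explanation(black_box, subject)
    # The surrogate's rules are what would be shown to the data subject;
    # the original model's internals stay undisclosed.
    print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
    ```

    Because only input-output behaviour is observed, such a system can offer "meaningful information about the logic of processing" for one decision without exposing the underlying model, which is why the pedagogical approach sidesteps trade-secret objections that decompositional methods invite.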

    Trends and issues in community telecare in the United Kingdom


    Artificial intelligence and UK national security: Policy considerations

    RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security. The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, the use of AI could give rise to additional privacy and human rights considerations, which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.

    Setting Standards for Fair Information Practice in the U.S. Private Sector

    The confluence of plans for an Information Superhighway, actual industry self-regulatory practices, and international pressure dictates renewed consideration of standard setting for fair information practices in the U.S. private sector. The legal rules, industry norms, and business practices that regulate the treatment of personal information in the United States are widely dispersed. This Article analyzes how these standards are established in the U.S. private sector. Part I argues that the U.S. standards derive from the influence of American political philosophy on legal rule making and a preference for dispersed sources of information standards. Part II examines the aggregation of legal rules, industry norms, and business practices from these various decentralized sources. Part III ties the resulting deficiencies back to the underlying U.S. philosophy and argues that the adherence to targeted standards has frustrated the very purposes of the narrow, ad hoc regulatory approach to setting private sector standards. Part IV addresses the irony that European pressure should force the United States to revisit the setting of standards for the private sector.

    Innovation from user experience in Living Labs: revisiting the ‘innovation factory’-concept with a panel-based and user-centered approach

    This paper focuses on the problem of facilitating sustainable innovation practices with a user-centered approach. We do so by revisiting the knowledge-brokering cycle and Hargadon and Sutton’s ideas on building an ‘innovation factory’ in light of current Living Lab practices. Based on theoretical as well as practical evidence from a case study analysis of the LeYLab Living Lab, it is argued that Living Labs with a panel-based approach can act as innovation intermediaries where innovation takes shape through actual user experience in real-life environments, facilitating all four stages within the knowledge-brokering cycle. This finding is also in line with the recently emerging Quadruple Helix model of innovation, which stresses the crucial role of the end-user as a stakeholder throughout the whole innovation process.