Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For
Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric” explanations (SCEs) focussing on particular regions of a model around a query show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost.
We argue that other parts of the GDPR related (i) to the right to erasure (“right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
A proposed nutrient density score that includes food groups and nutrients to better align with dietary guidance.
Current research on diets and health focuses on composite food patterns and their likely impact on health outcomes. The Dietary Guidelines for Americans (DGA) have likewise adopted a more food group-based approach. By contrast, most nutrient profiling (NP) models continue to assess nutrient density of individual foods, based on a small number of individual nutrients. Nutrients to encourage have included protein, fiber, and a wide range of vitamins and minerals. Nutrients to limit are typically saturated fats, total or added sugars, and sodium. Because current NP models may not fully capture the healthfulness of foods, there is a case for advancing a hybrid NP approach that takes both nutrients and desirable food groups and food ingredients into account. Creating a nutrient- and food-based NP model may provide a more integrated way of assessing a food’s nutrient density. Hybrid nutrient density scores will provide for a better alignment between NP models and the DGA, a chief instrument of food and nutrition policy in the United States. Such synergy may lead ultimately to improved dietary guidance, sound nutrition policy, and better public health.
Artificial intelligence and UK national security: Policy considerations
RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security.
The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.
Data standardization
With data rapidly becoming the lifeblood of the global economy, the ability to improve its use significantly affects both social and private welfare. Data standardization is key to facilitating and improving the use of data when data portability and interoperability are needed. Absent data standardization, a “Tower of Babel” of different databases may be created, limiting synergetic knowledge production. Based on interviews with data scientists, this Article identifies three main technological obstacles to data portability and interoperability: metadata uncertainties, data transfer obstacles, and missing data. It then explains how data standardization can remove at least some of these obstacles and lead to smoother data flows and better machine learning. The Article then identifies and analyzes additional effects of data standardization. As shown, data standardization has the potential to support a competitive and distributed data collection ecosystem and lead to easier policing in cases where rights are infringed or unjustified harms are created by data-fed algorithms. At the same time, increasing the scale and scope of data analysis can create negative externalities in the form of better profiling, increased harms to privacy, and cybersecurity harms. Standardization also has implications for investment and innovation, especially if lock-in to an inefficient standard occurs. The Article then explores whether market-led standardization initiatives can be relied upon to increase welfare, and the role governmental-facilitated data standardization should play, if at all.
Setting Standards for Fair Information Practice in the U.S. Private Sector
The confluence of plans for an Information Superhighway, actual industry self-regulatory practices, and international pressure dictate renewed consideration of standard setting for fair information practices in the U.S. private sector. The legal rules, industry norms, and business practices that regulate the treatment of personal information in the United States are organized in a wide and dispersed manner. This Article analyzes how these standards are established in the U.S. private sector. Part I argues that the U.S. standards derive from the influence of American political philosophy on legal rule making and a preference for dispersed sources of information standards. Part II examines the aggregation of legal rules, industry norms, and business practice from these various decentralized sources. Part III ties the deficiencies back to the underlying U.S. philosophy and argues that the adherence to targeted standards has frustrated the very purposes of the narrow, ad hoc regulatory approach to setting private sector standards. Part IV addresses the irony that European pressure should force the United States to revisit the setting of standards for the private sector.
Innovation from user experience in Living Labs: revisiting the “innovation factory” concept with a panel-based and user-centered approach
This paper focuses on the problem of facilitating sustainable innovation practices with a user-centered approach. We do so by revisiting the knowledge-brokering cycle and Hargadon and Sutton’s ideas on building an “innovation factory” in the light of current Living Lab practices. Based on theoretical as well as practical evidence from a case study analysis of the LeYLab Living Lab, it is argued that Living Labs with a panel-based approach can act as innovation intermediaries where innovation takes shape through actual user experience in real-life environments, facilitating all four stages within the knowledge-brokering cycle. This finding is also in line with the recently emerging Quadruple Helix model for innovation, stressing the crucial role of the end-user as a stakeholder throughout the whole innovation process.