Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For
Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy, since it intuitively promises to open the algorithmic “black box” to promote challenge, redress and, hopefully, heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be satisfied by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs), which focus on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from the outside rather than taking it apart (pedagogical versus decompositional explanations), which dodge developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost.
We argue that other parts of the GDPR, related (i) to the right to erasure (the “right to be forgotten”) and the right to data portability, and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may hold the seeds we can use to make algorithms more responsible, explicable, and human-centered.
The Unfulfilled Potential of Data-Driven Decision Making in Agile Software Development
With the general trend towards data-driven decision making (DDDM), organizations are looking for ways to use DDDM to improve their decisions. However, few studies have looked into the practitioners' view of DDDM, in particular for agile organizations. In this paper we investigated the experiences of using DDDM, and how data can improve decision making. A questionnaire was emailed to 124 industry practitioners in agile software development companies, of which 84 answered. The results show that few practitioners indicated a widespread use of DDDM in their current decision making practices. The practitioners were more positive about its future use for higher-level and more general decision making, fairly positive about its use for requirements elicitation and prioritization decisions, and less positive about its future use at the team level. The practitioners do see a lot of potential for DDDM in an agile context; however, that potential is currently unfulfilled.
An evaluation of common explanations for the impact of income inequality on life satisfaction
This study examines how income inequality affects life satisfaction in Europe. Although research about the impact of income inequality on life satisfaction is inconclusive, authors suggest several reasons for its potential impact. In the literature section we discuss three types of explanations for the impact of inequality: pure aversion to inequality, aversion to inequality motivated by how an individual is personally affected by it, and preferences for equality of opportunities. To test these explanations, we examine how three corresponding variables (respectively, attitude towards redistribution, income, and perceived mobility) interact with both actual and perceived income inequality in multilevel analyses using data from the European Values Survey. Our results reveal significant differences between how people are affected by actual income inequality and how they are affected by perceived income inequality. The impact of perceived income inequality on life satisfaction depends on perceived mobility in society and on income, while the impact of actual income inequality depends solely on perceived mobility. We conclude that traditional explanations often erroneously assume that people correctly assess income inequality. Moreover, these explanations are better able to clarify the effect of perceived income inequality on life satisfaction than that of actual inequality.
Mapping domain characteristics influencing Analytics initiatives: The example of Supply Chain Analytics
Purpose: Analytics research is increasingly divided by the domains to which Analytics is applied. The literature offers little understanding of whether aspects such as success factors, barriers and management of Analytics must be investigated domain-specifically, while the execution of Analytics initiatives is similar across domains and similar issues occur. This article investigates characteristics of the execution of Analytics initiatives that are distinct across domains and can guide future research collaboration and focus. The research was conducted using the example of Logistics and Supply Chain Management and the respective domain-specific Analytics subfield of Supply Chain Analytics. The field of Logistics and Supply Chain Management was recognized as an early adopter of Analytics but has since retreated to a midfield position compared with other domains.
Design/methodology/approach: This research uses Grounded Theory based on 12 semi-structured interviews, creating a map of domain characteristics based on the paradigm scheme of Strauss and Corbin.
Findings: A total of 34 characteristics of Analytics initiatives that distinguish domains in the execution of initiatives were identified; these are mapped and explained. As a blueprint for further research, the domain specifics of Logistics and Supply Chain Management are presented and discussed.
Originality/value: The results of this research stimulate cross-domain research on Analytics issues and prompt research on the identified characteristics with a broader understanding of their impact on Analytics initiatives. They also describe the status quo of Analytics. Further, the results help managers control the environment of initiatives and design more successful initiatives.
DFG, 414044773, Open Access Publizieren 2019 - 2020 / Technische Universität Berlin
CEPS Task Force on Artificial Intelligence and Cybersecurity Technology, Governance and Policy Challenges Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report 22 January 2020
The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing both on AI for cybersecurity and on cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society.
The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and of cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe.
As part of such activities, this report aims to assess the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group on AI (HLEG), presented on April 8, 2019. In particular, this report analyses and makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed at helping the public and the private sector operationalise Trustworthy AI. The list is composed of 131 items that are intended to guide AI designers and developers throughout the process of design, development, and deployment of AI, although it is not intended as guidance for ensuring compliance with the applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020.
This report aims to contribute to that revision by addressing in particular the interplay between AI and cybersecurity. The evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear and easy measurement and of implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry.
The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).