Improving fairness in machine learning systems: What do industry practitioners need?
The potential for machine learning (ML) systems to amplify social inequities
and unfairness is receiving increasing popular and academic attention. A surge
of recent work has focused on the development of algorithmic tools to assess
and mitigate such unfairness. If these tools are to have a positive impact on
industry practice, however, it is crucial that their design be informed by an
understanding of real-world needs. Through 35 semi-structured interviews and an
anonymous survey of 267 ML practitioners, we conduct the first systematic
investigation of commercial product teams' challenges and needs for support in
developing fairer ML systems. We identify areas of alignment and disconnect
between the challenges faced by industry practitioners and solutions proposed
in the fair ML research literature. Based on these findings, we highlight
directions for future ML and HCI research that will better address industry
practitioners' needs.
Comment: To appear in the 2019 ACM CHI Conference on Human Factors in Computing Systems (CHI 2019).
Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For
Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs), focussing on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (“right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
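The "pedagogical" route the abstract favours can be made concrete with a small sketch: rather than decomposing the model, one learns a simple surrogate from the black box's input/output behaviour around a single query, in the spirit of LIME-style local surrogates. Nothing below is the authors' method; the data, the stand-in black box, and all function names are illustrative assumptions.

```python
# A minimal sketch of a "pedagogical", subject-centric explanation:
# learn a simple surrogate from the black box's behaviour around one
# query point, instead of taking the model apart. Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in black box: in practice, any opaque scorer queried from outside.
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(model, query, n_samples=1000, radius=0.5):
    """Fit a linear surrogate to the model's behaviour near `query`."""
    # Sample perturbations in a small region around the query point.
    perturbed = query + rng.normal(scale=radius, size=(n_samples, query.size))
    scores = model.predict_proba(perturbed)[:, 1]
    # Weight samples by proximity so the surrogate stays subject-centric.
    weights = np.exp(-np.linalg.norm(perturbed - query, axis=1) ** 2)
    surrogate = Ridge().fit(perturbed, scores, sample_weight=weights)
    return surrogate.coef_  # per-feature local influence at the query

print(explain_locally(black_box, X[0]))
```

Because such a surrogate only queries the model from outside, nothing about its internals needs to be disclosed, which is the sense in which pedagogical explanations may dodge the intellectual-property worries the abstract mentions.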
Perspectives on credit scoring and fair mortgage lending: article two
Credit scoring systems; Mortgages
The Intuitive Appeal of Explainable Machines
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.
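The inscrutability/nonintuitiveness distinction can be demonstrated in miniature. In the sketch below (all features and data are invented for illustration), the learned rule is trivially scrutable, a single threshold on a single feature, yet nonintuitive, because the learner prefers whichever feature carries the most signal regardless of whether it makes normative sense.

```python
# Toy illustration, with invented data: the learned rule is fully
# scrutable (one threshold on one feature) yet nonintuitive, because
# the learner favours the feature with the most signal, odd or not.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000

latent = rng.normal(size=n)                            # unobserved creditworthiness
repaid = (latent > 0).astype(int)
income = latent + rng.normal(0, 1.0, size=n)           # intuitive but noisy proxy
browser_updates = latent + rng.normal(0, 0.2, size=n)  # odd but cleaner proxy

X = np.column_stack([income, browser_updates])
tree = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X, repaid)

# The rule's description is trivial to produce and to read...
print(export_text(tree, feature_names=["income", "browser_updates"]))
# ...but the tree almost certainly splits on `browser_updates`, and no
# description of that rule explains *why* it should decide anything.
```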
FAIR EVA: Bringing institutional multidisciplinary repositories into the FAIR picture
The FAIR Principles are a set of good practices to improve the
reproducibility and quality of data in an Open Science context. Different sets
of indicators have been proposed to evaluate the FAIRness of digital objects,
including datasets that are usually stored in repositories or data portals.
However, indicators like those proposed by the Research Data Alliance are
defined from a high-level perspective, leave room for interpretation, and are
not always straightforward to apply in particular environments such as
multidisciplinary repositories. This paper describes FAIR EVA, a new tool
developed within the European Open Science Cloud context that targets
particular data management systems such as open repositories and can be
customized to a specific case in a scalable and automated way. It aims to be adaptable
enough to work for different environments, repository software and disciplines,
taking into account the flexibility of the FAIR Principles. As an example, we
present the DIGITAL.CSIC repository as the first target of the tool, capturing
the particular needs of a multidisciplinary institution and of its
institutional repository.
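FAIR EVA's own interfaces are not reproduced here, but the kind of indicator test such a tool automates can be sketched: given a harvested record, check findability indicators such as a resolvable persistent identifier and a core metadata set. The record structure, field names, and DOI below are assumptions for illustration only.

```python
# Minimal sketch of the kind of automated FAIR indicator checks a tool
# like FAIR EVA runs against a repository record. This is NOT FAIR
# EVA's actual API; the record fields and DOI are illustrative.
import requests

def check_findable(record: dict) -> dict:
    """F-type indicators: persistent identifier and core metadata."""
    results = {}
    doi = record.get("identifier", "")
    results["has_persistent_id"] = doi.startswith("10.")
    if results["has_persistent_id"]:
        # A persistent identifier should resolve (live lookup via doi.org).
        resp = requests.head(f"https://doi.org/{doi}",
                             allow_redirects=True, timeout=10)
        results["id_resolves"] = resp.status_code == 200
    required = {"title", "creator", "date", "license"}
    results["core_metadata_present"] = required.issubset(record)
    return results

# Hypothetical record as it might be harvested from an institutional
# repository (e.g., over OAI-PMH).
record = {
    "identifier": "10.1000/example-doi",
    "title": "Example dataset",
    "creator": "Example Author",
    "date": "2023",
    "license": "CC-BY-4.0",
}
print(check_findable(record))
```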
Regulatory Q & A
Regulatory Q&A discusses a variety of topics including payday lending, notifications regarding private mortgage insurance (PMI), individual development accounts (IDAs), and other issues.
Regulation Z: Truth in Lending; Regulation C: Home Mortgage Disclosure; Regulation H: Membership of State Banking Institutions in the Federal Reserve System; Regulation BB: Community Reinvestment; Regulation E: Electronic Fund Transfers; Regulation D: Reserve Requirements of Depository Institutions
The Technological Emergence of AutoML: A Survey of Performant Software and Applications in the Context of Industry
In most technical fields, there exists a delay between fundamental academic
research and practical industrial uptake. Whilst some sciences have robust and
well-established processes for commercialisation, such as the pharmaceutical
practice of regimented drug trials, other fields face transitory periods in
which fundamental academic advancements diffuse gradually into the space of
commerce and industry. For the still relatively young field of
Automated/Autonomous Machine Learning (AutoML/AutonoML), that transitory period
is under way, spurred on by a burgeoning interest from broader society. Yet, to
date, little research has been undertaken to assess the current state of this
dissemination and its uptake. Thus, this review makes two primary contributions
to knowledge around this topic. Firstly, it provides the most up-to-date and
comprehensive survey of existing AutoML tools, both open-source and commercial.
Secondly, it motivates and outlines a framework for assessing whether an AutoML
solution designed for real-world application is 'performant'; this framework
extends beyond the limitations of typical academic criteria, considering a
variety of stakeholder needs and the human-computer interactions required to
service them. Accordingly, further supported by an extensive assessment and
comparison of academic and commercial case studies, this review evaluates
mainstream engagement with AutoML in the early 2020s, identifying obstacles and
opportunities for accelerating future uptake.
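For readers unfamiliar with what these tools automate, a minimal stand-in helps: at their core, AutoML systems search a space of candidate models and hyperparameters against a validation metric under a budget. The scikit-learn sketch below is only that core loop; it is not any surveyed tool, and real systems add pipeline construction, meta-learning, and ensembling.

```python
# Minimal stand-in for the core loop that AutoML tools automate:
# a budgeted search over candidate hyperparameters against a
# validation metric. Illustrative only; not any surveyed system.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search_space = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10, 20],
    "min_samples_leaf": [1, 2, 5],
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    search_space,
    n_iter=10,  # a stand-in for an AutoML time/compute budget
    cv=3,
    random_state=0,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```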
Society-in-the-Loop: Programming the Algorithmic Social Contract
Recent rapid advances in Artificial Intelligence (AI) and Machine Learning
have raised many questions about the regulatory and governance mechanisms for
autonomous machines. Many commentators, scholars, and policy-makers now call
for ensuring that algorithms governing our lives are transparent, fair, and
accountable. Here, I propose a conceptual framework for the regulation of AI
and algorithmic systems. I argue that we need tools to program, debug and
maintain an algorithmic social contract, a pact between various human
stakeholders, mediated by machines. To achieve this, we can adapt the concept
of human-in-the-loop (HITL) from the fields of modeling and simulation, and
interactive machine learning. In particular, I propose an agenda I call
society-in-the-loop (SITL), which combines the HITL control paradigm with
mechanisms for negotiating the values of various stakeholders affected by AI
systems, and monitoring compliance with the agreement. In short, `SITL = HITL +
Social Contract.'
Comment: (in press), Ethics and Information Technology, 2017.
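SITL is proposed as a conceptual framework rather than an algorithm, but one narrow slice of it can be sketched: stakeholders negotiate a tolerated outcome gap (the "contract"), and an automated monitor audits a deployed system's decisions for compliance, feeding violations back into the loop. All names, thresholds, and data below are invented for illustration.

```python
# A sketch of one narrow slice of society-in-the-loop: monitoring a
# deployed system's decisions against a stakeholder-negotiated
# fairness threshold. Names, thresholds, and data are invented.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical contract negotiated among stakeholders: the positive-
# decision rates of two groups may differ by at most this much.
AGREED_MAX_GAP = 0.10

def selection_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-decision rates between two groups."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

def monitor_compliance(decisions, group):
    """The 'loop': audit outcomes against the contract and report
    violations back to stakeholders."""
    gap = selection_gap(decisions, group)
    return {"gap": gap, "compliant": gap <= AGREED_MAX_GAP}

# Simulated decisions from some deployed algorithmic system.
group = rng.integers(0, 2, size=1000)
decisions = (rng.random(1000) < np.where(group == 0, 0.55, 0.48)).astype(int)
print(monitor_compliance(decisions, group))
```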