Introduction to Data Ethics
An introduction to data ethics, focusing on questions of privacy and personal identity in the economic world as it is defined by big data technologies, artificial intelligence, and algorithmic capitalism.
Originally published in The Business Ethics Workshop, 3rd Edition, by Boston Academic Publishing / FlatWorld Knowledge.
Society-in-the-Loop: Programming the Algorithmic Social Contract
Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and monitoring compliance with the agreement. In short, "SITL = HITL + Social Contract."
Comment: (in press), Ethics of Information Technology, 201
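The slogan "SITL = HITL + Social Contract" can be read as an ordinary human-in-the-loop control loop whose oversight step is driven by negotiated, multi-stakeholder values rather than by a single operator. The Python sketch below is only one illustration of that reading; the names used here (Stakeholder, negotiate_values, society_in_the_loop, audit) are hypothetical and do not come from the paper.

# Illustrative sketch of "SITL = HITL + Social Contract" (hypothetical names,
# not from the paper): a control loop whose oversight step uses values
# negotiated across stakeholders and monitors compliance with that agreement.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Stakeholder:
    name: str
    value_weights: Dict[str, float]  # e.g. {"accuracy": 0.7, "equity": 0.3}


def negotiate_values(stakeholders: List[Stakeholder]) -> Dict[str, float]:
    """Toy 'social contract': average the stakeholders' value weights."""
    keys = {k for s in stakeholders for k in s.value_weights}
    return {k: sum(s.value_weights.get(k, 0.0) for s in stakeholders) / len(stakeholders)
            for k in keys}


def society_in_the_loop(decide: Callable[[dict, Dict[str, float]], str],
                        cases: List[dict],
                        stakeholders: List[Stakeholder],
                        audit: Callable[[str, dict, Dict[str, float]], bool]) -> None:
    """HITL-style loop in which oversight reflects negotiated societal values."""
    contract = negotiate_values(stakeholders)          # agree on values up front
    for case in cases:
        decision = decide(case, contract)              # machine decision under the contract
        if not audit(decision, case, contract):        # monitor compliance with the agreement
            contract = negotiate_values(stakeholders)  # renegotiate when violations are found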
Autonomous Systems as Legal Agents: Directly by the Recognition of Personhood or Indirectly by the Alchemy of Algorithmic Entities
Recommender systems and their ethical challenges
This article presents the first, systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: currently user-centred approaches do not consider the interests of a variety of other stakeholders—as opposed to just the receivers of a recommendation—in assessing the ethical impacts of a recommender system.
What Europe Knows and Thinks About Algorithms: Results of a Representative Survey. Bertelsmann Stiftung eupinions, February 2019
We live in an algorithmic world. Day by day, each of us is affected by decisions that algorithms make for and about us – generally without us being aware of or consciously perceiving this. Personalized advertisements in social media, the invitation to a job interview, the assessment of our creditworthiness – in all these cases, algorithms already play a significant role – and their importance is growing, day by day.

The algorithmic revolution in our daily lives undoubtedly brings with it great opportunities. Algorithms are masters at handling complexity. They can manage huge amounts of data quickly and efficiently, processing it consistently every time. Where humans reach their cognitive limits, find themselves making decisions influenced by the day's events or feelings, or let themselves be influenced by existing prejudices, algorithmic systems can be used to benefit society. For example, according to a study by the Expert Council of German Foundations on Integration and Migration, automotive mechatronic engineers with Turkish names must submit about 50 percent more applications than candidates with German names before being invited to an in-person job interview (Schneider, Yemane and Weinmann 2014). If an algorithm were to make this decision, such discrimination could be prevented. However, automated decisions also carry significant risks: algorithms can reproduce existing societal discrimination and reinforce social inequality, for example, if computers, using historical data as a basis, identify the male gender as a labor-market success factor and thus systematically discard job applications from women, as recently took place at Amazon (Nickel 2018).
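The Amazon example above describes a general failure mode: a model fitted to historically biased decisions learns to treat the protected attribute as a predictor of success. A minimal sketch of that mechanism follows; the data, feature names, and numbers are synthetic and purely illustrative, not drawn from the report.

# Minimal sketch of the failure mode described above: a model fitted to
# historically biased hiring decisions learns gender as a "success factor".
# All data here are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # genuinely job-relevant feature
is_male = rng.integers(0, 2, size=n)  # protected attribute (1 = male)

# Historical decisions: driven by skill, but with an extra bonus for male applicants.
hired = (skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)
print(dict(zip(["skill", "is_male"], model.coef_[0])))
# The learned weight on is_male is large and positive: the model reproduces
# the historical discrimination instead of correcting it.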
When users control the algorithms: Values expressed in practices on the Twitter platform
Recent interest in ethical AI has brought a slew of values, including fairness, into conversations about technology design. Research in the area of algorithmic fairness tends to be rooted in questions of distribution that can be subject to precise formalism and technical implementation. We seek to expand this conversation to include the experiences of people subject to algorithmic classification and decision-making. By examining tweets about the “Twitter algorithm” we consider the wide range of concerns and desires Twitter users express. We find a concern with fairness (narrowly construed) is present, particularly in the ways users complain that the platform enacts a political bias against conservatives. However, we find another important category of concern, evident in attempts to exert control over the algorithm. Twitter users who seek control do so for a variety of reasons, many well justified. We argue for the need for better and clearer definitions of what constitutes legitimate and illegitimate control over algorithmic processes, and to consider support for users who wish to enact their own collective choices.
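The abstract contrasts fairness notions that admit “precise formalism” with users' broader desire for control. One common example of such a formalism, chosen here purely for illustration and not taken from the paper, is demographic parity: compare the rate of favorable outcomes across groups and report the gap.

# Illustrative only: demographic parity as a simple distributional fairness check.
# The data below are hypothetical per-group outcomes, not real Twitter measurements.
from collections import defaultdict
from typing import Dict, List, Tuple


def selection_rates(outcomes: List[Tuple[str, bool]]) -> Dict[str, float]:
    """outcomes: (group label, whether the item was surfaced by the ranking system)."""
    counts, selected = defaultdict(int), defaultdict(int)
    for group, chosen in outcomes:
        counts[group] += 1
        selected[group] += chosen
    return {g: selected[g] / counts[g] for g in counts}


def demographic_parity_gap(outcomes: List[Tuple[str, bool]]) -> float:
    """Difference between the highest and lowest per-group selection rates."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())


sample = ([("group_a", True)] * 60 + [("group_a", False)] * 40
          + [("group_b", True)] * 45 + [("group_b", False)] * 55)
print(demographic_parity_gap(sample))  # ~0.15 (selection rates 0.60 vs 0.45)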
Curriculum Guidelines for Undergraduate Programs in Data Science
The Park City Math Institute (PCMI) 2016 Summer Undergraduate Faculty Program met for the purpose of composing guidelines for undergraduate programs in Data Science. The group consisted of 25 undergraduate faculty from a variety of institutions in the U.S., primarily from the disciplines of mathematics, statistics and computer science. These guidelines are meant to provide some structure for institutions planning for or revising a major in Data Science
- …
