What Europe Knows and Thinks About Algorithms: Results of a Representative Survey. Bertelsmann Stiftung / eupinions, February 2019
We live in an algorithmic world. Day by day, each of us is affected by decisions that algorithms make for and about
us – generally without us being aware of or consciously perceiving this. Personalized advertisements in social
media, the invitation to a job interview, the assessment of our creditworthiness – in all these cases, algorithms
already play a significant role – and their importance is growing, day by day.
The algorithmic revolution in our daily lives undoubtedly brings with it great opportunities. Algorithms are masters
at handling complexity. They can manage huge amounts of data quickly and efficiently, processing it consistently
every time. Where humans reach their cognitive limits, find themselves making decisions influenced by the day’s
events or feelings, or let themselves be influenced by existing prejudices, algorithmic systems can be used to
benefit society. For example, according to a study by the Expert Council of German Foundations on Integration and
Migration, automotive mechatronics technicians with Turkish names must submit about 50 percent more applications
than candidates with German names before being invited to an in-person job interview (Schneider, Yemane and
Weinmann 2014). If an algorithm were to make this decision, such discrimination could be prevented. However,
automated decisions also carry significant risks: Algorithms can reproduce existing societal discrimination and
reinforce social inequality, for example, if computers, using historical data as a basis, identify the male gender as
a labor-market success factor, and thus systematically discard job applications from women, as recently took place
at Amazon (Nickel 2018).
Uber Effort: The Production of Worker Consent in Online Ride Sharing Platforms
The rise of the online gig economy is altering ways of working. Mediated by algorithmically programmed mobile apps, platforms such as Uber and Lyft allow workers to earn by completing rides at any time and in any place the drivers choose. This hybrid form of labor in the online gig economy, which combines independent contract work with computer-mediated work, differs from traditional manufacturing jobs in both its production activity and its production relations. Through nine interviews with Lyft/Uber drivers, I found that workers’ consent, first articulated by Michael Burawoy in the context of the manufacturing economy, is still present in the work of the online gig economy in post-industrial capitalism. Workers willingly engage in on-demand work not only to earn money but also to play a learning game motivated by the ambiguity of the management system, a process through which they gain a sense of self-satisfaction and an illusion of autonomous control. This research points to the important role of technology in shaping contemporary labor processes and suggests a potential mechanism that produces workers’ consent in technology-driven workplaces.
Monetizing Explainable AI: A Double-edged Sword
Algorithms used by organizations increasingly wield power in society as they
decide the allocation of key resources and basic goods. In order to promote
fairer, more just, and more transparent uses of such decision-making power,
explainable artificial intelligence (XAI) aims to provide insights into the
logic of algorithmic decision-making. Despite much research on the topic,
consumer-facing applications of XAI remain rare. A central reason may be that a
viable platform-based monetization strategy for this new technology has yet to
be found. We introduce and describe a novel monetization strategy for fusing
algorithmic explanations with programmatic advertising via an explanation
platform. We claim the explanation platform represents a new,
socially-impactful, and profitable form of human-algorithm interaction and
estimate its potential for revenue generation in the high-risk domains of
finance, hiring, and education. We then consider possible undesirable and
unintended effects of monetizing XAI and simulate these scenarios using
real-world credit lending data. Ultimately, we argue that monetizing XAI may be
a double-edged sword: while monetization may incentivize industry adoption of
XAI in a variety of consumer applications, it may also conflict with the
original legal and ethical justifications for developing XAI. We conclude by
discussing whether there may be ways to responsibly and democratically harness
the potential of monetized XAI to provide greater consumer access to
algorithmic explanations.
No Consumer Is an Island—Relational Disclosure as a Regulatory Strategy to Advance Consumer Protection Against Microtargeting
Presently, most business-to-consumer interaction uses consumer profiling to develop and deliver personalized products and services. It has been observed that these practices can be welfare-enhancing if properly regulated. At the same time, the risks related to their abuse are significant, and it is no surprise that in recent times, personalization has found itself at the centre of the scholarly and regulatory debate. Within currently existing and forthcoming regulations, a common perspective can be found: given the capacity of microtargeting to potentially undermine consumers’ autonomy, the success of the regulatory intervention depends primarily on people being aware of the personality dimension being targeted. Yet existing disclosures are based on an individualized format, focusing solely on the relationship between the professional operator and its counterparty; this approach stands in contrast to sociological studies that consider interaction with and observation of peers to be essential components of decision making. A consideration of this “relational dimension” of decision making is missing both in consumer protection and in the debate on personalization. This article argues that consumers’ awareness and understanding of personalization and its consequences could be improved significantly if information were offered in a relational format; accordingly, it reports the results of a study conducted in the streaming-service market, showing that when information is presented in a relational format, people’s knowledge and awareness of profiling and microtargeting increase significantly. The article further argues for the potential of relational disclosure as a general paradigm for advancing consumer protection.
Ideating XAI: An Exploration of User’s Mental Models of an AI-Driven Recruitment System Using a Design Thinking Approach
Artificial Intelligence (AI) is playing an important role in society, including in how vital, often life-changing decisions are made. For this reason, interest in Explainable Artificial Intelligence (XAI) has grown in recent years as a means of revealing the processes and operations contained within what is often described as a black box: an often opaque system whose decisions are difficult for the end user to understand. This paper presents the results of a design thinking workshop with 20 participants (computer science and graphic design students) in which we sought to investigate users’ mental models when interacting with AI systems. Using two personas, participants were asked to empathise with two end users of an AI-driven recruitment system, identify pain points in a user’s experience, and ideate on possible solutions to these pain points. These tasks were used to explore users’ understanding of AI systems, the intelligibility of AI systems, and how the inner workings of these systems might be explained to end users. We discovered that visual feedback, analytics and comparisons, and feature highlighting, in conjunction with factual, counterfactual, and principal reasoning explanations, could be used to improve users’ mental models of AI systems.
Improving fairness in machine learning systems: What do industry practitioners need?
The potential for machine learning (ML) systems to amplify social inequities
and unfairness is receiving increasing popular and academic attention. A surge
of recent work has focused on the development of algorithmic tools to assess
and mitigate such unfairness. If these tools are to have a positive impact on
industry practice, however, it is crucial that their design be informed by an
understanding of real-world needs. Through 35 semi-structured interviews and an
anonymous survey of 267 ML practitioners, we conduct the first systematic
investigation of commercial product teams' challenges and needs for support in
developing fairer ML systems. We identify areas of alignment and disconnect
between the challenges faced by industry practitioners and solutions proposed
in the fair ML research literature. Based on these findings, we highlight
directions for future ML and HCI research that will better address industry
practitioners' needs. (To appear in the 2019 ACM CHI Conference on Human Factors in
Computing Systems, CHI 2019.)
Stakeholder Perspectives on the Ethics of AI in Distance-Based Higher Education
Increasingly, Artificial Intelligence (AI) is having an impact on distance-based higher education, where it is revealing multiple ethical issues. However, to date, there has been limited research addressing the perspectives of key stakeholders on these developments. The study presented in this paper sought to address this gap by investigating the perspectives of three key groups of stakeholders in distance-based higher education: students, teachers, and institutions. Empirical data collected in two workshops and a survey helped identify the concerns these stakeholders had about the ethics of AI in distance-based higher education. A theoretical framework for the ethics of AI in education was used to analyse that data and helped identify what was missing. In this exploratory study, there was no attempt to prioritise issues as more or less important. Instead, the value of the study reported in this paper derives from (a) the breadth and detail of the issues that have been identified, and (b) their categorisation in a unifying framework. Together these provide a foundation for future research and may also usefully inform future institutional implementation and practice.