A Systematic Review of Ethical Concerns with Voice Assistants
Siri's introduction in 2011 marked the beginning of a wave of domestic voice
assistant releases, and this technology has since become commonplace in
consumer devices such as smartphones and TVs. But as their presence expands, a
range of ethical concerns has been identified around the use of voice
assistants, such as the privacy implications of devices that are always
recording and the ways that these devices are integrated into the
existing social order of the home. This has created a burgeoning area of
research across a range of fields including computer science, social science,
and psychology. This paper takes stock of the foundations and frontiers of this
work through a systematic literature review of 117 papers on ethical concerns
with voice assistants. In addition to analysis of nine specific areas of
concern, the review measures the distribution of methods and participant
demographics across the literature. We show how some concerns, such as privacy,
are operationalized to a much greater extent than others like accessibility,
and how study participants are overwhelmingly drawn from a small handful of
Western nations. In so doing, we hope to provide an outline of the rich
tapestry of work around these concerns and to highlight areas where current
research efforts are lacking.
Machinelike or Humanlike? A Literature Review of Anthropomorphism in AI-Enabled Technology
Due to the recent proliferation of AI-enabled technology (AIET), the concept of anthropomorphism, or human likeness in technology, has increasingly attracted researchers’ attention. Researchers have examined how anthropomorphism influences users’ perception, adoption, and continued use of AIET. However, researchers have yet to agree on how to conceptualize and operationalize anthropomorphism in AIET, which has resulted in inconsistent findings. A comprehensive understanding is thus needed of the current state of research on anthropomorphism in AIET contexts. To conduct an in-depth analysis of the literature on anthropomorphism, we reviewed 35 empirical studies, focusing on the conceptualization and operationalization of AIET anthropomorphism and on its antecedents and consequences. Based on our analysis, we discuss potential research gaps and offer directions for future research.
Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)
With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become a part of our daily lives. Therefore, it becomes increasingly important to understand how these social interactions emerge, and why users appear to be influenced by them. For this reason, I explore the antecedents and consequences of this phenomenon, known as anthropomorphism, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze those empirical studies directly measuring anthropomorphism and those referring to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models for the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions are shown to vary based on individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users’ self-perceptions, perceptions of the technology, how users interact with the technology, and the users’ performance. Examples include changes in a user’s trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users’ perceived agency and in their self- and social identity, similar to interactions between humans. Afterwards, I critically examine current theories on anthropomorphism and present propositions about its nature based on the results of the empirical literature.
Subsequently, I introduce a two-factor model of anthropomorphism that proposes that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational, or bottom-up and automatic), and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared agency effects or changing the user’s social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.
Virtual assistants in customer interface
This thesis covers the use of virtual assistants from a user organization’s perspective, exploring
challenges and opportunities related to introducing virtual assistants to an organization’s
customer interface. Research related to virtual assistants is spread over many distinct fields of
research spanning several decades. However, widespread use of virtual assistants in
organizations’ customer interfaces is a relatively new and constantly evolving phenomenon.
Scientific research is lacking when it comes to the current use of virtual assistants and user
organizations’ considerations related to it.
A qualitative, semi-systematic literature review method is used to analyse the progression of
research related to virtual assistants, aiming to identify major trends. Several fields of research
that cover virtual assistants from different perspectives are explored, focusing primarily on
Human-Computer Interaction and Natural Language Processing. Additionally, a case study of a
Finnish insurance company’s use of virtual assistants supports the literature review and helps
understand the user organization’s perspective. This thesis describes how key technologies have
progressed, gives insight into current issues that affect organizations, and points out
opportunities related to virtual assistants in the future. Interviews related to the case study
give a limited understanding of the challenges currently at the forefront of using this new
technology in the insurance industry.
The case study and literature review clearly show that the use of virtual assistants is hindered
by various practical challenges. Some practical challenges related to making a virtual assistant
useful for an organization seem to be industry-specific, for example issues related to giving
advice about insurance products. Other challenges are more general, for example unreliability of
customer feedback. Different customer segments hold attitudes towards interacting with virtual
assistants that range from positive to negative, making the technology a clearly polarizing
issue. However, customers in general seem to be becoming more accepting of the
technology in the long term. More research is needed to understand the future potential of virtual
assistants in customer interactions and customer relationship management.
Tell me, what are you most afraid of? Exploring the Effects of Agent Representation on Information Disclosure in Human-Chatbot Interaction
Self-disclosure is a key factor in successful health treatment, particularly
when it comes to building a functioning patient-therapist connection. To this
end, chatbots may be considered a promising means of fostering such information
provision. Several studies have shown that people disclose more information
when interacting with a chatbot than when interacting with another human being.
If and how the chatbot is embodied, however, seems to play an important role in
the extent to which information is disclosed. Here, research shows that people
disclose less to a chatbot embodied with a human avatar than to a chatbot
without embodiment. Still, little information is available as to whether it is
the embodiment with a human face that inhibits disclosure, or whether any type
of face will reduce the amount of shared information. The study presented in this paper thus aims
to investigate how the type of chatbot embodiment influences self-disclosure in
human-chatbot interaction. We conducted a quasi-experimental study in which
participants were asked to interact with one of three settings of a
chatbot app. In each setting, the humanness of the chatbot embodiment was
different (i.e., human vs. robot vs. disembodied). A subsequent discourse
analysis explored differences in the breadth and depth of self-disclosure.
Results show that non-human embodiment seems to have little effect on
self-disclosure. Yet, our data also show that, in contradiction to previous
work, human embodiment may have a positive effect on the breadth and depth of
self-disclosure.