3,890 research outputs found

    Understanding Collaboration with Virtual Assistants—The Role of Social Identity and the Extended Self

    Get PDF
    Organizations introduce virtual assistants (VAs) to support employees with work-related tasks. VAs can increase the success of teamwork and thus become an integral part of daily work life. However, the effect of VAs on virtual teams remains unclear. While social identity theory describes the identification of employees with team members and the continued existence of a group identity, the concept of the extended self refers to the incorporation of possessions into one's sense of self. This raises the question of which approach applies to VAs as teammates. The article extends the IS literature by examining the impact of VAs on individuals and teams and updates the knowledge on social identity and the extended self by deploying VAs in a collaborative setting. Using a laboratory experiment with N = 50, two groups were compared in solving a task, where one group was assisted by a VA, while the other was supported by a person. Results highlight that employees who identify VAs as part of their extended self are more likely to identify with team members and vice versa. The two aspects are thus combined into the proposed construct of virtually extended identification, explaining the relationships of collaboration with VAs. This study contributes to the understanding of the influence of the extended self and social identity on collaboration with VAs. Practitioners are able to assess how VAs improve collaboration and teamwork in mixed teams in organizations.

    Designing Conversational Agents to Support User Narratives and Self-Reflection in Mental Health

    Get PDF
    Ph.D. dissertation, Seoul National University, Graduate School of Convergence Science and Technology, Digital Information Convergence Major, August 2020. Advisor: Bongwon Suh. With the advent of artificial intelligence (AI), we are surrounded by technological gadgets, devices, and intelligent personal assistants (IPAs) that voluntarily take care of our home, work, and social networks. They help us manage our lives for the better, or at least that is what they are designed for. Few, however, are designed to help us grapple with the thoughts and feelings that construct so much of our living. In other words, technologies hardly help us think. How can they be designed to help us reflect on ourselves for the better? In the simplest terms, self-reflection refers to thinking deeply about oneself. When we think deeply about ourselves, there can be both positive and negative consequences. On the one hand, reflecting on ourselves can lead to better self-understanding, helping us achieve life goals. On the other hand, we may fall into brooding and depression. The sad news is that the two are usually intertwined. The problem, then, is the irony that reflecting on oneself by oneself is not easy. To tackle this problem, this work aims to design technology in the form of a conversational agent, or chatbot, that encourages positive self-reflection. Chatbots are natural language interfaces that interact with users in text. They work at the tips of our hands much as SMS or instant messaging does, in domains from flight reservation and online shopping to news services and healthcare. There are even chatbot therapists offering psychotherapy on mobile. That machines can now talk to us creates an opportunity for designing a natural interaction that used to be humans' own. This work constructs a two-dimensional design space for translating self-reflection into a human-chatbot interaction, with user self-disclosure and chatbot guidance as its axes. Users confide their thoughts and feelings to the bot, and the bot guides them in a scaffolding process. Previous work has established an extensive line of research on the therapeutic effect of emotional disclosure. In HCI, reflection design has posited the need for guidance, e.g., scaffolding users' thoughts, rather than assuming their ability to reflect in a constructive manner. The design space illustrates different reflection processes depending on the levels of user disclosure and bot guidance. Existing reflection technologies have most commonly provided minimal levels of disclosure and guidance, and healthcare technologies the opposite. The aim of this work is to investigate the less explored space by designing chatbots called Bonobot and Diarybot. Bonobot differs from other bot interventions in that it motivates the idea of change rather than engaging with it directly. Diarybot is designed in two chat versions, Basic and Responsive, which create novel interactions for reflecting on a difficult life experience by explaining it to and exploring it with a chatbot. These chatbots were evaluated in user studies with 30 participants each, investigating user experiences of and responses to the design strategies. Based on the findings, challenges and opportunities in designing for chatbot-guided reflection are explored. The findings of this study are as follows. First, participants preferred Bonobot's questions that prompted the idea of change. Its responses were also appreciated, but only when they conveyed accurate empathy.
    Thus questions, coupled with empathetic responses, could serve as a catalyst for disclosure and even a possible change of behavior, a motivational boost. Yet the chatbot-led interaction raised user expectations of the bot: participants demanded more than guidance, such as solutions and even superhuman intelligence. A potential tradeoff between user engagement and autonomy in designing human-AI partnership is discussed. Unlike Bonobot, Diarybot was designed with less guidance, to encourage users' own narrative-making. In both Diarybot chats, the presence of a bot made it easier for participants to share their most difficult life experiences, compared to a no-chatbot writing condition. The increased interaction with the bot in Responsive chat led to better user engagement; by contrast, more emotional expressiveness and greater ease of writing were observed with the minimal interaction of Basic chat. Coupled with qualitative findings that reveal a user preference for varied interactions and a tendency to adapt to the bot's patterns, the predictability and transparency of chatbot interaction design are discussed in terms of managing user expectations in human-AI interaction. In sum, the findings shed light on designing human-AI interaction. Chatbots can be a potential means of supporting guided disclosure of life's most difficult experiences. Yet the interaction between a machine algorithm and innate human cognition raises interesting questions for the HCI community, especially in terms of user autonomy, interface predictability, and design transparency. Discussing the notion of algorithmic affordances in AI agents, this work proposes meaning-making as a novel interaction design metaphor: in the symbolic interaction via language, AI nudges users, which inspires and engages them in their pursuit of making sense of life's agony. Not only does this metaphor respect user autonomy, but it also maintains the veiled workings of AI for continued engagement. This work makes the following contributions. First, it designed and implemented chatbots that provide guidance to encourage user narratives in self-reflection. Next, it offers empirical evidence on chatbot-guided disclosure and discusses implications for tensions and challenges in design. Finally, it proposes meaning-making as a novel design metaphor, calling for the responsible design of intelligent interfaces for positive reflection in pursuit of psychological wellbeing and highlighting the algorithmic affordances and interpretive processes of human-AI interaction.
    Abstract in Korean (translated): Artificial intelligence (AI) is rapidly transforming every facet of our lives. The spread of natural language interfaces such as Apple's Siri and Google Assistant, in particular, suggests that conversation with AI agents will soon become a primary mode of interaction. AI agents already provide everyday services from content recommendation to online shopping, yet most of these are task-oriented. AI makes our lives more convenient, but can it make them more comfortable? This work starts from asking what role technology can play for modern people whose lives are convenient yet far from easy.
์ž์•„์„ฑ์ฐฐ(self-reflection), ์ฆ‰ ์ž์‹ ์— ๋Œ€ํ•ด ๊นŠ์ด ์ƒ๊ฐํ•ด ๋ณด๋Š” ํ™œ๋™์€ ์ž๊ธฐ์ธ์‹๊ณผ ์ž๊ธฐ์ดํ•ด๋ฅผ ๋„๋ชจํ•˜๊ณ  ๋ฐฐ์›€๊ณผ ๋ชฉํ‘œ์˜์‹์„ ๊ณ ์ทจํ•˜๋Š” ๋“ฑ ๋ถ„์•ผ๋ฅผ ๋ง‰๋ก ํ•˜๊ณ  ๋„๋ฆฌ ์—ฐ๊ตฌ ๋ฐ ์ ์šฉ๋˜์–ด ์™”๋‹ค. ํ•˜์ง€๋งŒ ์ž์•„์„ฑ์ฐฐ์˜ ๊ฐ€์žฅ ํฐ ์–ด๋ ค์›€์€ ์Šค์Šค๋กœ ๊ฑด์„ค์ ์ธ ์„ฑ์ฐฐ์„ ๋„๋ชจํ•˜๊ธฐ ํž˜๋“ค๋‹ค๋Š” ๊ฒƒ์ด๋‹ค. ํŠนํžˆ, ๋ถ€์ •์ ์ธ ๊ฐ์ •์  ๊ฒฝํ—˜์— ๋Œ€ํ•œ ์ž์•„์„ฑ์ฐฐ์€ ์ข…์ข… ์šฐ์šธ๊ฐ๊ณผ ๋ถˆ์•ˆ์„ ๋™๋ฐ˜ํ•œ๋‹ค. ๊ทน๋ณต์ด ํž˜๋“  ๊ฒฝ์šฐ ์ƒ๋‹ด ๋˜๋Š” ์น˜๋ฃŒ๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์ง€๋งŒ, ์‚ฌํšŒ์  ๋‚™์ธ๊ณผ ์žฃ๋Œ€์˜ ๋ถ€๋‹ด๊ฐ์œผ๋กœ ๊บผ๋ ค์ง€๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋‹ค์ˆ˜์ด๋‹ค. ์„ฑ์ฐฐ ๋””์ž์ธ(Reflection Design)์€ ์ธ๊ฐ„-์ปดํ“จํ„ฐ์ƒํ˜ธ์ž‘์šฉ(HCI)์˜ ์˜ค๋žœ ํ™”๋‘๋กœ, ๊ทธ๋™์•ˆ ํšจ๊ณผ์ ์ธ ์„ฑ์ฐฐ์„ ๋„์šธ ์ˆ˜ ์žˆ๋Š” ๋””์ž์ธ ์ „๋žต๋“ค์ด ๋‹ค์ˆ˜ ์—ฐ๊ตฌ๋˜์–ด ์™”์ง€๋งŒ ๋Œ€๋ถ€๋ถ„ ๋‹ค์–‘ํ•œ ์‚ฌ์šฉ์ž ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘ ์ „๋žต์„ ํ†ตํ•ด ๊ณผ๊ฑฐ ํšŒ์ƒ ๋ฐ ํ•ด์„์„ ๋•๋Š” ๋ฐ ๊ทธ์ณค๋‹ค. ์ตœ๊ทผ ์†Œ์œ„ ์ฑ—๋ด‡ ์ƒ๋‹ด์‚ฌ๊ฐ€ ๋“ฑ์žฅํ•˜์—ฌ ์‹ฌ๋ฆฌ์ƒ๋‹ด๊ณผ ์น˜๋ฃŒ ๋ถ„์•ผ์— ์ ์šฉ๋˜๊ณ  ์žˆ์ง€๋งŒ, ์ด ๋˜ํ•œ ์„ฑ์ฐฐ์„ ๋•๊ธฐ๋ณด๋‹ค๋Š” ํšจ์œจ์ ์ธ ์ฒ˜์น˜ ๋„๊ตฌ์— ๋จธ๋ฌด๋ฅด๊ณ  ์žˆ์„ ๋ฟ์ด๋‹ค. ์ฆ‰ ๊ธฐ์ˆ ์€ ์น˜๋ฃŒ ์ˆ˜๋‹จ์ด๊ฑฐ๋‚˜ ์„ฑ์ฐฐ์˜ ๋Œ€์ƒ์ด ๋˜์ง€๋งŒ, ๊ทธ ๊ณผ์ •์— ๊ฐœ์ž…ํ•˜๋Š” ๊ฒฝ์šฐ๋Š” ์ œํ•œ์ ์ด๋ผ๊ณ  ํ•  ์ˆ˜ ์žˆ๋‹ค. ์ด์— ๋ณธ ์—ฐ๊ตฌ๋Š” ์„ฑ์ฐฐ ๋™๋ฐ˜์ž๋กœ์„œ ๋Œ€ํ™”ํ˜• ์—์ด์ „ํŠธ์ธ ์ฑ—๋ด‡์„ ๋””์ž์ธํ•  ๊ฒƒ์„ ์ œ์•ˆํ•œ๋‹ค. ์ด ์ฑ—๋ด‡์˜ ์—ญํ• ์€ ์‚ฌ์šฉ์ž์˜ ๋ถ€์ •์ ์ธ ๊ฐ์ •์  ๊ฒฝํ—˜ ๋˜๋Š” ํŠธ๋ผ์šฐ๋งˆ์— ๋Œ€ํ•ด ์ด์•ผ๊ธฐํ•  ์ˆ˜ ์žˆ๋„๋ก ๋„์šธ ๋ฟ ์•„๋‹ˆ๋ผ, ๊ทธ ๊ณผ์ •์—์„œ ๋ฐ˜์ถ”๋ฅผ ํ†ต์ œํ•˜์—ฌ ๊ฑด์„ค์ ์ธ ๋‚ด๋Ÿฌํ‹ฐ๋ธŒ๋ฅผ ์ด๋Œ์–ด ๋‚ด๋Š” ๊ฐ€์ด๋“œ๋ฅผ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ด๋‹ค. ์ด๋Ÿฌํ•œ ์ฑ—๋ด‡์„ ์„ค๊ณ„ํ•˜๊ธฐ ์œ„ํ•ด, ์„ ํ–‰ ์—ฐ๊ตฌ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์‚ฌ์šฉ์ž์˜ ์ž๊ธฐ๋…ธ์ถœ(user self-disclosure)๊ณผ ์ฑ—๋ด‡ ๊ฐ€์ด๋“œ(guidance)๋ฅผ ๋‘ ์ถ•์œผ๋กœ ํ•œ ๋””์ž์ธ ๊ณต๊ฐ„(design space)์„ ์ •์˜ํ•˜์˜€๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ž๊ธฐ๋…ธ์ถœ๊ณผ ๊ฐ€์ด๋“œ์˜ ์ •๋„์— ๋”ฐ๋ฅธ ๋„ค ๊ฐ€์ง€ ์ž์•„์„ฑ์ฐฐ ๊ฒฝํ—˜์„ ๋ถ„๋ฅ˜ํ•˜์˜€๋‹ค: ์ž๊ธฐ๋…ธ์ถœ๊ณผ ๊ฐ€์ด๋“œ๊ฐ€ ์ตœ์†Œํ™”๋œ ํšŒ์ƒ ๊ณต๊ฐ„, ์ž๊ธฐ๋…ธ์ถœ์ด ์œ„์ฃผ์ด๊ณ  ๊ฐ€์ด๋“œ๊ฐ€ ์ตœ์†Œํ™”๋œ ์„ค๋ช… ๊ณต๊ฐ„, ์ž๊ธฐ๋…ธ์ถœ๊ณผ ์ฑ—๋ด‡์ด ์ด๋„๋Š” ๊ฐ€์ด๋“œ๊ฐ€ ํ˜ผํ•ฉ๋œ ํƒ์ƒ‰ ๊ณต๊ฐ„, ๊ฐ€์ด๋“œ๋ฅผ ์ ๊ทน ๊ฐœ์ž…์‹œ์ผœ ์ž๊ธฐ๋…ธ์ถœ์„ ๋†’์ด๋Š” ๋ณ€ํ™” ๊ณต๊ฐ„์ด ๊ทธ๊ฒƒ์ด๋‹ค. ๋ณธ ์—ฐ๊ตฌ์˜ ๋ชฉํ‘œ๋Š” ์ƒ์ˆ ๋œ ๋””์ž์ธ ๊ณต๊ฐ„์—์„œ์˜ ์„ฑ์ฐฐ ๊ฒฝํ—˜๊ณผ ๊ณผ์ •์„ ๋•๋Š” ์ฑ—๋ด‡์„ ๊ตฌํ˜„ํ•˜๊ณ , ์‚ฌ์šฉ์ž ์‹คํ—˜์„ ํ†ตํ•ด ์„ฑ์ฐฐ ๊ฒฝํ—˜๊ณผ ๋””์ž์ธ ์ „๋žต์— ๋Œ€ํ•œ ๋ฐ˜์‘์„ ์ˆ˜์ง‘ ๋ฐ ๋ถ„์„ํ•จ์œผ๋กœ์จ ์ฑ—๋ด‡ ๊ธฐ๋ฐ˜์˜ ์ž์•„ ์„ฑ์ฐฐ ์ธํ„ฐ๋ž™์…˜์„ ์ƒˆ๋กญ๊ฒŒ ์ œ์‹œํ•˜๊ณ  ์ด์— ๋Œ€ํ•œ ์‹ค์ฆ์  ๊ทผ๊ฑฐ๋ฅผ ๋งˆ๋ จํ•˜๋Š” ๊ฒƒ์ด๋‹ค. ํ˜„์žฌ๊นŒ์ง€ ๋งŽ์€ ์„ฑ์ฐฐ ๊ธฐ์ˆ ์€ ํšŒ์ƒ์— ์ง‘์ค‘๋˜์–ด ์žˆ๊ธฐ์—, ๋‚˜๋จธ์ง€ ์„ธ ๊ณต๊ฐ„์—์„œ์˜ ์„ฑ์ฐฐ์„ ์ง€์›ํ•˜๋Š” ๋ณด๋…ธ๋ด‡๊ณผ ๊ธฐ๋ณธํ˜•๋ฐ˜์‘ํ˜• ์ผ๊ธฐ๋ด‡์„ ๋””์ž์ธํ•˜์˜€๋‹ค. ๋˜ํ•œ, ์‚ฌ์šฉ์ž ํ‰๊ฐ€๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ๋„์ถœํ•œ ์—ฐ๊ตฌ๊ฒฐ๊ณผ๋ฅผ ํ†ตํ•ด ๋„๋ž˜ํ•œ ์ธ๊ฐ„-์ธ๊ณต์ง€๋Šฅ ์ƒํ˜ธ์ž‘์šฉ(human-AI interaction)์˜ ๋งฅ๋ฝ์—์„œ ์„ฑ์ฐฐ ๋™๋ฐ˜์ž๋กœ์„œ์˜ ์ฑ—๋ด‡ ๊ธฐ์ˆ ์ด ๊ฐ–๋Š” ์˜๋ฏธ์™€ ์—ญํ• ์„ ํƒ๊ตฌํ•œ๋‹ค. ๋ณด๋…ธ๋ด‡๊ณผ ์ผ๊ธฐ๋ด‡์€ ์ธ๊ฐ„์ค‘์‹ฌ์ƒ๋‹ด๊ณผ ๋Œ€ํ™”๋ถ„์„์˜ ์ด๋ก ์  ๊ทผ๊ฑฐ๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ํ•œ ์ •์„œ์ง€๋Šฅ(emotional intelligence)๊ณผ ์ ˆ์ฐจ์ง€๋Šฅ(proecedural intelligence)์„ ํ•ต์‹ฌ ์ถ•์œผ๋กœ, ๋Œ€ํ™” ํ๋ฆ„ ์ œ์–ด(flow manager)์™€ ๋ฐœํ™” ์ƒ์„ฑ(response generator)์„ ํ•ต์‹ฌ ๋ชจ๋“ˆ๋กœ ๊ตฌํ˜„ํ•˜์˜€๋‹ค. 
    Bonobot, grounded in motivational interviewing (MI), elicits narratives about concerns and stress and supports reflection toward change through guiding questions. To implement it, a four-stage MI conversation was defined, and counselor utterance behaviors for each stage were collected from the literature, preprocessed, and scripted. So that the pre-scripted sentences could be used in context-preserving conversation, the topic was restricted to the difficulties of graduate students. A qualitative user study with 30 graduate students explored how conversations with Bonobot affected users' reflection and how users perceived it. Participants preferred the varied exploratory questions that could invite change talk, and questions and feedback that precisely fit the user's context could lead to more active self-disclosure. However, when the chatbot led the conversation like a counselor, raised expectations meant that some participants, despite voicing motivation to change, tried to cede their autonomy over change to the chatbot. Building on the Bonobot study, Diarybot was designed to let users, rather than the chatbot, develop their reflective narratives more actively. Diarybot supports expressive writing about trauma and offers either a Basic or a Responsive chat. Basic chat provides an environment for freely explaining the trauma; Responsive chat has users re-explore the past experience through follow-up interactions on the narrative they have written. The follow-up utterance behaviors were excerpted from various counseling therapies and draw on emotion words and interpersonal keywords extracted from the user's narrative. To compare responses to the two Diarybots, a control condition of expressive writing in a plain document without a chatbot was set up; 30 participants were recruited, randomly assigned to conditions, and took part in a four-day writing experiment accompanied by surveys and interviews. Participants perceived writing as a conversational activity, imagining an invisible listener through the interaction with Diarybot. The follow-up questions in Responsive chat, in particular, helped users objectify their situations and consider new perspectives. Users who experienced the follow-up interactions rated Diarybot's perceived enjoyment, sociability, trustworthiness, and intention to reuse significantly higher than in the other two conditions, whereas Basic chat participants rated ease of emotional expression significantly higher and difficulty of writing significantly lower than in the other two conditions. The chatbot, in other words, could play the role of a listener even with little interaction, but Responsive chat, which allowed interaction through follow-up questions, elicited more active user engagement.
    As the experiment progressed, adaptive behavior was also observed: users adjusted their writing topics and word choices to the Responsive Diarybot's algorithm. These findings show that different chatbot design strategies can elicit users' narratives differently and thus lead to different types of reflection experiences. The study further explores the tensions that can arise around user autonomy, interaction predictability, and design transparency when self-reflection, an autonomous act, becomes reciprocal through interaction with technology, and discusses the algorithmic affordances of AI agents. That an invisible chatbot algorithm can steer a user's reflection may seem to subvert the user control and design transparency emphasized in conventional human-computer interaction, but in the context of symbolic interaction it can instead become a process in which the user, prompted by the algorithm, actively explores new meanings of the past. This work proposes this as a new design metaphor, meaning-making, and emphasizes the user's subjective interpretive process prompted by algorithmic nudges. It implies that even a single chatbot algorithm can elicit diverse reflection experiences from different users; in this context, AI can preserve its black box while guaranteeing user autonomy. This work deepens the empirical understanding of designing collaborative AI chatbot technology and, by implementing theory-grounded chatbots, provides empirical evidence for design strategies. By proposing a new design metaphor for technology as a companion in self-reflection, it contributes to the theoretical extension of HCI, and as relationship-oriented AI that helps users pursue meaning in negative experiences, it carries social and industrial significance for the mental health of people today.
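
    The abstract describes Bonobot and Diarybot as built from two core modules, a conversation flow manager and a response generator, with Bonobot stepping through a four-stage motivational-interviewing dialogue of scripted counselor utterances. Below is a minimal sketch of that architecture; the stage-advance rule and the utterance scripts are invented placeholders, since the thesis does not publish its implementation.
```python
import random

# Stage names follow the four processes of motivational interviewing; the
# utterance scripts are invented placeholders, not the thesis's real corpus.
STAGES = ["engaging", "focusing", "evoking", "planning"]

SCRIPTS = {
    "engaging": ["What brings you here today?", "How have things been lately?"],
    "focusing": ["Which part of this weighs on you the most?"],
    "evoking": ["What would be different if this changed?"],
    "planning": ["What small step could you try this week?"],
}

class FlowManager:
    """Procedural intelligence: advance the dialogue through scripted stages."""

    def __init__(self, turns_per_stage=3):
        self.stage_idx = 0
        self.turns = 0
        self.turns_per_stage = turns_per_stage

    @property
    def stage(self):
        return STAGES[self.stage_idx]

    def advance(self):
        # Simplified progression rule: a fixed number of turns per stage.
        self.turns += 1
        if self.turns >= self.turns_per_stage and self.stage_idx < len(STAGES) - 1:
            self.stage_idx += 1
            self.turns = 0

class ResponseGenerator:
    """Emotional intelligence: reflect the user's words, then ask a scripted question."""

    def respond(self, stage, user_text):
        reflection = f"It sounds like a lot: \"{user_text.strip()}\". " if user_text else ""
        return reflection + random.choice(SCRIPTS[stage])

flow, gen = FlowManager(), ResponseGenerator()
print(gen.respond(flow.stage, "I'm stressed about my thesis"))
flow.advance()
```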

    Do users need human-like conversational agents? - Exploring conversational system design using framework of human needs

    Get PDF
    The fascinating story of human evolution can be attributed to our ability to speak, write, and communicate complex thoughts. When researchers envision a perfect, artificially intelligent conversational system, they want the system to be human-like; that is, it should converse with the same intellect and cognition as humans. The question we need to ask, then, is whether we need a human-like conversational system at all. Before we engage in the complex endeavor of implementing human-like characteristics, we should debate whether the pursuit of such a system is logical and ethical. We analyze some system-level characteristics and discuss their merits and potential for harm. We review recent work on conversational systems to understand how design features for conversational agents are evolving. Additionally, we look to the framework of human needs to assess how a system should assign relative importance to user requests and prioritize user tasks. We draw on related work in human-computer interaction, sentiment analysis, and human psychology to provide insights into how future conversational agents should be designed for better user satisfaction.
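
    As a sketch of what prioritizing requests against a hierarchy of human needs could look like: the tiers below follow Maslow's hierarchy, but the keyword mapping and scoring are illustrative assumptions, not the paper's proposal.
```python
# Illustrative only: the paper argues for prioritizing user requests via a
# framework of human needs but does not prescribe this mapping or scoring.
NEED_PRIORITY = {  # lower number = more fundamental = served first
    "physiological": 0,
    "safety": 1,
    "social": 2,
    "esteem": 3,
    "self_actualization": 4,
}

NEED_KEYWORDS = {
    "physiological": ["food", "water", "sleep", "medication"],
    "safety": ["emergency", "danger", "lock", "alarm"],
    "social": ["call", "message", "friend", "family"],
    "esteem": ["reminder", "goal", "progress"],
    "self_actualization": ["learn", "hobby", "course"],
}

def classify_need(request: str) -> str:
    text = request.lower()
    for need, words in NEED_KEYWORDS.items():
        if any(w in text for w in words):
            return need
    return "self_actualization"  # default: least urgent tier

def prioritize(requests: list[str]) -> list[str]:
    return sorted(requests, key=lambda r: NEED_PRIORITY[classify_need(r)])

# Safety-related requests jump the queue; open-ended learning waits.
print(prioritize(["teach me chess", "call my daughter", "set the smoke alarm"]))
```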

    AI-based Technologies for Everyone: How and Why to Adapt Voice Assistantsโ€™ Complexity to Older Adults

    Get PDF
    Technological advancements in the area of artificial intelligence have rapidly improved the performance of speech recognition and natural language processing. These improvements have facilitated the proliferation of voice assistants (VAs), which can understand human speech and provide spoken answers to assist in various tasks. More and more individuals and organizations adopt VAs because they value the naturalness of speech interaction. However, speech interaction is ephemeral and processed in sequential order, which puts cognitive load on the user. Therefore, we investigate the relationship between the complexity of speech interaction and the interaction outcomes of enjoyment, satisfaction, and intention to explore. Our results show that this relationship has an inverted U-shape for people with above-median information processing speed (i.e., younger adults) but is negatively linear otherwise. The results contribute to the literature on interface complexity and on the use of IT systems by the elderly.
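
    The reported pattern amounts to a quadratic effect of complexity whose shape differs by processing-speed group. A small sketch that simulates and fits it, with all variable names and data assumed rather than taken from the paper:
```python
import numpy as np

# Hypothetical data: interaction 'complexity', a binary 'fast' indicator for
# above-median information processing speed, and an outcome such as enjoyment.
rng = np.random.default_rng(0)
n = 200
complexity = rng.uniform(0, 1, n)
fast = rng.integers(0, 2, n)
# Inverted U for fast processors, negative linear otherwise (plus noise),
# mirroring the relationship the abstract reports.
outcome = np.where(fast == 1,
                   4 * complexity - 4 * complexity**2,
                   -1.5 * complexity) + rng.normal(0, 0.2, n)

# Fit outcome ~ complexity + complexity^2 separately per group.
for g in (0, 1):
    m = fast == g
    b2, b1, b0 = np.polyfit(complexity[m], outcome[m], deg=2)
    label = "fast (above-median)" if g else "slow (below-median)"
    # A clearly negative quadratic term should appear only for the fast group.
    print(f"{label}: quadratic={b2:+.2f}, linear={b1:+.2f}")
```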

    Conversational affective social robots for ageing and dementia support

    Get PDF
    Socially assistive robots (SAR) hold significant potential to assist older adults and people with dementia in human engagement and clinical contexts by supporting mental health and independence at home. While SAR research has recently experienced prolific growth, long-term trust, clinical translation and patient benefit remain immature. Affective human-robot interactions are unresolved, and the deployment of robots with conversational abilities is fundamental for robustness and human-robot engagement. In this paper, we review the state of the art within the past two decades, design trends, and current applications of conversational affective SAR for ageing and dementia support. A horizon scanning of AI voice technology for healthcare, including ubiquitous smart speakers, is further introduced to address current gaps inhibiting home use. We discuss the role of user-centred approaches in the design of voice systems, including the capacity to handle communication breakdowns for effective use by target populations. We summarise the state of development in interactions using speech and natural language processing, which forms a baseline for longitudinal health monitoring and cognitive assessment. Drawing from this foundation, we identify open challenges and propose future directions to advance conversational affective social robots for: 1) user engagement, 2) deployment in real-world settings, and 3) clinical translation.

    Promoting Sustainable Mobility Beliefs with Persuasive and Anthropomorphic Design: Insights from an Experiment with a Conversational Agent

    Get PDF
    Sustainable mobility behavior is increasingly relevant due to the vast environmental impact of current transportation systems. With the growing variety of transportation modes, individual decisions for or against specific mobility options become more and more important, and salient beliefs regarding the environmental impact of different modes influence this decision process. While information systems have been recognized for their potential to shape individual beliefs and behavior, design-oriented studies that explore their impact, in particular on environmental beliefs, remain scarce. In this study, we contribute to closing this research gap by designing and evaluating a new type of artifact, a persuasive and human-like conversational agent, in a 2x2 experiment with 225 participants. Drawing on the Theory of Planned Behavior and Social Response Theory, we find empirical support for the influence of persuasive design elements on individual environmental beliefs and discover that anthropomorphic design can contribute to increasing the persuasiveness of artifacts.
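
    A 2x2 between-subjects design with a continuous belief outcome is commonly analyzed with a two-way ANOVA. A sketch on simulated data follows; the column names and the statsmodels-based analysis are assumptions, not the study's published pipeline.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Hypothetical frame: one row per participant, the two manipulated factors
# (persuasive design, anthropomorphic design), and a belief score.
df = pd.DataFrame({
    "persuasive": rng.integers(0, 2, 225),
    "anthropomorphic": rng.integers(0, 2, 225),
    "belief": rng.normal(4, 1, 225),
})

# Two-way ANOVA: main effects of each design element plus their interaction.
model = smf.ols("belief ~ C(persuasive) * C(anthropomorphic)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```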

    "Mango Mango, How to Let The Lettuce Dry Without A Spinner?'': Exploring User Perceptions of Using An LLM-Based Conversational Assistant Toward Cooking Partner

    Full text link
    The rapid advancement of large language models (LLMs) has created numerous possibilities for integrating them into conversational assistants (CAs) that help people with their daily tasks, particularly given their extensive flexibility. However, users' real-world experiences of interacting with these assistants remain underexplored. In this research, we chose cooking, a complex daily task, as a scenario to investigate people's successful and unsatisfactory experiences while receiving assistance from an LLM-based CA, Mango Mango. We discovered that participants value the system's ability to provide extensive information beyond the recipe, offer customized instructions based on context, and assist them in dynamically planning the task. However, they expect the system to be more adaptive to oral conversation and to provide more suggestive responses that keep them actively involved. Recognizing that users began treating our LLM-based CA as a personal assistant or even a partner rather than just a recipe-reading tool, we propose several design considerations for future development.
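
    The interaction style described, contextual cooking guidance carried across a dialogue, can be sketched as a simple prompt loop. In the sketch below, `llm_complete` is a hypothetical stand-in for whatever model API the system uses; none of this is Mango Mango's actual code.
```python
# Hypothetical sketch of an LLM-backed cooking assistant loop. `llm_complete`
# stands in for a real model call; the canned reply lets it run without one.
SYSTEM_PROMPT = (
    "You are a cooking assistant. Track the user's current recipe step, "
    "answer side questions (substitutions, technique, timing), and end each "
    "reply with a brief suggestion to keep the user engaged."
)

def llm_complete(messages):
    # Swap in an actual LLM call here; passing the full message history gives
    # the model the dialogue context for step- and recipe-aware replies.
    return "Pat the lettuce dry between two clean towels, then let it air out."

def assist():
    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        user = input("cook> ")
        if user.lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user})
        reply = llm_complete(history)
        history.append({"role": "assistant", "content": reply})
        print(reply)

if __name__ == "__main__":
    assist()
```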