2,635 research outputs found

    Promoting Vices: An introduction to research on the advertising of coercive products

    This book provides an introduction to the study of advertising for products that jurisdictions typically regulate because of their potential harm to health and well-being. Examples come from studies on the advertising of alcohol, tobacco, sugary foods, and gambling. It maps the most common dilemma formulations and approaches employed by researchers. It also addresses the subject from the perspectives of new mediatized life, merged genres, synchronized communication technologies, and fuzzy borders between producers and consumers of commercial messages. The book suggests four problematization foci that the research has typically addressed: content, effect, vulnerable groups, and policy. It portrays a research field underpinned by notions of the harmful effects caused by marketing and of the relevance of pointing out this circumstance in order to achieve political change. Each chapter closes with a summary of take-home messages. The book concludes by providing recommendations to researchers who want to take on work in this area of research. Peer reviewed

    Data analytics 2016: proceedings of the fifth international conference on data analytics


    Assessing the impacts of digital government transformation in the EU

    This report presents the results of the conceptual and empirical work conducted as part of the JRC research on "Exploring Digital Government Transformation: understanding public sector innovation in a data-driven society", carried out within the framework of the "European Location Interoperability Solutions for eGovernment (ELISE)" Action of the ISA2 Programme on Interoperability solutions for public administrations, businesses and citizens, coordinated by DIGIT. Building on the systematisation of the state of the art carried out in the previous phase of the research, the report presents an original conceptual framework for assessing the impacts of Digital Government transformation in the EU and discusses the results of case studies, conducted in different policy areas and in various EU countries, that used an experimental or quasi-experimental approach to test and validate the framework. The report concludes by outlining the final proposal of DigiGov F 2.0, which defines the dimensions and elements of analysis for assessing the effects that can be generated by digital innovation in the public sector and the impacts they have at social, economic, and political levels across policy-cycle phases and governance contexts. JRC.B.6-Digital Economy

    The Impacts of the Relation between Users and Software Agents in Delegated Negotiation: A Control Perspective

    Software agents are increasingly being applied to e-commerce activities, including commercial negotiations. Agents can be used to conduct negotiation tasks on behalf of users. When users delegate negotiation tasks to agents, information technology plays a role in determining social affairs: the locus of control over those affairs partially shifts from human participants to technology. When this negotiation approach is adopted, an important question arises: how will users treat and assess their agents when they delegate negotiations to them? It is challenging to develop agents that can connect with users in meaningful ways. This thesis argues that, because of the agents' autonomy, users will not treat their negotiating agents in the same manner as they treat classical computer-enabled tools or aids. When assessing agents, users will be heavily oriented towards their relationships with the agents. Drawing on several streams of literature, this thesis proposes that the notion of control helps to characterize the relationships between users and agents. Users' experienced control will influence their assessments and adoption of their negotiating agents. Users' experienced control can be linked to instrumental control, a set of means that empowers the interaction between users and agents. An experiment was conducted to test these propositions, and its results provide support for them.

    The Influence of Education and Experience upon Contextual and Task Performance in Warehouse Operations

    Supply chain workers make observable, preventable errors while completing their assigned tasks in the shipping process. Previous research has indicated that individuals with a greater grasp of their work and better system knowledge are less likely to commit interpretation errors. We believe worker performance may likewise be affected by an individual's knowledge of why and where they fit into a larger system, which we define as mission knowledge. To assess our research objectives, we conduct a controlled experiment with 100 workers in the Air Force supply career field to discern how mission clarity, delivered through education and experience treatments, and subject characteristics affect pick-and-pack errors in simulated warehouse order-fulfillment tasks. Results indicate that participants who received the experience treatment committed fewer errors, resulting in higher task performance.

    Ancient and historical systems


    Designing Conversational Agents to Support User Narratives and Self-Reflection in Mental Health

    Thesis (Ph.D.) -- Graduate School of Convergence Science and Technology (Program in Digital Information Convergence), Seoul National University, August 2020. Advisor: Bongwon Suh.
With the advent of artificial intelligence (AI), we are surrounded by technological gadgets, devices, and intelligent personal assistants (IPAs) that voluntarily take care of our home, work, and social networks. They help us manage our lives for the better, or at least that is what they are designed to do. Few, however, are designed to help us grapple with the thoughts and feelings that often make up our lives. In other words, technologies hardly help us think. How can they be designed to help us reflect on ourselves for the better? In the simplest terms, self-reflection refers to thinking deeply about oneself. When we think deeply about ourselves, there can be both positive and negative consequences. On the one hand, reflecting on ourselves can lead to a better self-understanding, helping us achieve life goals. On the other hand, we may fall into brooding and depression. The sad news is that the two are usually intertwined. The problem, then, is the irony that reflecting on oneself, by oneself, is not easy. To tackle this problem, this work aims to design technology in the form of a conversational agent, or chatbot, to encourage positive self-reflection. Chatbots are natural language interfaces that interact with users in text. They work at our fingertips, much as SMS or instant messaging does, in services ranging from flight reservation and online shopping to news and healthcare. There are even chatbot therapists offering psychotherapy on mobile. That machines can now talk to us creates an opportunity for designing a natural interaction that used to be humans' own. This work constructs a two-dimensional design space, defined by user self-disclosure and chatbot guidance, for translating self-reflection into human-chatbot interaction. Users confess their thoughts and feelings to the bot, and the bot guides them in the scaffolding process. Previous work has established an extensive line of research on the therapeutic effect of emotional disclosure. In HCI, reflection design has posited the need for guidance, e.g. scaffolding users' thoughts, rather than assuming their ability to reflect in a constructive manner. The design space illustrates different reflection processes depending on the levels of user disclosure and bot guidance. Existing reflection technologies have most commonly provided minimal levels of disclosure and guidance, and healthcare technologies the opposite. It is the aim of this work to investigate the less explored space by designing chatbots called Bonobot and Diarybot. Bonobot differentiates itself from other bot interventions in that it only motivates the idea of change rather than direct engagement. Diarybot is designed in two chat versions, Basic and Responsive, which create novel interactions for reflecting on a difficult life experience by explaining it to and exploring it with a chatbot. Each chatbot was evaluated in a user study with 30 participants to investigate user experiences of, and responses to, the design strategies. Based on the findings, challenges and opportunities in designing for chatbot-guided reflection are explored. The findings of this study are as follows. First, participants preferred Bonobot's questions that prompted the idea of change. Its responses were also appreciated, but only when they conveyed accurate empathy.
Questions, coupled with empathetic responses, could thus serve as a catalyst for disclosure and even a possible change of behavior, providing a motivational boost. Yet the chatbot-led interaction inflated user expectations of the bot: participants demanded more than guidance, such as solutions and even superhuman intelligence. The potential tradeoff between user engagement and autonomy in designing human-AI partnerships is discussed. Unlike Bonobot, Diarybot was designed with less guidance to encourage users' own narrative making. In both Diarybot chats, the presence of a bot made it easier for participants to share their most difficult life experiences, compared to a no-chatbot writing condition. Increased interaction with the bot in Responsive chat could lead to better user engagement, whereas more emotional expressiveness and greater ease of writing were observed with little interaction in Basic chat. Together with qualitative findings that reveal users' preference for varied interactions and their tendency to adapt to bot patterns, the predictability and transparency of chatbot interaction design are discussed in terms of managing user expectations in human-AI interaction. In sum, the findings of this study shed light on designing human-AI interaction. Chatbots can be a potential means of supporting guided disclosure of life's most difficult experiences. Yet the interaction between a machine algorithm and innate human cognition raises interesting questions for the HCI community, especially regarding user autonomy, interface predictability, and design transparency. Discussing the notion of algorithmic affordances in AI agents, this work proposes meaning-making as a novel interaction design metaphor: in symbolic interaction via language, AI nudges users, inspiring and engaging them in their pursuit of making sense of life's agony. Not only does this metaphor respect user autonomy, but it also keeps the workings of AI veiled from users for continued engagement. This work makes the following contributions. First, it designed and implemented chatbots that can provide guidance to encourage user narratives in self-reflection. Next, it offers empirical evidence on chatbot-guided disclosure and discusses the tensions and challenges that arise in design. Finally, it proposes meaning-making as a novel design metaphor. It calls for the responsible design of intelligent interfaces for positive reflection in pursuit of psychological wellbeing, highlighting the algorithmic affordances and interpretive processes of human-AI interaction.

Recently, artificial intelligence (AI) technology has been transforming every facet of our lives at great speed. In particular, the spread of natural language interfaces such as Apple's Siri and Google Assistant suggests that conversation with AI agents will soon become a primary mode of interaction. AI agents already provide a variety of everyday services, from content recommendation to online shopping, but most of them are task-oriented. AI makes our lives more convenient, but can it also put us at ease? This work starts from a concern about the role technology can play for people whose lives are convenient yet far from comfortable.
์ž์•„์„ฑ์ฐฐ(self-reflection), ์ฆ‰ ์ž์‹ ์— ๋Œ€ํ•ด ๊นŠ์ด ์ƒ๊ฐํ•ด ๋ณด๋Š” ํ™œ๋™์€ ์ž๊ธฐ์ธ์‹๊ณผ ์ž๊ธฐ์ดํ•ด๋ฅผ ๋„๋ชจํ•˜๊ณ  ๋ฐฐ์›€๊ณผ ๋ชฉํ‘œ์˜์‹์„ ๊ณ ์ทจํ•˜๋Š” ๋“ฑ ๋ถ„์•ผ๋ฅผ ๋ง‰๋ก ํ•˜๊ณ  ๋„๋ฆฌ ์—ฐ๊ตฌ ๋ฐ ์ ์šฉ๋˜์–ด ์™”๋‹ค. ํ•˜์ง€๋งŒ ์ž์•„์„ฑ์ฐฐ์˜ ๊ฐ€์žฅ ํฐ ์–ด๋ ค์›€์€ ์Šค์Šค๋กœ ๊ฑด์„ค์ ์ธ ์„ฑ์ฐฐ์„ ๋„๋ชจํ•˜๊ธฐ ํž˜๋“ค๋‹ค๋Š” ๊ฒƒ์ด๋‹ค. ํŠนํžˆ, ๋ถ€์ •์ ์ธ ๊ฐ์ •์  ๊ฒฝํ—˜์— ๋Œ€ํ•œ ์ž์•„์„ฑ์ฐฐ์€ ์ข…์ข… ์šฐ์šธ๊ฐ๊ณผ ๋ถˆ์•ˆ์„ ๋™๋ฐ˜ํ•œ๋‹ค. ๊ทน๋ณต์ด ํž˜๋“  ๊ฒฝ์šฐ ์ƒ๋‹ด ๋˜๋Š” ์น˜๋ฃŒ๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์ง€๋งŒ, ์‚ฌํšŒ์  ๋‚™์ธ๊ณผ ์žฃ๋Œ€์˜ ๋ถ€๋‹ด๊ฐ์œผ๋กœ ๊บผ๋ ค์ง€๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋‹ค์ˆ˜์ด๋‹ค. ์„ฑ์ฐฐ ๋””์ž์ธ(Reflection Design)์€ ์ธ๊ฐ„-์ปดํ“จํ„ฐ์ƒํ˜ธ์ž‘์šฉ(HCI)์˜ ์˜ค๋žœ ํ™”๋‘๋กœ, ๊ทธ๋™์•ˆ ํšจ๊ณผ์ ์ธ ์„ฑ์ฐฐ์„ ๋„์šธ ์ˆ˜ ์žˆ๋Š” ๋””์ž์ธ ์ „๋žต๋“ค์ด ๋‹ค์ˆ˜ ์—ฐ๊ตฌ๋˜์–ด ์™”์ง€๋งŒ ๋Œ€๋ถ€๋ถ„ ๋‹ค์–‘ํ•œ ์‚ฌ์šฉ์ž ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘ ์ „๋žต์„ ํ†ตํ•ด ๊ณผ๊ฑฐ ํšŒ์ƒ ๋ฐ ํ•ด์„์„ ๋•๋Š” ๋ฐ ๊ทธ์ณค๋‹ค. ์ตœ๊ทผ ์†Œ์œ„ ์ฑ—๋ด‡ ์ƒ๋‹ด์‚ฌ๊ฐ€ ๋“ฑ์žฅํ•˜์—ฌ ์‹ฌ๋ฆฌ์ƒ๋‹ด๊ณผ ์น˜๋ฃŒ ๋ถ„์•ผ์— ์ ์šฉ๋˜๊ณ  ์žˆ์ง€๋งŒ, ์ด ๋˜ํ•œ ์„ฑ์ฐฐ์„ ๋•๊ธฐ๋ณด๋‹ค๋Š” ํšจ์œจ์ ์ธ ์ฒ˜์น˜ ๋„๊ตฌ์— ๋จธ๋ฌด๋ฅด๊ณ  ์žˆ์„ ๋ฟ์ด๋‹ค. ์ฆ‰ ๊ธฐ์ˆ ์€ ์น˜๋ฃŒ ์ˆ˜๋‹จ์ด๊ฑฐ๋‚˜ ์„ฑ์ฐฐ์˜ ๋Œ€์ƒ์ด ๋˜์ง€๋งŒ, ๊ทธ ๊ณผ์ •์— ๊ฐœ์ž…ํ•˜๋Š” ๊ฒฝ์šฐ๋Š” ์ œํ•œ์ ์ด๋ผ๊ณ  ํ•  ์ˆ˜ ์žˆ๋‹ค. ์ด์— ๋ณธ ์—ฐ๊ตฌ๋Š” ์„ฑ์ฐฐ ๋™๋ฐ˜์ž๋กœ์„œ ๋Œ€ํ™”ํ˜• ์—์ด์ „ํŠธ์ธ ์ฑ—๋ด‡์„ ๋””์ž์ธํ•  ๊ฒƒ์„ ์ œ์•ˆํ•œ๋‹ค. ์ด ์ฑ—๋ด‡์˜ ์—ญํ• ์€ ์‚ฌ์šฉ์ž์˜ ๋ถ€์ •์ ์ธ ๊ฐ์ •์  ๊ฒฝํ—˜ ๋˜๋Š” ํŠธ๋ผ์šฐ๋งˆ์— ๋Œ€ํ•ด ์ด์•ผ๊ธฐํ•  ์ˆ˜ ์žˆ๋„๋ก ๋„์šธ ๋ฟ ์•„๋‹ˆ๋ผ, ๊ทธ ๊ณผ์ •์—์„œ ๋ฐ˜์ถ”๋ฅผ ํ†ต์ œํ•˜์—ฌ ๊ฑด์„ค์ ์ธ ๋‚ด๋Ÿฌํ‹ฐ๋ธŒ๋ฅผ ์ด๋Œ์–ด ๋‚ด๋Š” ๊ฐ€์ด๋“œ๋ฅผ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ด๋‹ค. ์ด๋Ÿฌํ•œ ์ฑ—๋ด‡์„ ์„ค๊ณ„ํ•˜๊ธฐ ์œ„ํ•ด, ์„ ํ–‰ ์—ฐ๊ตฌ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์‚ฌ์šฉ์ž์˜ ์ž๊ธฐ๋…ธ์ถœ(user self-disclosure)๊ณผ ์ฑ—๋ด‡ ๊ฐ€์ด๋“œ(guidance)๋ฅผ ๋‘ ์ถ•์œผ๋กœ ํ•œ ๋””์ž์ธ ๊ณต๊ฐ„(design space)์„ ์ •์˜ํ•˜์˜€๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ž๊ธฐ๋…ธ์ถœ๊ณผ ๊ฐ€์ด๋“œ์˜ ์ •๋„์— ๋”ฐ๋ฅธ ๋„ค ๊ฐ€์ง€ ์ž์•„์„ฑ์ฐฐ ๊ฒฝํ—˜์„ ๋ถ„๋ฅ˜ํ•˜์˜€๋‹ค: ์ž๊ธฐ๋…ธ์ถœ๊ณผ ๊ฐ€์ด๋“œ๊ฐ€ ์ตœ์†Œํ™”๋œ ํšŒ์ƒ ๊ณต๊ฐ„, ์ž๊ธฐ๋…ธ์ถœ์ด ์œ„์ฃผ์ด๊ณ  ๊ฐ€์ด๋“œ๊ฐ€ ์ตœ์†Œํ™”๋œ ์„ค๋ช… ๊ณต๊ฐ„, ์ž๊ธฐ๋…ธ์ถœ๊ณผ ์ฑ—๋ด‡์ด ์ด๋„๋Š” ๊ฐ€์ด๋“œ๊ฐ€ ํ˜ผํ•ฉ๋œ ํƒ์ƒ‰ ๊ณต๊ฐ„, ๊ฐ€์ด๋“œ๋ฅผ ์ ๊ทน ๊ฐœ์ž…์‹œ์ผœ ์ž๊ธฐ๋…ธ์ถœ์„ ๋†’์ด๋Š” ๋ณ€ํ™” ๊ณต๊ฐ„์ด ๊ทธ๊ฒƒ์ด๋‹ค. ๋ณธ ์—ฐ๊ตฌ์˜ ๋ชฉํ‘œ๋Š” ์ƒ์ˆ ๋œ ๋””์ž์ธ ๊ณต๊ฐ„์—์„œ์˜ ์„ฑ์ฐฐ ๊ฒฝํ—˜๊ณผ ๊ณผ์ •์„ ๋•๋Š” ์ฑ—๋ด‡์„ ๊ตฌํ˜„ํ•˜๊ณ , ์‚ฌ์šฉ์ž ์‹คํ—˜์„ ํ†ตํ•ด ์„ฑ์ฐฐ ๊ฒฝํ—˜๊ณผ ๋””์ž์ธ ์ „๋žต์— ๋Œ€ํ•œ ๋ฐ˜์‘์„ ์ˆ˜์ง‘ ๋ฐ ๋ถ„์„ํ•จ์œผ๋กœ์จ ์ฑ—๋ด‡ ๊ธฐ๋ฐ˜์˜ ์ž์•„ ์„ฑ์ฐฐ ์ธํ„ฐ๋ž™์…˜์„ ์ƒˆ๋กญ๊ฒŒ ์ œ์‹œํ•˜๊ณ  ์ด์— ๋Œ€ํ•œ ์‹ค์ฆ์  ๊ทผ๊ฑฐ๋ฅผ ๋งˆ๋ จํ•˜๋Š” ๊ฒƒ์ด๋‹ค. ํ˜„์žฌ๊นŒ์ง€ ๋งŽ์€ ์„ฑ์ฐฐ ๊ธฐ์ˆ ์€ ํšŒ์ƒ์— ์ง‘์ค‘๋˜์–ด ์žˆ๊ธฐ์—, ๋‚˜๋จธ์ง€ ์„ธ ๊ณต๊ฐ„์—์„œ์˜ ์„ฑ์ฐฐ์„ ์ง€์›ํ•˜๋Š” ๋ณด๋…ธ๋ด‡๊ณผ ๊ธฐ๋ณธํ˜•๋ฐ˜์‘ํ˜• ์ผ๊ธฐ๋ด‡์„ ๋””์ž์ธํ•˜์˜€๋‹ค. ๋˜ํ•œ, ์‚ฌ์šฉ์ž ํ‰๊ฐ€๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ๋„์ถœํ•œ ์—ฐ๊ตฌ๊ฒฐ๊ณผ๋ฅผ ํ†ตํ•ด ๋„๋ž˜ํ•œ ์ธ๊ฐ„-์ธ๊ณต์ง€๋Šฅ ์ƒํ˜ธ์ž‘์šฉ(human-AI interaction)์˜ ๋งฅ๋ฝ์—์„œ ์„ฑ์ฐฐ ๋™๋ฐ˜์ž๋กœ์„œ์˜ ์ฑ—๋ด‡ ๊ธฐ์ˆ ์ด ๊ฐ–๋Š” ์˜๋ฏธ์™€ ์—ญํ• ์„ ํƒ๊ตฌํ•œ๋‹ค. ๋ณด๋…ธ๋ด‡๊ณผ ์ผ๊ธฐ๋ด‡์€ ์ธ๊ฐ„์ค‘์‹ฌ์ƒ๋‹ด๊ณผ ๋Œ€ํ™”๋ถ„์„์˜ ์ด๋ก ์  ๊ทผ๊ฑฐ๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ํ•œ ์ •์„œ์ง€๋Šฅ(emotional intelligence)๊ณผ ์ ˆ์ฐจ์ง€๋Šฅ(proecedural intelligence)์„ ํ•ต์‹ฌ ์ถ•์œผ๋กœ, ๋Œ€ํ™” ํ๋ฆ„ ์ œ์–ด(flow manager)์™€ ๋ฐœํ™” ์ƒ์„ฑ(response generator)์„ ํ•ต์‹ฌ ๋ชจ๋“ˆ๋กœ ๊ตฌํ˜„ํ•˜์˜€๋‹ค. 
First, Bonobot, based on motivational interviewing, elicits narratives about worries and stress and supports reflection toward change through guiding questions aimed at resolving them. To implement the chatbot, a four-stage motivational interviewing dialogue was set up, and counselor utterances that could make up each stage were collected from the relevant literature, preprocessed, and scripted. So that the pre-scripted sentences could be used in a coherent conversation, the topic was limited to the difficulties of graduate students. To explore how conversation with Bonobot affects users' reflection and how users perceive it, a qualitative user study was conducted with 30 graduate students. Participants preferred the varied exploratory questions that could elicit change talk, and questions and feedback that fit the user's context precisely were found to lead to more active self-disclosure. However, when the chatbot led the conversation like a counselor, heightened user expectations meant that some participants, despite expressing motivation for change, also tended to hand the autonomy for change over to the chatbot. Building on the Bonobot study, Diarybot was designed so that users, rather than the chatbot, could develop the reflective narrative more actively. Diarybot supports expressive writing about trauma and offers either Basic or Responsive chat. Basic chat provides a conversational setting in which users can freely explain their trauma, while Responsive chat leads users to re-explore the past experience through follow-up interaction about the narrative they have written; the follow-up utterances were adapted from various counseling therapies and make use of emotion words and relationship keywords extracted from the user's narrative. To compare responses to the two Diarybots, a control condition of expressive writing in a plain document without a chatbot was added, and 30 users were recruited, randomly assigned to the conditions, and took part in a four-day writing study accompanied by surveys and interviews. Through the interaction with Diarybot, users imagined an invisible listener and came to perceive the writing as a conversational activity. In particular, the follow-up questions of Responsive chat helped users objectify their situation and consider it from new perspectives. Users who experienced the follow-up interaction in Responsive chat rated the Diarybot's perceived enjoyment, sociability, trustworthiness, and intention to reuse significantly higher than those in the other two conditions, whereas Basic chat participants rated ease of emotional expression significantly higher and difficulty of writing significantly lower than the other two conditions. In other words, the chatbot could play the role of a listener even with little interaction, but Responsive chat, which allowed interaction through follow-up questions, could elicit more active user engagement.
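To make the scripted, stage-based architecture described above more concrete, the following is a minimal illustrative sketch in Python of a chatbot built from a dialogue flow manager and a response generator. It is not the thesis implementation: the stage names are an assumption taken from the four standard processes of motivational interviewing (engaging, focusing, evoking, planning), the prompts are invented placeholders rather than the scripted counselor utterances used in Bonobot, and stages advance after a fixed number of user turns only for simplicity.

    # Minimal, illustrative sketch of a scripted, stage-based reflection chatbot.
    # Assumptions (not from the thesis): MI-style stage names, placeholder prompts,
    # and stage advancement after a fixed number of user turns.
    import random

    STAGES = ["engaging", "focusing", "evoking", "planning"]

    # Scripted counselor-style prompts per stage (placeholders, not the thesis corpus).
    PROMPTS = {
        "engaging": ["What has been on your mind lately?",
                     "How have things been going for you this week?"],
        "focusing": ["Which part of this feels most important to talk about?",
                     "What would you like to focus on today?"],
        "evoking":  ["What would be different if this changed?",
                     "What reasons do you see for making a change?"],
        "planning": ["What small step could you try this week?",
                     "Who or what could support you in that step?"],
    }

    class FlowManager:
        """Advances the dialogue through the scripted stages after a fixed number of turns."""
        def __init__(self, turns_per_stage: int = 3):
            self.turns_per_stage = turns_per_stage
            self.stage_index = 0
            self.turns_in_stage = 0

        @property
        def stage(self) -> str:
            return STAGES[self.stage_index]

        def advance(self) -> None:
            self.turns_in_stage += 1
            if self.turns_in_stage >= self.turns_per_stage and self.stage_index < len(STAGES) - 1:
                self.stage_index += 1
                self.turns_in_stage = 0

    class ResponseGenerator:
        """Selects a scripted prompt for the current stage, avoiding immediate repeats."""
        def __init__(self):
            self.last_prompt = None

        def respond(self, stage: str, user_text: str) -> str:
            # user_text is unused in this sketch; a real system would use it to pick
            # context-fitting prompts or to extract emotion and relationship keywords.
            options = [p for p in PROMPTS[stage] if p != self.last_prompt] or PROMPTS[stage]
            self.last_prompt = random.choice(options)
            return self.last_prompt

    if __name__ == "__main__":
        flow, generator = FlowManager(), ResponseGenerator()
        print("Bot:", "Hi, I'm here to listen. What brings you here today?")
        while True:
            user_text = input("You: ").strip()
            if not user_text or user_text.lower() in {"quit", "exit"}:
                break
            print(f"Bot ({flow.stage}):", generator.respond(flow.stage, user_text))
            flow.advance()

In the systems described in the abstract, the response generator would presumably draw on the preprocessed counselor scripts (Bonobot) or on emotion and relationship keywords extracted from the user's narrative (Responsive Diarybot), and the flow manager would track conversational context rather than a simple turn count.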
As the study progressed, adaptive behavior was also observed: users adjusted their writing topics and word choices to fit the Responsive Diarybot's algorithm. Taken together, these results show that different chatbot design strategies can elicit different user narratives and thus lead to different kinds of reflection experience. The work further examines the tensions that can arise around user autonomy, interaction predictability, and design transparency when self-reflection, an autonomous act, becomes reciprocal through interaction with technology, and discusses the algorithmic affordances of AI agents. That user reflection can be guided by an invisible chatbot algorithm may appear to overturn the user control and design transparency emphasized in traditional human-computer interaction, but in the context of symbolic interaction it can instead become a process in which users, prompted by the algorithm, actively seek new meanings in their past. This work proposes this as a new design metaphor, meaning-making, and emphasizes the user's subjective interpretive process driven by algorithmic nudges. It implies that even a single chatbot algorithm can induce diverse reflection experiences for different users, and in this context AI can preserve user autonomy while keeping its black box intact. This work deepens the empirical understanding of designing AI chatbot technologies that collaborate with us and, by implementing theory-grounded chatbots, provides empirical evidence for design strategies. By presenting a new design metaphor for technology as a companion that accompanies the process of self-reflection, it contributes to the theoretical extension of HCI, and as relationship-oriented AI that helps users pursue meaning in negative experiences, it carries social and industrial significance for the mental health of people today.

Contents:
Chapter 1. Introduction: 1.1 Background and Motivation; 1.2 Research Goal and Questions (1.2.1 Research Goal; 1.2.2 Research Questions); 1.3 Major Contributions; 1.4 Thesis Overview.
Chapter 2. Literature Review: 2.1 The Reflecting Self (2.1.1 Self-Reflection and Mental Wellbeing; 2.1.2 The Self in Reflective Practice; 2.1.3 Design Space); 2.2 Self-Reflection in HCI (2.2.1 Reflection Design in HCI; 2.2.2 HCI for Mental Wellbeing; 2.2.3 Design Opportunities); 2.3 Conversational Agent Design (2.3.1 Theoretical Background; 2.3.2 Technical Background; 2.3.3 Design Strategies); 2.4 Summary.
Chapter 3. Designing Chatbot for Transformative Reflection: 3.1 Design Goal and Decisions; 3.2 Chatbot Implementation (3.2.1 Emotional Intelligence; 3.2.2 Procedural Intelligence); 3.3 Experimental User Study (3.3.1 Participants; 3.3.2 Task; 3.3.3 Procedure; 3.3.4 Ethics Approval; 3.3.5 Surveys and Interview); 3.4 Results (3.4.1 Survey Findings; 3.4.2 Qualitative Findings); 3.5 Implications (3.5.1 Articulating Hopes and Fears; 3.5.2 Designing for Guidance; 3.5.3 Rethinking Autonomy); 3.6 Summary.
Chapter 4. Designing Chatbots for Explaining and Exploring Reflections: 4.1 Design Goal and Decisions (4.1.1 Design Decisions for Basic Chat; 4.1.2 Design Decisions for Responsive Chat); 4.2 Chatbot Implementation (4.2.1 Emotional Intelligence; 4.2.2 Procedural Intelligence); 4.3 Experimental User Study (4.3.1 Participants; 4.3.2 Task; 4.3.3 Procedure; 4.3.4 Safeguarding of Study Participants and Ethics Approval; 4.3.5 Surveys and Interviews); 4.4 Results (4.4.1 Quantitative Findings; 4.4.2 Qualitative Findings); 4.5 Implications (4.5.1 Telling Stories to a Chatbot; 4.5.2 Designing for Disclosure; 4.5.3 Rethinking Predictability and Transparency); 4.6 Summary.
Chapter 5. Designing Chatbots for Self-Reflection: Supporting Guided Disclosure: 5.1 Designing for Guided Disclosure (5.1.1 Chatbots as Virtual Confidante; 5.1.2 Routine and Variety in Interaction; 5.1.3 Reflection as Continued Experience); 5.2 Tensions in Design (5.2.1 Adaptivity; 5.2.2 Autonomy; 5.2.3 Algorithmic Affordance); 5.3 Meaning-Making as Design Metaphor (5.3.1 Meaning in Reflection; 5.3.2 Meaning-Making as Interaction; 5.3.3 Making Meanings with AI).
Chapter 6. Conclusion: 6.1 Research Summary; 6.2 Limitations and Future Work; 6.3 Final Remarks.
Bibliography. Abstract in Korean.