
    Contextual predictability shapes signal autonomy

    Abstract: Aligning on a shared system of communication requires that senders and receivers reach a balance between simplicity, where there is a pressure for compressed representations, and informativeness, where there is a pressure to be communicatively functional. We investigate the extent to which these two pressures are governed by contextual predictability: the amount of contextual information that a sender can estimate, and therefore exploit, in conveying their intended meaning. In particular, we test the claim that contextual predictability is causally related to signal autonomy: the degree to which a signal can be interpreted in isolation, without recourse to contextual information. Using an asymmetric communication game, in which senders and receivers are assigned fixed roles, we manipulate two aspects of the referential context: (i) whether or not a sender shares access to the immediate contextual information used by the receiver in interpreting their utterance; and (ii) the extent to which the relevant solution in the immediate referential context generalises to the aggregate set of contexts. Our results demonstrate that contextual predictability shapes the degree of signal autonomy: when the context is highly predictable (i.e., the sender has access to the context in which their utterances will be interpreted, and the semantic dimension which discriminates between meanings in context is consistent across communicative episodes), languages develop which rely heavily on the context to reduce uncertainty about the intended meaning. When the context is less predictable, senders favour systems composed of autonomous signals, in which all potentially relevant semantic dimensions are explicitly encoded. Taken together, these results suggest that our pragmatic faculty, and how it integrates contextual information in reducing uncertainty, plays a central role in shaping language structure.

    Context, cognition and communication in language

    Questions pertaining to the unique structure and organisation of language have a long history in the field of linguistics. In recent years, researchers have explored cultural evolutionary explanations, showing how language structure emerges from weak biases amplified over repeated patterns of learning and use. One outstanding issue in these frameworks is accounting for the role of context. In particular, many linguistic phenomena are said to be context-dependent; interpretation does not take place in a void, and requires enrichment from the current state of the conversation, the physical situation, and common knowledge about the world. Modelling the relationship between language structure and context is therefore crucial for developing a cultural evolutionary approach to language. One approach is to use statistical analyses to investigate large-scale, cross-cultural datasets. However, given the inherent limitations of statistical analyses, especially the inadequacy of these methods for testing hypotheses about causal relationships, I argue that experiments are better suited to address questions pertaining to language structure and context. From here, I present a series of artificial language experiments whose central aim is to test how manipulations to context influence the structure and organisation of language. Experiment 1 builds upon previous work in iterated learning and communication games by demonstrating that the emergence of optimal communication systems is contingent on the contexts in which languages are learned and used. The results show that language systems gradually evolve to encode only information that is informative for conveying the intended meaning of the speaker, resulting in markedly different systems of communication.
    Whereas Experiment 1 focused on how context influences the emergence of structure, Experiments 2 and 3 investigate under what circumstances manipulations to context result in the loss of structure. While the results are inconclusive across these two experiments, there is tentative evidence that manipulations to context can disrupt structure, but only when interacting with other factors. Lastly, Experiment 4 investigates whether the degree of signal autonomy (the capacity for a signal to be interpreted without recourse to contextual information) is shaped by manipulations to contextual predictability: the extent to which a speaker can estimate and exploit the contextual information a hearer uses in interpreting an utterance. When the context is predictable, speakers organise languages to be less autonomous (more context-dependent), combining linguistic signals with contextual information to reduce effort in production and minimise uncertainty in comprehension. As contextual predictability decreases, speakers increasingly rely on strategies that promote more autonomous signals, as these signals depend less on contextual information to discriminate between possible meanings. Overall, these experiments provide a proof of concept for investigating the relationship between language structure and context, showing that the organisational principles underpinning language are the result of competing pressures from context, cognition, and communication.

    The Legitimacy of Arbitral Reasoning: On Authority and Authorisation in International Investment Dispute Settlement

    The institution of investment treaty arbitration fundamentally questions traditional correlations between legitimation procedures and legitimate authority, as well as between individual, case-to-case arbitration and arbitration as a highly effective international adjudicative system. It oscillates between contractual autonomy in proceedings and traits of substantive public law with regard to the grounds for and merits of claims. By engaging with an emerging scholarship on the procedural and ethical determinants of investor-state dispute settlement, this article explores and argues for a scholarly sensitivity towards a structural co-originality of procedural authorisation and arbitral authority. Demonstrating that responsibility and accountability are decisive and still under-theorised procedural factors relating to legitimation and legitimacy perceptions, the article concludes with a normative account of the nature of legal reasoning in investment treaty arbitration. Accentuating the intrinsic correlation of internal to external, and autonomous to instrumental, procedural objectives in the craftsmanship of writing arbitral awards adds meaningfully to what has been labelled a 'jurisprudence constante' and thus identified as a legitimate corpus of arbitral decisions.

    From Biological to Synthetic Neurorobotics Approaches to Understanding the Structure Essential to Consciousness (Part 3)

    This third paper locates the synthetic neurorobotics research reviewed in the second paper in terms of themes introduced in the first paper. It begins with biological non-reductionism as understood by Searle. It emphasizes the role of synthetic neurorobotics studies in accessing the dynamic structure essential to consciousness, with a focus on system criticality and self; develops a distinction between simulated and formal consciousness based on this emphasis; reviews Tani and colleagues' work in light of this distinction; and ends by forecasting the increasing importance of synthetic neurorobotics studies for cognitive science and philosophy of mind going forward, finally with regard to most- and myth-consciousness.

    Adults are more efficient in creating and transmitting novel signalling systems than children

    Iterated language learning experiments have shown that meaningful and structured signalling systems emerge when there is pressure for signals to be both learnable and expressive. Yet such experiments have mainly been conducted with adults using language-like signals. Here we explore whether structured signalling systems can also emerge when signalling domains are unfamiliar and when the learners are children, with their well-attested cognitive and pragmatic limitations. In Experiment 1, we compared iterated learning of binary auditory sequences denoting small sets of meanings in chains of adults and 5- to 7-year-old children. Signalling systems became more learnable, even though iconicity and structure did not emerge, despite a homonymy filter designed to keep the systems expressive. When the same types of signals were used in referential communication by adult and child dyads in Experiment 2, only the adults, but not the children, were able to negotiate shared iconic and structured signals. Referential communication using their native language by 4- to 5-year-old children in Experiment 3 showed that only interaction with adults, but not with peers, resulted in informative expressions. These findings suggest that the emergence and transmission of communication systems is unlikely to be driven by children, and they point to the importance of the cognitive maturity and pragmatic expertise of learners, as well as feedback-based scaffolding of communicative effectiveness by experts, during language evolution.
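    The transmission-chain paradigm with a homonymy filter, as described above, can be sketched as a toy simulation. This is a hypothetical illustration under simplifying assumptions (a four-item meaning space, random-syllable signals, partial exposure at each generation, and a filter that relabels duplicate signals before transmission), not the authors' experimental software:

```python
import random

# Toy meaning space (shape x colour) and a small syllable inventory for signals.
MEANINGS = [(shape, colour) for shape in ("square", "triangle") for colour in ("blue", "red")]
SYLLABLES = ["ka", "po", "mi", "zu"]

def random_signal():
    return "".join(random.choices(SYLLABLES, k=2))

def learn(language, n_exposures=3):
    """A learner sees only a subset of meaning-signal pairs and guesses the rest."""
    seen = dict(random.sample(sorted(language.items()), n_exposures))
    return {m: seen.get(m, random_signal()) for m in MEANINGS}

def homonymy_filter(language):
    """Relabel duplicate signals so the data passed to the next learner stays expressive."""
    used, filtered = set(), {}
    for m, s in language.items():
        while s in used:
            s = random_signal()
        used.add(s)
        filtered[m] = s
    return filtered

def transmission_chain(generations=10):
    language = homonymy_filter({m: random_signal() for m in MEANINGS})
    for _ in range(generations):
        language = homonymy_filter(learn(language))
    return language

final = transmission_chain()
# The filter guarantees one distinct signal per meaning at every generation.
assert len(set(final.values())) == len(MEANINGS)
```

    In the real experiments, of course, the "learner" is a child or adult and structure is measured over signal-meaning mappings; the sketch only shows how a filter can enforce expressivity while learnability pressures act elsewhere in the loop.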

    Working outside the comfort of competence in a corrections centre: toward collective competence

    A qualitative case study of the collective learning of staff working in a corrections centre under conditions of rapid organisational change. It conceptualises a notion of collective competence that is emergent and relationally constructed.

    A healthy office and healthy employees: a longitudinal case study with a salutogenic perspective in the context of the physical office environment

    This two-wave study (time lags of six months and two years post-relocation) investigated ways in which employees' perceptions of the office environment relate to their perceived health in the long term, drawing on the salutogenic approach to health and sense of coherence theory (comprehensibility, manageability, and meaningfulness). A mixed-method approach was adopted. The data collection involved semi-structured interviews with employees, plus structured observations. The findings indicate that employees found the office environment less comprehensible and meaningful in Wave 2, while (somewhat) equally manageable. Comprehensibility was influenced by a lack of clear behavioural rules; manageability by a lack of control over the environment; and meaningfulness by the social environment and a lack of personalisation. The contextual aspects of the office, including tasks, flexible working culture and the change processes, were critical to these findings. This study demonstrates that negative influences caused by poor design choices do not resolve themselves over time. When there is limited support for one component of sense of coherence, the initial observed benefits wear off and negative influences may spill over into other components. Office design should therefore be approached with balanced attention to comprehensibility, manageability, and meaningfulness.

    ์ •์‹ ๊ฑด๊ฐ•์—์„œ ์‚ฌ์šฉ์ž ๋‚ด๋Ÿฌํ‹ฐ๋ธŒ์™€ ์ž์•„์„ฑ์ฐฐ์„ ์ง€์›ํ•˜๋Š” ๋Œ€ํ™”ํ˜• ์—์ด์ „ํŠธ ๋””์ž์ธ

    Doctoral dissertation, Seoul National University, Graduate School of Convergence Science and Technology, Department of Transdisciplinary Studies (Digital Contents and Information Studies), August 2020. Advisor: Bongwon Suh.
    With the advent of artificial intelligence (AI), we are surrounded by technological gadgets, devices and intelligent personal assistants (IPAs) that voluntarily take care of our home, work and social networks. They help us manage our life for the better, or at least that is what they are designed for. Few, however, are designed to help us grapple with the thoughts and feelings that often construct our living. In other words, technologies hardly help us think. How can they be designed to help us reflect on ourselves for the better? In the simplest terms, self-reflection refers to thinking deeply about oneself. When we think deeply about ourselves, there can be both positive and negative consequences. On the one hand, reflecting on ourselves can lead to a better self-understanding, helping us achieve life goals. On the other hand, we may fall into brooding and depression. The sad news is that the two are usually intertwined. The problem, then, is the irony that reflecting on oneself by oneself is not easy. To tackle this problem, this work aims to design technology in the form of a conversational agent, or chatbot, to encourage positive self-reflection. Chatbots are natural language interfaces that interact with users in text. They work at our fingertips, much like SMS or instant messaging, in domains from flight reservation and online shopping to news services and healthcare. There are even chatbot therapists offering psychotherapy on mobile. That machines can now talk to us creates an opportunity for designing a natural interaction that used to be humans' own. This work constructs a two-dimensional design space for translating self-reflection into a human-chatbot interaction, defined by user self-disclosure and chatbot guidance.
    Users confess their thoughts and feelings to the bot, and the bot guides them in the scaffolding process. Previous work has established an extensive line of research on the therapeutic effect of emotional disclosure. In HCI, reflection design has posited the need for guidance, e.g. scaffolding users' thoughts, rather than assuming their ability to reflect in a constructive manner. The design space illustrates different reflection processes depending on the levels of user disclosure and bot guidance. Existing reflection technologies have most commonly provided minimal levels of disclosure and guidance, and healthcare technologies the opposite. The aim of this work is to investigate the less explored space by designing two chatbots, Bonobot and Diarybot. Bonobot differentiates itself from other bot interventions in that it only motivates the idea of change rather than engaging with it directly. Diarybot is designed in two chat versions, Basic and Responsive, which create novel interactions for reflecting on a difficult life experience by explaining it to, or exploring it with, a chatbot. Each chatbot was evaluated in a user study with 30 participants, to investigate user experiences of, and responses to, the design strategies. Based on the findings, challenges and opportunities in designing for chatbot-guided reflection are explored. The findings of this study are as follows. First, participants preferred Bonobot's questions that prompted the idea of change. Its responses were also appreciated, but only when they conveyed accurate empathy. Questions, coupled with empathetic responses, could thus serve as a catalyst for disclosure and even a possible change of behavior: a motivational boost. Yet the chatbot-led interaction inflated user expectations; participants demanded more than guidance, such as solutions and even superhuman intelligence.
    The potential tradeoff between user engagement and autonomy in designing human-AI partnership is discussed. Unlike Bonobot, Diarybot was designed with less guidance, to encourage users' own narrative-making. In both Diarybot chats, the presence of a bot made it easier for participants to share their most difficult life experiences, compared to a no-chatbot writing condition. Increased interaction with the bot in Responsive chat led to better user engagement, whereas more emotional expressiveness and ease of writing were observed with the minimal interaction of Basic chat. Coupled with qualitative findings revealing user preference for varied interactions and a tendency to adapt to bot patterns, the predictability and transparency of chatbot interaction design are discussed in terms of managing user expectations in human-AI interaction. In sum, the findings of this study shed light on designing human-AI interaction. Chatbots can be a potential means of supporting guided disclosure of life's most difficult experiences. Yet the interaction between a machine algorithm and innate human cognition raises interesting questions for the HCI community, especially in terms of user autonomy, interface predictability, and design transparency. Discussing the notion of algorithmic affordances in AI agents, this work proposes meaning-making as a novel interaction design metaphor: in symbolic interaction via language, AI nudges users, inspiring and engaging them in their pursuit of making sense of life's agony. Not only does this metaphor respect user autonomy, but it also keeps the workings of AI veiled from users for continued engagement. This work makes the following contributions. First, it designed and implemented chatbots that provide guidance to encourage user narratives in self-reflection. Next, it offers empirical evidence on chatbot-guided disclosure and discusses implications for tensions and challenges in design.
    Finally, this work proposes meaning-making as a novel design metaphor. It calls for the responsible design of intelligent interfaces for positive reflection in pursuit of psychological wellbeing, highlighting the algorithmic affordances and interpretive processes of human-AI interaction.
    Abstract (Korean, translated): Artificial intelligence (AI) technology has been transforming every aspect of our lives at great speed. In particular, the spread of natural language interfaces such as Apple's Siri and Google Assistant suggests that conversation with AI agents will become a primary mode of interaction. AI agents already provide a variety of everyday services, from content recommendation to online shopping, but most of these are task-oriented. AI makes our lives convenient, but can it also put us at ease? This work starts by asking what role technology can play for modern people whose lives are convenient yet not comfortable. Self-reflection, the activity of thinking deeply about oneself, has been widely studied and applied across fields as a way of fostering self-awareness and self-understanding and of promoting learning and a sense of purpose. Its greatest difficulty, however, is that constructive reflection is hard to achieve on one's own. In particular, reflecting on negative emotional experiences is often accompanied by depression and anxiety. Those who struggle to cope may seek counselling or therapy, but many hold back out of social stigma and a sense of burden.
    Reflection design has long been a theme in human-computer interaction (HCI), and many design strategies for supporting effective reflection have been studied, but most stop at helping users recall and interpret the past through various data collection strategies. So-called chatbot counsellors have recently been applied to counselling and psychotherapy, but these too remain efficient treatment tools rather than aids to reflection. In short, technology has served as a means of treatment or an object of reflection, but has rarely intervened in the reflective process itself. This work therefore proposes designing a chatbot, a conversational agent, as a reflection companion: one that not only helps users talk about negative emotional experiences or trauma, but also guides the conversation toward a constructive narrative. To design such chatbots, a design space was defined from prior work along two axes, user self-disclosure and chatbot guidance, and four reflective experiences were classified by the levels of each: a recall space with minimal disclosure and guidance; an explanation space centred on disclosure with minimal guidance; an exploration space mixing disclosure with chatbot-led guidance; and a change space in which active guidance heightens disclosure.
    The goal of this research is to implement chatbots that support the reflective experiences and processes in this design space and, by collecting and analysing user responses to the reflective experience and the design strategies in user studies, to present a new chatbot-based interaction for self-reflection and provide empirical grounds for it. Because most reflection technologies to date have focused on recall, Bonobot and the Basic and Responsive Diarybots were designed to support reflection in the remaining three spaces. Bonobot and Diarybot were implemented around emotional intelligence and procedural intelligence, grounded in person-centred counselling and conversation analysis, with a dialogue flow manager and a response generator as core modules. Bonobot, based on motivational interviewing, elicits narratives about worries and stress and supports reflection toward change through guiding questions. For its implementation, a four-stage motivational interviewing dialogue was defined, and counsellor utterances for each stage were collected from the literature, preprocessed, and scripted. So that the prescripted sentences could be used in context-preserving conversation, the topic was restricted to the difficulties of graduate students. A qualitative user study with 30 graduate students explored how conversation with Bonobot affects users' reflection and how it is perceived.
    The study found that users preferred the varied exploratory questions that could invite talk of change. Questions and feedback that fit the user's context precisely could also lead to more active self-disclosure. When the chatbot led the conversation like a counsellor, however, raised expectations meant that some users, despite expressing motivation for change, also tried to hand their autonomy for change over to the chatbot. Building on the Bonobot study, Diarybot was designed so that the user, rather than the chatbot, takes the more active role in unfolding a reflective narrative. Diarybot supports expressive writing about trauma in Basic or Responsive chat. Basic chat offers a conversational setting for freely explaining one's trauma; Responsive chat has users re-explore past experience through follow-up interactions with the narrative they have written, drawing on emotion words and relationship keywords extracted from the narrative and adapted from various counselling therapies. To compare responses to the two Diarybots, a control condition of chatbot-free expressive writing in a document was added; 30 participants were recruited and randomly assigned, completing a four-day writing study with surveys and interviews. The results showed that, through interaction with Diarybot, users imagined an unseen virtual listener and came to perceive writing as a conversational activity.
    In particular, the follow-up questions of Responsive chat helped users objectify their situations and consider new perspectives. Participants who experienced follow-up interaction in Responsive chat rated the chatbot's perceived enjoyment, sociability, trustworthiness, and influence on reuse significantly higher than in the other two conditions. Basic chat participants, by contrast, rated ease of emotional expression significantly higher, and difficulty of writing significantly lower, than the other two conditions. The chatbot could thus play the listener's role even with little interaction, but Responsive chat, with its follow-up questions, elicited more active user engagement. As the study progressed, adaptive behaviour was also observed, with users adjusting their writing topics and word choices to the Responsive Diarybot's algorithm. These results show that different chatbot design strategies elicit different user narratives and can therefore lead to different kinds of reflective experience. They also surface the tensions that can arise around user autonomy, interaction predictability, and design transparency when self-reflection, an autonomous act, becomes reciprocal through interaction with technology, and they motivate a discussion of the algorithmic affordances of AI agents.
    That a user's reflection can be guided by an unseen chatbot algorithm may seem to overturn the user control and design transparency emphasized in traditional human-computer interaction; in the context of symbolic interaction, however, it can instead become a process in which the user, nudged by the algorithm, actively explores new meanings of the past. This work proposes this as a new design metaphor, meaning-making, emphasizing the user's subjective interpretive experience prompted by algorithmic nudges. It implies that even a single chatbot algorithm can elicit diverse reflective experiences from different users, and that, in this context, AI can guarantee user autonomy while keeping its black box intact. This work deepens the empirical understanding of designing AI chatbot technology that collaborates with us and provides empirical evidence for design strategies by implementing theory-grounded chatbots. By presenting a new design metaphor for technology as a companion to the process of self-reflection, it contributes to the theoretical extension of HCI; as relationship-oriented AI that helps users pursue meaning in negative experiences, it holds social and industrial significance for the mental health of modern people.
    Table of contents:
    CHAPTER 1. INTRODUCTION
     1.1. Background and Motivation
     1.2. Research Goal and Questions
      1.2.1. Research Goal
      1.2.2. Research Questions
     1.3. Major Contributions
     1.4. Thesis Overview
    CHAPTER 2. LITERATURE REVIEW
     2.1. The Reflecting Self
      2.1.1. Self-Reflection and Mental Wellbeing
      2.1.2. The Self in Reflective Practice
      2.1.3. Design Space
     2.2. Self-Reflection in HCI
      2.2.1. Reflection Design in HCI
      2.2.2. HCI for Mental Wellbeing
      2.2.3. Design Opportunities
     2.3. Conversational Agent Design
      2.3.1. Theoretical Background
      2.3.2. Technical Background
      2.3.3. Design Strategies
     2.4. Summary
    CHAPTER 3. DESIGNING CHATBOT FOR TRANSFORMATIVE REFLECTION
     3.1. Design Goal and Decisions
     3.2. Chatbot Implementation
      3.2.1. Emotional Intelligence
      3.2.2. Procedural Intelligence
     3.3. Experimental User Study
      3.3.1. Participants
      3.3.2. Task
      3.3.3. Procedure
      3.3.4. Ethics Approval
      3.3.5. Surveys and Interview
     3.4. Results
      3.4.1. Survey Findings
      3.4.2. Qualitative Findings
     3.5. Implications
      3.5.1. Articulating Hopes and Fears
      3.5.2. Designing for Guidance
      3.5.3. Rethinking Autonomy
     3.6. Summary
    CHAPTER 4. DESIGNING CHATBOTS FOR EXPLAINING AND EXPLORING REFLECTIONS
     4.1. Design Goal and Decisions
      4.1.1. Design Decisions for Basic Chat
      4.1.2. Design Decisions for Responsive Chat
     4.2. Chatbot Implementation
      4.2.1. Emotional Intelligence
      4.2.2. Procedural Intelligence
     4.3. Experimental User Study
      4.3.1. Participants
      4.3.2. Task
      4.3.3. Procedure
      4.3.4. Safeguarding of Study Participants and Ethics Approval
      4.3.5. Surveys and Interviews
     4.4. Results
      4.4.1. Quantitative Findings
      4.4.2. Qualitative Findings
     4.5. Implications
      4.5.1. Telling Stories to a Chatbot
      4.5.2. Designing for Disclosure
      4.5.3. Rethinking Predictability and Transparency
     4.6. Summary
    CHAPTER 5. DESIGNING CHATBOTS FOR SELF-REFLECTION: SUPPORTING GUIDED DISCLOSURE
     5.1. Designing for Guided Disclosure
      5.1.1. Chatbots as Virtual Confidante
      5.1.2. Routine and Variety in Interaction
      5.1.3. Reflection as Continued Experience
     5.2. Tensions in Design
      5.2.1. Adaptivity
      5.2.2. Autonomy
      5.2.3. Algorithmic Affordance
     5.3. Meaning-Making as Design Metaphor
      5.3.1. Meaning in Reflection
      5.3.2. Meaning-Making as Interaction
      5.3.3. Making Meanings with AI
    CHAPTER 6. CONCLUSION
     6.1. Research Summary
     6.2. Limitations and Future Work
     6.3. Final Remarks
    BIBLIOGRAPHY
    ABSTRACT IN KOREAN
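    The stage-based dialogue control in this record (a flow manager stepping through scripted motivational-interviewing phases, with a response generator filling each turn) might be sketched roughly as follows. The stage names follow the four processes of motivational interviewing, but the prompts and class structure are invented placeholders, not the thesis's actual script or code:

```python
# Hypothetical sketch of a scripted flow manager for a Bonobot-style chatbot.
# Stage names follow the four processes of motivational interviewing; the
# prompts are illustrative placeholders, not the utterances used in the thesis.
STAGES = ["engaging", "focusing", "evoking", "planning"]

SCRIPTS = {
    "engaging": ["What has been on your mind lately?"],
    "focusing": ["Which part of this weighs on you the most?"],
    "evoking": ["What would change look like for you?"],
    "planning": ["What small step could you try this week?"],
}

class FlowManager:
    """Advances through counselling stages, returning one scripted prompt per turn."""

    def __init__(self):
        self.stage_idx = 0
        self.turn = 0

    def next_prompt(self, user_utterance: str) -> str:
        prompts = SCRIPTS[STAGES[self.stage_idx]]
        prompt = prompts[self.turn % len(prompts)]
        self.turn += 1
        # Move to the next stage once this stage's script is exhausted.
        if self.turn >= len(prompts) and self.stage_idx < len(STAGES) - 1:
            self.stage_idx += 1
            self.turn = 0
        return prompt

bot = FlowManager()
first = bot.next_prompt("I feel stuck with my thesis.")
assert first == "What has been on your mind lately?"
```

    A real implementation would hold many scripted utterances per stage and pair this control flow with a response generator for empathetic, reflective listening; the point of the sketch is only the stage-advancing logic that keeps a prescripted dialogue on track.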