277 research outputs found

    Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence

    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), projecting human conscious-level notions into the operations of computers creates confusion and makes it harder to identify the nature and location of that threshold. There is confusion, in particular, about how—and even whether—various capabilities deemed intelligent relate to human consciousness. This suggests that insufficient thought has been given to very fundamental concepts—a dangerous state of affairs, given the intrinsic power of the technology. It also suggests that research in the area of artificial general intelligence may unwittingly be (mis)guided by unconscious motivations and assumptions. While it might be inconsequential if philosophers get it wrong (or fail to agree on what is right), it could be devastating if AI developers, corporations, and governments follow suit. It therefore seems worthwhile to try to clarify some fundamental notions

    Translating Learning into Numbers: A Generic Framework for Learning Analytics

    With the increase in available educational data, it is expected that Learning Analytics will become a powerful means to inform and support learners, teachers and their institutions in better understanding and predicting personal learning needs and performance. However, the processes and requirements behind the beneficial application of Learning and Knowledge Analytics, as well as the consequences for learning and teaching, are still far from being understood. In this paper, we explore the key dimensions of Learning Analytics (LA), the critical problem zones, and some potential dangers to the beneficial exploitation of educational data. We propose and discuss a generic design framework that can act as a useful guide for setting up Learning Analytics services in support of educational practice and learner guidance, in quality assurance, curriculum development, and in improving teacher effectiveness and efficiency. Furthermore, the article discusses the soft barriers and limitations of Learning Analytics. We identify the skills and competences required to make meaningful use of Learning Analytics data and to overcome gaps in interpretation literacy among educational stakeholders. We also discuss privacy and ethical issues and suggest ways in which these issues can be addressed through policy guidelines and best-practice examples.

    Habitual Ethics?

    What if data-intensive technologies’ ability to mould habits with unprecedented precision is also capable of triggering some mass disability of profound consequences? What if we become incapable of modifying the deeply-rooted habits that stem from our increased technological dependence? On an impoverished understanding of habit, the above questions are easily shrugged off. Habits are deemed rigid by definition: ‘as long as our deliberative selves remain capable of steering the design of data-intensive technologies, we’ll be fine’. To question this assumption, this open access book first articulates the way in which the habitual stretches all the way from unconscious tics to purposive, intentionally acquired habits. It also highlights the extent to which our habit-reliant, pre-reflective intelligence normally supports our deliberative selves. It is when habit rigidification sets in that this complementarity breaks down. The book moves from a philosophical inquiry into the ‘double edge’ of habit — its empowering and compromising sides — to consideration of individual and collective strategies to keep habits at the service of our ethical life. Allowing the norms that structure our forms of life to be cotton-wooled in abstract reasoning is but one of the factors that can compromise ongoing social and moral transformations. Systems designed to simplify our practical reasoning can also make us ‘sheep-like’. Drawing a parallel between the moral risk inherent in both legal and algorithmic systems, the book concludes with concrete interventions designed to revive the scope for normative experimentation. It will appeal to any reader concerned with our retaining an ability to trigger change within the practices that shape our ethical sensibility. The eBook editions of this book are available open access under a CC BY-NC-ND 4.0 licence on bloomsburycollections.com. Open access was funded by the Mozilla Foundation

    Critical Programming: Toward a Philosophy of Computing

    Beliefs about the relationship between human beings and computing machines and their destinies have alternated from heroic counterparts to conspirators of automated genocide, from apocalyptic extinction events to evolutionary cyborg convergences. Many fear that people are losing key intellectual and social abilities as tasks are offloaded to the everywhere of the built environment, which is developing a mind of its own. If digital technologies have contributed to forming a dumbest generation and ushering in a robotic moment, we all have a stake in addressing this collective intelligence problem. While digital humanities continue to flourish and introduce new uses for computer technologies, the basic modes of philosophical inquiry remain in the grip of print media, and default philosophies of computing prevail, or experimental ones propagate false hopes. I cast this as-is situation as the post-postmodern network dividual cyborg, recognizing that the rational enlightenment of modernism and regressive subjectivity of postmodernism now operate in an empire of extended mind cybernetics combined with techno-capitalist networks forming societies of control. Recent critical theorists identify a justificatory scheme foregrounding participation in projects, valorizing social network linkages over heroic individualism, and commending flexibility and adaptability through lifelong learning over stable career paths. It seems to reify one possible, contingent configuration of global capitalism as if it were the reflection of a deterministic evolution of commingled technogenesis and synaptogenesis. To counter this trend I offer a theoretical framework to focus on the phenomenology of software and code, joining social critiques with textuality and media studies, the former proposing that theory be done through practice, and the latter seeking to understand their schematism of perceptibility by taking into account engineering techniques like time axis manipulation.
The social construction of technology makes additional theoretical contributions dispelling closed world, deterministic historical narratives and requiring voices be given to the engineers and technologists that best know their subject area. This theoretical slate has been recently deployed to produce rich histories of computing, networking, and software, inform the nascent disciplines of software studies and code studies, as well as guide ethnographers of software development communities. I call my syncretism of these approaches the procedural rhetoric of diachrony in synchrony, recognizing that multiple explanatory layers operating in their individual temporal and physical orders of magnitude simultaneously undergird post-postmodern network phenomena. Its touchstone is that the human-machine situation is best contemplated by doing, which as a methodology for digital humanities research I call critical programming. Philosophers of computing explore working code places by designing, coding, and executing complex software projects as an integral part of their intellectual activity, reflecting on how developing theoretical understanding necessitates iterative development of code as it does other texts, and how resolving coding dilemmas may clarify or modify provisional theories as our minds struggle to intuit the alien temporalities of machine processes

    Self-knowledge through self-tracking devices: design guidelines for usability and a socio-technical examination from posthumanity perspective

    The Digital Era introduces emerging product categories that have evolved around certain habits and concepts. One tendency in the Information Age is recording and storing quantitative and qualitative data about an individual's life by using ubiquitous computing devices. Such products, bringing self-observation and autobiographical memory capabilities to an extreme level, have the potential to morph human beings by augmenting and altering their self-understanding through presenting previously nonexistent information regarding their lives. The diversity found in this product range is increasing in parallel with the growing demand. However, the meaning of these products for human life is rarely discussed. It remains an open question whether these personal logs lead to enriched self-knowledge for their users. This thesis aims to investigate the design principles and the influences of self-tracking products and services on daily life within a socio-technical framework in order to establish a connection between self-tracking by ubiquitous computing devices and the notion of self-concept.

    Designing Conversational Agents to Support User Narratives and Self-Reflection in Mental Health

    Doctoral dissertation, Seoul National University, Graduate School of Convergence Science and Technology, Department of Convergence Science (Digital Information Convergence), August 2020. Advisor: Bongwon Suh. With the advent of artificial intelligence (AI), we are surrounded by technological gadgets, devices and intelligent personal assistants (IPAs) that voluntarily take care of our home, work and social networks. They help us manage our lives for the better, or at least that is what they are designed for. Few, however, are designed to help us grapple with the thoughts and feelings that construct much of our living. In other words, technologies hardly help us think. How can they be designed to help us reflect on ourselves for the better? In the simplest terms, self-reflection refers to thinking deeply about oneself. When we think deeply about ourselves, there can be both positive and negative consequences. On the one hand, reflecting on ourselves can lead to a better self-understanding, helping us achieve life goals. On the other hand, we may fall into brooding and depression. The sad news is that the two are usually intertwined. The problem, then, is the irony that reflecting on oneself by oneself is not easy. To tackle this problem, this work aims to design technology in the form of a conversational agent, or chatbot, to encourage positive self-reflection. Chatbots are natural language interfaces that interact with users in text. They work at our fingertips, much like SMS or instant messaging, in domains from flight reservation and online shopping to news services and healthcare. There are even chatbot therapists offering psychotherapy on mobile. That machines can now talk to us creates an opportunity for designing a natural interaction that used to be humans' own. This work constructs a two-dimensional design space for translating self-reflection into human-chatbot interaction, with user self-disclosure and chatbot guidance as its axes. Users confide their thoughts and feelings to the bot, and the bot guides them in the scaffolding process. Previous work has established an extensive line of research on the therapeutic effect of emotional disclosure, and reflection design in HCI has posited the need for guidance, e.g. scaffolding users' thoughts, rather than assuming their ability to reflect in a constructive manner. The design space illustrates different reflection processes depending on the levels of user disclosure and bot guidance: existing reflection technologies have most commonly provided minimal levels of both, and healthcare technologies the opposite. The aim of this work is to investigate the less explored space by designing two chatbots, Bonobot and Diarybot. Bonobot differs from other bot interventions in that it only motivates the idea of change rather than engaging it directly. Diarybot comes in two chat versions, Basic and Responsive, which create novel interactions for reflecting on a difficult life experience by explaining it to and exploring it with a chatbot. Each chatbot was evaluated in a user study with 30 participants, investigating user experiences of and responses to the design strategies. Based on the findings, challenges and opportunities in designing for chatbot-guided reflection are explored. The findings are as follows. First, participants preferred Bonobot's questions that prompted the idea of change. Its responses were also appreciated, but only when they conveyed accurate empathy. Thus questions, coupled with empathetic responses, could serve as a catalyst for disclosure and even a possible change of behavior: a motivational boost. Yet the chatbot-led interaction inflated user expectations of the bot, and participants came to demand more than guidance, such as solutions and even superhuman intelligence. This raises a potential tradeoff between user engagement and autonomy in designing human-AI partnership. Unlike Bonobot, Diarybot was designed with less guidance, to encourage users' own narrative-making. In both Diarybot chats, the presence of a bot could make it easier for participants to share their most difficult life experiences, compared to a no-chatbot writing condition. An increased interaction with the bot in Responsive chat could lead to better user engagement; conversely, more emotional expressiveness and greater ease of writing were observed with the minimal interaction of Basic chat. Coupled with qualitative findings revealing users' preference for varied interactions and their tendency to adapt to bot patterns, the predictability and transparency of chatbot interaction design are discussed in terms of managing user expectations in human-AI interaction. In sum, the findings shed light on designing human-AI interaction. Chatbots can be a potential means of supporting guided disclosure of life's most difficult experiences, yet the interaction between a machine algorithm and innate human cognition raises interesting questions for the HCI community, especially concerning user autonomy, interface predictability, and design transparency. Discussing the notion of algorithmic affordances in AI agents, this work proposes meaning-making as a novel interaction design metaphor: in symbolic interaction via language, AI nudges users, inspiring and engaging them in their pursuit of making sense of life's agonies. Not only does this metaphor respect user autonomy, but it also keeps the workings of AI veiled from users for continued engagement. This work makes the following contributions. First, it designed and implemented chatbots that provide guidance to encourage user narratives in self-reflection. Next, it offers empirical evidence on chatbot-guided disclosure and discusses implications for the tensions and challenges in design.
Finally, this work proposes meaning-making as a novel design metaphor, calling for the responsible design of intelligent interfaces for positive reflection in pursuit of psychological wellbeing and highlighting the algorithmic affordances and interpretive processes of human-AI interaction.
Artificial intelligence is rapidly transforming every facet of our lives. The spread of natural language interfaces such as Apple's Siri and Google Assistant suggests that conversation with AI agents will soon become a primary mode of interaction. Yet while AI agents already provide a range of services, from content recommendation to online shopping, most are task-oriented: AI makes our lives convenient, but can it also put us at ease? This research begins by asking what role technology can play for modern people who are comfortable yet not at peace. Self-reflection, thinking deeply about oneself, has been widely studied and applied across domains to promote self-awareness and self-understanding and to foster learning and a sense of purpose. Its greatest difficulty, however, is that constructive reflection is hard to achieve on one's own; reflection on negative emotional experiences, in particular, often comes with depression and anxiety. Those who struggle may seek counseling or therapy, but many hesitate because of social stigma and the burden of cost. Reflection design has long been a theme in HCI, yet most strategies support only the recollection and interpretation of the past through various forms of user data collection, and even the chatbot counselors recently applied to psychotherapy remain efficient treatment tools rather than aids to reflection. Technology, in short, serves as a means of treatment or an object of reflection, but rarely intervenes in the reflective process itself. This research therefore proposes designing a conversational agent, a chatbot, as a reflection companion: one that not only helps users talk about negative emotional experiences or trauma, but also guides them to keep rumination in check and develop constructive narratives.
To design such chatbots, a design space was defined on the basis of prior work, with user self-disclosure and chatbot guidance as its two axes, and four reflection experiences were distinguished by their levels: a recalling space with minimal disclosure and guidance; an explaining space centered on disclosure with minimal guidance; an exploring space mixing disclosure with chatbot-led guidance; and a change space in which guidance actively intervenes to heighten disclosure. Because existing reflection technologies concentrate on recall, chatbots supporting the other three spaces were designed: Bonobot, and Diarybot in Basic and Responsive versions. Grounded in person-centered counseling and conversation analysis, both chatbots take emotional and procedural intelligence as their core axes and are implemented around two core modules, a conversation flow manager and a response generator. Bonobot, based on motivational interviewing, elicits narratives about worries and stress and supports reflection toward change through guiding questions. Its four-stage motivational-interviewing dialogue was scripted from counselor utterances collected and curated from the literature, and the conversation topic was confined to the difficulties of graduate school so that the prepared utterances could remain contextually coherent. A qualitative user study with 30 graduate students examined how conversation with Bonobot influenced users' reflection and how it was perceived. Participants preferred the varied exploring questions that could invite change talk, and questions and feedback that precisely fit their context were found to lead to more active self-disclosure.
When the chatbot led the conversation like a counselor, however, user expectations rose, and some participants, even while expressing motivation for change, tended to cede their autonomy over change to the chatbot. Building on the Bonobot study, Diarybot was designed so that users, rather than the chatbot, would more actively develop the reflective narrative. Diarybot supports expressive writing about trauma and offers Basic or Responsive chat: Basic chat provides an environment for freely explaining a traumatic experience, while Responsive chat re-explores the past experience through follow-up interaction on the user's narrative, with follow-up utterance acts drawn from various counseling therapies and keyed to emotion and relationship keywords extracted from the narrative. To compare responses to each Diarybot, a control condition of expressive writing in a plain document without a chatbot was added; 30 participants were recruited, randomly assigned to conditions, and took part in a four-day writing study with surveys and interviews. Participants experienced writing to a Diarybot as a conversational activity addressed to an imagined, unseen listener. The follow-up questions of Responsive chat, in particular, helped users objectify their situation and consider it from new perspectives, and participants who experienced the follow-up interaction rated the Diarybot's perceived enjoyment, sociability, trustworthiness and intention to reuse significantly higher than those in the other two conditions. Basic chat participants, by contrast, rated ease of emotional expression significantly higher, and difficulty of writing significantly lower, than those in the other two conditions. The chatbot could thus play the role of a listener with little interaction, but the follow-up interaction available in Responsive chat drew out more active user engagement. As the study progressed, adaptive behavior was also observed, with users adjusting their writing topics and word choices to the Responsive Diarybot's algorithm. Together, these results show that different chatbot design strategies can elicit different user narratives and hence different types of reflection experience.
Finally, the work explores the tensions that can arise in user autonomy, interaction predictability and design transparency when self-reflection, an autonomous act, becomes reciprocal through interaction with technology, and discusses the algorithmic affordances of AI agents. That a user's reflection can be steered by an unseen chatbot algorithm may appear to subvert the user control and design transparency emphasized in traditional human-computer interaction, but in the context of symbolic interaction it can instead become a process in which the user, nudged by the algorithm, actively explores new meanings of the past. This research proposes this as a new design metaphor, meaning-making, emphasizing the user's interpretive experience under algorithmic nudges. A single chatbot algorithm can thereby elicit diverse reflection experiences from different users, and in this context AI can guarantee user autonomy while retaining its black box. The research deepens empirical understanding of the design of AI chatbots that collaborate with us, provides empirical grounds for design strategies by implementing theory-based chatbots, contributes to the theoretical expansion of HCI by presenting a new design metaphor of technology as a companion in the process of self-reflection, and carries social and industrial significance as relationship-oriented AI that helps people pursue meaning in negative experiences and can contribute to mental health.
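The dissertation's two-axis design space, user self-disclosure crossed with chatbot guidance yielding four reflection spaces, can be sketched as a simple lookup. This is an illustrative reconstruction, not code from the thesis: the level labels and the `reflection_space` helper are assumptions paraphrasing the abstract's description of the four spaces and the systems that inhabit them (Bonobot, Diarybot Basic, Diarybot Responsive).

```python
# Illustrative sketch (not the dissertation's implementation): the
# two-dimensional design space for chatbot-supported self-reflection,
# with user self-disclosure and chatbot guidance as its axes.
# Level labels paraphrase the abstract; space names follow the thesis.

DESIGN_SPACE = {
    # (self-disclosure, guidance) -> reflection space
    ("minimal", "minimal"): "recalling",   # most existing reflection tech
    ("high", "minimal"): "explaining",     # e.g. Diarybot Basic chat
    ("high", "mixed"): "exploring",        # e.g. Diarybot Responsive chat
    ("high", "high"): "change",            # e.g. Bonobot (MI-led guidance)
}

def reflection_space(disclosure: str, guidance: str) -> str:
    """Map a design's disclosure/guidance levels to a reflection space."""
    try:
        return DESIGN_SPACE[(disclosure, guidance)]
    except KeyError:
        raise ValueError(f"unmapped design point: {(disclosure, guidance)}")

if __name__ == "__main__":
    print(reflection_space("high", "high"))      # Bonobot-style design
    print(reflection_space("high", "minimal"))   # Diarybot Basic-style design
```

On this reading, the two studies probe the right-hand side of the space: Bonobot tests chatbot-led guidance in the change space, while the two Diarybot versions vary guidance while holding disclosure high.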

    Data Epistemologies / Surveillance and Uncertainty

    Data Epistemologies studies the changing ways in which ‘knowledge’ is defined, promised, problematised, legitimated vis-à-vis the advent of digital, ‘big’ data surveillance technologies in early twenty-first century America. As part of the period’s fascination with ‘new’ media and ‘big’ data, such technologies intersect ambitious claims to better knowledge with a problematisation of uncertainty. This entanglement, I argue, results in contextual reconfigurations of what ‘counts’ as knowledge and who (or what) is granted authority to produce it – from proving that indiscriminate domestic surveillance prevents terrorist attacks to arguing that machinic sensors can know us better than we can ever know ourselves. The present work focuses on two empirical cases. The first is the ‘Snowden Affair’ (2013-Present): the public controversy unleashed through the leakage of vast quantities of secret material on the electronic surveillance practices of the U.S. government. The second is the ‘Quantified Self’ (2007-Present), a name which describes both an international community of experimenters and the wider industry built up around the use of data-driven surveillance technology for self-tracking every possible aspect of the individual ‘self’. By triangulating media coverage, connoisseur communities, advertising discourse and leaked material, I examine how surveillance technologies were presented for public debate and speculation. This dissertation is thus a critical diagnosis of the contemporary faith in ‘raw’ data, sensing machines and algorithmic decision-making, and of their public promotion as the next great leap towards objective knowledge. Surveillance is not only a means of totalitarian control or a technology for objective knowledge, but a collective fantasy that seeks to mobilise public support for new epistemic systems. 
Surveillance, as part of a broader enthusiasm for ‘data-driven’ societies, extends the old modern project whereby the human subject – its habits, its affects, its actions – becomes the ingredient, the raw material, the object, the target, for the production of truths and judgments about it by things other than itself.

    THE VARIETIES OF USER EXPERIENCE: BRIDGING EMBODIED METHODOLOGIES FROM SOMATICS AND PERFORMANCE TO HUMAN-COMPUTER INTERACTION

    Embodied Interaction continues to gain significance within the field of Human Computer Interaction (HCI). Its growing recognition and value are evidenced in part by a remarkable increase in systems design and publication focusing on various aspects of Embodiment. The enduring need to interact through experience has spawned a variety of interdisciplinary bridging strategies in the hope of gaining deeper understanding of human experience. Along with phenomenology, cognitive science, psychology and the arts, recent interdisciplinary contributions to HCI include the knowledge-rich domains of Somatics and Performance, which carry long-standing traditions of embodied practice. The common ground between HCI and the fields of Somatics and Performance is based on the need to understand and model human experience. Yet Somatics and Performance differ from normative HCI in their epistemological frameworks of embodiment. This is particularly evident in their histories of knowledge construction and representation. The contributions of Somatics and Performance to the history of embodiment are not yet fully understood within HCI. Differing epistemologies and their resulting approaches to experience identify an under-theorized area of research and an opportunity to develop a richer knowledge and practice base. This is examined by comparing theories and practices of embodied experience between HCI and Somatics and Performance, and by analyzing the influences, values and assumptions underlying their epistemological frameworks. The analysis results in a set of design strategies based in embodied practices within Somatics and Performance. The subsequent application of these strategies is examined through a series of interactive art installations that employ embodied interaction as a central expression of technology. Case studies provide evidence in the form of rigorously documented design processes that illustrate these strategies. This research exemplifies 'Research through Art' applied in the context of experience design for tangible, wearable and social interaction.