Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence
Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of 'mind-in-general' based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), projecting human conscious-level notions onto the operations of computers creates confusion and makes it harder to identify the nature and location of that threshold. There is confusion, in particular, about how, and even whether, various capabilities deemed intelligent relate to human consciousness. This suggests that insufficient thought has been given to very fundamental concepts: a dangerous state of affairs, given the intrinsic power of the technology. It also suggests that research in the area of artificial general intelligence may unwittingly be (mis)guided by unconscious motivations and assumptions. While it might be inconsequential if philosophers get it wrong (or fail to agree on what is right), it could be devastating if AI developers, corporations, and governments follow suit. It therefore seems worthwhile to try to clarify some fundamental notions.
Translating Learning into Numbers: A Generic Framework for Learning Analytics
With the increase in available educational data, it is expected that Learning Analytics will become a powerful means to inform and support learners, teachers and their institutions in better understanding and predicting personal learning needs and performance. However, the processes and requirements behind the beneficial application of Learning and Knowledge Analytics, as well as the consequences for learning and teaching, are still far from being understood. In this paper, we explore the key dimensions of Learning Analytics (LA), the critical problem zones, and some potential dangers to the beneficial exploitation of educational data. We propose and discuss a generic design framework that can act as a useful guide for setting up Learning Analytics services in support of educational practice and learner guidance, in quality assurance, curriculum development, and in improving teacher effectiveness and efficiency. Furthermore, the article discusses the soft barriers and limitations of Learning Analytics. We identify the skills and competences required to make meaningful use of Learning Analytics data and to overcome gaps in interpretation literacy among educational stakeholders. We also discuss privacy and ethical issues and suggest ways in which these can be addressed through policy guidelines and best-practice examples.
Habitual Ethics?
What if data-intensive technologies' ability to mould habits with unprecedented precision is also capable of triggering some mass disability of profound consequences? What if we become incapable of modifying the deeply rooted habits that stem from our increased technological dependence? On an impoverished understanding of habit, the above questions are easily shrugged off. Habits are deemed rigid by definition: 'as long as our deliberative selves remain capable of steering the design of data-intensive technologies, we'll be fine'. To question this assumption, this open access book first articulates the way in which the habitual stretches all the way from unconscious tics to purposive, intentionally acquired habits. It also highlights the extent to which our habit-reliant, pre-reflective intelligence normally supports our deliberative selves. It is when habit rigidification sets in that this complementarity breaks down. The book moves from a philosophical inquiry into the 'double edge' of habit (its empowering and compromising sides) to consideration of individual and collective strategies to keep habits at the service of our ethical life. Allowing the norms that structure our forms of life to be cotton-wooled in abstract reasoning is but one of the factors that can compromise ongoing social and moral transformations. Systems designed to simplify our practical reasoning can also make us 'sheep-like'. Drawing a parallel between the moral risk inherent in both legal and algorithmic systems, the book concludes with concrete interventions designed to revive the scope for normative experimentation. It will appeal to any reader concerned with our retaining an ability to trigger change within the practices that shape our ethical sensibility. The eBook editions of this book are available open access under a CC BY-NC-ND 4.0 licence on bloomsburycollections.com. Open access was funded by the Mozilla Foundation.
Critical Programming: Toward a Philosophy of Computing
Beliefs about the relationship between human beings and computing machines and their destinies have alternated from heroic counterparts to conspirators of automated genocide, from apocalyptic extinction events to evolutionary cyborg convergences. Many fear that people are losing key intellectual and social abilities as tasks are offloaded to the everywhere of the built environment, which is developing a mind of its own. If digital technologies have contributed to forming a 'dumbest generation' and ushering in a 'robotic moment', we all have a stake in addressing this collective intelligence problem. While digital humanities continue to flourish and introduce new uses for computer technologies, the basic modes of philosophical inquiry remain in the grip of print media, and default philosophies of computing prevail, or experimental ones propagate false hopes. I cast this as-is situation as the post-postmodern network dividual cyborg, recognizing that the rational enlightenment of modernism and the regressive subjectivity of postmodernism now operate in an empire of extended-mind cybernetics combined with techno-capitalist networks forming societies of control. Recent critical theorists identify a justificatory scheme foregrounding participation in projects, valorizing social network linkages over heroic individualism, and commending flexibility and adaptability through lifelong learning over stable career paths. It seems to reify one possible, contingent configuration of global capitalism as if it were the reflection of a deterministic evolution of commingled technogenesis and synaptogenesis. To counter this trend I offer a theoretical framework focused on the phenomenology of software and code, joining social critiques with textuality and media studies, the former proposing that theory be done through practice, and the latter seeking to understand their schematism of perceptibility by taking into account engineering techniques like time-axis manipulation.
The social construction of technology makes additional theoretical contributions, dispelling closed-world, deterministic historical narratives and requiring that voices be given to the engineers and technologists who best know their subject area. This theoretical slate has recently been deployed to produce rich histories of computing, networking, and software, inform the nascent disciplines of software studies and code studies, and guide ethnographers of software development communities. I call my syncretism of these approaches the procedural rhetoric of diachrony in synchrony, recognizing that multiple explanatory layers, operating in their individual temporal and physical orders of magnitude, simultaneously undergird post-postmodern network phenomena. Its touchstone is that the human-machine situation is best contemplated by doing, which as a methodology for digital humanities research I call critical programming. Philosophers of computing explore working code places by designing, coding, and executing complex software projects as an integral part of their intellectual activity, reflecting on how developing theoretical understanding necessitates iterative development of code as it does other texts, and how resolving coding dilemmas may clarify or modify provisional theories as our minds struggle to intuit the alien temporalities of machine processes.
Self-knowledge through self-tracking devices: design guidelines for usability and a socio-technical examination from posthumanity perspective
The Digital Era introduces emerging product categories that have evolved around certain habits and concepts. One tendency in the Information Age is recording and storing quantitative and qualitative data about an individual's life by using ubiquitous computing devices. Such products, bringing self-observation and autobiographical memory capabilities to an extreme level, have the potential to morph human beings by augmenting and altering their self-understanding through presenting previously nonexistent information regarding their lives. The diversity found in this product range is increasing in parallel with the growing demand. However, the meaning of these products for human life is rarely discussed. It remains an open question whether these personal logs lead to enriched self-knowledge for their users. This thesis aims to investigate the design principles and the influences of self-tracking products and services on daily life within a socio-technical framework, in order to establish a connection between self-tracking by ubiquitous computing devices and the notion of self-concept.
Designing Conversational Agents to Support Self-Reflection on User Narratives in Mental Health
Thesis (Ph.D.) -- Seoul National University Graduate School: Graduate School of Convergence Science and Technology, Department of Convergence Science (Digital Information Convergence), August 2020.

With the advent of artificial intelligence (AI), we are surrounded by technological gadgets, devices and intelligent personal assistants (IPAs) that voluntarily take care of our home, work and social networks. They help us manage our lives for the better, or at least that is what they are designed for. As a matter of fact, however, few are designed to help us grapple with the thoughts and feelings that often construct our living. In other words, technologies hardly help us think. How can they be designed to help us reflect on ourselves for the better?
In the simplest terms, self-reflection refers to thinking deeply about oneself. When we think deeply about ourselves, there can be both positive and negative consequences. On the one hand, reflecting on ourselves can lead to a better self-understanding, helping us achieve life goals. On the other hand, we may fall into brooding and depression. The sad news is that the two are usually intertwined. The problem, then, is the irony that reflecting on oneself by oneself is not easy.
To tackle this problem, this work aims to design technology in the form of a conversational agent, or a chatbot, to encourage positive self-reflection. Chatbots are natural language interfaces that interact with users in text. They work at our fingertips, much like SMS or instant messaging, in domains from flight reservation and online shopping to news services and healthcare. There are even chatbot therapists offering psychotherapy on mobile. That machines can now talk to us creates an opportunity for designing a natural interaction that used to be humans' own.
This work constructs a two-dimensional design space for translating self-reflection into a human-chatbot interaction, with user self-disclosure and chatbot guidance as its axes. Users confess their thoughts and feelings to the bot, and the bot guides them in the scaffolding process. Previous work has established an extensive line of research on the therapeutic effect of emotional disclosure. In HCI, reflection design has posited the need for guidance, e.g. scaffolding users' thoughts, rather than assuming their ability to reflect in a constructive manner.
The design space illustrates different reflection processes depending on the levels of user disclosure and bot guidance. Existing reflection technologies have most commonly provided minimal levels of disclosure and guidance, and healthcare technologies the opposite. It is the aim of this work to investigate the less explored space by designing chatbots called Bonobot and Diarybot. Bonobot differentiates itself from other bot interventions in that it only motivates the idea of change rather than direct engagement. Diarybot is designed in two chat versions, Basic and Responsive, which create novel interactions for reflecting on a difficult life experience by explaining it to and exploring it with a chatbot. These chatbots are set up for a user study with 30 participants, to investigate the user experiences of and responses to design strategies. Based on the findings, challenges and opportunities from designing for chatbot-guided reflection are explored.
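The disclosure-guidance design space described above can be sketched as a small model. The quadrant placements below are illustrative assumptions inferred from the abstract, not the thesis's own taxonomy labels:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DesignPoint:
    """A system placed in the two-dimensional reflection design space."""
    name: str
    disclosure: str  # level of user self-disclosure: "low" or "high"
    guidance: str    # level of chatbot guidance: "low" or "high"


# Hypothetical placements based on the descriptions above: typical
# reflection technologies sit at minimal levels of both axes, Diarybot's
# Basic chat elicits disclosure with little guidance, while Responsive
# chat and the guidance-led Bonobot add chatbot scaffolding.
DESIGN_SPACE = [
    DesignPoint("typical reflection technology", "low", "low"),
    DesignPoint("Diarybot (Basic)", "high", "low"),
    DesignPoint("Diarybot (Responsive)", "high", "high"),
    DesignPoint("Bonobot", "high", "high"),
]


def by_quadrant(points):
    """Group design points by their (disclosure, guidance) quadrant."""
    grouped = {}
    for p in points:
        grouped.setdefault((p.disclosure, p.guidance), []).append(p.name)
    return grouped
```

Enumerating the quadrants this way makes the gap the thesis targets visible: the high-disclosure cells are exactly the "less explored space" that Bonobot and Diarybot are designed to probe.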
The findings of this study are as follows. First, participants preferred Bonobot's questions that prompted the idea of change. Its responses were also appreciated, but only when they conveyed accurate empathy. Questions coupled with empathetic responses could thus serve as a catalyst for disclosure and even a possible change of behavior, a motivational boost. Yet the chatbot-led interaction inflated user expectations of the bot: participants demanded more than guidance, such as solutions and even superhuman intelligence. The potential tradeoff between user engagement and autonomy in designing human-AI partnership is discussed.
Unlike Bonobot, Diarybot was designed with less guidance, to encourage users' own narrative-making. In both Diarybot chats, the presence of a bot could make it easier for participants to share their most difficult life experiences, compared to a no-chatbot writing condition. Increased interaction with the bot in Responsive chat could lead to better user engagement; by contrast, more emotional expressiveness and ease of writing were observed with little interaction in Basic chat. Coupled with qualitative findings that reveal users' preference for varied interactions and their tendency to adapt to bot patterns, the predictability and transparency of chatbot interaction design are discussed in terms of managing user expectations in human-AI interaction.
In sum, the findings of this study shed light on designing human-AI interaction. Chatbots can be a potential means of supporting guided disclosure of life's most difficult experiences. Yet the interaction between a machine algorithm and innate human cognition raises interesting questions for the HCI community, especially in terms of user autonomy, interface predictability, and design transparency. Discussing the notion of algorithmic affordances in AI agents, this work proposes meaning-making as a novel interaction design metaphor: in symbolic interaction via language, AI nudges users, inspiring and engaging them in their pursuit of making sense of life's agony. Not only does this metaphor respect user autonomy, but it also keeps the workings of AI veiled from users for continued engagement.
This work makes the following contributions. First, it designed and implemented chatbots that can provide guidance to encourage user narratives in self-reflection. Next, it offers empirical evidence on chatbot-guided disclosure and discusses implications for tensions and challenges in design. Finally, this work proposes meaning-making as a novel design metaphor. It calls for the responsible design of intelligent interfaces for positive reflection in pursuit of psychological wellbeing, highlighting algorithmic affordances and the interpretive process of human-AI interaction.
CHAPTER 1. INTRODUCTION
1.1. BACKGROUND AND MOTIVATION
1.2. RESEARCH GOAL AND QUESTIONS
1.2.1. Research Goal
1.2.2. Research Questions
1.3. MAJOR CONTRIBUTIONS
1.4. THESIS OVERVIEW
CHAPTER 2. LITERATURE REVIEW
2.1. THE REFLECTING SELF
2.1.1. Self-Reflection and Mental Wellbeing
2.1.2. The Self in Reflective Practice
2.1.3. Design Space
2.2. SELF-REFLECTION IN HCI
2.2.1. Reflection Design in HCI
2.2.2. HCI for Mental Wellbeing
2.2.3. Design Opportunities
2.3. CONVERSATIONAL AGENT DESIGN
2.3.1. Theoretical Background
2.3.2. Technical Background
2.3.3. Design Strategies
2.4. SUMMARY
CHAPTER 3. DESIGNING CHATBOT FOR TRANSFORMATIVE REFLECTION
3.1. DESIGN GOAL AND DECISIONS
3.2. CHATBOT IMPLEMENTATION
3.2.1. Emotional Intelligence
3.2.2. Procedural Intelligence
3.3. EXPERIMENTAL USER STUDY
3.3.1. Participants
3.3.2. Task
3.3.3. Procedure
3.3.4. Ethics Approval
3.3.5. Surveys and Interview
3.4. RESULTS
3.4.1. Survey Findings
3.4.2. Qualitative Findings
3.5. IMPLICATIONS
3.5.1. Articulating Hopes and Fears
3.5.2. Designing for Guidance
3.5.3. Rethinking Autonomy
3.6. SUMMARY
CHAPTER 4. DESIGNING CHATBOTS FOR EXPLAINING AND EXPLORING REFLECTIONS
4.1. DESIGN GOAL AND DECISIONS
4.1.1. Design Decisions for Basic Chat
4.1.2. Design Decisions for Responsive Chat
4.2. CHATBOT IMPLEMENTATION
4.2.1. Emotional Intelligence
4.2.2. Procedural Intelligence
4.3. EXPERIMENTAL USER STUDY
4.3.1. Participants
4.3.2. Task
4.3.3. Procedure
4.3.4. Safeguarding of Study Participants and Ethics Approval
4.3.5. Surveys and Interviews
4.4. RESULTS
4.4.1. Quantitative Findings
4.4.2. Qualitative Findings
4.5. IMPLICATIONS
4.5.1. Telling Stories to a Chatbot
4.5.2. Designing for Disclosure
4.5.3. Rethinking Predictability and Transparency
4.6. SUMMARY
CHAPTER 5. DESIGNING CHATBOTS FOR SELF-REFLECTION: SUPPORTING GUIDED DISCLOSURE
5.1. DESIGNING FOR GUIDED DISCLOSURE
5.1.1. Chatbots as Virtual Confidante
5.1.2. Routine and Variety in Interaction
5.1.3. Reflection as Continued Experience
5.2. TENSIONS IN DESIGN
5.2.1. Adaptivity
5.2.2. Autonomy
5.2.3. Algorithmic Affordance
5.3. MEANING-MAKING AS DESIGN METAPHOR
5.3.1. Meaning in Reflection
5.3.2. Meaning-Making as Interaction
5.3.3. Making Meanings with AI
CHAPTER 6. CONCLUSION
6.1. RESEARCH SUMMARY
6.2. LIMITATIONS AND FUTURE WORK
6.3. FINAL REMARKS
BIBLIOGRAPHY
ABSTRACT IN KOREAN
Data Epistemologies / Surveillance and Uncertainty
Data Epistemologies studies the changing ways in which 'knowledge' is defined, promised, problematised and legitimated vis-à-vis the advent of digital, 'big' data surveillance technologies in early twenty-first-century America. As part of the period's fascination with 'new' media and 'big' data, such technologies intersect ambitious claims to better knowledge with a problematisation of uncertainty. This entanglement, I argue, results in contextual reconfigurations of what 'counts' as knowledge and who (or what) is granted authority to produce it, whether that means proving that indiscriminate domestic surveillance prevents terrorist attacks or arguing that machinic sensors can know us better than we can ever know ourselves.
The present work focuses on two empirical cases. The first is the 'Snowden Affair' (2013-present): the public controversy unleashed by the leakage of vast quantities of secret material on the electronic surveillance practices of the U.S. government. The second is the 'Quantified Self' (2007-present), a name which describes both an international community of experimenters and the wider industry built up around the use of data-driven surveillance technology for self-tracking every possible aspect of the individual 'self'. By triangulating media coverage, connoisseur communities, advertising discourse and leaked material, I examine how surveillance technologies were presented for public debate and speculation.
This dissertation is thus a critical diagnosis of the contemporary faith in 'raw' data, sensing machines and algorithmic decision-making, and of their public promotion as the next great leap towards objective knowledge. Surveillance is not only a means of totalitarian control or a technology for objective knowledge, but a collective fantasy that seeks to mobilise public support for new epistemic systems. Surveillance, as part of a broader enthusiasm for 'data-driven' societies, extends the old modern project whereby the human subject (its habits, its affects, its actions) becomes the ingredient, the raw material, the object, the target, for the production of truths and judgments about it by things other than itself.
THE VARIETIES OF USER EXPERIENCE: BRIDGING EMBODIED METHODOLOGIES FROM SOMATICS AND PERFORMANCE TO HUMAN COMPUTER INTERACTION
Embodied Interaction continues to gain significance within the field of Human
Computer Interaction (HCI). Its growing recognition and value are evidenced in part by
a remarkable increase in systems design and publication focusing on various aspects of
Embodiment. The enduring need to interact through experience has spawned a variety
of interdisciplinary bridging strategies in the hope of gaining deeper understanding of
human experience. Along with phenomenology, cognitive science, psychology and the
arts, recent interdisciplinary contributions to HCI include the knowledge-rich domains
of Somatics and Performance that carry long-standing traditions of embodied practice.
The common ground between HCI and the fields of Somatics and Performance is based
on the need to understand and model human experience. Yet, Somatics and
Performance differ from normative HCI in their epistemological frameworks of
embodiment. This is particularly evident in their histories of knowledge construction
and representation. The contributions of Somatics and Performance to the history of
embodiment are not yet fully understood within HCI. Differing epistemologies and their
resulting approaches to experience identify an under-theorized area of research and an
opportunity to develop a richer knowledge and practice base. This is examined by
comparing theories and practices of embodied experience between HCI and Somatics
(Performance) and analyzing influences, values and assumptions underlying
epistemological frameworks. The analysis results in a set of design strategies based in
embodied practices within Somatics and Performance. The subsequent application of
these strategies is examined through a series of interactive art installations that
employ embodied interaction as a central expression of technology. Case Studies
provide evidence in the form of rigorously documented design processes that illustrate
these strategies. This research exemplifies 'Research through Art' applied in the
context of experience design for tangible, wearable and social interaction.