7 research outputs found

    Talking without a Voice: Virtual Co-Speakership in an Educational Webinar

    The following paper analyzes the interactional shifts precipitated by the pandemic-induced turn to telepresence. Using the framework of multimodal conversation analysis, we analyze a video recording of a webinar organized by The Psychological Service of Moscow. In this specific case, webinar participants had unequally distributed interactional resources: only one participant was able to speak, while all other participants could only contribute through a text-based chat. We focus on a change in the course of action in which the instructor's monologic presentation transitions to question-answer interaction. We highlight the way the single speaker organizes the transition between these structurally dissimilar participation frameworks. A key feature of the move from monologue to question-response is a self-initiated interruption: another participant's diachronic chat message is deployed as a synchronic overlap by orienting to a virtual second speaker. We thus document a case in which one speaker chooses to give a voice to a voiceless participant. The work contributes to studies of educational interaction by providing insights into the work that goes into the transition between interactional formats in telemediated asymmetrical ecologies. Our work opens up discussions about the interfacing between different modalities as a locally emergent phenomenon, and about how new interactional ecologies create a fertile substrate for hitherto unfamiliar forms of talking, embodiment, and local sequential ordering. The work thus also contributes to research that highlights the non-passive role of the 'listener', which is reflected in the active speaker's orientation to the listener's active contribution to ongoing talk.

    Gods from the Machine: The Interaction Order in Gamified Distance Learning

    The article analyzes the implementation of an online educational module and its impact on the organization of the classroom's interaction order. The latter is institutionally constrained by the presence of a goal and the distribution of roles between teacher and students. The introduction of a digital learning platform adds a technological context to the institutional setting. The article considers technologies as possessing communicative affordances: opportunities for action made possible or delimited through their use. Technologies bring new interactive resources into the process of education and can affect the organization of the classroom's interaction order. Using multimodal conversation analysis, we analyzed video recordings of the telemediated interaction of Russia-based students and teachers within a gamified online educational module. We investigate a case in which a student's correct answer is nevertheless corrected by the teacher. We demonstrate that the teacher initiates the correction because they are guided by the ordering of the game elements within the interface. Based on a detailed analysis of the teacher's mouse movements in relation to ongoing turns-at-talk, we show that this orientation is sustained by all participants. The work contributes to classroom interaction studies and affordance theory and develops the methodology of multimodal transcription for mediated contexts. The primary result of the study is an empirical demonstration that the relevance of technological affordances for interactants is situationally produced, and that this process is associated with the interweaving of the institutional and technical contexts of interaction. The conclusion discusses the relationship between affordances and institutional norms.

    How to Speak Without Words: Rethinking Materiality, Agency, and Communicative Competence in Virtual Reality

    While thinkers of the material turn offer new conceptual resources for talking about non-human ontologies, interaction researchers are trying to reassemble the social situation fragmented by telecommunication. Conversation analysts tend to see technical objects in their situation-constitutive role, yet such objects can also disrupt participants' current projects while remaining "unseen and unnoticed" (e.g. Zoom delays). We propose a conceptualization of the relationship between the participant and the interaction environment as a source of agency, which makes it possible to preserve an emic perspective. We illustrate our thesis by analyzing a case study of interaction between a Deaf and a hearing participant in VRChat. In this case, virtual pencils that leave durable inscriptions in the air are used by the participants for communication. We analyze a simple question-and-answer sequence and demonstrate that: the participants treat the inscriptions as material; the hearing participant is less capable of communicating in this space than the Deaf participant; and the answer to the question is produced jointly through the instructional work of the Deaf participant. The results allow us to draw the following conclusions about the nature of materiality, agency, and communicative competence: 1) the materiality of the environment is not a purely analytical category but is constructed by the participants in the interaction; 2) the agency of the participants depends on the environment and at the same time has a distributed character; 3) communicative competence is not directly related to the "internal" characteristics of the agent, such as atypicality.

    Forms of Understanding of XAI-Explanations

    Explainability has become an important topic in computer science and artificial intelligence, leading to a subfield called Explainable Artificial Intelligence (XAI). The goal of providing or seeking explanations is to achieve (better) 'understanding' on the part of the explainee. However, what it means to 'understand' is still not clearly defined, and the concept itself is rarely the subject of scientific investigation. This conceptual article aims to present a model of forms of understanding in the context of XAI and beyond. From an interdisciplinary perspective bringing together computer science, linguistics, sociology, and psychology, it explores a definition of understanding and its forms, assessment, and dynamics during the process of giving everyday explanations. Two types of understanding are considered as possible outcomes of explanations: enabledness, 'knowing how' to do or decide something, and comprehension, 'knowing that', both in varying degrees (from shallow to deep). Explanations regularly start with shallow understanding in a specific domain and can lead to deep comprehension and enabledness of the explanandum, which we see as a prerequisite for human users to gain agency. In this process, increases in comprehension and enabledness are highly interdependent. Against the background of this systematization, special challenges of understanding in XAI are discussed.

    Halting the Decay of Talk

    We investigate how people with atypical bodily capabilities interact within virtual reality (VR) and how they overcome interactional challenges in these emerging social environments. Based on a videographic multimodal single-case analysis, we demonstrate how non-speaking VR participants furnish their bodies, at-hand instruments, and their interactive environment for their practical purposes. Our findings are subsequently related to renewed discussions of the relationship between agency and environment and the co-constructed nature of situated action. We thus aim to contribute to the growing vocabulary of atypical interaction analysis and the broader context of ethnomethodological conceptualizations of unorthodox and fractured interactional ecologies.

    Can AI explain AI? Interactive co-construction of explanations among human and artificial agents

    Klowait N, Erofeeva M, Lenke M, Horwath I, Buschmeier H. Can AI explain AI? Interactive co-construction of explanations among human and artificial agents. Discourse & Communication. Accepted; 18(6).
    This study investigates the potential of using advanced conversational artificial intelligence (AI) to help people understand complex AI systems. In line with conversation-analytic research, we view the participatory role of AI as dynamically unfolding in a situation rather than being predetermined by its architecture. To study user sensemaking of opaque AI systems, we set up a naturalistic encounter between human participants and two AI systems developed in-house: a reinforcement learning simulation and a GPT-4-based explainer chatbot. Our results reveal that an explainer AI only truly functions as such when participants actively engage with it as a co-constructive agent. Both the interface’s spatial configuration and the asynchronous temporal nature of the explainer AI, combined with the users’ presuppositions about its role, contribute to the decision whether to treat the AI as a dialogical co-participant in the interaction. Participants establish evidentiality conventions and sensemaking procedures that may diverge from a system’s intended design or function.
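
    As a purely illustrative aside (not part of the study above): the abstract mentions a GPT-4-based explainer chatbot, and a minimal sketch of such a component might look like the following, assuming the public OpenAI Python SDK. The authors' in-house system, prompts, and interface are not described here, so every name and prompt below is a hypothetical placeholder.

```python
# Hypothetical minimal sketch of a GPT-4-based "explainer" chatbot.
# This is NOT the authors' in-house system; it only illustrates the general
# pattern of wrapping a chat model with an explainer role and a running
# dialogue history, using the public OpenAI Python SDK (openai >= 1.0).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are an explainer chatbot. The user is watching a reinforcement "
    "learning simulation they do not fully understand. Answer questions "
    "about the agent's observed behaviour in plain, non-technical language."
)

def explain(history: list[dict], user_question: str) -> str:
    """Send the question plus prior turns to GPT-4 and return its reply."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}, *history,
                {"role": "user", "content": user_question}]
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    # Record both turns so later explanations build on what was already said.
    history.extend([{"role": "user", "content": user_question},
                    {"role": "assistant", "content": reply}])
    return reply

if __name__ == "__main__":
    history: list[dict] = []
    print(explain(history, "Why did the agent suddenly change direction?"))
```

    Keeping the dialogue history in the message list is what would let an explanation be co-constructed turn by turn rather than delivered as a one-shot answer, which is the interactional property the study examines.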