
    Shifting embodied participation in multiparty university student meetings

    PhD Thesis. Student group work has been used in higher education as an effective means to cultivate students' work-related skills and cooperative learning. These small-group encounters are the sites where, through talk and other resources, university students get their educational tasks done as well as acquire essential workplace skills such as problem-solving, team working, decision-making and leadership. However, settings of educational talk-as-work, such as student group meetings, remain under-researched (Stokoe, Benwell, & Attenborough, 2013). The present study therefore attempts to bridge this gap by investigating the professional and academic abilities of university students to participate in multiparty group meetings, drawing upon a dataset of video- and audio-recorded meetings from the Newcastle University Corpus of Academic English (NUCASE). The dataset consists of ten hours of meetings in which a group of naval architecture undergraduate students work cooperatively on their final year project: to design and build a wind turbine. The study applies the methodological approach of conversation analysis (CA) with a multimodal perspective. It presents a fine-grained, sequential multimodal analysis of a collection of cases of speaker transitions, and reveals how meeting participants display speakership and recipiency through the coordination of verbal/vocal and bodily-visual resources. In this respect, the present study is the first to offer a systematic collection, as well as a thorough investigation, of speaker transition and turn-taking practices from a multimodal perspective, especially with an analytic scope beyond pre-turn and turn-beginning positions. It shows how speaker transitions through 'current speaker selects next' and 'next speaker self-selects' are joint undertakings not only between the self-selecting/current speaker and the target recipient/addressed next speaker, but also among other co-present participants. In particular, by mobilising the whole set of multimodal resources, participants are able to display multiple orientations toward their co-participants; to project, pursue and accomplish multiple concurrent courses of action; and to intricately coordinate their mutual orientation toward the shifting and emerging participation framework during the transition, establishment and maintenance of speakership and recipiency. Through its data and analysis, this study extends the boundaries of existing understanding of the temporality, sequentiality and systematicity of multimodal resources in talk-and-bodies-in-interaction. The thesis also contributes to interaction research in the particular context of student group work in higher education, by providing a 'screenshot' of students' academic lives as they unfold 'in flight'. In particular, it reveals how students competently participate in multiparty group meetings (e.g., taking and allocating turns), co-construct the unfolding meeting procedures (e.g., roundtable update discussion), and jointly achieve local interactional goals (e.g., sharing work progress, reaching an agreement). Acquiring such skills is, as argued above, not only crucial for accomplishing educational tasks, but also necessary for preparing university students to meet their future workplace expectations. The study therefore further informs the practices of university students and professional practitioners in multiparty meetings, and draws out methodological implications for multimodal CA research.
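    As a rough illustration of the two transition types the thesis analyses, the sketch below shows one hypothetical way annotated speaker transitions might be encoded and classified. The class and field names are invented for this example and are not taken from the thesis or the NUCASE corpus.

```python
# Illustrative sketch only: a minimal encoding of annotated speaker
# transitions from meeting transcripts. All names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Turn:
    speaker: str               # who produced the turn
    addressee: Optional[str]   # whom the speaker's gaze/address selects, if anyone
    next_speaker: str          # who actually takes the next turn

def transition_type(turn: Turn) -> str:
    """Classify a speaker transition following the CA turn-taking model."""
    if turn.addressee is not None and turn.addressee == turn.next_speaker:
        return "current speaker selects next"
    return "next speaker self-selects"

# Example: A addresses B (e.g., by gaze and name), and B takes the floor.
print(transition_type(Turn(speaker="A", addressee="B", next_speaker="B")))
# -> current speaker selects next
```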

    Context-based multimodal interpretation: an integrated approach to multimodal fusion and discourse processing

    This thesis is concerned with the context-based interpretation of verbal and nonverbal contributions to interactions in multimodal multiparty dialogue systems. On the basis of a detailed analysis of context-dependent multimodal discourse phenomena, a comprehensive context model is developed. This context model supports the resolution of a variety of referring and elliptical expressions as well as the processing and reactive generation of turn-taking signals and the identification of the intended addressee(s) of a contribution. A major goal of this thesis is the development of a generic component for multimodal fusion and discourse processing. Based on the integration of this component into three distinct multimodal dialogue systems, the generic applicability of the approach is shown.
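    To make the role of a discourse context model concrete, the following minimal sketch shows one way a salience-ordered discourse history can resolve an anaphoric referring expression. The data model is a simplified assumption for illustration, not the component developed in the thesis.

```python
# Minimal sketch of context-based reference resolution: a discourse
# context keeps recently mentioned entities ordered by salience, and a
# referring expression resolves to the most salient compatible entity.
# The data model is hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Entity:
    name: str
    etype: str   # e.g. "movie", "person"

@dataclass
class DiscourseContext:
    history: List[Entity] = field(default_factory=list)  # most salient first

    def mention(self, entity: Entity) -> None:
        self.history.insert(0, entity)  # a new mention becomes most salient

    def resolve(self, expected_type: str) -> Optional[Entity]:
        """Resolve an anaphor like 'it' to the most salient
        entity matching the expression's type constraint."""
        return next((e for e in self.history if e.etype == expected_type), None)

ctx = DiscourseContext()
ctx.mention(Entity("Alice", "person"))
ctx.mention(Entity("Heat", "movie"))
print(ctx.resolve("movie").name)   # -> Heat (the most recently mentioned movie)
```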

    Situated Displays in Telecommunication

    In face-to-face conversation, numerous cues of attention, eye contact, and gaze direction provide important channels of information. These channels carry cues that support turn-taking, establish a sense of engagement, and indicate the focus of conversation. However, some subtleties of gaze can be lost in common videoconferencing systems, because the camera's single-perspective view does not preserve the spatial characteristics of the face-to-face situation. In particular, in group conferencing, the 'Mona Lisa effect' makes all observers feel that they are looked at when the remote participant looks at the camera. In this thesis, we present designs and evaluations of four novel situated teleconferencing systems, which aim to improve the teleconferencing experience. First, we demonstrate that a spherical video telepresence system allows a single observer, from multiple viewpoints, to accurately judge where the remote user is placing their gaze. Second, we demonstrate the gaze-preserving capability of a cylindrical video telepresence system, this time for multiple observers at multiple viewpoints. Third, we demonstrate that a random-hole autostereoscopic multiview telepresence system further improves the conveyance of gaze by adding stereoscopic cues. Lastly, we investigate the influence of display type and viewing angle on how people place their trust during avatar-mediated interaction. The results show that the spherical avatar telepresence system can be viewed qualitatively similarly from all angles, and demonstrate how trust can be altered depending on how one views the avatar. Together these demonstrations motivate the further study of novel display configurations and suggest parameters for the design of future teleconferencing systems.
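    The toy calculation below illustrates the 'Mona Lisa effect' the thesis addresses: on a flat display, a face looking into the camera appears to look at every observer, whereas a situated display can render gaze as a world-fixed direction. The geometry is deliberately simplified and is only an assumption for illustration.

```python
# Toy geometry sketch of the 'Mona Lisa effect' (illustrative only).
def perceived_gaze_error(observer_angle_deg, gaze_angle_deg, situated):
    """Angle in degrees between the rendered gaze direction and the
    direction to the observer; 0 means the observer feels looked at."""
    if situated:
        # Situated display: gaze is a world-fixed direction, so only
        # observers standing along it experience mutual gaze.
        return abs(observer_angle_deg - gaze_angle_deg)
    # Flat display: the rendered gaze follows every observer's viewpoint,
    # so a face looking "into the camera" appears to look at everyone.
    return 0.0

for angle in (0, 30, 60):
    flat = perceived_gaze_error(angle, gaze_angle_deg=0, situated=False)
    sphere = perceived_gaze_error(angle, gaze_angle_deg=0, situated=True)
    print(f"observer at {angle:2d} deg: flat={flat:.0f}, situated={sphere:.0f}")
```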

    Modelling the Multi in Multi-Party Communication

    This thesis investigates the effects of multimedia communications technology on the interaction of mixed- and same-role groups. The first study explores the effect of video and audio conferencing on small, role-differentiated problem-solving groups in the laboratory. The second laboratory study examines the impact of shared video technology on the communication of role-undifferentiated groups. A multi-faceted analytical approach is employed, including indices of task performance, process and content of communication, patterns of interaction, and subjective user evaluations. Lastly, a field study looks at how the communication process of business meetings is affected by status constraints and audio conferencing technology. The findings show that both multimedia video and audio communications technology have similar impacts on the patterns of speaker contributions in different types and sizes of groups, and that the extent of their effect is influenced by the presence or absence of role differences between group members, whether these roles are experimentally manipulated in the laboratory or organisationally assigned in a naturalistic setting. Technology mediation appears to exaggerate the impact of status and role, such that group members say more disparate amounts and interact less freely than in face-to-face groups; in particular, it exaggerates the dominance of one individual. Surprisingly, multimedia conferencing technology can support free and equal participation in groups whose speakers have similar roles, but evidence of its effect on speakers of similar status is equivocal. The implications for communication outcome and the design of communications technology are discussed.
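    As one example of how disparity in speaker contributions can be quantified, the sketch below computes a normalised-entropy participation index. This is a standard measure chosen for illustration; it is not necessarily the index used in the thesis.

```python
# Illustrative only: normalised Shannon entropy of speaking-time shares
# as a participation-equality index (1.0 = perfectly equal, lower =
# more dominated by few speakers).
import math

def participation_equality(speaking_times):
    total = sum(speaking_times)
    shares = [t / total for t in speaking_times if t > 0]
    entropy = -sum(p * math.log(p) for p in shares)
    return entropy / math.log(len(speaking_times))

print(participation_equality([300, 290, 310]))  # ~1.00: near-equal participation
print(participation_equality([700, 100, 100]))  # ~0.62: one member dominates
```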

    Proceedings of the 1st joint workshop on Smart Connected and Wearable Things 2016

    These are the Proceedings of the 1st joint workshop on Smart Connected and Wearable Things (SCWT'2016, co-located with IUI 2016). The SCWT workshop integrates the SmartObjects and IoWT workshops. It focuses on advanced interactions with smart objects in the context of the Internet of Things (IoT), and on the increasing popularity of wearables as advanced means to facilitate such interactions.

    Human-Computer Interaction

    In this book the reader will find a collection of 31 papers presenting different facets of Human-Computer Interaction: the results of research projects and experiments, as well as new approaches to the design of user interfaces. The book is organized sequentially around the following main topics: new interaction paradigms, multimodality, usability studies of several interaction mechanisms, human factors, universal design, and development methodologies and tools.

    Gaze Analysis methods for Learning Analytics

    Eye-tracking has been shown to be predictive of expertise, task-based success, task difficulty, and the strategies involved in problem solving, in both individual and collaborative settings. In learning analytics, eye-tracking can be used as a powerful tool, not only to differentiate between levels of expertise and task outcome, but also to give constructive feedback to users. In this dissertation, we show how eye-tracking can prove useful for understanding the cognitive processes underlying dyadic interaction in two contexts: pair program comprehension and learning with a Massive Open Online Course (MOOC). The first context is a typical collaborative work scenario, while the second is a special case of dyadic interaction, namely the teacher-student pair. We also demonstrate, using one example experiment, how findings about the relation between learning outcomes in MOOCs and students' gaze patterns can be leveraged to design a feedback tool that improves students' learning outcomes and attention levels while learning through a MOOC video. We further show that gaze can be used as a cue to resolve the teacher's verbal references in a MOOC video, thereby improving the learning experience of MOOC students. This thesis comprises five studies. The first study is contextualised within a collaborative setting in which the collaborating partners tried to understand a given program; we examine the relationship among the partners' gaze patterns, their dialogues, and the levels of understanding that the pair attained at the end of the task. The next four studies are contextualised within the MOOC environment. The first MOOC study explores the relationship between the students' performance and their attention level. The second MOOC study, a dual eye-tracking study, examines individual and collaborative gaze patterns and their relation to the learning outcome; it also explores the idea of activating students' knowledge prior to their receiving any learning material, and the effect of different activation methods on their gaze patterns and learning outcomes. In the third MOOC study, we designed a feedback tool based on the results of the first two MOOC studies, and demonstrate that the variables we proposed for measuring students' attention can be leveraged to provide feedback about their gaze patterns; using this feedback tool improves the students' learning outcomes and attention levels. The fourth and final MOOC study shows that augmenting a MOOC video with the teacher's gaze information helps improve the students' learning experience: when the teacher's gaze is displayed, the perceived difficulty of the content decreases significantly compared with moments without gaze augmentation. In a nutshell, this dissertation shows that gaze can be used to understand, support and improve dyadic interaction, in order to increase the chances of achieving a higher level of task-based success.
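    As an illustration of how a gaze-based attention measure for a MOOC video might be computed, the sketch below scores the fraction of gaze samples falling inside the screen region the teacher currently refers to. The function names and regions are hypothetical; the dissertation's actual variables may differ.

```python
# Sketch of a simple gaze-based attention measure for a MOOC video:
# the fraction of gaze samples that fall inside the screen region the
# teacher is currently referring to. Names and regions are hypothetical.
def inside(gaze, region):
    (x, y), (rx, ry, rw, rh) = gaze, region
    return rx <= x <= rx + rw and ry <= y <= ry + rh

def attention_score(gaze_samples, referenced_region_at):
    """gaze_samples: list of (t, x, y); referenced_region_at(t) -> (x, y, w, h)."""
    hits = sum(inside((x, y), referenced_region_at(t)) for t, x, y in gaze_samples)
    return hits / len(gaze_samples)

# Toy usage: the teacher points at the slide's left half for the whole clip.
samples = [(0.0, 100, 200), (0.5, 640, 300), (1.0, 200, 250)]
print(attention_score(samples, lambda t: (0, 0, 480, 540)))  # ~0.67 (2 of 3 inside)
```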

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medical research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics, to decision support for healthcare professionals through big data analytics, to behavior change supported by technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, of higher quality, and less costly. For this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that span several interesting topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.

    Human-Robot Collaborations in Industrial Automation

    Technology is changing the manufacturing world. For example, sensors are being used to track inventories from the manufacturing floor up to a retail shelf or a customer's door. These types of interconnected systems have been called the fourth industrial revolution, also known as Industry 4.0, and are projected to lower manufacturing costs. As industry moves toward these integrated technologies and lower costs, engineers will need to connect these systems via the Internet of Things (IoT), and to design how these connected systems interact with humans. The focus of this Special Issue is the smart sensors used in these human–robot collaborations.
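    As a hedged illustration of how a smart proximity sensor might gate robot speed in a collaborative cell, in the spirit of speed-and-separation monitoring, consider the sketch below. The zone thresholds are invented example values, not figures from any standard or from this Special Issue.

```python
# Illustrative sketch of speed-and-separation monitoring in a
# collaborative cell: a proximity sensor's distance reading gates the
# robot's speed. Thresholds are made-up example values.
def allowed_speed(human_distance_m: float) -> float:
    """Return a speed scaling factor in [0, 1] from sensed separation."""
    if human_distance_m < 0.5:     # protective-stop zone
        return 0.0
    if human_distance_m < 1.5:     # reduced-speed collaboration zone
        return 0.25
    return 1.0                     # full speed when the human is clear

for d in (0.3, 1.0, 2.0):
    print(f"human at {d} m -> speed factor {allowed_speed(d)}")
```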

    Identifying Social Signals from Human Body Movements for Intelligent Technologies

    Numerous Human-Computer Interaction (HCI) contexts require the identification of human internal states such as emotions, intentions, confusion, and task engagement. Recognition of these states allows artificial agents and interactive systems to provide appropriate responses to their human interaction partner. Whilst numerous solutions have been developed, many of these have been designed to classify internal states in a binary fashion, i.e. stating whether or not an internal state is present. A potential drawback of such approaches is that they provide a restricted, reductionist view of the internal states being experienced by a human user. As a result, an interactive agent that makes response decisions based on such a binary recognition system is restricted in the flexibility and appropriateness of its responses. Thus, in many settings, internal state recognition systems would benefit from being able to recognize multiple 'intensities' of an internal state. However, for most classical machine learning approaches, this requires that a recognition system be trained on examples from every intensity (e.g. high-, medium- and low-intensity task engagement). Obtaining such a training data set can be both time- and resource-intensive. This project set out to explore whether this data requirement could be reduced whilst still providing a recognition system able to produce multiple classification labels. To this end, the project first identified a set of internal states that could be recognized from the human behaviour information available in a pre-existing data set. These explorations revealed that states relating to task engagement could be identified, by human observers, from human movement and posture information. A second set of studies was then dedicated to developing and testing different approaches to classifying three intensities of task engagement (high, intermediate and low) after training only on examples from the high and low task engagement data sets. The result of these studies was an approach incorporating the recently developed Legendre Memory Units, which was shown to produce an output that could be used to distinguish between all three task engagement intensities after training only on examples of high- and low-intensity task engagement. Thus, this project presents foundational work for internal state recognition systems that require less data whilst providing more classification labels.
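    The sketch below illustrates the general idea of the final approach: train on only high- and low-intensity examples, then band the classifier's continuous output to recover an intermediate label. For brevity it substitutes logistic regression on synthetic fixed-length features for the Legendre Memory Units over movement sequences used in the project, and the band thresholds are illustrative assumptions.

```python
# Sketch: train a binary classifier on high/low examples only, then map
# its continuous probability output into three intensity bands.
# Logistic regression stands in for the project's Legendre Memory Units.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_low  = rng.normal(-2.0, 1.0, size=(100, 5))   # low-engagement features
X_high = rng.normal(+2.0, 1.0, size=(100, 5))   # high-engagement features
X = np.vstack([X_low, X_high])
y = np.array([0] * 100 + [1] * 100)             # binary training labels only

clf = LogisticRegression().fit(X, y)

def engagement_level(x, lo=0.33, hi=0.67):
    """Map the model's probability to three intensities; the band
    thresholds are illustrative choices, not values from the thesis."""
    p = clf.predict_proba(x.reshape(1, -1))[0, 1]
    return "low" if p < lo else "high" if p > hi else "intermediate"

print(engagement_level(rng.normal(0.0, 0.3, size=5)))   # likely "intermediate"
```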