Young children's collaborative interactions in an educational computer environment
This study investigated the collaborative interaction patterns exhibited by five-year-old pre-primary children in an educational computer environment. The case study method was used in one pre-primary centre in metropolitan Perth, Western Australia, to examine the patterns of collaborative interaction among young children whilst engaged with the computer. The single-event case study examined the interactions exhibited by pre-primary children whilst engaged, in dyads, with the computer within a naturalistic classroom environment. This study involved three phases of data collection. Phase I consisted of observations and videotaping sessions, compilation of written observations, narrative descriptions and relevant field notes on each participant. To assess the children's current social skills and computer competence and their general social interaction with peers, the researcher interviewed the children and their teacher using a semi-structured interview schedule to guide the discussion. Phase II comprised reviewing and transcribing the videotapes and coding children's interactions, while Phase III consisted of analysing all the data obtained. Observational comments, descriptions and data analyses were presented with anecdotes. A total of 243 interactions were identified and classified into 16 interaction patterns: directing partner's actions; self-monitor/repetition; providing information; declarative planning; asking for information/explanation; disagreeing with partner; accepting guidance; terminal response; exclaiming; correcting others; defending competence; showing pleasure; showing displeasure; sharing control; defending control; and suggesting ideas. The frequency of occurrence of the identified interactions was analysed in the form of descriptive statistics.
Factors facilitating the collaborative interaction of children whilst engaged with the computer activities were found to be: developmental appropriateness of the software; pre-existing computer competency of the children; children's pre-existing positive attitude towards computers; mutual friendship between collaborators; children's social goals; an appropriate structure for an enjoyable learning environment; mutual understanding of the turn-taking system; and a positive, non-isolated physical setting of the computer environment. Factors inhibiting collaborative interaction were identified as: non-developmentally appropriate software; lack of computer competency; negative attitudes (on the part of both children and teacher) towards computers and learning; a sense of competition between collaborators; the social goals of each child; an inappropriate structure for promoting an enjoyable learning environment; no mutual understanding of the turn-taking system; and an isolated physical setting of the computer environment. Associated with the findings were three major variables: (1) the classroom teacher variable (philosophy and educational beliefs, task structure and computer management); (2) the software variable (developmental appropriateness, content, design, and programmed task structure); and (3) the child variable (computer competency and attitude towards the computer, social goals, social skills, and personal relationship with collaborators). By identifying the collaborative interactions of children, and the factors that may facilitate or inhibit these interactions, early childhood educators will be in a better position to integrate the computer into their classrooms and to promote prosocial interaction among children whilst engaged at the computer. In general, the findings suggest that computers should be integrated into all early childhood classrooms and afforded the same status as other traditional early childhood learning materials and activities.
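The frequency analysis the first abstract describes amounts to tallying coded interaction events per category and reporting the counts as descriptive statistics. A minimal sketch (the event list below is illustrative only; just the category labels come from the study):

```python
from collections import Counter

# Hypothetical sample of coded interaction events; the category labels
# are from the study's 16 patterns, but this event list is made up.
coded_events = [
    "directing partner's actions", "providing information",
    "directing partner's actions", "accepting guidance",
    "asking for information/explanation", "providing information",
    "directing partner's actions", "showing pleasure",
]

counts = Counter(coded_events)
total = sum(counts.values())

# Frequency-of-occurrence table, a simple descriptive statistic.
for pattern, n in counts.most_common():
    print(f"{pattern:40s} {n:3d} ({n / total:.0%})")
```

With real transcript codings in place of the sample list, the same tally yields the kind of frequency table the study reports.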
Coordination Matters: Interpersonal Synchrony Influences Collaborative Problem-Solving
The authors thank Martha von Werthern and Caitlin Taylor for their assistance with data collection, Cathy Macpherson for her assistance with the preparation of the manuscript, and Mike Richardson, Alex Paxton, and Rick Dale for providing MATLAB code to assist with data analysis. The research was funded by the British Academy (SG131613).
Gesture in multimodal language learner interaction via videoconferencing on mobile devices
This thesis focuses on how adult English language learners exploit and experience gesture while communicating with one another via mobile technologies. Mobiles create opportunities for multimodal language learning beyond the classroom (Kukulska-Hulme et al., 2017); however, modes such as gesture are mediated and transformed by technology in complex ways (Hampel & Stickler, 2012). In a small-scale qualitative study, learners from a range of nationalities who were studying on language programmes in the UK were connected in dyads via Skype videoconferencing (VC) in order to complete information gap tasks using tablets, 2-in-1 devices, and smartphones. These communicative tasks had been intentionally designed around a diversity of informal 'settings' (Benson, 2011), which included cafés, museums, and historical buildings. Following the tasks, participants took part in stimulated recall interviews in order to reflect on their multimodal forms of communication.
This exploratory, qualitative study examines gesture from a theoretical perspective which links the mode to spoken language (Kendon, 2004; McNeill, 1992; Norris, 2004) and positions gesture within the wider framework of the negotiation of meaning (Varonis & Gass, 1985). As the role of speech-associated gestures within language learning via technology has not been widely researched, an interdisciplinary methodology had to be designed to analyse the video-recorded data from the learners' tasks. This is based on transcription procedures from gesture-speech analysis (McNeill, 1992; McNeill & Duncan, 2000). As gesture in this study is understood as being closely aligned to speech, a multimodal unit of analysis was combined with the Varonis and Gass (1985) framework of the negotiation of meaning. The multimodal method allowed for the categorisation and analysis of gesture to investigate how learners may co-orchestrate the two modes in relation to their deployment of mobile technologies beyond the classroom. The participants were asked to reflect on their interactions from multimodal perspectives, and interview data were triangulated with the task performances. Theoretical and pedagogical conclusions are drawn as to the manner in which learners exploit gesture as an integral part of the negotiation of meaning.
Investigations of collaborative design environments: A framework for real-time collaborative 3D CAD
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This research investigates computer-based collaborative design environments, in particular issues of real-time collaborative 3D CAD. The thesis first presents a broad perspective on collaborative design environments with a preliminary case study of team design activities in a conventional and a computer-mediated setting. This study identifies the impact and the feasibility of computer support for collaborative design and suggests four kinds of essential technologies for a successful collaborative design environment: information-sharing systems, synchronous and asynchronous co-working tools, project management systems, and communication systems. A new conceptual framework for a real-time collaborative 3D design tool, Shared Stage, is proposed based upon the preliminary study. The Shared Stage is defined as a shared 3D design workspace aiming to smoothly incorporate shared 3D workspaces into existing individual 3D workspaces. The addition of a Shared Stage allows collaborating designers to interact in real time and to have a dynamic and interactive exchange of intermediate 3D design data. The acceptability of collaborative features is maximised by maintaining consistency of the user interface between 3D CAD systems. The framework is subsequently implemented as a software prototype using a new software development environment, customised by integrating related real-time and 3D graphics software development tools. Two main components of the Shared Stage module in the prototype, the Synchronised Stage View (SSV) and the Data Structure Diagram (DSD), provide essential collaborative features for real-time collaborative 3D CAD. These features include synchronised shared 3D representation, dynamic data exchange and awareness support in 3D workspaces. The software prototype is subsequently evaluated to examine its usefulness and usability.
A range of quantitative and qualitative methods is used to evaluate the impact of the Shared Stage. The results, including the analysis of collaborative interactions and user perception, illustrate that the Shared Stage is a feasible and valuable addition for real-time collaborative 3D CAD. This research identifies the issues to be addressed for collaborative design environments and also provides a new framework and development strategy for a novel real-time collaborative 3D CAD system. The framework is successfully demonstrated through prototype implementation and an analytical usability evaluation. Financial support from the Department and from the UK government through the Overseas Research Studentship Awards
Explorations in engagement for humans and robots
This paper explores the concept of engagement, the process by which individuals in an interaction start, maintain and end their perceived connection to one another. The paper reports on one aspect of engagement among human interactors: the effect of tracking faces during an interaction. It also describes the architecture of a robot that can participate in conversational, collaborative interactions with engagement gestures. Finally, the paper reports on findings of experiments with human participants who interacted with a robot when it either performed or did not perform engagement gestures. Results of the human-robot studies indicate that people become engaged with robots: they direct their attention to the robot more often in interactions where engagement gestures are present, and they find interactions more appropriate when engagement gestures are present than when they are not.
Comment: 31 pages, 5 figures, 3 tables
Reflection-in-Action Markers for Reflection-on-Action in Computer-Supported Collaborative Learning Settings
We describe an exploratory study on the use of markers set during a synchronous collaborative interaction (reflection-in-action) for the later construction of reflection reports upon the collaboration that occurred (reflection-on-action). During two sessions, pairs of students used the Visu videoconferencing tool for synchronous interaction and marker setting (positive, negative or free) and then for individual report building on the interaction (using markers or not). A quantitative descriptive analysis was conducted on the markers put in action, on their use to reflect on action, and on the reflection categories of the sentences in these reports. Results show that the students (1) used the markers equally as a note-taking and reflection means during the interaction; (2) used mainly positive markers both to reflect in and on action; (3) paid more attention to identifying what worked in their interaction (conservative direction) than to planning how to improve their group work (progressive direction); (4) used mainly their own markers to reflect on action, with an increase in the use of their partners' markers in the second reflection reports; and (5) reflected mainly on their partner in the first reflection reports and more on themselves in the second reports, to justify themselves and to express their satisfaction.
On Inter-referential Awareness in Collaborative Augmented Reality
For successful collaboration to occur, a workspace must support inter-referential awareness, or the ability for one participant to refer to a set of artifacts in the environment, and for that reference to be correctly interpreted by others. While referring to objects in our everyday environment is a straightforward task, the non-tangible nature of digital artifacts presents us with new interaction challenges. Augmented reality (AR) is inextricably linked to the physical world, and it is natural to believe that the re-integration of physical artifacts into the workspace makes referencing tasks easier; however, we find that these environments combine the referencing challenges from several computing disciplines, which compound across scenarios. This dissertation presents our studies of this form of awareness in collaborative AR environments. It stems from our research in developing mixed reality environments for molecular modeling, where we explored spatial and multi-modal referencing techniques. To encapsulate the myriad factors found in collaborative AR, we present a generic, theoretical framework and apply it to analyze this domain. Because referencing is a very human-centric activity, we present the results of an exploratory study which examines the behaviors of participants and how they generate references to physical and virtual content in co-located and remote scenarios; we found that participants refer to content using physical and virtual techniques, and that shared video is highly effective in disambiguating references in remote environments. By implementing user feedback from this study, a follow-up study explores how the environment can passively support referencing, where we discovered the role that virtual referencing plays during collaboration.
A third study was conducted in order to better understand the effectiveness of giving and interpreting references using a virtual pointer; the results suggest the need for participants to be parallel with the arrow vector (strengthening the argument for shared viewpoints), as well as the importance of shadows in non-stereoscopic environments. Our contributions include a framework for analyzing the domain of inter-referential awareness, the development of novel referencing techniques, the presentation and analysis of our findings from multiple user studies, and a set of guidelines to help designers support this form of awareness.
Collaborative trails in e-learning environments
This deliverable focuses on collaboration within groups of learners, and hence collaborative trails. We begin by reviewing the theoretical background to collaborative learning and looking at the kinds of support that computers can give to groups of learners working collaboratively, and then look more deeply at some of the issues in designing environments to support collaborative learning trails and at tools and techniques, including collaborative filtering, that can be used for analysing collaborative trails. We then review the state of the art in supporting collaborative learning in three different areas – experimental academic systems, systems using mobile technology (which are also generally academic), and commercially available systems. The final part of the deliverable presents three scenarios that show where technology that supports groups working collaboratively and producing collaborative trails may be heading in the near future.
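Collaborative filtering over learner trails, as mentioned above, typically means comparing learners by the similarity of their interaction histories. A minimal user-based sketch, assuming trails are represented as per-resource interaction counts (the learners, resources, and counts below are hypothetical):

```python
import math

# Illustrative learner-resource interaction counts ("trails"); the
# data and resource names are made up, not from the deliverable.
trails = {
    "learner_a": {"wiki": 5, "forum": 3, "quiz": 1},
    "learner_b": {"wiki": 4, "forum": 2, "quiz": 2},
    "learner_c": {"chat": 6, "quiz": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse trail vectors."""
    shared = set(u) & set(v)
    dot = sum(u[k] * v[k] for k in shared)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

# Learners whose trails are similar can be grouped together, or used
# to recommend resources one has visited and the other has not.
sim_ab = cosine(trails["learner_a"], trails["learner_b"])
sim_ac = cosine(trails["learner_a"], trails["learner_c"])
print(sim_ab > sim_ac)  # learners a and b follow more similar trails
```

The same similarity scores can drive either group formation or resource recommendation, the two uses of trail analysis the deliverable discusses.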
Robomorphism: Examining the effects of telepresence robots on between-student cooperation
The global pandemic has stressed the value of working remotely, including in higher education. This development has sparked the growing use of telepresence robots, which allow students with prolonged sickness to interact with other students and their teacher remotely. Although telepresence robots are developed to facilitate virtual inclusion, empirical evidence is lacking on whether these robots actually enable students to cooperate better with their fellow students compared to other technologies, such as videoconferencing. Therefore, the aim of this research is to compare mediated student interaction supported by a telepresence robot with mediated student interaction supported by videoconferencing. To do so, we conducted an experiment (N = 122) in which participants pairwise and remotely worked together on an assignment, either by using a telepresence robot (N = 58) or by using videoconferencing (N = 64). The findings showed that students who made use of the robot (vs. videoconferencing) experienced stronger feelings of social presence, but also attributed more robotic characteristics to their interaction partner (i.e., robomorphism). Yet, the negative effects of the use of a telepresence robot on cooperation through robomorphism are compensated by the positive effects through social presence. Our study shows that robomorphism is an important concept to consider when studying the effect of human-mediated robot interaction. Designers of telepresence robots should make sure to stimulate social presence while mitigating possible adverse effects of robomorphism.