
    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented-reality meeting support and for the real-time or off-line generation of meetings in virtual reality. The research reported here forms part of the European 5th and 6th Framework Programme projects Multi-Modal Meeting Manager (M4) and Augmented Multi-party Interaction (AMI). Both projects aim at building a smart meeting environment that can collect multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools for real-time support, browsing, retrieval and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow the generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting and at tools that allow those not physically present during a meeting to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants and (semi-)autonomous virtual participants disappear.

    Speech & Multimodal Resources: the Herme Database of Spontaneous Multimodal Human-Robot Dialogues

    This paper presents methodologies and tools for language resource (LR) construction. It describes a database of interactive speech collected over a three-month period at the Science Gallery in Dublin, where visitors could take part in a conversation with a robot. The system collected samples of informal, chatty dialogue, which is normally difficult to capture under laboratory conditions for human-human dialogue, and particularly so for human-machine interaction. The conversations followed a script, delivered by the robot, consisting largely of social chat with some task-based elements. The interactions were audio-visually recorded using several cameras together with microphones. As part of the conversation the participants were asked to sign a consent form giving permission to use their data for human-machine interaction research. The multimodal corpus will be made available to interested researchers, and the technology developed during the three-month exhibition is being extended for use in education and assisted-living applications.

    Visualisation of Interactions in Online Collaborative Learning Environments

    Much research in recent years has focused on the introduction of ‘Virtual Learning Environments’ (VLEs) to universities, documenting practice and sharing experience. Communicative tools are the means by which VLEs have the potential to transform learning with computers from being passive and transmissive in nature to being active and constructivist. Attention has been directed towards the importance of online dialogue as a defining feature of the VLE. However, practical methods of reviewing and analysing online communication to encode and trace cycles of real dialogue (and learning) have proved somewhat elusive. Qualitative methods are under-used for VLE discussions, since they demand research skills that are unfamiliar to many practitioners and time-intensive to learn. This thesis aims to build an improved and simple-to-use analytical tool for Moodle that will aid and support teachers and administrators in understanding and analysing the interaction patterns and knowledge construction of the participants involved in ongoing online interactions. After reviewing the strengths and shortcomings of existing visualisation models, a new visualisation tool called the Virtual Interaction Mapping System (VIMS) is proposed, based on a framework proposed by Schrire (2004), to graphically represent social presence and manage the online communication patterns of learners using Moodle. VIMS produces multiple possible views of interaction data so that it can be evaluated from many perspectives; it can be used to represent interaction data both qualitatively and quantitatively. The units of analysis can be represented graphically and numerically for more extensive evaluation. Specifically, these indicators are communication type, participative level, meaningful content of discussion, presence of lurkers, presence of moderators, and performance of participants individually and as a group. It thus enables assessment of the triangular relationship between conversation content, online participation and learning.
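
    As a rough illustration of the kind of quantitative indicators described above, the sketch below (Python; purely illustrative, not the actual VIMS code, and the post fields "author" and "reply_to" are hypothetical rather than Moodle's real schema) computes per-participant post counts (participative level), a weighted reply graph (communication structure), and the set of lurkers:

        # Hypothetical sketch of VIMS-style indicators from forum data.
        # Field names ("author", "reply_to") are illustrative assumptions.
        from collections import Counter, defaultdict

        def interaction_summary(posts, enrolled):
            """posts: dicts with 'author' and 'reply_to' (replied-to author, or None).
            enrolled: set of all participants, including those who never post."""
            post_counts = Counter(p["author"] for p in posts)
            # Directed interaction graph: who replies to whom, edge-weighted.
            edges = defaultdict(int)
            for p in posts:
                if p["reply_to"] is not None:
                    edges[(p["author"], p["reply_to"])] += 1
            # Lurkers: enrolled participants who never contribute a post.
            lurkers = enrolled - set(post_counts)
            return post_counts, dict(edges), lurkers

        posts = [
            {"author": "ann", "reply_to": None},
            {"author": "bob", "reply_to": "ann"},
            {"author": "ann", "reply_to": "bob"},
        ]
        counts, graph, lurkers = interaction_summary(posts, {"ann", "bob", "cat"})
        print(counts)   # participative level per participant
        print(graph)    # reply edges, weighted by frequency
        print(lurkers)  # {'cat'} - enrolled but silent

    Feeding the reply graph into a standard graph-drawing library would then yield the kind of graphical views of interaction data that the tool is described as producing.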

    Emergent leaders through looking and speaking: from audio-visual data to multimodal recognition

    In this paper we present a multimodal analysis of emergent leadership in small groups using audio-visual features, and we discuss our experience in designing and collecting a data corpus for this purpose. The ELEA Audio-Visual Synchronized corpus (ELEA AVS) was collected using a light portable setup and contains recordings of small group meetings. The participants in each group performed the winter survival task and filled in questionnaires related to personality and several social concepts such as leadership and dominance. In addition, the corpus includes annotations of participants’ performance in the survival task, as well as annotations of social concepts from external viewers. Based on this corpus, we demonstrate the feasibility of predicting the emergent leader in small groups using automatically extracted audio and visual features based on speaking turns and visual attention, focusing specifically on multimodal features that combine the ‘looking at participants while speaking’ and ‘looking at while not speaking’ measures. Our findings indicate that emergent leadership is related, but not equivalent, to dominance, and that while multimodal features bring a moderate degree of effectiveness in inferring the leader, much simpler features extracted from the audio channel give better performance.
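
    To make the gaze-based measures concrete, the following sketch (Python; the frame-level annotation format and the exact feature definition are assumptions for illustration, not the authors’ implementation) computes the fraction of each participant’s speaking frames during which at least one other participant is looking at them, a cue in the spirit of the ‘looking at while speaking’ measures:

        # Hypothetical sketch of a "looked at while speaking" style feature.
        # Frame-level speaking/gaze annotations are assumed for illustration.

        def attention_while_speaking(speaking, gaze):
            """speaking: {participant: [bool per frame]} - voice activity.
            gaze: {participant: [gaze target or None per frame]}.
            Returns the fraction of each participant's speaking frames in
            which at least one other participant looks at them."""
            scores = {}
            for p, active in speaking.items():
                speak_frames = [t for t, s in enumerate(active) if s]
                if not speak_frames:
                    scores[p] = 0.0
                    continue
                looked_at = sum(
                    any(gaze[q][t] == p for q in gaze if q != p)
                    for t in speak_frames
                )
                scores[p] = looked_at / len(speak_frames)
            return scores

        # Toy example: three participants, four video frames.
        speaking = {"A": [1, 1, 0, 0], "B": [0, 0, 1, 1], "C": [0, 0, 0, 0]}
        gaze = {"A": [None, None, "B", None],
                "B": ["A", "A", None, None],
                "C": ["A", "B", "B", None]}
        print(attention_while_speaking(speaking, gaze))
        # {'A': 1.0, 'B': 0.5, 'C': 0.0}

    A participant who holds the group’s visual attention while talking scores high, which is the intuition behind using such attention cues to infer the emergent leader.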

    SID 04, Social Intelligence Design: Proceedings of the Third Workshop on Social Intelligence Design


    Documenting and Assessing Learning in Informal and Media-Rich Environments

    An extensive review of the literature on learning assessment in informal settings, expert discussion of key issues, and a new model for good assessment practice. Today educational activities take place not only in school but also in after-school programs, community centers, museums, and online communities and forums. The success and expansion of these out-of-school initiatives depends on our ability to document and assess what works and what doesn't in informal learning, but learning outcomes in these settings are often unpredictable. Goals are open-ended; participation is voluntary; and relationships, means, and ends are complex. This report charts the state of the art for learning assessment in informal settings, offering an extensive review of the literature, expert discussion on key topics, a suggested model for comprehensive assessment, and recommendations for good assessment practices. Drawing on analysis of the literature and expert opinion, the proposed model, the Outcomes-by-Levels Model for Documentation and Assessment, identifies at least ten types of valued outcomes, to be assessed in terms of learning at the project, group, and individual levels. The cases described in the literature under review, which range from promoting girls' identification with STEM practices to providing online resources for learning programming and networking, illustrate the usefulness of the assessment model.