
    Interactive Search and Exploration in Online Discussion Forums Using Multimodal Embeddings

    In this paper we present a novel interactive multimodal learning system that facilitates search and exploration in large networks of social multimedia users. It allows the analyst to identify and select users of interest, and to find similar users in an interactive learning setting. Our approach is based on novel multimodal representations of users, words, and concepts, which we learn simultaneously by deploying a general-purpose neural embedding model. We show these representations to be useful not only for categorizing users, but also for automatically generating user and community profiles. Inspired by traditional summarization approaches, we create the profiles by selecting diverse and representative content from all available modalities, i.e., the text, image, and user modalities. The usefulness of the approach is evaluated using artificial actors, which simulate user behavior in a relevance feedback scenario. Multiple experiments were conducted to evaluate the quality of our multimodal representations, to compare different embedding strategies, and to determine the importance of different modalities. We demonstrate the capabilities of the proposed approach on two different multimedia collections, originating from the violent online extremism forum Stormfront and the microblogging platform Twitter, which are particularly interesting due to the high semantic level of the discussions they feature.
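The core idea of embedding users, words, and image concepts in one shared vector space can be sketched word2vec-style: treat every post as a bag of mixed-modality tokens and train token vectors so that co-occurring tokens end up close together. The corpus, token names, and the toy skip-gram-with-negative-sampling trainer below are invented for illustration; the paper's actual model and data differ.

```python
# Hypothetical sketch: jointly embedding user, word, and image-concept
# tokens in one space via co-occurrence training with negative sampling.
import math, random

random.seed(0)
DIM = 16

# Each "post" mixes modalities: a user token, word tokens, concept tokens.
posts = [
    ["user:alice", "word:protest", "word:march", "img:crowd"],
    ["user:alice", "word:march", "img:crowd"],
    ["user:bob", "word:recipe", "word:cake", "img:food"],
    ["user:bob", "word:cake", "img:food"],
]

vocab = sorted({t for p in posts for t in p})
vec = {t: [random.uniform(-0.5, 0.5) for _ in range(DIM)] for t in vocab}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, x))))

def train_pair(a, b, label, lr=0.1):
    # One SGD step pushing sigmoid(vec[a] . vec[b]) toward label
    # (1 = observed co-occurrence, 0 = random negative sample).
    va, vb = vec[a], vec[b]
    g = (label - sigmoid(sum(x * y for x, y in zip(va, vb)))) * lr
    for i in range(DIM):
        va[i], vb[i] = va[i] + g * vb[i], vb[i] + g * va[i]

for _ in range(200):
    for post in posts:
        for a in post:
            for b in post:
                if a != b:
                    train_pair(a, b, 1.0)                     # positive pair
                    train_pair(a, random.choice(vocab), 0.0)  # negative sample

def cos(a, b):
    va, vb = vec[a], vec[b]
    dot = sum(x * y for x, y in zip(va, vb))
    return dot / (math.sqrt(sum(x * x for x in va)) *
                  math.sqrt(sum(x * x for x in vb)))

# Tokens that co-occur across modalities land closer than unrelated ones.
print(cos("user:alice", "img:crowd") > cos("user:alice", "img:food"))
```

Because all modalities share one space, a single nearest-neighbor query can retrieve similar users, representative words, and representative image concepts at once, which is what makes the profile-generation step in the abstract possible.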

    Fourteenth Biennial Status Report: March 2017 - February 2019


    Collocated Collaboration Analytics: Principles and Dilemmas for Mining Multimodal Interaction Data

    © 2017 Taylor & Francis Group, LLC. Learning to collaborate effectively requires practice, awareness of group dynamics, and reflection; often it benefits from coaching by an expert facilitator. However, in physical spaces it is not always easy to provide teams with evidence to support collaboration. Emerging technology provides a promising opportunity to make collocated collaboration visible by harnessing data about interactions and then mining and visualizing it. These collocated collaboration analytics can help researchers, designers, and users to understand the complexity of collaboration and to find ways to support it. This article introduces and motivates a set of principles for mining collocated collaboration data and draws attention to trade-offs that may need to be negotiated en route. We integrate Data Science principles and techniques with advances in interactive surface devices and sensing technologies. We draw on a 7-year research program that has involved the analysis of six group situations in collocated settings with more than 500 users and a variety of surface technologies, tasks, grouping structures, and domains. The contribution of the article includes the key insights and themes that we have identified and summarized in a set of principles and dilemmas that can inform the design of future collocated collaboration analytics innovations.

    Crisis Analytics: Big Data Driven Crisis Response

    Disasters have long been a scourge for humanity. With the advances in technology (in terms of computing, communications, and the ability to process and analyze big data), our ability to respond to disasters is at an inflection point. There is great optimism that big data tools can be leveraged to process the large amounts of crisis-related data (in the form of user-generated data in addition to the traditional humanitarian data) to provide insight into the fast-changing situation and help drive an effective disaster response. This article introduces the history and the future of big crisis data analytics, along with a discussion of its promise, challenges, and pitfalls.

    Instruments for visualization of self, co, and socially shared regulation of learning using multimodal analytics: a systematic review

    This thesis presents a systematic literature review at the intersection of the multimodal learning analytics, regulation theories of learning, and visual analytics literature of the last decade (2011-2021). The review collects existing research-based instruments designed to visualize Self-Regulation of Learning (SRL), Co-Regulation of Learning (CoRL), and Socially Shared Regulation of Learning (SSRL) using dashboards and multimodal data. The inclusion and exclusion criteria addressed two main aims: first, to distil settings, instruments, constructs, and audiences; second, to identify the visualizations used for targets (i.e., cognition, motivation, and emotion), phases (i.e., forethought, performance, and reflection), and types of regulation (i.e., SRL, CoRL, and SSRL). Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, this thesis included 23 peer-reviewed articles out of 383 retrieved from 5 different databases searched in April 2021. The main findings of this review are: (a) all included articles used theoretical grounding in SRL, while CoRL was used in only 3 articles and SSRL in only 2; (b) most articles used both teachers and students as the audience for visual feedback and operated in online learning settings; (c) the selected articles focused mainly on visualizing cognition and motivation (17 articles each) as targets of regulation, while emotion as a target appeared in only 6 articles; (d) the performance phase was common to most of the articles and used various visualizations, followed by the reflection and forethought phases respectively. Simple visualizations, e.g., progress bar charts, line charts, and color coding, are used more frequently than bubble charts, stacked column charts, funnel charts, heat maps, and Sankey diagrams. Most of the dashboard instruments identified in the review are still improving their designs.
    Therefore, the results of this review should be put into the context of future studies to be utilized by researchers and teachers in recognizing the missing targets and phases of SRL, CoRL, and SSRL in visualized feedback. Addressing these could also assist them in giving timely feedback on students' learning strategies to improve their regulatory skills.

    DBCollab: Automated feedback for face-to-face group database design

    © 2017 Asia-Pacific Society for Computers in Education. All rights reserved. Developing effective teamwork and collaboration skills is regarded as a key graduate attribute for employability. As a result, higher education institutions are striving to help students foster these skills through authentic learning scenarios. Although face-to-face (f2f) group tasks are common in most classrooms, it is challenging to collect evidence about the group processes. As a result, to date, it is difficult to assess group tasks in ways other than through teachers' direct observations and students' self-reports, or by measuring the quality of their final product. However, there are other critical aspects of group work that students need to receive feedback on, for example, interaction dynamics or the collaboration processes. This paper explores the potential of using interactive surfaces and sensors to track key indicators of group work and to provide automated feedback about epistemic and social aspects. We conducted a pilot study in an authentic classroom, in the context of database design. The contributions of this paper are: 1) the operationalisation of the DBCollab tool as a means for supporting group database design and collecting multimodal traces of the activity using interactive surfaces and sensors; and 2) empirical evidence that points to the potential of presenting these traces to group members in order to provoke immediate and post-hoc productive reflection about their activity.
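Turning raw multimodal traces into a social indicator of the kind described above can be sketched very simply: aggregate per-member event counts (e.g., touches on the shared surface, detected speech turns) and reduce them to a participation-symmetry score. The indicator (normalized entropy), the member names, and the event counts below are invented for illustration; DBCollab's actual indicators differ.

```python
# Hypothetical sketch: from per-member multimodal event counts to a
# group-level participation-symmetry indicator (normalized entropy).
import math

def participation_symmetry(counts):
    """Normalized entropy of per-member activity: 1.0 = perfectly even,
    values near 0 indicate one member dominating."""
    total = sum(counts.values())
    if total == 0 or len(counts) < 2:
        return 0.0
    probs = [c / total for c in counts.values() if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(counts))

speech_turns = {"m1": 12, "m2": 11, "m3": 10}   # balanced group
touch_events = {"m1": 40, "m2": 2, "m3": 1}     # one member dominates

print(round(participation_symmetry(speech_turns), 2))  # near 1.0
print(round(participation_symmetry(touch_events), 2))  # well below 1.0
```

A feedback dashboard could then render such scores per modality and per time window, which is one plausible way evidence about "interaction dynamics" could be mirrored back to a group.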