    An Integrated Learning Analytics Approach for Virtual Vocational Training Centers

    Virtual training centers are hosted solutions for implementing training courses, e.g., in the form of webinars. Many existing centers neglect the informal and social dimensions of vocational training, as well as the legitimate business interests of training providers and of the companies sending their employees. In this paper, we present the virtual training center platform V3C, which blends formal, certified virtual training courses with self-regulated and social learning in synchronous and asynchronous learning phases. We have developed an integrated learning analytics approach to collect, store, analyze, and visualize data for different purposes such as certification, interventions, and gradual improvement of the platform. The results presented here demonstrate the platform's ability to deliver data for key performance indicators such as learning outcomes and drop-out rates, as well as for the interplay between synchronous and asynchronous learning phases, at very large scale. Since the platform implementation is open source, the results can easily be transferred and exploited in many contexts.

    Towards Explainable Interactive Multi-Modal Video Retrieval with vitrivr

    This paper presents the most recent iteration of the vitrivr multimedia retrieval system for its participation in the Video Browser Showdown (VBS) 2021. Building on existing functionality for interactive multi-modal retrieval, we overhaul query formulation and result presentation for queries that specify temporal context, extend our database with index structures for similarity search, and present experimental functionality aimed at improving the explainability of results, with the objective of better supporting users in selecting results and providing relevance feedback.

    An overview on the evaluated video retrieval tasks at TRECVID 2022

    The TREC Video Retrieval Evaluation (TRECVID) is a TREC-style video analysis and retrieval evaluation whose goal is to promote progress in research and development of content-based exploitation and retrieval of information from digital video via open, task-based evaluation supported by metrology. Over the last twenty-one years, this effort has yielded a better understanding of how systems can effectively accomplish such processing and how their performance can be reliably benchmarked. TRECVID has been funded by NIST (National Institute of Standards and Technology) and other US government agencies; in addition, many organizations and individuals worldwide contribute significant time and effort. TRECVID 2022 planned for the following six tasks: ad-hoc video search, video-to-text captioning, disaster scene description and indexing, activity in extended videos, deep video understanding, and movie summarization. In total, 35 teams from various research organizations worldwide signed up to join this year's evaluation campaign. This paper introduces the tasks, the datasets used, and the evaluation frameworks and metrics, and gives a high-level overview of the results.

    MultiVENT: Multilingual Videos of Events with Aligned Natural Text

    Everyday news coverage has shifted from traditional broadcasts towards a wide range of presentation formats, such as first-hand, unedited video footage. Datasets that reflect the diverse array of multimodal, multilingual news sources available online could be used to teach models to benefit from this shift, but existing news video datasets focus on traditional news broadcasts produced for English-speaking audiences. We address this limitation by constructing MultiVENT, a dataset of multilingual, event-centric videos grounded in text documents across five target languages. MultiVENT includes both news broadcast videos and non-professional event footage, which we use to analyze the state of online news videos and how they can be leveraged to build robust, factually accurate models. Finally, we provide a model for complex, multilingual video retrieval to serve as a baseline for information retrieval using MultiVENT.

    Deep Learning-based Concept Detection in vitrivr at the Video Browser Showdown 2019 - Final Notes

    This paper presents an after-the-fact summary of the vitrivr system's participation in the 2019 Video Browser Showdown. As in last year's report, the focus of this paper lies on additions made since the original publication and on the system's performance during the competition.