
    VRMoViAn - An Immersive Data Annotation Tool for Visual Analysis of Human Interactions in VR

    Understanding human behavior in virtual reality (VR) is a key component of developing intelligent systems that enhance human-focused VR experiences. The ability to annotate human motion data is a very useful way to analyze and understand human behavior. However, due to the complexity and multi-dimensionality of human activity data, it is necessary to develop software that can display the data in a comprehensible way and support intuitive data annotation for developing machine learning models able to recognize and assist human motions in VR (e.g., remote physical therapy). Although past research has improved VR data visualization, no emphasis has been placed on VR data annotation specifically for future machine learning applications. To fill this gap, we have developed a data annotation tool capable of displaying complex VR data in an expressive 3D animated format and providing an easily understandable user interface that allows users to annotate and label human activity efficiently. Specifically, it can convert multiple motion data files into a watchable 3D video and effectively demonstrate body motion, including the player's eye tracking in VR, through animations, as well as showcase hand-object interactions with level-of-detail visualization features. The graphical user interface allows the user to interact with and annotate VR data just as they do with other video playback tools. Our next step is to develop and integrate machine learning-based clustering to automate data annotation. A user study is being planned to evaluate the tool in terms of user-friendliness and effectiveness in assisting with visualizing and analyzing human behavior, along with the ability to easily and accurately annotate real-world datasets.
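
    As an illustration of the kind of output such a tool feeds into model training, the sketch below shows one plausible time-segment label record for VR motion data. The field names and JSON layout are assumptions for illustration, not VRMoViAn's actual format.

        # A minimal sketch of a time-segment label record for VR motion data.
        # The field names and JSON layout are illustrative assumptions, not
        # the format actually used by VRMoViAn.
        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class MotionAnnotation:
            label: str          # activity name chosen by the annotator
            start_s: float      # segment start, seconds from recording start
            end_s: float        # segment end, seconds
            annotator: str      # who created the label

        annotations = [
            MotionAnnotation("reach_for_object", 12.4, 14.1, "annotator_01"),
            MotionAnnotation("idle", 14.1, 20.0, "annotator_01"),
        ]

        # Export the labels alongside the motion recording for later model training.
        with open("session_labels.json", "w") as f:
            json.dump([asdict(a) for a in annotations], f, indent=2)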

    ELAN as flexible annotation framework for sound and image processing detectors

    Annotation of digital recordings in humanities research is still, to a large extent, a process that is performed manually. This paper describes the first pattern-recognition-based software components developed in the AVATecH project and their integration in the annotation tool ELAN. AVATecH (Advancing Video/Audio Technology in Humanities Research) is a project that involves two Max Planck Institutes (Max Planck Institute for Psycholinguistics, Nijmegen, and Max Planck Institute for Social Anthropology, Halle) and two Fraunhofer Institutes (Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS, Sankt Augustin, and Fraunhofer Heinrich-Hertz-Institute, Berlin) and that aims to develop and implement audio and video technology for semi-automatic annotation of heterogeneous media collections as they occur in multimedia-based research. The highly diverse nature of the digital recordings stored in the archives of both Max Planck Institutes poses a huge challenge to most of the existing pattern recognition solutions and is a motivation to make such technology available to researchers in the humanities.
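
    To make the semi-automatic workflow concrete, the following sketch shows one plausible shape for a detector component: it scans a media file and emits time-aligned segments that an annotation tool such as ELAN could display as a tier. The function name, labels, and CSV layout are illustrative assumptions, not the AVATecH interface.

        # Rough sketch of a detector that returns time-aligned segments which an
        # annotator could review and import as a tier. Names and the CSV layout
        # are assumptions for illustration only.
        import csv
        from typing import List, Tuple

        def detect_speech_segments(media_path: str) -> List[Tuple[float, float, str]]:
            """Placeholder detector: returns (start_s, end_s, label) tuples."""
            # A real component would run e.g. a voice-activity or shot detector here.
            return [(0.0, 2.5, "speech"), (4.0, 9.3, "speech")]

        segments = detect_speech_segments("recording_0017.mp4")

        # Write a simple delimited file holding the automatically detected tier.
        with open("detector_output.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["tier", "start_s", "end_s", "value"])
            for start, end, label in segments:
                writer.writerow(["auto-speech", f"{start:.2f}", f"{end:.2f}", label])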

    Crowdsourcing in Computer Vision

    Computer vision systems require large amounts of manually annotated data to properly learn challenging visual concepts. Crowdsourcing platforms offer an inexpensive method to capture human knowledge and understanding for a vast number of visual perception tasks. In this survey, we describe the types of annotations computer vision researchers have collected using crowdsourcing, and how they have ensured that this data is of high quality while annotation effort is minimized. We begin by discussing data collection on both classic (e.g., object recognition) and recent (e.g., visual story-telling) vision tasks. We then summarize key design decisions for creating effective data collection interfaces and workflows, and present strategies for intelligently selecting the most important data instances to annotate. Finally, we conclude with some thoughts on the future of crowdsourcing in computer vision. Comment: A 69-page meta review of the field, Foundations and Trends in Computer Graphics and Vision, 201
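
    One of the simplest quality-control strategies covered by work in this area is to collect several redundant labels per item and aggregate them. Below is a minimal sketch assuming plain majority voting over categorical labels; more sophisticated worker-reliability models exist but are not shown, and the data is made up for illustration.

        # Minimal sketch of redundant-label aggregation by majority vote.
        from collections import Counter
        from typing import Dict, List

        def majority_vote(labels_per_item: Dict[str, List[str]]) -> Dict[str, str]:
            """Return the most frequent label for each item (ties broken arbitrarily)."""
            return {item: Counter(labels).most_common(1)[0][0]
                    for item, labels in labels_per_item.items()}

        raw = {
            "img_001.jpg": ["cat", "cat", "dog"],
            "img_002.jpg": ["car", "truck", "car", "car"],
        }
        print(majority_vote(raw))  # {'img_001.jpg': 'cat', 'img_002.jpg': 'car'}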

    Leveraging video annotations in video-based e-learning

    The e-learning community has been producing and using video content for a long time, and in recent years the advent of MOOCs has relied greatly on video recordings of teacher courses. Video annotations are pieces of information that can be anchored in the temporality of the video so as to sustain various processes ranging from active reading to rich media editing. In this position paper we study how video annotations can be used in an e-learning context - especially MOOCs - from the triple point of view of pedagogical processes, current technical platform functionalities, and current challenges. Our analysis is that there is still plenty of room for leveraging video annotations in MOOCs beyond simple active reading, namely live annotation, performance annotation and annotation for assignment, and that new developments are needed to accompany this evolution. Comment: 7th International Conference on Computer Supported Education (CSEDU), Barcelona, Spain (2014)
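
    As a concrete reading of "anchored in the temporality of the video", the sketch below stores a timestamp with each note and retrieves the notes relevant at the current playback position. The data structure and field names are illustrative assumptions, not those of any particular MOOC platform.

        # Minimal sketch of time-anchored video annotations: each note carries the
        # playback time it is attached to, so a player can surface the notes that
        # matter at the current position. Field names are illustrative.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class VideoAnnotation:
            time_s: float   # anchor point in the video, in seconds
            author: str
            text: str

        def annotations_near(notes: List[VideoAnnotation],
                             position_s: float, window_s: float = 5.0) -> List[VideoAnnotation]:
            """Return notes anchored within +/- window_s of the playback position."""
            return [n for n in notes if abs(n.time_s - position_s) <= window_s]

        notes = [
            VideoAnnotation(62.0, "student_a", "Could you clarify this formula?"),
            VideoAnnotation(300.5, "teacher", "See exercise 3 for a worked example."),
        ]
        print(annotations_near(notes, position_s=60.0))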

    Interaction Issues in Computer Aided Semantic Annotation of Multimedia

    The CASAM project aims to provide a tool for more efficient and effective annotation of multimedia documents through collaboration between a user and a system performing an automated analysis of the media content. A critical part of the project is to develop a user interface that best supports both the user and the system through optimal human-computer interaction. In this paper we discuss the work undertaken, the proposed user interface, and the underlying interaction issues that drove its development.

    A lightweight web video model with content and context descriptions for integration with linked data

    The rapid increase of video data on the Web has created an urgent need for effective representation, management and retrieval of web videos. Recently, many studies have been carried out on ontological representation of videos, using either domain-dependent or generic schemas such as MPEG-7, MPEG-4, and COMM. In spite of their extensive coverage and sound theoretical grounding, they have yet to be widely adopted. Two main possible reasons are the complexities involved and a lack of tool support. We propose a lightweight video content model for content-context description and integration. The uniqueness of the model is that it tries to model the emerging social context to describe and interpret the video. Our approach is grounded on exploiting easily extractable, evolving contextual metadata and on the availability of existing data on the Web. This enables representational homogeneity and a firm basis for information integration among semantically-enabled data sources. The model uses many existing schemas to describe various ontology classes and shows the scope for interlinking with the Linked Data cloud.
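
    To illustrate what such a content-context description might look like when integrated with Linked Data, the sketch below emits a few RDF triples for a video and links it to an external resource using the rdflib Python library. The vocabulary choices and property names are illustrative assumptions, not the paper's actual schema.

        # Hedged sketch: describe a web video with a few content/context triples
        # and link it into the Linked Data cloud. Vocabularies and property names
        # are illustrative, not the model proposed in the paper.
        from rdflib import Graph, Namespace, URIRef, Literal
        from rdflib.namespace import DCTERMS, RDF

        EX = Namespace("http://example.org/video/")

        g = Graph()
        video = EX["lecture_42"]

        g.add((video, RDF.type, EX.Video))
        g.add((video, DCTERMS.title, Literal("Introduction to Linked Data")))
        g.add((video, DCTERMS.creator, Literal("Jane Doe")))
        # Contextual metadata: a user comment and a link to an external resource.
        g.add((video, EX.hasComment, Literal("Great overview of RDF basics")))
        g.add((video, DCTERMS.subject, URIRef("http://dbpedia.org/resource/Linked_data")))

        print(g.serialize(format="turtle"))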

    The VIA Annotation Software for Images, Audio and Video

    In this paper, we introduce a simple and standalone manual annotation tool for images, audio and video: the VGG Image Annotator (VIA). This is a lightweight, standalone and offline software package that does not require any installation or setup and runs solely in a web browser. The VIA software allows human annotators to define and describe spatial regions in images or video frames, and temporal segments in audio or video. These manual annotations can be exported to plain-text data formats such as JSON and CSV and are therefore amenable to further processing by other software tools. VIA also supports collaborative annotation of a large dataset by a group of human annotators. The BSD open-source license of this software allows it to be used in any academic project or commercial application. Comment: to appear in Proceedings of the 27th ACM International Conference on Multimedia (MM '19), October 21-25, 2019, Nice, France. ACM, New York, NY, USA, 4 pages
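
    Because VIA exports are plain JSON, downstream processing takes only a few lines of scripting. The sketch below lists the labelled regions in a VIA-style export; the exact schema depends on the VIA version and project settings, so the keys used here ("regions", "shape_attributes", "region_attributes") should be checked against a real export before use.

        # Sketch of reading a VIA-style JSON annotation export and listing regions.
        # Key names follow the common VIA 2.x image-project layout and are an
        # assumption; verify them against an actual export.
        import json

        with open("via_project_export.json") as f:
            project = json.load(f)

        for file_key, entry in project.items():
            filename = entry.get("filename", file_key)
            for region in entry.get("regions", []):
                shape = region.get("shape_attributes", {})
                attrs = region.get("region_attributes", {})
                print(f"{filename}: {shape.get('name', '?')} region, attributes={attrs}")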