An MPEG-7 scheme for semantic content modelling and filtering of digital video
Abstract: Part 5 of the MPEG-7 standard specifies Multimedia Description Schemes (MDS); that is, the format that multimedia content models should conform to in order to ensure interoperability across multiple platforms and applications. However, the standard does not specify how the content or the associated model may be filtered. This paper proposes an MPEG-7 scheme which can be deployed for digital video content modelling and filtering. The proposed scheme, COSMOS-7, produces rich and multi-faceted semantic content models and supports a content-based filtering approach that only analyses content relating directly to the preferred content requirements of the user. We present details of the scheme, the front-end systems used for content modelling and filtering, and experiences with a number of users.
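The filtering idea in the abstract above can be illustrated with a short sketch: given an MPEG-7-style XML description of a video, select only the segments whose semantic labels match the user's preferences. The element names and structure here are deliberately simplified assumptions, not the actual COSMOS-7 scheme.

```python
import xml.etree.ElementTree as ET

# A simplified, MPEG-7-flavoured video description. Element names
# ("VideoSegment", "SemanticLabel") are illustrative only; the real
# COSMOS-7 scheme is far richer.
DESCRIPTION = """
<Video>
  <VideoSegment id="seg1">
    <SemanticLabel>goal</SemanticLabel>
  </VideoSegment>
  <VideoSegment id="seg2">
    <SemanticLabel>interview</SemanticLabel>
  </VideoSegment>
</Video>
"""

def filter_segments(xml_text, preferred_labels):
    """Return ids of segments whose semantic label matches the user's
    preferences, so only relevant content is analysed further."""
    root = ET.fromstring(xml_text)
    return [seg.get("id")
            for seg in root.iter("VideoSegment")
            if seg.findtext("SemanticLabel") in preferred_labels]

print(filter_segments(DESCRIPTION, {"goal"}))  # ['seg1']
```

The point of filtering at the description level, as the abstract argues, is that only the matching segments need to be decoded or analysed further.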
Empowering cultural heritage professionals with tools for authoring and deploying personalised visitor experiences
This paper presents an authoring environment, which supports cultural heritage professionals in the process of creating and deploying a wide range of different personalised interactive experiences that combine the physical (objects, collection and spaces) and the digital (multimedia content). It is based on a novel flexible formalism that represents the content and the context as independent from one another and allows recombining them in multiple ways, thus generating many different interactions from the same elements. The authoring environment was developed in a co-design process with heritage stakeholders and addresses the composition of the content, the definition of the personalisation, and the deployment on a physical configuration of bespoke devices. To simplify the editing while maintaining a powerful representation, the complex creation process is deconstructed into a limited number of elements and phases, including aspects to control personalisation both in content and in interaction. The user interface also includes examples of installations for inspiration and as a means for learning what is possible and how to do it. Throughout the paper, installations in public exhibitions are used to illustrate our points and what our authoring environment can produce. The expressiveness of the formalism and the variety of interactive experiences that could be created were assessed via a range of laboratory tests, while a user-centred evaluation with over 40 cultural heritage professionals assessed whether they feel confident in directly controlling personalisation.
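The separation of content from context described above can be sketched minimally: the same content items, recombined under a different context (here, a visitor profile), yield a different experience. All names and the structure below are hypothetical illustrations, not the paper's actual formalism.

```python
# Hypothetical content store: each object carries variants of its
# narrative, kept independent of any particular visit context.
content = {
    "helmet": {"kids": "A soldier wore this helmet long ago.",
               "expert": "Bronze helmet, Corinthian type, c. 600 BC."},
    "vase":   {"kids": "This vase held olive oil for athletes.",
               "expert": "Panathenaic amphora, black-figure technique."},
}

def experience(visited_objects, profile):
    """Recombine the same content under a different context (visitor
    profile plus visit path) to generate a different interaction."""
    return [content[obj][profile] for obj in visited_objects]

print(experience(["helmet", "vase"], "kids"))
```

Because content and context are stored independently, adding one new profile or one new object multiplies the number of possible experiences without re-authoring the rest.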
Indexing, browsing and searching of digital video
Video is a communications medium that normally brings together moving pictures with a synchronised audio track into a discrete piece or pieces of information. The size of a “piece” of video can variously be referred to as a frame, a shot, a scene, a clip, a programme or an episode, and these are distinguished by their lengths and by their composition. We shall return to the definition of each of these in section 4 of this chapter. In modern society, video is ver
MoSculp: Interactive Visualization of Shape and Time
We present a system that allows users to visualize complex human motion via 3D motion sculptures, a representation that conveys the 3D structure swept by a human body as it moves through space. Given an input video, our system computes the motion sculptures and provides a user interface for rendering them in different styles, including the options to insert the sculpture back into the original video, render it in a synthetic scene, or physically print it. To provide this end-to-end workflow, we introduce an algorithm that estimates the human's 3D geometry over time from a set of 2D images and develop a 3D-aware image-based rendering approach that embeds the sculpture back into the scene. By automating the process, our system takes motion sculpture creation out of the realm of professional artists and makes it applicable to a wide range of existing video material. By providing viewers with 3D information, motion sculptures reveal space-time motion information that is difficult to perceive with the naked eye and allow viewers to interpret how different parts of the object interact over time. We validate the effectiveness of this approach with user studies, finding that our motion sculpture visualizations are significantly more informative about motion than existing stroboscopic and space-time visualization methods.

Comment: UIST 2018. Project page: http://mosculp.csail.mit.edu
WonderFlow: Narration-Centric Design of Animated Data Videos
Creating an animated data video enriched with audio narration takes a significant amount of time and effort and requires expertise. Users not only need to design complex animations, but also turn written text scripts into audio narrations and synchronize visual changes with the narrations. This paper presents WonderFlow, an interactive authoring tool that facilitates narration-centric design of animated data videos. WonderFlow allows authors to easily specify a semantic link between text and the corresponding chart elements. It then automatically generates audio narration by leveraging text-to-speech techniques and aligns the narration with an animation. WonderFlow provides a visualization structure-aware animation library designed to ease chart animation creation, enabling authors to apply pre-designed animation effects to common visualization components. It also allows authors to preview and iteratively refine their data videos in a unified system, without having to switch between different creation tools. To evaluate WonderFlow's effectiveness and usability, we created an example gallery and conducted a user study and expert interviews. The results demonstrate that WonderFlow is easy to use and simplifies the creation of data videos with narration-animation interplay.
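The narration-animation alignment described above can be sketched as a scheduling step: each narration segment carries links to chart elements, and each linked element's animation is triggered at that segment's start time. The data shapes, timings, and the "highlight" effect below are illustrative assumptions, not WonderFlow's actual API.

```python
def build_timeline(narration):
    """Schedule chart-element animations against narration segments.

    narration: list of (text, duration_seconds, linked_elements) tuples,
    in spoken order. Returns one animation event per linked element,
    starting when its narration segment begins.
    """
    timeline, t = [], 0.0
    for text, duration, elements in narration:
        for elem in elements:
            timeline.append({"element": elem, "start": t,
                             "effect": "highlight"})  # hypothetical effect name
        t += duration  # next segment starts when this one ends
    return timeline

# A two-sentence script linked to two bars of a hypothetical bar chart.
script = [
    ("Sales rose sharply in Q3,", 2.5, ["bar_q3"]),
    ("while Q4 flattened out.", 2.0, ["bar_q4"]),
]
print(build_timeline(script))
```

In practice the segment durations would come from the text-to-speech output rather than being fixed by hand, which is what keeps the visual changes synchronized with the generated narration.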
ISAR: An Authoring System for Interactive Tabletops
Developing augmented reality systems involves several challenges that prevent end users and experts from non-technical domains, such as education, from experimenting with this technology. In this research we introduce ISAR, an authoring system for augmented reality tabletops targeting users from non-technical domains. ISAR allows non-technical users to create their own interactive tabletop applications and experiment with the use of this technology in domains such as education, industrial training, and medical rehabilitation.