Ambient Multi-Camera Personal Documentary
Polymnia is an automated solution for the creation of ambient multi-camera personal documentary films. This short paper introduces the system, emphasising the rule-based documentary generation engine that we have created to assemble an edited narrative from source footage. We describe how such automatically generated media can be integrated with and augment personally authored images and videos as a contribution to an individual's personal digital memory.
A Semantic Web Annotation Tool for a Web-Based Audio Sequencer
Music and sound have a rich semantic structure that is clear to the composer and the listener but remains mostly hidden to computing machinery. Nevertheless, in recent years, the introduction of software tools for music production has enabled new opportunities for migrating this knowledge from humans to machines. A new generation of these tools may exploit the coupling of sound samples and semantic information for the creation not only of a musical, but also of a "semantic" composition. In this paper we describe an ontology-driven content annotation framework for a web-based audio editing tool. In a supervised approach, during the editing process, the graphical web interface allows the user to annotate any part of the composition with concepts from publicly available ontologies. As a test case, we developed a collaborative web-based audio sequencer that provides users with the functionality to remix audio samples from the Freesound website and subsequently annotate them. The annotation tool can load any ontology and thus gives users the opportunity to augment the work with annotations on the structure of the composition, the musical materials, and the creator's reasoning and intentions. We believe this approach will provide several novel ways to make not only the final audio product, but also the creative process, first-class citizens of the Semantic Web.
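The core idea of such a framework, attaching ontology concepts to regions of a composition's timeline, can be sketched in a few lines. This is a minimal illustration, not the tool's actual data model; the concept URIs and the `annotate` helper are invented for the example:

```python
# Each annotation links a region of the timeline (in seconds) to an ontology
# concept URI. The URIs below are illustrative, not from a real ontology.
annotations = []

def annotate(start, end, concept_uri, comment=""):
    """Attach an ontology concept to a region of the composition."""
    annotations.append({
        "start": start,
        "end": end,
        "concept": concept_uri,
        "comment": comment,
    })

# The user tags a remixed sample and a structural section during editing.
annotate(0.0, 8.0, "http://example.org/music#Rhythm",
         "shaker loop remixed from Freesound")
annotate(8.0, 24.0, "http://example.org/music#Chorus")

# Query: which concepts apply at t = 4.0 s of the composition?
active = [a["concept"] for a in annotations
          if a["start"] <= 4.0 < a["end"]]
print(active)  # ['http://example.org/music#Rhythm']
```

Because annotations are stored separately from the audio and reference it only by time region, the same mechanism works regardless of which ontology supplies the concepts.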
A story environment for learning object annotation and collection : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University, Palmerston North, New Zealand
With the increase in computer power, network bandwidth and availability, e-learning is used more and more widely. In practice e-learning can be applied in a variety of ways, such as providing electronic resources to support teaching and learning, developing computer-based tutoring programs or building computer-supported collaborative learning environments. E-learning has become significantly important because it can improve the quality of learning through the use of interactive computers, online communications and information systems in ways that other teaching methods cannot achieve. An important advantage of e-learning is that it offers learners a large amount of sharable and reusable learning resources. Current approaches, such as Internet search and learning object repositories, do not effectively help users to search for appropriate learning objects. The original story concept introduces a new semantic layer between collections of learning objects and learning material. The basic idea of the story concept is to add an interpretative, semantically rich layer, informally called 'Story', between learning objects and learning material that links learning objects according to specific themes and subjects (Heinrich & Andres, 2003a). One motivation behind this approach is to put a more focused, semantic layer on top of the untargeted metadata commonly used to describe a single learning object. In an e-learning context the stories build on learning objects and become information resources for learning material. The overall aim of this project was to design and build a story environment to realize the above story concept. The development of the story environment includes story metadata, story environment components, the story browsing and authoring processes, and the tools involved in story browsing and authoring. The story concept suggests different types of metadata should be used in a story.
This project developed those metadata specifications to support the story environment. Two prototype tools were designed and implemented in this project to allow users to evaluate the story concept and story environment. The story browser helps story readers to read the story narrative and to look at a story from different perspectives. The story authoring tool is used by story authors to author a story. Future work on this project has been identified in the areas of adding features to the current tools, user testing, and further implementation of the story environment.
Associating characters with events in films
The work presented here combines the analysis of a film's audiovisual features with the analysis of an accompanying audio description. Specifically, we describe a technique for semantic-based indexing of feature films that associates character names with meaningful events. The technique fuses the results of event detection based on audiovisual features with the inferred on-screen presence of characters, based on an analysis of an audio description script. In an evaluation with 215 events from 11 films, the technique performed the character detection task with Precision = 93% and Recall = 71%. We then go on to show how novel access modes to film content are enabled by our analysis. The specific examples illustrated include video retrieval via a combination of event type and character name, and our first steps towards visualization of narrative and character interplay based on character occurrence and co-occurrence in events.
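The fusion step described above can be sketched as a temporal-overlap join: detected events carry time intervals, character on-screen spans are inferred from the audio description script, and a character is associated with an event when the two intervals overlap. All names, timestamps and helpers below are invented for illustration; the paper's actual fusion method may be more elaborate:

```python
# Hypothetical detected events and character on-screen spans, in seconds.
events = [
    {"id": "fight-1", "start": 100.0, "end": 140.0},
    {"id": "dialogue-2", "start": 300.0, "end": 360.0},
]
presence = [  # (character, on-screen start, on-screen end)
    ("Rick", 90.0, 150.0),
    ("Ilsa", 310.0, 355.0),
    ("Rick", 320.0, 340.0),
]

def overlaps(a_start, a_end, b_start, b_end):
    """True when the two half-open intervals share any time."""
    return max(a_start, b_start) < min(a_end, b_end)

def characters_in(event):
    """Characters whose on-screen span overlaps the event interval."""
    return sorted({name for name, s, e in presence
                   if overlaps(event["start"], event["end"], s, e)})

for ev in events:
    print(ev["id"], characters_in(ev))
# fight-1 ['Rick']
# dialogue-2 ['Ilsa', 'Rick']
```

The same index immediately supports the retrieval mode mentioned in the abstract: filtering events by type and then by associated character name.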
Multilingual media components directly embeddable in open educational resources in science and technology
The use and reuse of OER (Open Educational Resources) depends on several conditions, among them the richness of their metadata, their granularity and the languages in which they are made available.
This work aims to facilitate the efficient production of graphical, language-neutral components. It is assumed that the STEM areas (Science, Technology, Engineering and Mathematics) share a common mathematical language and, more intuitively, an iconographic approach linked to the structures that satisfy the formulas used in each case. The work is limited to these areas of knowledge, primarily as presentations and animations of very low granularity, which can be directly integrated into larger resources in any language.
The overall research design consists of four stages:
1. Initially, the manual generation of presentations and animations, with no literal text in any language, and very concisely focused (mainly definitions of a single concept per animation). Determination of common graphics primitives to differentiate the common subtasks: presentation of examples so that the concept emerges inductively, graphical construction of the definition, highlighting of the generalization or instantiation steps, homogeneous use of icons for emphasising a point or posing a question to the observer, etc.
2. Evaluation of the expressiveness and effectiveness of these resources. Currently, these resources are being presented to small groups of students. This fall a multilingual evaluation process begins on a larger scale: as part of a regular course at the UNED and as a LabSpace course at the Open University. Here we attempt to identify the appropriate assessment tools (preferably in the same graphical language) requiring the minimum amount of additional external comments to constitute a course in a particular language.
3. The first two stages must provide an intuitive graphical interface to the selected formalisms (mainly Discrete Mathematics and Logic). The third stage addresses the effect of changing the output device on the selection of the graphics primitives for each generic subtask. Possible variations of the graphical language will be studied in the context of HCI analyses.
4. Finally, the approach addresses the semi-automatic generation, via script, of these resources: from a formal description of the definitions or processes (as expressed, for example, in OMDoc) to the production of the corresponding animation. Additionally, the injection of semantics should facilitate linking between different animations, navigation and search over conceptual dependencies, and the identification of concepts that have supporting collections of resources as described.
At this point, the current development of this work provides results for the first two stages described.
LAF-Fabric: a data analysis tool for Linguistic Annotation Framework with an application to the Hebrew Bible
The Linguistic Annotation Framework (LAF) provides a general, extensible stand-off markup system for corpora. This paper discusses LAF-Fabric, a new tool to analyse LAF resources in general, with an extension to process the Hebrew Bible in particular. We first walk through the history of the Hebrew Bible as a text database in decennium-wide steps. Then we describe how LAF-Fabric may serve as an analysis tool for this corpus. Finally, we describe three analytic projects/workflows that benefit from the new LAF representation: 1) the study of linguistic variation: extracting co-occurrence data of common nouns between the books of the Bible (Martijn Naaijer); 2) the study of the grammar of Hebrew poetry in the Psalms: extracting clause typology (Gino Kalkman); 3) construction of a parser of classical Hebrew by Data-Oriented Parsing: generating tree structures from the database (Andreas van Cranenburgh).
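The stand-off principle behind LAF is that the base text is never modified: all annotations live in separate structures that point into the text by offsets, so any number of annotation layers can coexist over the same primary data. A minimal illustration of the idea, with toy English data and invented feature names rather than the LAF-Fabric API or real Hebrew Bible features:

```python
# Base text is immutable; annotations reference it by character offsets.
text = "In the beginning God created the heavens and the earth."

# Stand-off annotations: (start, end, layer, value). Multiple layers can
# target the same span without touching the text itself.
annos = [
    (0, 2, "pos", "prep"),
    (17, 20, "pos", "noun"),
    (17, 20, "role", "subject"),
    (21, 28, "pos", "verb"),
]

def spans(layer, value):
    """All text spans carrying a given feature value in a given layer."""
    return [text[s:e] for s, e, l, v in annos if l == layer and v == value]

print(spans("pos", "noun"))      # ['God']
print(spans("role", "subject"))  # ['God']
```

Workflows like the noun co-occurrence study above amount to queries over such annotation layers, which is why a fast programmatic interface to stand-off data is useful.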