Automatic Ontology Generation Based On Semantic Audio Analysis
Ontologies provide an explicit conceptualisation of a domain and a uniform framework that
represents domain knowledge in a machine-interpretable format. The Semantic Web heavily relies
on ontologies to provide well-defined meaning and support for automated services based on the
description of semantics. However, considering the open, evolving and decentralised nature of the
Semantic Web, and although many ontology engineering tools have been developed over the last
decade, it can be a laborious and challenging task to deal with manual annotation, hierarchical
structuring and organisation of data, as well as maintenance of previously designed ontology
structures. For these reasons, we investigate how to facilitate the process of ontology
construction using semantic audio analysis.
The work presented in this thesis contributes to solving the problems of knowledge acquisition
and manual construction of ontologies. We develop a hybrid system that involves a formal method
of automatic ontology generation for web-based audio signal processing applications. The proposed
system uses timbre features extracted from audio recordings of various musical instruments.
The proposed system is evaluated using a database of isolated notes and melodic phrases
recorded in neutral conditions, and we make a detailed comparison between musical instrument
recognition models to investigate their effects on the automatic ontology generation system. Finally,
the automatically-generated musical instrument ontologies are evaluated in comparison with
the terminology and hierarchical structure of the Hornbostel and Sachs organology system. We
show that the proposed system is applicable in multi-disciplinary fields that deal with knowledge
management and knowledge representation issues.
Funded by the EPSRC, OMRAS-2 and NEMA projects.
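The abstract does not spell out the generation procedure itself. As a loose, hypothetical sketch of how timbre features could drive automatic hierarchy construction, consider agglomerative clustering of per-instrument feature vectors into nested classes (the instruments' feature values below are invented for illustration, not measured data):

```python
import math

# Toy timbre descriptors (spectral centroid, log attack time) per instrument;
# invented values for illustration only.
timbre = {
    "violin":   (0.62, 0.30),
    "cello":    (0.48, 0.35),
    "flute":    (0.78, 0.22),
    "clarinet": (0.55, 0.27),
}

def centroid(cluster):
    """Mean feature vector of a cluster of instrument names."""
    points = [timbre[name] for name in cluster]
    return tuple(sum(axis) / len(points) for axis in zip(*points))

def merge_closest(clusters):
    """Merge the two clusters whose centroids are nearest."""
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            d = math.dist(centroid(clusters[i]), centroid(clusters[j]))
            if best is None or d < best[0]:
                best = (d, i, j)
    _, i, j = best
    rest = [c for k, c in enumerate(clusters) if k not in (i, j)]
    return rest + [clusters[i] + clusters[j]]

# Repeatedly merge; record (subclass, superclass) pairs as the hierarchy grows.
clusters = [(name,) for name in timbre]
hierarchy = []
while len(clusters) > 1:
    before = clusters
    clusters = merge_closest(clusters)
    parent = clusters[-1]
    hierarchy += [(c, parent) for c in before if set(c) < set(parent)]
```

Each recorded pair can then be emitted as a subclass axiom in an ontology language such as OWL; the clustering criterion and features here stand in for whichever the thesis actually uses.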
Video semantic content analysis framework based on ontology combined MPEG-7
The rapid increase in the amount of available video data is creating a growing demand for efficient methods for understanding and managing it at the semantic level. The new multimedia standard MPEG-7 provides rich functionalities to enable the generation of audiovisual descriptions, but it is expressed solely in XML Schema, which provides little support for expressing semantic knowledge. In this paper, a video semantic content analysis framework based on ontology combined with MPEG-7 is presented. A domain
ontology is used to define high-level semantic concepts and their relations in the context of the examined domain. MPEG-7 metadata terms of audiovisual descriptions and video content analysis algorithms are expressed in this ontology to enrich video semantic analysis. OWL is used for the ontology description. Rules in Description Logic are defined to describe how low-level features and algorithms for video analysis should be applied according to different perceptual content. Temporal Description Logic is used to describe
semantic events, and a reasoning algorithm is proposed for event detection. The proposed framework is demonstrated in the sports video domain and shows promising results.
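As a rough illustration of temporal event detection over detected concepts (a plain-Python stand-in, not the paper's Temporal Description Logic formalism, and with invented concept names), an event can be modelled as an ordered pattern of detections within a time window:

```python
# (timestamp_seconds, detected_concept) pairs from hypothetical
# low-level video/audio analysers.
detections = [
    (12.0, "whistle"),
    (14.5, "goal_area_activity"),
    (16.0, "crowd_cheer"),
    (40.0, "whistle"),
]

def detect_event(pattern, detections, window):
    """Return the start time of the first ordered match of `pattern` whose
    detections all fall within `window` seconds of the first, else None."""
    for start in range(len(detections)):
        t0, concept = detections[start]
        if concept != pattern[0]:
            continue
        matched = 1
        for t, c in detections[start + 1:]:
            if c == pattern[matched] and t - t0 <= window:
                matched += 1
                if matched == len(pattern):
                    return t0
    return None

# A hypothetical "goal" event: whistle, then goal-area activity, then cheering.
goal = detect_event(("whistle", "goal_area_activity", "crowd_cheer"),
                    detections, window=10.0)
```

The DL rules in the paper play the role of `pattern` and `window` here, tying low-level analyser output to high-level event concepts.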
Video semantic content analysis based on ontology
The rapid increase in the amount of available video data is creating a growing demand for efficient methods for understanding and managing it at the semantic level. New multimedia standards, such as MPEG-4 and MPEG-7, provide the basic functionalities needed to manipulate and transmit objects and metadata. Importantly, however, most of the semantic-level content of video data is outside the scope of these standards. In this paper, a video semantic content analysis framework based on ontology is presented. A domain ontology is used to define high-level semantic concepts and their relations in the context of the examined domain, and low-level features (e.g. visual and aural) and video content analysis algorithms are integrated into the ontology to enrich video semantic analysis. OWL is used for the ontology description. Rules in Description Logic are defined to describe how features and algorithms for video analysis should be applied according to different perceptual content and low-level features. Temporal Description Logic is used to describe semantic events, and a reasoning algorithm is proposed for event detection. The proposed framework is demonstrated in the soccer video domain and shows promising results.
Identifying Web Tables - Supporting a Neglected Type of Content on the Web
The abundance of data on the Internet facilitates the improvement of
extraction and processing tools. The trend in open data publishing
encourages the adoption of structured formats like CSV and RDF. However,
there is still a plethora of unstructured data on the Web which we assume
contains semantics. For this reason, we propose an approach to derive
semantics from web tables, which are still the most popular publishing
tool on the Web. The paper also discusses methods and services for
unstructured data extraction and processing, as well as machine learning
techniques to enhance such a workflow. The eventual result is a framework
to process, publish and visualize linked open data. The software enables
table extraction from various open data sources in HTML format and
automatic export to RDF, making the data linked. The paper also gives an
evaluation of machine learning techniques in conjunction with string
similarity functions applied to a table recognition task.
Comment: 9 pages, 4 figures
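A minimal sketch of the HTML-table-to-RDF step, assuming a made-up namespace and treating the header row as predicates (this is an illustration of the general idea, not the paper's implementation):

```python
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect rows of cell text from the first <table> in a document."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = []
    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)
    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._row is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None
        elif tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None

def table_to_ntriples(html, base="http://example.org/"):
    """Emit one N-Triples line per cell; header cells become predicates."""
    parser = TableParser()
    parser.feed(html)
    header, *body = parser.rows
    triples = []
    for i, row in enumerate(body):
        subject = f"<{base}row/{i}>"
        for col, value in zip(header, row):
            pred = f"<{base}prop/{col.lower().replace(' ', '_')}>"
            triples.append(f'{subject} {pred} "{value}" .')
    return triples

html = """<table>
<tr><th>City</th><th>Country</th></tr>
<tr><td>Ghent</td><td>Belgium</td></tr>
</table>"""
triples = table_to_ntriples(html)
```

A real pipeline would additionally escape literals and link cell values to existing resources, which is where the string similarity functions mentioned above come in.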
Methodological considerations concerning manual annotation of musical audio in function of algorithm development
In research on musical audio-mining, annotated music databases are needed to allow the development of computational tools that extract from the musical audio stream the kind of high-level content that users can deal with in Music Information Retrieval (MIR) contexts. The notion of musical content, and therefore the notion of annotation, is ill-defined, however, both in the syntactic and the semantic sense. As a consequence, annotation has been approached from a variety of perspectives (though mainly linguistic-symbolic oriented), and a general methodology is lacking. This paper is a step towards the definition of a general framework for manual annotation of musical audio in support of a computational approach to musical audio-mining based on algorithms that learn from annotated data.
A Semantic Web Annotation Tool for a Web-Based Audio Sequencer
Music and sound have a rich semantic structure which is clear to the composer and the listener, but which remains mostly hidden to computing machinery. Nevertheless, in recent years, the introduction of software tools for music production has enabled new opportunities for migrating this knowledge from humans to machines. A new generation of these tools may exploit the coupling of sound samples and semantic information for the creation not only of a musical, but also of a "semantic" composition. In this paper we describe an ontology-driven content annotation framework for a web-based audio editing tool. In a supervised approach, during the editing process, the graphical web interface allows the user to annotate any part of the composition with concepts from publicly available ontologies. As a test case, we developed a collaborative web-based audio sequencer that provides users with the functionality to remix the audio samples from the Freesound website and subsequently annotate them. The annotation tool can load any ontology and thus gives users the opportunity to augment the work with annotations on the structure of the composition, the musical materials, and the creator's reasoning and intentions. We believe this approach will provide several novel ways to make not only the final audio product, but also the creative process, first-class citizens of the Semantic Web.
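The core data the tool must keep is a link between a region of the composition and an ontology concept. A minimal sketch, with made-up sample identifiers and concept URIs (the actual tool loads arbitrary ontologies and stores annotations as RDF):

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    sample_id: str   # e.g. an identifier for a Freesound sample
    start_s: float   # region start within the composition, in seconds
    end_s: float     # region end, in seconds
    concept: str     # URI of an ontology concept (placeholder namespace)

store = [
    Annotation("fs:12345", 0.0, 4.0, "http://example.org/onto#Intro"),
    Annotation("fs:67890", 4.0, 12.0, "http://example.org/onto#BassLine"),
]

def by_concept(store, concept_uri):
    """All annotated regions tagged with a given concept."""
    return [a for a in store if a.concept == concept_uri]

hits = by_concept(store, "http://example.org/onto#BassLine")
```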
The Semantic Web MIDI Tape: An Interface for Interlinking MIDI and Context Metadata
The Linked Data paradigm has been used to publish a large number of musical datasets and ontologies on the Semantic Web, such as MusicBrainz, AcousticBrainz, and the Music Ontology. Recently, the MIDI Linked Data Cloud has been added to these datasets, representing more than 300,000 pieces in MIDI format as Linked Data and opening up the possibility of linking fine-grained symbolic music representations to existing music metadata databases. Although the dataset makes MIDI resources available in Web data standard formats such as RDF and SPARQL, the important issue of finding meaningful links between these MIDI resources and relevant contextual metadata in other datasets remains. A fundamental barrier to the provision and generation of such links is the difficulty that users have in adding new MIDI performance data and metadata to the platform. In this paper, we propose the Semantic Web MIDI Tape, a set of tools and an associated interface for interacting with the MIDI Linked Data Cloud by enabling users to record, enrich, and retrieve MIDI performance data and related metadata in native Web data standards. The goal of such interactions is to find meaningful links between published MIDI resources and their relevant contextual metadata. We evaluate the Semantic Web MIDI Tape in various use cases involving user-contributed content, MIDI similarity querying, and entity recognition methods, and discuss their potential for finding links between MIDI resources and metadata.
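Publishing MIDI as RDF means retrieval reduces to SPARQL. The sketch below merely composes a query string; the `midi:` vocabulary terms and namespace are placeholders, not the dataset's actual schema:

```python
def pieces_with_min_events(n):
    """Build a SPARQL query for pieces with at least n note events.
    The midi: namespace and property names are illustrative placeholders."""
    return (
        "PREFIX midi: <http://example.org/midi#>\n"
        "SELECT ?piece (COUNT(?event) AS ?n)\n"
        "WHERE {\n"
        "  ?piece midi:hasTrack ?track .\n"
        "  ?track midi:hasEvent ?event .\n"
        "}\n"
        "GROUP BY ?piece\n"
        f"HAVING (COUNT(?event) >= {n})\n"
    )

query = pieces_with_min_events(100)
```

In practice such a query would be posted to the dataset's SPARQL endpoint, and the bound `?piece` URIs would be the candidates for linking to contextual metadata.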
Automatic Semantic Annotation of Music with Harmonic Structure
This paper presents an annotation model for the harmonic structure of a piece of music, and a rule system that supports the automatic generation of harmonic annotations. Musical structure has so far received relatively little attention in the context of musical metadata and annotation, although it is highly relevant for musicians, musicologists and, indirectly, for music listeners. Activities in semantic annotation of music have so far mostly concentrated on features derived from audio data and file-level metadata. We have implemented a model and rule system for harmonic annotation as a starting point for semantic annotation of musical structure. Our model is for the musical style of jazz, but the approach is not restricted to this style. The rule system describes a grammar that allows the fully automatic creation of a harmonic analysis as tree-structured annotations. We present a prototype ontology that defines the layers of harmonic analysis from chord symbols to the level of a complete piece. The annotation can be made on music in various formats, provided there is a way of addressing either chords or time points within the music. We argue that this approach, in connection with manual annotation, can support a number of application scenarios in music production, education, retrieval, and musicology.
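A toy stand-in for the lowest layer of such an analysis (not the paper's grammar): parse chord symbols into root and quality, then recognise a ii-V-I cadence, the kind of local pattern a jazz harmonic grammar would group into a subtree:

```python
import re

# Pitch classes for chord roots (enharmonic spellings share a value).
NOTES = {"C": 0, "C#": 1, "Db": 1, "D": 2, "Eb": 3, "E": 4, "F": 5,
         "F#": 6, "Gb": 6, "G": 7, "Ab": 8, "A": 9, "Bb": 10, "B": 11}

def parse(symbol):
    """Split a chord symbol like 'Dm7' into (pitch class, quality)."""
    m = re.match(r"([A-G][#b]?)(m7|7|maj7)", symbol)
    root, quality = m.groups()
    return NOTES[root], quality

def is_ii_V_I(c1, c2, c3):
    """True when the three chords descend in fifths with ii-V-I qualities."""
    (r1, q1), (r2, q2), (r3, q3) = map(parse, (c1, c2, c3))
    return ((q1, q2, q3) == ("m7", "7", "maj7")
            and (r1 - r2) % 12 == 7 and (r2 - r3) % 12 == 7)

found = is_ii_V_I("Dm7", "G7", "Cmaj7")
```

The paper's rule system generalises this idea into a full grammar whose derivations become the tree-structured annotations.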
Requirements for an Adaptive Multimedia Presentation System with Contextual Supplemental Support Media
Investigations into the requirements for a practical adaptive multimedia presentation system have led the writers to propose the use of a video segmentation process that provides contextual supplementary updates produced by users. Supplements consisting of tailored segments are dynamically inserted into previously stored material in response to questions from users. A proposal for the use of this technique is presented in the context of personalisation within a Virtual Learning Environment. During the investigation, a brief survey of advanced adaptive approaches revealed that adaptation may be enhanced by manually generated metadata, or by automated or semi-automated use of metadata stored in context-dependent ontology hierarchies that describe the semantics of the learning domain. The use of neural networks or fuzzy logic filtering is a technique for future investigation. A prototype demonstrator is under construction.