Analysis of video sequences: table of content and index creation
This paper deals with the representation of video sequences useful
for tasks such as long-term analysis, indexing or browsing. A Table
Of Content and index creation algorithm is presented, as well as
additional tools involved in their creation. The proposed method
does not assume any a priori knowledge about the content or the
structure of the video. It is therefore a generic technique. Some
examples are presented in order to assess the performance of the
algorithm.
Using term clouds to represent segment-level semantic content of podcasts
Spoken audio, like any time-continuous medium, is notoriously difficult to browse or skim without support of an interface providing semantically annotated jump points to signal the user where to listen in. Creation of time-aligned metadata by human annotators is prohibitively expensive, motivating the investigation of representations of segment-level semantic content based on transcripts
generated by automatic speech recognition (ASR). This paper
examines the feasibility of using term clouds to provide users with a structured representation of the semantic content of podcast episodes. Podcast episodes are visualized as a series of sub-episode segments, each represented by a term cloud derived from a transcript
generated by automatic speech recognition (ASR). Quality of
segment-level term clouds is measured quantitatively, and their utility is investigated using a small-scale user study based on human-labeled segment boundaries. Since the segment-level clouds generated from ASR transcripts prove useful, we examine an adaptation of text tiling techniques to speech in order to generate segments as part of a completely automated indexing and structuring system for browsing spoken audio. Results demonstrate that the generated segments are comparable with human-selected segment boundaries.
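The segment-level term clouds described above can be approximated by ranking each segment's transcript terms with TF-IDF. This is a minimal sketch under assumed choices (whitespace tokenization, standard TF-IDF weighting); the function name is illustrative and this is not the paper's exact scoring scheme.

```python
import math
from collections import Counter

def term_clouds(segments, top_k=5):
    """Return the top_k highest TF-IDF terms for each segment transcript.

    segments: list of transcript strings, one per sub-episode segment.
    A hedged sketch of segment-level term-cloud generation.
    """
    docs = [seg.lower().split() for seg in segments]
    n = len(docs)
    # Document frequency: number of segments containing each term.
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    clouds = []
    for doc in docs:
        tf = Counter(doc)
        # TF-IDF: terms common to all segments get weight 0 (log(n/n) = 0).
        scores = {t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()}
        ranked = sorted(scores, key=scores.get, reverse=True)
        clouds.append(ranked[:top_k])
    return clouds
```

Terms that occur in every segment (stopword-like words) receive zero weight automatically, so each cloud highlights what distinguishes its segment from the rest of the episode.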
The aceToolbox: low-level audiovisual feature extraction for retrieval and classification
In this paper we present an overview of a software platform
that has been developed within the aceMedia project,
termed the aceToolbox, that provides global and local low-level feature extraction from audio-visual content. The toolbox is based on the MPEG-7 eXperimental Model (XM),
with extensions to provide descriptor extraction from arbitrarily shaped image segments, thereby supporting local descriptors reflecting real image content. We describe the architecture of the toolbox and provide an overview of the descriptors supported to date. We also briefly describe the segmentation algorithm provided. We then demonstrate the usefulness of the toolbox in the context of two different content processing scenarios: similarity-based retrieval in large collections and scene-level classification of still images.
Associating characters with events in films
The work presented here combines the analysis of a film's audiovisual features with the analysis of an accompanying audio description. Specifically, we describe a technique for semantic-based indexing of feature films that associates character names with meaningful events. The technique fuses the results of event detection based on audiovisual features with the inferred on-screen presence of characters, based on an analysis of an audio description script. In an evaluation with 215 events from 11 films, the technique performed the character detection task with Precision = 93% and Recall = 71%. We then go on to show how novel access modes to film content are enabled by our analysis. The specific examples illustrated include video retrieval via a combination of event type and character name, and our first steps towards visualization of narrative and character interplay based on characters' occurrence and co-occurrence in events.
Robust audio indexing for Dutch spoken-word collections
Whereas the growth of storage capacity is in accordance with widely acknowledged predictions, the possibilities for indexing and accessing the archives created lag behind. This is especially the case in the oral history domain, and much of the rich content in these collections runs the risk of remaining inaccessible for lack of robust search technologies. This paper addresses the history and development of robust audio indexing technology for searching Dutch spoken-word collections and compares Dutch audio indexing in the well-studied broadcast news domain with an oral-history case study. It is concluded that despite significant advances in Dutch audio indexing technology and demonstrated applicability in several domains, further research is indispensable for successful automatic disclosure of spoken-word collections.
A Sub-block Based Image Retrieval Using Modified Integrated Region Matching
This paper proposes a content-based image retrieval (CBIR) system using the
local colour and texture features of selected image sub-blocks and global
colour and shape features of the image. The image sub-blocks are roughly
identified by segmenting the image into partitions of different configurations and
finding the edge density in each partition using edge thresholding followed by
morphological dilation. The colour and texture features of the identified
regions are computed from the histograms of the quantized HSV colour space and
Gray Level Co-occurrence Matrix (GLCM), respectively. The colour and texture
feature vectors are computed for each region. The shape features are computed
from the Edge Histogram Descriptor (EHD). A modified Integrated Region Matching
(IRM) algorithm is used for finding the minimum distance between the sub-blocks
of the query and target image. Experimental results show that the proposed
method provides better retrieval results than some of the existing methods.
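The GLCM texture features mentioned in the abstract above can be illustrated with a tiny pure-Python computation: build a co-occurrence matrix for horizontal neighbours and derive two common statistics, contrast and energy. This is a generic GLCM sketch under assumed settings (0° direction, distance 1, no symmetry), not the paper's exact feature set; the function name is illustrative.

```python
def glcm_features(img, levels):
    """Compute contrast and energy from a horizontal-neighbour GLCM.

    img: 2D list of quantized gray levels in range(levels).
    A minimal sketch of GLCM texture features.
    """
    # Count co-occurrences of gray levels (a, b) for pixel pairs at
    # distance 1 in the horizontal (0 degree) direction.
    glcm = [[0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
            total += 1
    # Normalize counts to joint probabilities.
    p = [[v / total for v in r] for r in glcm]
    # Contrast: weighted by squared gray-level difference.
    contrast = sum(p[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    # Energy: sum of squared probabilities (uniformity).
    energy = sum(v * v for r in p for v in r)
    return contrast, energy
```

A smooth region concentrates mass on the GLCM diagonal (low contrast, high energy), while a textured region spreads mass off-diagonal, which is what makes these statistics useful as texture descriptors.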