2,395 research outputs found

    Access to recorded interviews: A research agenda

    Recorded interviews form a rich basis for scholarly inquiry. Examples include oral histories, community memory projects, and interviews conducted for broadcast media. Emerging technologies offer the potential to radically transform the way in which recorded interviews are made accessible, but this vision will demand substantial investments from a broad range of research communities. This article reviews the present state of practice for making recorded interviews available and the state of the art in key component technologies. A large number of important research issues are identified, and from that set a coherent research agenda is proposed.

    Speaker segmentation and clustering

    This survey focuses on two challenging speech processing topics: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker clustering, deterministic and probabilistic algorithms are examined. A comparative assessment of the reviewed algorithms is undertaken: advantages and disadvantages are indicated, insight into the algorithms is offered, and deductions as well as recommendations are given. Rich transcription and movie analysis are candidate applications that benefit from combined speaker segmentation and clustering. © 2007 Elsevier B.V. All rights reserved.
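    To make the metric-based family concrete, below is a minimal sketch of Bayesian Information Criterion (BIC) change detection, a classic metric-based segmentation approach of the kind such surveys review. It assumes acoustic features (e.g. MFCCs) have already been extracted; the window length, hop size, penalty weight, and threshold are illustrative assumptions, not values taken from the survey.

```python
import numpy as np

def bic_change_score(X, Y, lam=1.0):
    """Delta-BIC for a hypothesised speaker change between feature blocks X and Y.

    X, Y: (n_frames, n_dims) arrays of acoustic features (e.g. MFCCs).
    A positive score favours modelling X and Y with two separate
    full-covariance Gaussians, i.e. a likely speaker change point.
    """
    Z = np.vstack([X, Y])
    nx, ny, nz = len(X), len(Y), len(Z)
    d = Z.shape[1]
    # Log-determinant of the maximum-likelihood covariance estimate.
    def logdet(A):
        return np.linalg.slogdet(np.cov(A, rowvar=False, bias=True))[1]
    # Likelihood gain of the two-model hypothesis over the single model...
    gain = 0.5 * (nz * logdet(Z) - nx * logdet(X) - ny * logdet(Y))
    # ...minus the BIC penalty for the extra Gaussian's parameters.
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(nz)
    return gain - penalty

def detect_changes(features, win=300, step=100, threshold=0.0):
    """Slide a window pair over the feature stream; report candidate change frames."""
    changes = []
    for t in range(win, len(features) - win, step):
        if bic_change_score(features[t - win:t], features[t:t + win]) > threshold:
            changes.append(t)
    return changes
```

    The segments delimited by the detected change points would then feed the clustering stage, which groups them by speaker.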

    Visual Information Retrieval in Digital Libraries

    The emergence of information highways and multimedia computing has resulted in redefining the concept of libraries. It is widely believed that in the next few years, a significant portion of information in libraries will be in the form of multimedia electronic documents. Many approaches are being proposed for storing, retrieving, assimilating, harvesting, and prospecting information from these multimedia documents. Digital libraries are expected to allow users to access information independent of the locations and types of data sources and will provide a unified picture of information. In this paper, we discuss requirements of these emerging information systems and present query methods and data models for these systems. Finally, we briefly present a few examples of approaches that provide a preview of how things will be done in the digital libraries in the near future.
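    As one concrete illustration of the content-based query methods such papers discuss, the sketch below ranks a small image library against a query image using global colour histograms and histogram intersection (the classic Swain and Ballard similarity). The in-memory library dictionary and the bin count are illustrative assumptions; a real digital library would index the features rather than recompute them per query.

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Global colour histogram of an RGB image array (H, W, 3), L1-normalised."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def histogram_intersection(h1, h2):
    """Swain & Ballard intersection: 1.0 means identical normalised histograms."""
    return np.minimum(h1, h2).sum()

def query_by_example(query_image, library, top_k=5):
    """Rank a {doc_id: image_array} library against a query image."""
    q = colour_histogram(query_image)
    scores = {doc_id: histogram_intersection(q, colour_histogram(img))
              for doc_id, img in library.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```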

    Semantic multimedia modelling & interpretation for annotation

    The emergence of multimedia-enabled devices, particularly the incorporation of cameras in mobile phones, and the rapid advances in low-cost storage have drastically boosted the rate of multimedia data production. Faced with such ubiquity of digital images and videos, the research community has turned its attention to their effective utilisation and management. Stored in vast multimedia corpora, digital data need to be retrieved and organised intelligently, drawing on the rich semantics involved, and the use of these image and video collections demands proficient annotation and retrieval techniques. Recently, the multimedia research community has been progressively shifting its emphasis to the personalisation of these media. The main impediment in image and video analysis is the semantic gap: the discrepancy between a user's high-level interpretation of an image or video and its low-level computational interpretation. Content-based image and video annotation systems are particularly susceptible to the semantic gap because they rely on low-level visual features to describe semantically rich content. However, visual similarity is not semantic similarity, so an alternative route through this dilemma is needed. The semantic gap can be narrowed by incorporating high-level and user-generated information in the annotation. High-level descriptions of images and videos capture the semantic meaning of multimedia content more effectively, but such information is not always available. It is commonly agreed that the problem of high-level semantic annotation of multimedia is still far from solved. This dissertation puts forward approaches for intelligent multimedia semantic extraction for high-level annotation, aiming to bridge the gap between visual features and semantics. It proposes a framework for annotation enhancement and refinement for object/concept-annotated image and video datasets. The overall approach is first to purge noisy keywords from the datasets and then to expand the concepts lexically and with commonsense knowledge, filling the vocabulary and lexical gap to achieve high-level semantics for the corpus. The dissertation also explores a novel approach for high-level semantic (HLS) propagation through image corpora. HLS propagation exploits semantic intensity (SI), a measure of how dominant each concept is within an image, together with annotation-based semantic similarity between images: an image combines various concepts, some more dominant than others, so the semantic similarity of a pair of images is computed from their SI values and the semantic similarity of their concepts. Moreover, HLS propagation uses clustering techniques to group similar images, so that a single effort by a human expert to assign high-level semantics to one randomly selected image propagates to the other images in its cluster. The investigation was carried out on the LabelMe image and LabelMe video datasets. Experiments show that the proposed approaches yield a noticeable improvement towards bridging the semantic gap and that the proposed system outperforms traditional systems.
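    The lexical expansion step is not specified in code here; the following is a hypothetical sketch of how noisy concept keywords could be expanded with synonyms and hypernyms from WordNet via NLTK. The function name and depth limit are illustrative assumptions; a real system would also disambiguate word senses and add commonsense relations.

```python
from nltk.corpus import wordnet as wn  # requires nltk plus the WordNet corpus data

def expand_concepts(keywords, max_hypernyms=2):
    """Lexically expand annotation keywords with synonyms and hypernyms.

    A rough stand-in for the kind of lexical/commonsense expansion the
    dissertation describes; real systems would also filter by word sense.
    """
    expanded = set(keywords)
    for word in keywords:
        for synset in wn.synsets(word, pos=wn.NOUN):
            # Synonyms from the same synset.
            expanded.update(l.name().replace('_', ' ') for l in synset.lemmas())
            # A few levels of more general (hypernym) concepts.
            for hyper in synset.hypernyms()[:max_hypernyms]:
                expanded.update(l.name().replace('_', ' ') for l in hyper.lemmas())
    return expanded

# e.g. expand_concepts({'car', 'road'}) may add 'automobile', 'motorcar', ...
```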

    Semi-automatic video object segmentation for multimedia applications

    A semi-automatic video object segmentation tool is presented for segmenting both still pictures and image sequences. The approach combines automatic segmentation algorithms with manual user interaction. The still image segmentation component comprises a conventional spatial segmentation algorithm (Recursive Shortest Spanning Tree (RSST)), a hierarchical segmentation representation method (Binary Partition Tree (BPT)), and user interaction. An initial segmentation partition of homogeneous regions is created using RSST. The BPT technique is then used to merge these regions and hierarchically represent the segmentation in a binary tree. Semantic objects are then built manually by selectively clicking on image regions. A video object-tracking component enables image sequence segmentation; this subsystem is based on motion estimation, spatial segmentation, object projection, region classification, and user interaction. The motion between the previous frame and the current frame is estimated, and the previous object is then projected onto the current partition. A region classification technique determines which regions in the current partition belong to the projected object. User interaction allows object re-initialisation when the segmentation results become inaccurate. The combination of all these components enables offline video sequence segmentation. The results presented on standard test sequences illustrate the potential use of this system for object-based coding and representation of multimedia.
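    The sketch below illustrates the bottom-up merging idea behind the RSST/BPT combination: starting from an initial partition of homogeneous regions, repeatedly merge the most similar adjacent pair and record each merge, which defines a binary partition tree. The region representation (mean colour plus pixel count) and the size-weighted colour cost are illustrative assumptions, not the paper's exact formulation.

```python
import heapq

def build_bpt(regions, adjacency):
    """Greedy bottom-up region merging in the spirit of RSST/BPT.

    regions:   {region_id (int): (mean_colour_tuple, pixel_count)}
    adjacency: iterable of (id_a, id_b) pairs for touching regions
    Returns the merge history [(new_id, child_a, child_b), ...],
    which defines the binary partition tree.
    """
    def cost(a, b):
        (ca, na), (cb, nb) = regions[a], regions[b]
        # Size-weighted squared colour distance (an RSST-like merge cost).
        return (na * nb / (na + nb)) * sum((x - y) ** 2 for x, y in zip(ca, cb))

    neighbours = {r: set() for r in regions}
    for a, b in adjacency:
        neighbours[a].add(b)
        neighbours[b].add(a)

    heap = [(cost(a, b), a, b) for a in neighbours for b in neighbours[a] if a < b]
    heapq.heapify(heap)
    next_id, history = max(regions) + 1, []

    while heap:
        _, a, b = heapq.heappop(heap)
        if a not in neighbours or b not in neighbours:
            continue  # stale edge: an endpoint was merged away earlier
        (ca, na), (cb, nb) = regions[a], regions[b]
        regions[next_id] = (tuple((x * na + y * nb) / (na + nb)
                                  for x, y in zip(ca, cb)), na + nb)
        # The merged region inherits both children's surviving neighbours.
        merged_nbrs = (neighbours.pop(a) | neighbours.pop(b)) - {a, b}
        for n in merged_nbrs:
            neighbours[n] -= {a, b}
            neighbours[n].add(next_id)
            heapq.heappush(heap, (cost(next_id, n), next_id, n))
        neighbours[next_id] = merged_nbrs
        history.append((next_id, a, b))
        next_id += 1
    return history
```

    In the tool described above, the user's clicks would then select subtrees of this hierarchy to assemble semantic objects.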

    Semantics of video shots for content-based retrieval

    Content-based video retrieval research combines expertise from many different areas, such as signal processing, machine learning, pattern recognition, and computer vision. As video extends into both the spatial and the temporal domain, we require techniques for the temporal decomposition of footage so that specific content can be accessed. This content may then be semantically classified - ideally in an automated process - to enable filtering, browsing, and searching. An important aspect that must be considered is that pictorial representation of information may be interpreted differently by individual users because it is less specific than its textual representation. In this thesis, we address several fundamental issues of content-based video retrieval for effective handling of digital footage. Temporal segmentation, the common first step in handling digital video, is the decomposition of video streams into smaller, semantically coherent entities. This is usually performed by detecting the transitions that separate single camera takes. While abrupt transitions - cuts - can be detected relatively well with existing techniques, effective detection of gradual transitions remains difficult. We present our approach to temporal video segmentation, proposing a novel algorithm that evaluates sets of frames using a relatively simple histogram feature. Our technique has been shown to rank among the best existing shot segmentation algorithms in large-scale evaluations. The next step is semantic classification of each video segment to generate an index for content-based retrieval in video databases. Machine learning techniques can be applied effectively to classify video content. However, these techniques require manually classified examples for training before automatic classification of unseen content can be carried out. Manually classifying training examples is not trivial because of the implied ambiguity of visual content. We propose an unsupervised learning approach based on latent class modelling in which we obtain multiple judgements per video shot and model the users' response behaviour over a large collection of shots. This technique yields a more generic classification of the visual content. Moreover, it enables quality assessment of the classification and maximises the number of training examples by resolving disagreement. We apply this approach to data from a large-scale, collaborative annotation effort and present ways to improve the effectiveness of manual annotation of visual content through better design and specification of the process. Automatic speech recognition techniques along with semantic classification of video content can be used to implement video search using textual queries. This requires the application of text search techniques to video and the combination of different information sources. We explore several text-based query expansion techniques for speech-based video retrieval, and propose a fusion method to improve overall effectiveness. To combine both text and visual search approaches, we explore a fusion technique that combines spoken information and visual information using semantic keywords automatically assigned to the footage based on the visual content. The techniques that we propose help to facilitate effective content-based video retrieval and highlight the importance of considering different user interpretations of visual content. This allows better understanding of video content and a more holistic approach to multimedia retrieval in the future.
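    The thesis's segmentation algorithm evaluates sets of frames; as a simpler illustration of the histogram feature it builds on, here is a sketch of pairwise histogram-difference cut detection. The bin count and threshold are illustrative assumptions, and, as the abstract notes, this pairwise form catches abrupt cuts but not the harder gradual transitions.

```python
import numpy as np

def frame_histogram(frame, bins=64):
    """Grey-level histogram of a frame (2-D uint8 array), L1-normalised."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def detect_cuts(frames, threshold=0.5):
    """Flag a cut wherever consecutive frame histograms differ sharply.

    Returns indices i where the L1 half-distance between the histograms
    of frames i-1 and i exceeds the threshold (distance lies in [0, 1]).
    """
    hists = [frame_histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if 0.5 * np.abs(hists[i] - hists[i - 1]).sum() > threshold]
```

    Detecting gradual transitions requires reasoning over windows of frames rather than single adjacent pairs, which is where the thesis's set-based evaluation comes in.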

    The Development and Utilization of a Radio Station in the Secondary School

    Not available

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Robust methods for Chinese spoken document retrieval

    Hui Pui Yu. Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 158-169). Abstracts in English and Chinese.

    Contents:
    Abstract
    Acknowledgements
    Chapter 1 - Introduction
        1.1 Spoken Document Retrieval
        1.2 The Chinese Language and Chinese Spoken Documents
        1.3 Motivation
            1.3.1 Assisting the User in Query Formation
        1.4 Goals
        1.5 Thesis Organization
    Chapter 2 - Multimedia Repository
        2.1 The Cantonese Corpus
            2.1.1 The RealMedia Collection
            2.1.2 The MPEG-1 Collection
        2.2 The Multimedia Markup Language
        2.3 Chapter Summary
    Chapter 3 - Monolingual Retrieval Task
        3.1 Properties of Cantonese Video Archive
        3.2 Automatic Speech Transcription
            3.2.1 Transcription of Cantonese Spoken Documents
            3.2.2 Indexing Units
        3.3 Known-Item Retrieval Task
            3.3.1 Evaluation - Average Inverse Rank
        3.4 Retrieval Model
        3.5 Experimental Results
        3.6 Chapter Summary
    Chapter 4 - The Use of Audio and Video Information for Monolingual Spoken Document Retrieval
        4.1 Video-based Segmentation
            4.1.1 Metric Computation
            4.1.2 Shot Boundary Detection
            4.1.3 Shot Transition Detection
        4.2 Audio-based Segmentation
            4.2.1 Gaussian Mixture Models
            4.2.2 Transition Detection
        4.3 Performance Evaluation
            4.3.1 Automatic Story Segmentation
            4.3.2 Video-based Segmentation Algorithm
            4.3.3 Audio-based Segmentation Algorithm
        4.4 Fusion of Video- and Audio-based Segmentation
        4.5 Retrieval Performance
        4.6 Chapter Summary
    Chapter 5 - Document Expansion for Monolingual Spoken Document Retrieval
        5.1 Document Expansion using Selected Field Speech Segments
            5.1.1 Annotations from MmML
            5.1.2 Selection of Cantonese Field Speech
            5.1.3 Re-weighting Different Retrieval Units
            5.1.4 Retrieval Performance with Document Expansion using Selected Field Speech
        5.2 Document Expansion using N-best Recognition Hypotheses
            5.2.1 Re-weighting Different Retrieval Units
            5.2.2 Retrieval Performance with Document Expansion using N-best Recognition Hypotheses
        5.3 Document Expansion using Selected Field Speech and N-best Recognition Hypotheses
            5.3.1 Re-weighting Different Retrieval Units
            5.3.2 Retrieval Performance with Different Indexed Units
        5.4 Chapter Summary
    Chapter 6 - Query Expansion for Cross-language Spoken Document Retrieval
        6.1 The TDT-2 Corpus
            6.1.1 English Textual Queries
            6.1.2 Mandarin Spoken Documents
        6.2 Query Processing
            6.2.1 Query Weighting
            6.2.2 Bigram Formation
        6.3 Cross-language Retrieval Task
            6.3.1 Indexing Units
            6.3.2 Retrieval Model
            6.3.3 Performance Measure
        6.4 Relevance Feedback
            6.4.1 Pseudo-Relevance Feedback
        6.5 Retrieval Performance
        6.6 Chapter Summary
    Chapter 7 - Conclusions and Future Work
        7.1 Future Work
    Appendix A - XML Schema for Multimedia Markup Language
    Appendix B - Example of Multimedia Markup Language
    Appendix C - Significance Tests
        C.1 Selection of Cantonese Field Speech Segments
        C.2 Fusion of Video- and Audio-based Segmentation
        C.3 Document Expansion with Reporter Speech
        C.4 Document Expansion with N-best Recognition Hypotheses
        C.5 Document Expansion with Reporter Speech and N-best Recognition Hypotheses
        C.6 Query Expansion with Pseudo Relevance Feedback
    Appendix D - Topic Descriptions of TDT-2 Corpus
    Appendix E - Speech Recognition Output from Dragon in CLSDR Task
    Appendix F - Parameters Estimation
        F.1 Estimating the Number of Relevant Documents, Nr
        F.2 Estimating the Number of Terms Added from Relevant Documents, Nrt, to Original Query
        F.3 Estimating the Number of Non-relevant Documents, Nn, from the Bottom-scoring Retrieval List
        F.4 Estimating the Number of Terms, Selected from Non-relevant Documents (Nnt), to be Removed from Original Query
    Appendix G - Abbreviations
    Bibliography
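    Chapter 3 evaluates the known-item retrieval task with Average Inverse Rank. Assuming one known relevant document per query, a minimal sketch of that measure (equivalent to mean reciprocal rank) looks like this; the function and variable names are illustrative:

```python
def average_inverse_rank(ranked_lists, relevant_ids):
    """Mean inverse rank of the single known item per query.

    ranked_lists: list of ranked document-id lists, one per query
    relevant_ids: the known relevant document id for each query
    A query whose known item is missing from the ranking contributes 0.
    """
    total = 0.0
    for ranking, rel in zip(ranked_lists, relevant_ids):
        if rel in ranking:
            total += 1.0 / (ranking.index(rel) + 1)
    return total / len(ranked_lists)

# e.g. average_inverse_rank([['d3', 'd1'], ['d2']], ['d1', 'd2']) == 0.75
```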