1,726 research outputs found

    An investigation into weighted data fusion for content-based multimedia information retrieval

    Get PDF
    Content Based Multimedia Information Retrieval (CBMIR) is characterised by the combination of noisy sources of information which, in unison, are able to achieve strong performance. In this thesis we focus on the combination of ranked results from the independent retrieval experts which comprise a CBMIR system through linearly weighted data fusion. The independent retrieval experts are low-level multimedia features, each of which contains an indexing function and a ranking algorithm. This thesis is comprised of two halves. In the first half, we perform a rigorous empirical investigation into the factors which impact upon performance in linearly weighted data fusion. In the second half, we leverage these findings to create a new class of weight generation algorithms for data fusion which are capable of determining weights at query time, such that the weights are topic dependent.
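The linearly weighted fusion described in this abstract can be sketched as follows; the expert names, scores, and weights below are invented for illustration and are not taken from the thesis.

```python
# Sketch of linearly weighted data fusion over ranked expert results.

def weighted_fusion(expert_scores, weights):
    """Combine per-expert relevance scores into one fused ranking.

    expert_scores: dict mapping expert name -> {doc_id: score}
    weights: dict mapping expert name -> linear weight
    Returns doc_ids sorted by fused score, highest first.
    """
    fused = {}
    for expert, scores in expert_scores.items():
        w = weights.get(expert, 0.0)
        for doc, s in scores.items():
            fused[doc] = fused.get(doc, 0.0) + w * s
    return sorted(fused, key=fused.get, reverse=True)

# Two noisy "experts" (e.g. a colour feature and an edge feature):
experts = {
    "colour": {"d1": 0.9, "d2": 0.4, "d3": 0.1},
    "edge":   {"d1": 0.2, "d2": 0.8, "d3": 0.6},
}
# Topic-dependent weights favouring the colour expert for this query:
ranking = weighted_fusion(experts, {"colour": 0.7, "edge": 0.3})
```

Determining the weight vector per topic at query time, rather than fixing it globally, is the contribution the second half of the thesis addresses.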

    Affective Music Information Retrieval

    Full text link
    Much of the appeal of music lies in its power to convey emotions/moods and to evoke them in listeners. In consequence, the past decade witnessed a growing interest in modeling emotions from musical signals in the music information retrieval (MIR) community. In this article, we present a novel generative approach to music emotion modeling, with a specific focus on the valence-arousal (VA) dimensional model of emotion. The presented generative model, called acoustic emotion Gaussians (AEG), better accounts for the subjectivity of emotion perception through the use of probability distributions. Specifically, it learns from the emotion annotations of multiple subjects a Gaussian mixture model in the VA space with prior constraints on the corresponding acoustic features of the training music pieces. Such a computational framework is technically sound, capable of learning in an online fashion, and thus applicable to a variety of applications, including user-independent (general) and user-dependent (personalized) emotion recognition and emotion-based music retrieval. We report evaluations of the aforementioned applications of AEG on a large-scale emotion-annotated corpus, AMG1608, to demonstrate the effectiveness of AEG and to showcase how evaluations are conducted for research on emotion-based MIR. Directions of future work are also discussed.
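A minimal sketch of the core idea of a Gaussian mixture over the VA plane; the component weights, means, and variances below are invented for illustration, not parameters learned by AEG from AMG1608.

```python
import math

def gauss2d(v, a, mu, var):
    """Diagonal-covariance 2D Gaussian density at the VA point (v, a)."""
    dv, da = v - mu[0], a - mu[1]
    norm = 2 * math.pi * math.sqrt(var[0] * var[1])
    return math.exp(-0.5 * (dv * dv / var[0] + da * da / var[1])) / norm

def mixture_density(v, a, components):
    """components: list of (weight, mean, variance) triples; weights sum to 1."""
    return sum(w * gauss2d(v, a, mu, var) for w, mu, var in components)

# Two hypothetical components: one near high valence/arousal ("happy"),
# one near low valence/arousal ("sad").
comps = [(0.6, (0.5, 0.5), (0.1, 0.1)),
         (0.4, (-0.5, -0.4), (0.2, 0.2))]
density = mixture_density(0.4, 0.45, comps)  # high density near the first mode
```

Representing a song's emotion as such a distribution, rather than a single VA point, is how a mixture model of this kind can capture listener disagreement.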

    Spoken content retrieval: A survey of techniques and technologies

    Get PDF
    Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio and in conversational settings. This survey provides an overview of the field of SCR, encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
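The IR half of an SCR pipeline can be sketched as plain TF-IDF retrieval over ASR transcripts; the transcripts and query here are invented examples, and a real system would also have to cope with recognition errors, e.g. via lattices or n-best hypotheses.

```python
import math
from collections import Counter

transcripts = {  # doc_id -> hypothetical 1-best ASR output
    "talk1": "speech retrieval indexes spoken audio content",
    "talk2": "video retrieval uses visual features",
}

def tfidf_scores(query, docs):
    """Score each transcript against the query with simple TF-IDF."""
    n = len(docs)
    tokens = {d: t.split() for d, t in docs.items()}
    df = Counter()                       # document frequency per term
    for words in tokens.values():
        df.update(set(words))
    scores = {}
    for doc, words in tokens.items():
        tf = Counter(words)              # term frequency in this transcript
        scores[doc] = sum(tf[q] * math.log(n / df[q])
                          for q in query.split() if df[q])
    return scores

scores = tfidf_scores("spoken audio", transcripts)
```

Only the indexing side differs from text IR: the "documents" are machine transcripts, so term statistics inherit the recogniser's errors.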

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Get PDF
    Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives that measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Word sense discrimination in information retrieval: a spectral clustering-based approach

    Get PDF
    Word sense ambiguity has been identified as a cause of poor precision in information retrieval (IR) systems. Word sense disambiguation and discrimination methods have been defined to help systems choose which documents should be retrieved in relation to an ambiguous query. However, the only approaches that show a genuine benefit for word sense discrimination or disambiguation in IR are generally supervised ones. In this paper we propose a new unsupervised method that uses word sense discrimination in IR. The method we develop is based on spectral clustering and reorders an initially retrieved document list by boosting documents that are semantically similar to the target query. For several TREC ad hoc collections we show that our method is useful for queries which contain ambiguous terms. We are interested in improving the level of precision after 5, 10 and 30 retrieved documents (P@5, P@10, P@30) respectively. We show that precision can be improved by 8% above current state-of-the-art baselines. We also focus on poorly performing queries.
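The P@5, P@10, and P@30 measures mentioned in this abstract reduce to one small function; the ranking and relevance judgements below are invented for illustration.

```python
def precision_at_k(ranked_docs, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for d in ranked_docs[:k] if d in relevant) / k

ranking = ["d3", "d1", "d7", "d2", "d9"]   # hypothetical run output
relevant = {"d1", "d2", "d5"}
p5 = precision_at_k(ranking, relevant, 5)  # 2 of the top 5 are relevant
```

A reranking method like the one proposed improves these scores by moving relevant documents above non-relevant ones within the initially retrieved list.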

    Semantics of video shots for content-based retrieval

    Get PDF
    Content-based video retrieval research combines expertise from many different areas, such as signal processing, machine learning, pattern recognition, and computer vision. As video extends into both the spatial and the temporal domain, we require techniques for the temporal decomposition of footage so that specific content can be accessed. This content may then be semantically classified - ideally in an automated process - to enable filtering, browsing, and searching. An important aspect that must be considered is that pictorial representation of information may be interpreted differently by individual users because it is less specific than its textual representation. In this thesis, we address several fundamental issues of content-based video retrieval for effective handling of digital footage. Temporal segmentation, the common first step in handling digital video, is the decomposition of video streams into smaller, semantically coherent entities. This is usually performed by detecting the transitions that separate single camera takes. While abrupt transitions - cuts - can be detected relatively well with existing techniques, effective detection of gradual transitions remains difficult. We present our approach to temporal video segmentation, proposing a novel algorithm that evaluates sets of frames using a relatively simple histogram feature. Our technique has been shown to rank among the best existing shot segmentation algorithms in large-scale evaluations. The next step is semantic classification of each video segment to generate an index for content-based retrieval in video databases. Machine learning techniques can be applied effectively to classify video content. However, these techniques require manually classified examples for training before automatic classification of unseen content can be carried out. Manually classifying training examples is not trivial because of the implied ambiguity of visual content.
We propose an unsupervised learning approach based on latent class modelling in which we obtain multiple judgements per video shot and model the users' response behaviour over a large collection of shots. This technique yields a more generic classification of the visual content. Moreover, it enables the quality assessment of the classification, and maximises the number of training examples by resolving disagreement. We apply this approach to data from a large-scale, collaborative annotation effort and present ways to improve the effectiveness for manual annotation of visual content by better design and specification of the process. Automatic speech recognition techniques along with semantic classification of video content can be used to implement video search using textual queries. This requires the application of text search techniques to video and the combination of different information sources. We explore several text-based query expansion techniques for speech-based video retrieval, and propose a fusion method to improve overall effectiveness. To combine both text and visual search approaches, we explore a fusion technique that combines spoken information and visual information using semantic keywords automatically assigned to the footage based on the visual content. The techniques that we propose help to facilitate effective content-based video retrieval and highlight the importance of considering different user interpretations of visual content. This allows better understanding of video content and a more holistic approach to multimedia retrieval in the future.
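The general idea behind histogram-based cut detection can be sketched as below; the frames, bin counts, and threshold are invented for illustration and this is not the thesis's specific algorithm, which evaluates sets of frames rather than single consecutive pairs.

```python
def l1_distance(h1, h2):
    """L1 distance between two equally sized colour histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_cuts(histograms, threshold):
    """Report frame indices where consecutive histograms differ sharply."""
    return [i for i in range(1, len(histograms))
            if l1_distance(histograms[i - 1], histograms[i]) > threshold]

# Four toy 4-bin histograms; the scene changes between frames 1 and 2.
frames = [[8, 2, 0, 0], [7, 3, 0, 0], [0, 0, 9, 1], [0, 1, 8, 1]]
cuts = detect_cuts(frames, threshold=6)
```

Gradual transitions are harder precisely because no single pairwise difference exceeds the threshold, which motivates evaluating longer frame windows as the thesis proposes.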

    Robust methods for Chinese spoken document retrieval.

    Get PDF
    Hui Pui Yu. Thesis (M.Phil.), Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 158-169). Abstracts in English and Chinese. Contents:
    Chapter 1: Introduction (Spoken Document Retrieval; The Chinese Language and Chinese Spoken Documents; Motivation, including Assisting the User in Query Formation; Goals; Thesis Organization)
    Chapter 2: Multimedia Repository (The Cantonese Corpus: the RealMedia and MPEG-1 collections; The Multimedia Markup Language)
    Chapter 3: Monolingual Retrieval Task (Properties of the Cantonese Video Archive; Automatic Speech Transcription: transcription of Cantonese spoken documents, indexing units; Known-Item Retrieval Task, evaluated by Average Inverse Rank; Retrieval Model; Experimental Results)
    Chapter 4: The Use of Audio and Video Information for Monolingual Spoken Document Retrieval (Video-based Segmentation: metric computation, shot boundary detection, shot transition detection; Audio-based Segmentation: Gaussian mixture models, transition detection; Performance Evaluation, including automatic story segmentation; Fusion of Video- and Audio-based Segmentation; Retrieval Performance)
    Chapter 5: Document Expansion for Monolingual Spoken Document Retrieval (expansion using Selected Field Speech Segments with annotations from MmML, using N-best Recognition Hypotheses, and using both combined, each with re-weighting of the different retrieval units and evaluation of retrieval performance)
    Chapter 6: Query Expansion for Cross-language Spoken Document Retrieval (The TDT-2 Corpus: English textual queries, Mandarin spoken documents; Query Processing: query weighting, bigram formation; Cross-language Retrieval Task: indexing units, retrieval model, performance measure; Relevance Feedback, including Pseudo-Relevance Feedback; Retrieval Performance)
    Chapter 7: Conclusions and Future Work
    Appendices: A. XML Schema for the Multimedia Markup Language; B. Example of Multimedia Markup Language; C. Significance Tests; D. Topic Descriptions of the TDT-2 Corpus; E. Speech Recognition Output from Dragon in the CLSDR Task; F. Parameter Estimation (estimating the numbers of relevant and non-relevant documents and of terms added to or removed from the original query); G. Abbreviations. Bibliography.
