
    Embedding Web-based Statistical Translation Models in Cross-Language Information Retrieval

    Although more and more language pairs are covered by machine translation services, there are still many pairs that lack translation resources. Cross-language information retrieval (CLIR) is an application that needs translation functionality of relatively low sophistication, since current models for information retrieval (IR) are still based on bag-of-words representations. The Web provides a vast resource for the automatic construction of parallel corpora, which can be used to train statistical translation models automatically. The resulting translation models can be embedded in several ways in a retrieval model. In this paper, we investigate the problem of automatically mining parallel texts from the Web and different ways of integrating the translation models within the retrieval process. Our experiments on standard test collections for CLIR show that the Web-based translation models can surpass commercial MT systems in CLIR tasks. These results open the prospect of constructing a fully automatic query translation device for CLIR at very low cost.
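    To make the embedding idea concrete, the sketch below shows one common way a word-level translation model can be folded into a bag-of-words retrieval model: each source-language query term is expanded into its candidate translations, weighted by translation probability. The translation table, threshold, and example terms are illustrative assumptions, not the paper's actual data or its specific integration scheme.

```python
# Minimal sketch (assumptions only): a word-level translation probability
# table, as might be estimated from a web-mined parallel corpus, is used to
# build a weighted bag-of-words query in the document language.
from collections import defaultdict

# p(target_word | source_word); the values here are invented for illustration.
translation_table = {
    "poisson": {"fish": 0.7, "poison": 0.2, "pisces": 0.1},
    "rivière": {"river": 0.9, "stream": 0.1},
}

def translate_query(query_terms, trans_table, prob_threshold=0.05):
    """Expand each source term into weighted target-language terms."""
    weighted_query = defaultdict(float)
    for term in query_terms:
        for target, prob in trans_table.get(term, {}).items():
            if prob >= prob_threshold:
                weighted_query[target] += prob
    return dict(weighted_query)

print(translate_query(["poisson", "rivière"], translation_table))
# {'fish': 0.7, 'poison': 0.2, 'pisces': 0.1, 'river': 0.9, 'stream': 0.1}
```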

    Spoken content retrieval: A survey of techniques and technologies

    Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio and in conversational settings. This survey provides an overview of the field of SCR, encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
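    As a rough illustration of the basic SCR pipeline this survey covers, the sketch below indexes invented one-best ASR transcripts with standard text-IR machinery and ranks them against a text query. Real SCR systems additionally have to cope with recognition errors, lattices, and segmentation, which this toy example ignores.

```python
# Minimal sketch: treat ASR transcripts as text documents, index them with
# TF-IDF, and rank them by cosine similarity to a text query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

asr_transcripts = [  # invented one-best recognition outputs
    "the prime minister announced new climate targets today",
    "in this lecture we cover gaussian mixture models for speaker clustering",
    "weather forecast heavy rain expected over the weekend",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(asr_transcripts)      # indexing step

query_vector = vectorizer.transform(["gaussian mixture model lecture"])
scores = cosine_similarity(query_vector, doc_matrix).ravel()
ranking = scores.argsort()[::-1]                            # retrieval step
print([(int(i), round(float(scores[i]), 3)) for i in ranking])
```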

    The Role of Text Pre-processing in Sentiment Analysis

    It is challenging to understand the latest trends and to summarise general opinions about products, given the great diversity and volume of social media data, and this creates a need for automated, real-time opinion extraction and mining. Mining online opinion is a form of sentiment analysis that is treated as a difficult text classification task. In this paper, we explore the role of text pre-processing in sentiment analysis, and report experimental results demonstrating that, with appropriate feature selection and representation, sentiment analysis accuracies using support vector machines (SVM) in this area can be significantly improved. The level of accuracy achieved is shown to be comparable to that achieved in topic categorisation, although sentiment analysis is considered in the literature to be a much harder problem.
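    The sketch below illustrates the kind of experimental setup this line of work describes: pre-processing and feature representation choices feeding an SVM sentiment classifier. The toy corpus and the specific options (lowercasing, English stop-word removal, unigram and bigram features) are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch: text pre-processing + TF-IDF features + a linear SVM
# sentiment classifier, on a tiny invented training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "absolutely loved this phone, great battery life",
    "terrible screen and the camera is disappointing",
    "fantastic value, would buy again",
    "awful build quality, broke within a week",
]
train_labels = ["pos", "neg", "pos", "neg"]

model = make_pipeline(
    # pre-processing and representation choices live in the vectorizer
    TfidfVectorizer(lowercase=True, stop_words="english", ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(train_texts, train_labels)
print(model.predict(["the battery is great but the camera is awful"]))
```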

    Robust methods for Chinese spoken document retrieval.

    Hui Pui Yu. Thesis (M.Phil.), Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 158-169). Abstracts in English and Chinese. Contents:
    Abstract; Acknowledgements
    Chapter 1, Introduction: Spoken Document Retrieval; The Chinese Language and Chinese Spoken Documents; Motivation (Assisting the User in Query Formation); Goals; Thesis Organization
    Chapter 2, Multimedia Repository: The Cantonese Corpus (The RealMedia Collection; The MPEG-1 Collection); The Multimedia Markup Language; Chapter Summary
    Chapter 3, Monolingual Retrieval Task: Properties of Cantonese Video Archive; Automatic Speech Transcription (Transcription of Cantonese Spoken Documents; Indexing Units); Known-Item Retrieval Task (Evaluation: Average Inverse Rank); Retrieval Model; Experimental Results; Chapter Summary
    Chapter 4, The Use of Audio and Video Information for Monolingual Spoken Document Retrieval: Video-based Segmentation (Metric Computation; Shot Boundary Detection; Shot Transition Detection); Audio-based Segmentation (Gaussian Mixture Models; Transition Detection); Performance Evaluation (Automatic Story Segmentation; Video-based Segmentation Algorithm; Audio-based Segmentation Algorithm); Fusion of Video- and Audio-based Segmentation; Retrieval Performance; Chapter Summary
    Chapter 5, Document Expansion for Monolingual Spoken Document Retrieval: Document Expansion using Selected Field Speech Segments (Annotations from MmML; Selection of Cantonese Field Speech; Re-weighting Different Retrieval Units; Retrieval Performance with Document Expansion using Selected Field Speech); Document Expansion using N-best Recognition Hypotheses (Re-weighting Different Retrieval Units; Retrieval Performance with Document Expansion using N-best Recognition Hypotheses); Document Expansion using Selected Field Speech and N-best Recognition Hypotheses (Re-weighting Different Retrieval Units; Retrieval Performance with Different Indexed Units); Chapter Summary
    Chapter 6, Query Expansion for Cross-language Spoken Document Retrieval: The TDT-2 Corpus (English Textual Queries; Mandarin Spoken Documents); Query Processing (Query Weighting; Bigram Formation); Cross-language Retrieval Task (Indexing Units; Retrieval Model; Performance Measure); Relevance Feedback (Pseudo-Relevance Feedback); Retrieval Performance; Chapter Summary
    Chapter 7, Conclusions and Future Work: Future Work
    Appendix A: XML Schema for Multimedia Markup Language
    Appendix B: Example of Multimedia Markup Language
    Appendix C: Significance Tests (Selection of Cantonese Field Speech Segments; Fusion of Video- and Audio-based Segmentation; Document Expansion with Reporter Speech; Document Expansion with N-best Recognition Hypotheses; Document Expansion with Reporter Speech and N-best Recognition Hypotheses; Query Expansion with Pseudo Relevance Feedback)
    Appendix D: Topic Descriptions of TDT-2 Corpus
    Appendix E: Speech Recognition Output from Dragon in CLSDR Task
    Appendix F: Parameters Estimation (Estimating the Number of Relevant Documents, Nr; Estimating the Number of Terms Added from Relevant Documents, Nrt, to the Original Query; Estimating the Number of Non-relevant Documents, Nn, from the Bottom-scoring Retrieval List; Estimating the Number of Terms Selected from Non-relevant Documents, Nnt, to be Removed from the Original Query)
    Appendix G: Abbreviations
    Bibliography

    Positive and Negative Sentiment Words in a Blog Corpus Written in Hebrew

    In this research, given a corpus of blog posts written in Hebrew and two seed sentiment lists, we analyze the positive and negative sentences included in the corpus, and the special groups of words that are associated with the positive and negative seed words. We discovered many new negative words (around half of the top 50 words) but only one positive word. Among the top words associated with the positive seed words, we discovered various first-person and third-person pronouns. Intensifiers were found for both the positive and negative seed words. Most of the corpus's sentences are neutral; of the rest, the rate of positive sentences is above 80%. The sentiment scores of the top words associated with the positive words are significantly higher than those of the top words associated with the negative words. Our conclusions are as follows. Positive sentences refer more to the authors themselves (first-person pronouns and related words) and are also more general, e.g., more related to other people (third-person pronouns), while negative sentences are much more concentrated on negative things and therefore contain many new negative words. Israeli bloggers tend to use intensifiers to emphasize or even exaggerate their sentiment (both positive and negative). These bloggers not only write many more positive sentences than negative ones, but also write much longer positive sentences than negative ones.
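    As a minimal illustration of how words can be found to be "associated with" positive or negative seed lists, the sketch below counts sentence-level co-occurrence with each seed set. The abstract does not state the association measure actually used, so plain co-occurrence counts, and the toy English sentences, are assumptions for illustration only.

```python
# Minimal sketch: for every non-seed word, count how many sentences it shares
# with positive seeds and with negative seeds.
from collections import Counter

positive_seeds = {"good", "happy"}
negative_seeds = {"bad", "sad"}

sentences = [  # invented examples standing in for blog-post sentences
    "i had a good day and i feel happy",
    "this was a bad week and i am sad and tired",
    "my friends are happy about the good news",
]

pos_assoc, neg_assoc = Counter(), Counter()
for sentence in sentences:
    words = set(sentence.split())
    targets = words - positive_seeds - negative_seeds
    if words & positive_seeds:
        pos_assoc.update(targets)   # co-occurs with a positive seed
    if words & negative_seeds:
        neg_assoc.update(targets)   # co-occurs with a negative seed

print(pos_assoc.most_common(5))
print(neg_assoc.most_common(5))
```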