
    Sub-word indexing and blind relevance feedback for English, Bengali, Hindi, and Marathi IR

    Get PDF
    The Forum for Information Retrieval Evaluation (FIRE) provides document collections, topics, and relevance assessments for information retrieval (IR) experiments on Indian languages. This paper explores several research questions: 1. how to create a simple, language-independent corpus-based stemmer, 2. how to identify sub-words and which types of sub-words are suitable as indexing units, and 3. how to apply blind relevance feedback on sub-words and how feedback term selection is affected by the type of indexing unit. More than 140 IR experiments are conducted using the BM25 retrieval model on the topic titles and descriptions (TD) for the FIRE 2008 English, Bengali, Hindi, and Marathi document collections. The major findings are: the corpus-based stemming approach is effective as a knowledge-light term conflation step and useful when few language-specific resources are available. For English, the corpus-based stemmer performs nearly as well as the Porter stemmer and, when combined with query expansion, significantly better than the baseline of indexing words. In combination with blind relevance feedback, it also performs significantly better than the baseline for Bengali and Marathi IR. Sub-words such as consonant-vowel sequences and word prefixes can yield similar or better performance than word indexing. No single method performs best for all languages: for English, indexing with the Porter stemmer performs best; for Bengali and Marathi, overlapping 3-grams obtain the best result; and for Hindi, 4-prefixes yield the highest MAP. However, in combination with blind relevance feedback using 10 documents and 20 terms, 6-prefixes for English and 4-prefixes for Bengali, Hindi, and Marathi yield the highest MAP. Sub-word identification is a general case of decompounding: it produces one or more index terms for a single word form, increasing the number of index terms while decreasing their average length.
The corresponding retrieval experiments show that relevance feedback on sub-words benefits from selecting a larger number of index terms than retrieval on word forms. Similarly, choosing the number of relevance feedback terms according to the ratio of word vocabulary size to sub-word vocabulary size almost always slightly increases retrieval effectiveness compared to using a fixed number of terms across languages.
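The two main sub-word indexing units discussed above, overlapping character n-grams and fixed-length prefixes, can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation; the function names and the padding-free n-gram definition are assumptions.

```python
def char_ngrams(word, n=3):
    """Overlapping character n-grams of a word form (no boundary padding)."""
    if len(word) <= n:
        return [word]
    return [word[i:i + n] for i in range(len(word) - n + 1)]

def prefix(word, k=4):
    """k-prefix indexing unit: the word truncated to its first k characters."""
    return word[:k]

# A single word form yields several shorter index terms, which is why
# sub-word indexing grows the index while shrinking average term length.
print(char_ngrams("retrieval"))  # ['ret', 'etr', 'tri', 'rie', 'iev', 'eva', 'val']
print(prefix("retrieval"))       # 'retr'
```

Note how one word produces seven 3-gram terms: this is the increase in index terms (and decrease in average term length) that the abstract attributes to sub-word identification.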

    A study on n-gram indexing of musical features

    Get PDF
    Because it requires only simple symbol-based manipulations, n-gram indexing is used for natural languages where syntactic or semantic analyses are often difficult. Music is similar to natural languages in this respect: automatic analysis of patterns such as motifs and phrases is difficult, inaccurate, or computationally expensive. The use of n-grams in music retrieval systems is thus a natural choice. In this paper, we study a number of issues regarding n-gram indexing of musical features using simulated queries: whether combinatorial explosion is a problem in n-gram indexing of musical features, the relative discrimination power of six different musical features, the value of n needed for them, and the average amount of false positives returned when n-grams are used to index music.
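As a concrete illustration of n-gram indexing over a musical feature, the sketch below derives one common feature, the pitch-interval sequence, from a melody and takes overlapping n-grams over it as index terms. The abstract studies six features without naming them here, so the choice of intervals and the function names are assumptions.

```python
def intervals(pitches):
    """Melodic feature: successive pitch intervals in semitones."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def ngrams(seq, n):
    """Overlapping n-grams over a feature sequence, as hashable index terms."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

# MIDI pitches for the opening of "Twinkle Twinkle": C C G G A A G
melody = [60, 60, 67, 67, 69, 69, 67]
feature = intervals(melody)       # [0, 7, 0, 2, 0, -2]
index_terms = ngrams(feature, 3)  # [(0, 7, 0), (7, 0, 2), (0, 2, 0), (2, 0, -2)]
```

Interval n-grams are transposition-invariant, which is one reason interval-like features tend to have different discrimination power than raw pitch sequences.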

    Japanese/English Cross-Language Information Retrieval: Exploration of Query Translation and Transliteration

    Full text link
    Cross-language information retrieval (CLIR), where queries and documents are in different languages, has of late become one of the major topics within the information retrieval community. This paper proposes a Japanese/English CLIR system that combines query translation and retrieval modules. We currently target the retrieval of technical documents, and therefore the performance of our system is highly dependent on the quality of the translation of technical terms. However, technical term translation remains problematic: technical terms are often compound words, and new terms are progressively created by combining existing base words. In addition, Japanese often represents loanwords in its special phonogram. Consequently, existing dictionaries struggle to achieve sufficient coverage. To counter the first problem, we produce a Japanese/English dictionary for base words and translate compound words on a word-by-word basis, using a probabilistic method to resolve translation ambiguity. For the second problem, we use a transliteration method, which maps words unlisted in the base word dictionary to their phonetic equivalents in the target language. We evaluate our system using a test collection for CLIR, and show that both the compound word translation and transliteration methods improve the system performance.
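The word-by-word compound translation with probabilistic disambiguation described above can be sketched as follows. The dictionary entries, the romanized Japanese base words, and the toy probability model are all hypothetical; the paper's actual probability estimation is not reproduced here.

```python
from itertools import product

# Hypothetical base-word dictionary and candidate-phrase probabilities.
base_dict = {"jouhou": ["information"], "kensaku": ["retrieval", "search"]}
phrase_prob = {"information retrieval": 0.8, "information search": 0.2}

def translate_compound(parts):
    """Translate a compound word-by-word, then pick the most probable
    combination of per-word candidate translations."""
    candidates = [" ".join(c) for c in product(*(base_dict[p] for p in parts))]
    return max(candidates, key=lambda c: phrase_prob.get(c, 0.0))

print(translate_compound(["jouhou", "kensaku"]))  # information retrieval
```

The key idea is that ambiguity is resolved over whole candidate phrases rather than per word, so context from the other components of the compound influences each word's translation.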

    Searching strategies for the Bulgarian language

    Get PDF
    This paper reports on the underlying IR problems encountered when indexing and searching with the Bulgarian language. For this language we propose a general light stemmer and demonstrate that it can be quite effective, producing significantly better MAP (around +34%) than an approach without stemming. We implement the GL2 model derived from the Divergence from Randomness paradigm and find its retrieval effectiveness better than other probabilistic, vector-space, and language models. The resulting MAP is found to be about 50% better than the classical tf-idf approach. Moreover, increasing the query size enhances the MAP by around 10% (from T to TD). In order to compare the retrieval effectiveness of our suggested stopword list and the light stemmer developed for the Bulgarian language, we conduct a set of experiments on another stopword list and also a more complex and aggressive stemmer. Results tend to indicate that there is no statistically significant difference between these variants and our suggested approach. This paper evaluates other indexing strategies such as 4-gram indexing and indexing based on the automatic decompounding of compound words. Finally, we analyze certain queries to discover why we obtained poor results when indexing Bulgarian documents using the suggested word-based approach.
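A light stemmer of the kind proposed above typically strips a small set of inflectional suffixes while protecting a minimum stem length. The sketch below shows the general shape of such a stemmer; the suffix list is purely illustrative (romanized, and NOT the actual Bulgarian rule set from the paper).

```python
# Illustrative suffix list for a light stemmer, longest-match first.
# These romanized endings are placeholders, not the paper's Bulgarian rules.
SUFFIXES = sorted(["ite", "ta", "to", "at", "a", "i"], key=len, reverse=True)

def light_stem(word, min_stem=3):
    """Strip the longest matching suffix, keeping at least min_stem chars."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= min_stem:
            return word[:-len(suf)]
    return word

print(light_stem("knigite"))  # 'knig' (strips the longest match, 'ite')
```

The minimum-stem-length guard is what keeps a "light" stemmer conservative compared to the more aggressive stemmer the paper uses as a comparison point.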

    Sentiment Polarity Classification of Comments on Korean News Articles Using Feature Reweighting

    Get PDF
    ์ผ๋ฐ˜์ ์œผ๋กœ ์ธํ„ฐ๋„ท ์‹ ๋ฌธ ๊ธฐ์‚ฌ์— ๋Œ€ํ•œ ๋Œ“๊ธ€์€ ๊ทธ ์‹ ๋ฌธ ๊ธฐ์‚ฌ์— ๋Œ€ํ•œ ์ฃผ๊ด€์ ์ธ ๊ฐ์ •์ด๋‚˜ ์˜๊ฒฌ์„ ํฌํ•จํ•˜๊ณ  ์žˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด๋Ÿฐ ์‹ ๋ฌธ ๊ธฐ์‚ฌ์˜ ๋Œ“๊ธ€์— ๋Œ€ํ•œ ๊ฐ์ •์„ ์ธ์‹ํ•˜๊ณ  ๋ถ„๋ฅ˜ํ•˜๋Š” ๋ฐ์—๋Š” ๊ทธ ์‹ ๋ฌธ ๊ธฐ์‚ฌ์˜ ์›๋ฌธ ๋‚ด์šฉ์ด ์ค‘์š”ํ•œ ์˜ํ–ฅ์„ ๋ฏธ์นœ๋‹ค. ์ด๋Ÿฐ ์ ์— ์ฐฉ์•ˆํ•˜์—ฌ ๋ณธ ๋…ผ๋ฌธ์€ ๊ธฐ์‚ฌ์˜ ์›๋ฌธ ๋‚ด์šฉ๊ณผ ๊ฐ์ • ์‚ฌ์ „์„ ์ด์šฉํ•˜๋Š” ๊ฐ€์ค‘์น˜ ์กฐ์ • ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•˜๊ณ , ์ œ์•ˆ๋œ ๊ฐ€์ค‘์น˜ ์กฐ์ • ๋ฐฉ๋ฒ•์„ ์ด์šฉํ•ด์„œ ํ•œ๊ตญ์–ด ์‹ ๋ฌธ ๊ธฐ์‚ฌ์˜ ๋Œ“๊ธ€์— ๋Œ€ํ•œ ๊ฐ์ • ์ด์ง„ ๋ถ„๋ฅ˜ ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ๊ฐ€์ค‘์น˜ ์กฐ์ • ๋ฐฉ๋ฒ•์—๋Š” ๋‹ค์–‘ํ•œ ์ž์งˆ ์ง‘ํ•ฉ์ด ์‚ฌ์šฉ๋˜๋Š”๋ฐ ๊ทธ๊ฒƒ์€ ๋Œ“๊ธ€์— ํฌํ•จ๋œ ๊ฐ์ • ๋‹จ์–ด, ๊ทธ๋ฆฌ๊ณ  ๊ฐ์ • ์‚ฌ์ „๊ณผ ๋‰ด์Šค ๊ธฐ์‚ฌ์˜ ๋ณธ๋ฌธ์— ๊ด€๋ จ๋œ ์ž์งˆ๋“ค, ๋งˆ์ง€๋ง‰์œผ๋กœ ๋‰ด์Šค ๊ธฐ์‚ฌ์˜ ์นดํ…Œ๊ณ ๋ฆฌ ์ •๋ณด๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ๋‹ค. ์—ฌ๊ธฐ์„œ ๋งํ•˜๋Š” ๊ฐ์ • ์‚ฌ์ „์€ ํ•œ๊ตญ์–ด ๊ฐ์ • ์‚ฌ์ „์„ ์˜๋ฏธํ•˜๋ฉฐ ์•„์ง ๊ณต๊ฐœ๋œ ๊ฒƒ์ด ์—†๊ธฐ ๋•Œ๋ฌธ์—, ๊ธฐ์กด์— ์žˆ๋Š” ์˜์–ด ๊ฐ์ • ์‚ฌ์ „์„ ์ด์šฉํ•˜์—ฌ ๊ตฌ์ถ•ํ•˜์˜€๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ ์ œ์•ˆ๋œ ๊ฐ์ • ์ด์ง„ ๋ถ„๋ฅ˜๋Š” ๊ธฐ๊ณ„ ํ•™์Šต์„ ์ด์šฉํ•œ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ๊ธฐ๊ณ„ ํ•™์Šต์„ ์œ„ํ•ด์„œ๋Š” ํ•™์Šต ๋ง๋ญ‰์น˜๊ฐ€ ํ•„์š”ํ•œ๋ฐ ํŠน๋ณ„ํžˆ ๊ฐ์ • ๋ถ„๋ฅ˜ ๋ฌธ์ œ์—์„œ๋Š” ๊ธ์ • ํ˜น์€ ๋ถ€์ • ๊ฐ์ • ํƒœ๊ทธ๊ฐ€ ๋ถ€์ฐฉ๋œ ๋ง๋ญ‰์น˜๊ฐ€ ํ•„์š”ํ•˜๋‹ค. ์ด ๋ง๋ญ‰์น˜์˜ ๊ฒฝ์šฐ๋„, ๊ณต๊ฐœ๋œ ํ•œ๊ตญ์–ด ๊ฐ์ • ๋ง๋ญ‰์น˜๊ฐ€ ์•„์ง ์—†๊ธฐ ๋•Œ๋ฌธ์— ๋ง๋ญ‰์น˜๋ฅผ ์ง์ ‘ ๊ตฌ์ถ•ํ•˜์˜€๋‹ค. ์‚ฌ์šฉ๋œ ๊ธฐ๊ณ„ ํ•™์Šต ๋ฐฉ๋ฒ•์œผ๋กœ๋Š” Na&iumlve Bayes, k-NN, SVM์ด ์žˆ๊ณ , ์ž์งˆ ์„ ํƒ ๋ฐฉ๋ฒ•์œผ๋กœ๋Š” Document Frequency, ฯ‡^2 statistic, Information Gain์ด ์žˆ๋‹ค. 
๊ทธ ๊ฒฐ๊ณผ, ๋Œ“๊ธ€ ์•ˆ์— ํฌํ•จ๋œ ๊ฐ์ • ๋‹จ์–ด์™€ ๊ทธ ๋Œ“๊ธ€์— ๋Œ€ํ•œ ๊ธฐ์‚ฌ ๋ณธ๋ฌธ์ด ๊ฐ์ • ๋ถ„๋ฅ˜์— ๋งค์šฐ ํšจ๊ณผ์ ์ธ ์ž์งˆ์ž„์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค.Chapter 1 Introduction 1 Chapter 2 Related Works 4 2.1 Sentiment Classification 4 2.2 Feature Weighting in Vector Space Model 5 2.3 Feature Extraction and Selection 7 2.4 Classifiers 10 2.5 Accuracy Measures 14 Chapter 3 Feature Reweighting 16 3.1 Feature extraction in Korean 16 3.2 Feature Reweighting Methods 17 3.3 Examples of Feature Reweighting Methods 18 Chapter 4 Sentiment Polarity Classification System 21 4.1 Model Generation 21 4.2 Sentiment Polarity Classification 23 Chapter 5 Data Preparation 25 5.1 Korean Sentiment Corpus 25 5.2 Korean Sentiment Lexicon 27 Chapter 6 Experiments 29 6.1 Experimental Environment 29 6.2 Experimental Results 30 Chapter 7 Conclusions and Future Works 38 Bibliography 40 Acknowledgments 4
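The feature-reweighting idea described above can be sketched as boosting the weight of a comment term when it appears in the sentiment lexicon or in the body of the article the comment is attached to. The function name and boost factors are illustrative assumptions, not the thesis's actual weighting scheme.

```python
def reweight(tf, lexicon, article_terms, lex_boost=2.0, art_boost=1.5):
    """Reweight a comment's term-frequency vector: boost terms found in
    the sentiment lexicon and/or in the news article body (toy factors)."""
    out = {}
    for term, weight in tf.items():
        if term in lexicon:
            weight *= lex_boost
        if term in article_terms:
            weight *= art_boost
        out[term] = weight
    return out

tf = {"great": 1.0, "movie": 2.0, "the": 3.0}
print(reweight(tf, lexicon={"great"}, article_terms={"movie"}))
# {'great': 2.0, 'movie': 3.0, 'the': 3.0}
```

The reweighted vectors then feed a standard classifier (Naïve Bayes, k-NN, or SVM in the thesis), so the article context influences classification without changing the learning algorithm itself.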

    TIR over Egyptian Hieroglyphs

    Get PDF
    We would like to thank Dr. Josep Cervello Autuori, Director of the Institut d'Estudis del Proxim Orient Antic (IEPOA) of the Universitat Autonoma de Barcelona, for introducing us to Egyptian; and Dr. Serge Rosmorduc, Associate of the Conservatoire National des Arts et Metiers (CNAM), for his support with JSESH.
    [Abstract] This work presents an Information Retrieval system specifically designed to manage Ancient Egyptian hieroglyphic texts, taking into account their peculiarities at both the lexical and the encoding level, for application in Egyptology and Digital Heritage. The tool has been made freely available to the research community under a free license and, to the best of our knowledge, it is the first tool of its kind.
    Ministerio de Economía y Competitividad; FFI2014-51978-C2-2-

    Searching Spontaneous Conversational Speech: Proceedings of ACM SIGIR Workshop (SSCS2008)

    Get PDF
    • โ€ฆ
    corecore