22 research outputs found

    Effect of Tuned Parameters on a LSA MCQ Answering Model

    This paper presents the current state of a work in progress whose objective is to better understand the effects of factors that significantly influence the performance of Latent Semantic Analysis (LSA). A difficult task, answering (French) biology Multiple Choice Questions, is used to test the semantic properties of the truncated singular space and to study the relative influence of the main parameters. Dedicated software has been designed to fine-tune the LSA semantic space for the Multiple Choice Question task. With optimal parameters, the performance of our simple model is, quite surprisingly, equal or superior to that of 7th and 8th grade students. This indicates that the semantic spaces were quite good despite their low dimensions and the small sizes of the training data sets. In addition, we present an original entropy-based global weighting of the terms in each question's candidate answers, which was necessary for the model's success.
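The entropy weighting mentioned in the abstract is not specified further; as an illustration only, here is a sketch of the standard log-entropy weighting commonly used to prepare term-document matrices for LSA (the toy count matrix is invented):

```python
import numpy as np

def log_entropy_weight(counts):
    """Standard log-entropy weighting for LSA preprocessing.

    counts: (terms x documents) raw term-frequency matrix.
    Local weight:  log(1 + tf_ij).
    Global weight: 1 + sum_j p_ij * log(p_ij) / log(n_docs),
    where p_ij = tf_ij / gf_i and gf_i is the global frequency of term i.
    """
    counts = np.asarray(counts, dtype=float)
    n_docs = counts.shape[1]
    gf = counts.sum(axis=1, keepdims=True)
    p = np.divide(counts, gf, out=np.zeros_like(counts), where=gf > 0)
    # define 0 * log(0) = 0 for terms absent from a document
    plogp = np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)
    global_w = 1.0 + plogp.sum(axis=1) / np.log(n_docs)
    return np.log1p(counts) * global_w[:, None]

# Toy 3-term x 4-document count matrix (invented)
tf = np.array([[2, 0, 1, 0],
               [1, 1, 1, 1],   # evenly spread term -> global weight 0
               [0, 3, 0, 0]])  # concentrated term  -> global weight 1
W = log_entropy_weight(tf)
```

A term spread evenly over all documents gets a global weight of zero and is effectively removed, while a term concentrated in one document keeps full weight, which is why this family of weightings helps LSA focus on informative terms.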

    Probabilistic Latent Semantic Analyses (PLSA) in Bibliometric Analysis for Technology Forecasting

    Due to the availability of internet-based abstract services and patent databases, bibliometric analysis has become one of the key technology forecasting approaches. Recently, latent semantic analysis (LSA) has been applied to improve the accuracy of document clustering. In this paper, a newer LSA method, probabilistic latent semantic analysis (PLSA), which uses probabilistic methods and algebra to search for the latent space in the corpus, is further applied to document clustering. The results show that PLSA is more accurate than LSA and that the improved iteration method proposed by the authors can simplify the computing process and improve computing efficiency.
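The paper's improved iteration method is not reproduced here, but the basic pLSA model it builds on can be sketched as a plain EM loop (array names and the toy counts are illustrative; tempering and convergence checks are omitted):

```python
import numpy as np

def plsa(n_dw, n_topics, n_iter=50, seed=0):
    """Minimal pLSA fit via EM.

    n_dw: (documents x words) count matrix.
    Returns P(z|d) of shape (D, Z) and P(w|z) of shape (Z, W).
    """
    rng = np.random.default_rng(seed)
    D, W = n_dw.shape
    p_zd = rng.random((D, n_topics)); p_zd /= p_zd.sum(1, keepdims=True)
    p_wz = rng.random((n_topics, W)); p_wz /= p_wz.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: P(z|d,w) proportional to P(z|d) * P(w|z), shape (D, Z, W)
        joint = p_zd[:, :, None] * p_wz[None, :, :]
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        # M-step: reweight the posteriors by the observed counts n(d,w)
        weighted = n_dw[:, None, :] * joint
        p_wz = weighted.sum(axis=0)
        p_wz /= p_wz.sum(axis=1, keepdims=True) + 1e-12
        p_zd = weighted.sum(axis=2)
        p_zd /= p_zd.sum(axis=1, keepdims=True) + 1e-12
    return p_zd, p_wz

# Toy corpus: two visible word blocks, i.e. two latent topics (invented)
counts = np.array([[4, 3, 0, 0],
                   [3, 4, 1, 0],
                   [0, 0, 4, 3],
                   [0, 1, 3, 4]])
p_zd, p_wz = plsa(counts, n_topics=2)
```

The rows of P(z|d) give a soft clustering of documents over topics, which is what the clustering comparison in the abstract relies on.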

    Concept learning and information inferencing on a high-dimensional semantic space

    How to automatically capture a significant portion of relevant background knowledge and keep it up to date has been a challenging problem in current research on logic-based information retrieval. This paper addresses the problem by investigating various information inference mechanisms based on a high-dimensional semantic space constructed from a text corpus using the Hyperspace Analogue to Language (HAL) model. Additionally, the Singular Value Decomposition (SVD) algorithm is considered both as an alternative way to enhance the quality of the HAL matrix and as a mechanism for inferring implicit associations. The different characteristics of these inference mechanisms are demonstrated using examples from the Reuters-21578 collection. Our hope is that the techniques discussed in this paper provide a basis for logic-based IR to progress to large-scale applications.
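The HAL construction referred to above can be sketched as a distance-weighted sliding-window count; the weighting shown (window size minus distance plus one) follows the usual HAL description, and the sentence is a toy example:

```python
import numpy as np

def hal_matrix(tokens, window=5):
    """Build a Hyperspace Analogue to Language (HAL) co-occurrence matrix.

    For each token, tokens up to `window` positions to its left are
    accumulated with weight (window - distance + 1), so adjacent words
    contribute most. Rows index target words, columns context words.
    Returns (matrix, sorted vocabulary list).
    """
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    m = np.zeros((len(vocab), len(vocab)))
    for pos, word in enumerate(tokens):
        for dist in range(1, window + 1):
            ctx = pos - dist
            if ctx < 0:
                break
            m[index[word], index[tokens[ctx]]] += window - dist + 1
    return m, vocab

text = "the horse raced past the barn fell".split()
M, vocab = hal_matrix(text, window=3)
```

Row vectors of M (optionally concatenated with the corresponding column vectors) then serve as the high-dimensional word representations on which the inference mechanisms operate.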

    ENHANCING LITERATURE REVIEW METHODS - TOWARDS MORE EFFICIENT LITERATURE RESEARCH WITH LATENT SEMANTIC INDEXING

    Nowadays, facilitated access to increasing amounts of information and scientific resources means that more and more effort is required to conduct comprehensive literature reviews. Literature search, a fundamental, complex, and time-consuming step in every literature research process, is part of many established scientific methods. However, it is still predominantly supported by search techniques based on conventional term-matching methods. We address the lack of semantic approaches in this context by proposing an enhancement of established literature review methods. For this purpose, we followed design science research (DSR) principles to develop artifacts and implement a prototype of our Tool for Semantic Indexing and Similarity Queries (TSISQ), based on the core concepts of latent semantic indexing (LSI). Its applicability is demonstrated and evaluated in a case study. The results indicate that the presented approach can help save valuable time in finding basic literature in a desired research field and increase the comprehensiveness of a review by efficiently identifying sources that would otherwise not have been taken into account. The target audience for our findings includes researchers who need to efficiently gain an overview of a specific research field, deepen their knowledge, or refine the theoretical foundations of their research.
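TSISQ's internals are not given here; a minimal sketch of the LSI core it is said to build on, assuming a raw term-document count matrix and the standard query fold-in, might look like this:

```python
import numpy as np

def lsi_query(term_doc, query_vec, k):
    """Rank documents against a query in a rank-k LSI space.

    term_doc: (terms x docs); query_vec: (terms,).
    The query is folded in as q_k = Sigma_k^{-1} U_k^T q and compared
    to the document coordinates V_k by cosine similarity.
    """
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k].T          # Vk: docs x k
    qk = (Uk.T @ query_vec) / sk
    sims = (Vk @ qk) / (np.linalg.norm(Vk, axis=1) * np.linalg.norm(qk) + 1e-12)
    return np.argsort(-sims), sims

# Toy corpus, rows = terms ["lsa", "svd", "retrieval", "genome", "protein"]
A = np.array([[2, 1, 0, 0],
              [1, 2, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 2, 1],
              [0, 0, 1, 2]], dtype=float)
q = np.array([1, 1, 0, 0, 0], dtype=float)   # query: "lsa svd"
ranking, scores = lsi_query(A, q, k=2)
```

Ranking papers by similarity in the reduced space, rather than by literal term overlap, is what lets such a tool surface sources a keyword search would miss.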

    A MapReduce Based Distributed LSI for Scalable Information Retrieval

    Latent Semantic Indexing (LSI) has been widely used in information retrieval due to its effectiveness in dealing with the problems of polysemy and synonymy. However, LSI is a notably computationally intensive process because of the complexity of the singular value decomposition and filtering operations involved. This paper presents MR-LSI, a MapReduce-based distributed LSI algorithm for scalable information retrieval. The performance of MR-LSI is first evaluated in a small-scale experimental cluster environment and subsequently in large-scale simulation environments. By partitioning the dataset into smaller subsets and distributing the partitioned subsets across a cluster of computing nodes, the overhead of the MR-LSI algorithm is reduced significantly while maintaining a high level of accuracy in retrieving documents of user interest. A genetic-algorithm-based load balancing scheme is designed to optimize the performance of MR-LSI in heterogeneous computing environments in which the computing nodes have varied resources.
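The MR-LSI algorithm itself is not reproduced here; the partition-score-merge pattern it describes can be sketched on a single machine with plain map and reduce steps (the partition sizes, k, and the toy corpus are illustrative, and the genetic load-balancing scheme is omitted):

```python
import numpy as np
from functools import reduce

def score_partition(term_doc, query_vec, k, offset):
    """'Map' step: rank-k LSI scoring of one corpus partition.

    Returns (global_doc_id, similarity) pairs; offset converts local
    column indices to global document ids.
    """
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    k = min(k, len(s))
    qk = (U[:, :k].T @ query_vec) / s[:k]
    Vk = Vt[:k].T
    sims = (Vk @ qk) / (np.linalg.norm(Vk, axis=1) * np.linalg.norm(qk) + 1e-12)
    return [(offset + j, float(sims[j])) for j in range(term_doc.shape[1])]

def merge(acc, part):
    """'Reduce' step: concatenate partial result lists."""
    return acc + part

corpus = np.array([[2, 1, 0, 0, 0, 1],
                   [1, 2, 0, 1, 0, 0],
                   [0, 0, 2, 1, 0, 1],
                   [0, 0, 1, 2, 1, 0]], dtype=float)
query = np.array([1, 1, 0, 0], dtype=float)
parts = [corpus[:, :3], corpus[:, 3:]]   # split the columns into 2 partitions
mapped = [score_partition(p, query, k=2, offset=i * 3)
          for i, p in enumerate(parts)]
results = sorted(reduce(merge, mapped, []), key=lambda t: -t[1])
```

In a real MapReduce deployment each partition's SVD and scoring would run on a separate node, and the reducer would only need to merge the per-partition top hits.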

    INFORMATION RETRIEVAL USING LATENT SEMANTIC INDEXING

    Our capabilities for collecting and storing data of all kinds are greater than ever. On the other hand, analyzing, summarizing, and extracting information from these data is harder than ever. That is why there is a growing need for fast and efficient information retrieval algorithms. In this paper we present some mathematical models based on linear algebra that are used to extract the documents relevant to some subject out of a large set of text documents. This is a typical problem for a search engine on the World Wide Web. We use the vector space model, which is based on literal matching of terms in the documents and the queries. The vector space model is implemented by creating the term-document matrix. Literal matching of terms does not necessarily retrieve all relevant documents. Synonymy (multiple words having the same meaning) and polysemy (words having multiple meanings) are two major obstacles to efficient information retrieval. Latent Semantic Indexing represents documents by approximations and tends to cluster documents on similar topics even if their term profiles are somewhat different. This approximate representation is accomplished using a low-rank singular value decomposition (SVD) approximation of the term-document matrix. In this paper we compare the precision of information retrieval for different ranks of the SVD representation of the term-document matrix.
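The rank comparison described above rests on the Eckart-Young property of the SVD: the rank-k truncation is the best rank-k approximation in the Frobenius norm, and its error equals the root sum of squares of the discarded singular values. A sketch on a random toy matrix (the matrix itself is invented):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.poisson(1.0, size=(50, 20)).astype(float)  # toy 50-term x 20-doc matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def rank_k(k):
    """Best rank-k approximation of A in the Frobenius norm (Eckart-Young)."""
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

# Approximation error shrinks monotonically as the retained rank grows,
# reaching zero at full rank.
errors = [float(np.linalg.norm(A - rank_k(k))) for k in (2, 5, 10, 20)]
```

In retrieval terms, choosing the rank trades off noise suppression (small k merges near-synonymous term directions) against fidelity to the original term profiles, which is exactly the comparison the paper carries out.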

    Document clustering using locality preserving indexing


    Adaptive dimension reduction for clustering high dimensional data


    The dynamics of dimensionality reduction for information retrieval: a study of latent semantic indexing using simulated data

    The effects of dimensionality reduction on information retrieval system performance are studied using Latent Semantic Indexing (LSI) on simulated data. The author hypothesizes that LSI improves retrieval by improving the fit between a linear language model and non-linear data such as natural language text. The study analyzes how the values of three variables bear on optimizing k, the system's representational dimensionality. The variables studied are: the correlation between terms, the dimensionality of the untransformed termspace, and the number of documents in the collection. Using multinormally distributed stochastic matrices as input, precision/recall and average search length (ASL) are computed for differently modeled retrieval situations. The results indicate that the optimal k relates to the type and degree of correlation found in the original termspace. The findings also suggest that a more nuanced simulation model would permit more robust analysis.
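The paper's exact simulation design is not reproduced here; as a hedged sketch of the underlying idea, one can generate term-document matrices whose terms share a controlled pairwise correlation and observe how variance concentrates in the leading singular components, which bears directly on how small k can be (all sizes and correlation levels below are illustrative):

```python
import numpy as np

def top_variance_share(term_corr, n_terms=30, n_docs=200, k=3, seed=1):
    """Fraction of total variance captured by the top-k singular components
    of a simulated term-document matrix whose terms share the pairwise
    correlation `term_corr`."""
    rng = np.random.default_rng(seed)
    # Equicorrelation covariance: 1 on the diagonal, term_corr off-diagonal
    cov = np.full((n_terms, n_terms), term_corr) + (1 - term_corr) * np.eye(n_terms)
    X = rng.multivariate_normal(np.zeros(n_terms), cov, size=n_docs).T  # terms x docs
    s = np.linalg.svd(X, compute_uv=False)
    return float((s[:k] ** 2).sum() / (s ** 2).sum())

low = top_variance_share(0.1)   # weakly correlated termspace
high = top_variance_share(0.8)  # strongly correlated termspace
```

Strongly correlated terms let a few components carry most of the variance, so a small k suffices; weakly correlated terms spread variance thinly, pushing the optimal k upward, consistent with the finding that optimizing k relates to the correlation structure of the original termspace.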