10 research outputs found

    Learning Reputation in an Authorship Network

    The problem of searching for experts in a given academic field is of great importance in both industry and academia. We study this problem with respect to a database of authors and their publications. The idea is to use Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA) to perform topic modelling in order to find authors who have worked in a query field. We then construct a coauthorship graph and motivate the use of influence maximisation and a variety of graph centrality measures to obtain a ranked list of experts. The ranked lists are further improved using a Markov chain-based rank aggregation approach. The complete method is readily scalable to large datasets. To demonstrate the efficacy of the approach, we report on an extensive set of computational simulations using the Arnetminer dataset. An improvement in mean average precision is demonstrated over the baseline case of simply using the order of authors found by the topic models.
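
    As a rough illustration of the graph-based ranking stage, the sketch below builds a toy coauthorship graph and combines two centrality rankings. The data, the centrality choices and the simple rank aggregation are assumptions for illustration, not the paper's exact pipeline (which also uses influence maximisation and a Markov chain-based aggregation).

```python
# Illustrative sketch: rank candidate experts by coauthorship-graph centrality.
import networkx as nx

# Toy coauthorship data: (author_a, author_b, number of joint papers).
coauthorships = [
    ("alice", "bob", 3),
    ("alice", "carol", 1),
    ("bob", "carol", 2),
    ("carol", "dave", 4),
]

G = nx.Graph()
for a, b, w in coauthorships:
    G.add_edge(a, b, weight=w)

# Two of the centrality measures the paper motivates; edge weights count
# joint publications.
pagerank = nx.pagerank(G, weight="weight")
degree = nx.degree_centrality(G)

def rank_positions(scores):
    """Map each author to its position in the descending score order."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {author: i for i, author in enumerate(ordered)}

# One simple way to aggregate the two rankings: sum of rank positions
# (the paper uses a Markov chain-based aggregation instead).
pr_rank, dg_rank = rank_positions(pagerank), rank_positions(degree)
aggregated = sorted(G.nodes, key=lambda a: pr_rank[a] + dg_rank[a])
print(aggregated)  # authors ordered from most to least central
```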

    LS3: Latent Semantic Analysis-based Similarity Search for Process Models

    Large process model collections in use today contain hundreds or even thousands of conceptual process models. Search functionalities can help in handling such large collections for purposes such as duplicate detection or reuse of models. One popular stream of search functionalities is similarity-based search, which utilizes similarity measures for finding similar models in a large collection. Most of these approaches are based on an underlying alignment between the activities of the compared process models. Yet such an alignment has proven quite difficult to achieve, judging by the results of the Process Model Matching contests conducted in recent years. Therefore, the Latent Semantic Analysis-based Similarity Search (LS3) technique presented in this article does not rely on such an alignment, but uses a Latent Semantic Analysis-based similarity measure for retrieving similar models. An evaluation with 138 real-life process models shows strong performance in terms of Precision, Recall, F-Measure, R-Precision and Precision-at-k, thereby outperforming five other techniques for similarity-based search. Additionally, LS3 query calculation is significantly faster than any of the other approaches.
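
    A minimal sketch of the general LSA-based similarity search idea, assuming each process model is flattened to the bag of its activity labels; this is a generic LSA pipeline, not the exact LS3 implementation.

```python
# Generic LSA similarity search over process-model texts (illustrative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy "process models", each flattened to its activity labels.
models = [
    "receive order check stock ship goods send invoice",
    "receive purchase order verify inventory dispatch goods bill customer",
    "register patient schedule appointment perform examination",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(models)        # model-term matrix

svd = TruncatedSVD(n_components=2)     # latent semantic space
X_latent = svd.fit_transform(X)

# Query: a new process model; fold it into the latent space and rank
# the collection by cosine similarity, no activity alignment needed.
query = ["accept order check warehouse stock deliver goods"]
q_latent = svd.transform(tfidf.transform(query))
scores = cosine_similarity(q_latent, X_latent)[0]
ranking = scores.argsort()[::-1]
print(ranking, scores[ranking])        # most similar models first
```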

    Latent Semantic Indexing (LSI) Based Distributed System and Search On Encrypted Data

    Latent semantic indexing (LSI) was initially introduced to overcome the issues of synonymy and polysemy in the traditional vector space model (VSM). LSI, however, has challenges of its own, mainly scalability. Despite its introduction in 1990, few attempts have provided an efficient solution for LSI; most of the literature focuses on LSI's applications rather than on improving the original algorithm. In this work we analyze the first framework to provide a scalable implementation of LSI and report its performance on the distributed environment of RAAD. The possibility of adopting LSI in the field of searching over encrypted data is also investigated. The importance of that field stems from the need for cloud computing as an effective computing paradigm that provides affordable access to high computational power. Encryption is usually applied to prevent unauthorized access to the data (the host is assumed to be curious); however, this limits accessibility to the data, given that search over encrypted data has yet to catch up with the latest techniques adopted by the Information Retrieval (IR) community. In this work we propose a system that uses LSI for indexing and free-text queries for retrieval. The results show that the available LSI framework does scale to large datasets, although it had some limitations with respect to factors such as dictionary size and memory limits. When replicating the exact settings of the baseline on RAAD, it performed relatively slower, which could result from RAAD's use of a distributed file system or from network latency. The results also show that the proposed system for applying LSI to encrypted data retrieved documents in the same order as the baseline (unencrypted data).
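
    For reference, a minimal single-machine sketch of the LSI core that such a framework would distribute: a truncated SVD of a toy term-document matrix and the standard fold-in of a query. The distributed and encrypted-search machinery of the thesis is omitted; all values are illustrative.

```python
# Core LSI with numpy: truncated SVD plus query fold-in (illustrative).
import numpy as np

# Term-document matrix A (rows = terms, columns = documents); toy counts.
A = np.array([
    [2.0, 0.0, 1.0],
    [1.0, 3.0, 0.0],
    [0.0, 1.0, 2.0],
    [1.0, 1.0, 1.0],
])

k = 2                                   # number of latent dimensions
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

# Documents in the latent space: columns of S_k @ Vt_k, one row per document.
doc_vectors = (S_k @ Vt_k).T

# Fold a query into the same space: q_k = S_k^{-1} U_k^T q.
q = np.array([1.0, 1.0, 0.0, 0.0])      # query hits the first two terms
q_k = np.linalg.inv(S_k) @ U_k.T @ q

# Rank documents by cosine similarity to the folded-in query.
cos = doc_vectors @ q_k / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_k)
)
print(cos.argsort()[::-1])               # document indices, best first
```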

    Inferring Degree Of Localization Of Twitter Persons And Topics Through Time, Language, And Location Features

    Identifying authoritative influencers related to a geographic area (geo-influencers) can aid content recommendation systems and local expert finding. This thesis addresses this important problem using Twitter data. A geo-influencer is identified via the locations of its followers. On Twitter, for privacy reasons, follower location is limited to a free-text profile field or to messages carrying coordinates; the profile string is often impossible to geocode, and less than 1% of message traffic provides coordinates. First, the error rates associated with Google's geocoder are studied and a classifier is built that flags self-reported locations that are likely incorrect. Second, it is shown that city-level geo-influencers can be identified without geocoding by leveraging the power of Google search and the follower-followee network structure. Third, we illustrate that global vs. local influencers can be distinguished, at the timezone level, using a classifier trained on temporal features of the followers. For global influencers, spatiotemporal analysis helps understand the evolution of their popularity over time. When applied over message traffic, the approach can differentiate top trending topics and persons in different geographical regions. Fourth, we constrain a timezone to a set of possible countries and use language features to train a high-level geocoder that further localizes an influencer's geographic area. Finally, we provide a repository of geo-influencers for applications related to content recommendation. The repository can be used for filtering influencers based on their audience's demographics related to location, time, language, gender, and ethnicity.
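
    A hedged sketch of the third step's idea: the feature scheme below (hour-of-day histograms of follower activity) and the classifier are illustrative assumptions, not the thesis's exact features or model.

```python
# Illustrative: classify an influencer as "local" vs "global" from the
# hour-of-day activity histogram of its followers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def follower_hour_histogram(local: bool) -> np.ndarray:
    """24-bin histogram of follower activity; a local audience is assumed
    to concentrate in one diurnal window, a global one to spread out."""
    if local:
        hours = rng.normal(loc=20, scale=2, size=500).astype(int) % 24
    else:
        hours = rng.integers(0, 24, size=500)
    hist = np.bincount(hours, minlength=24).astype(float)
    return hist / hist.sum()

# Synthetic training set: alternating local and global influencers.
X = np.array([follower_hour_histogram(local=i % 2 == 0) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)], dtype=int)  # 1 = local

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([follower_hour_histogram(local=True)]))  # expect [1]
```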

    Differentially Private Linear Algebra in the Streaming Model

    The focus of this paper is a systematic study of differential privacy on streaming data using sketch-based algorithms. Previous works, like Dwork et al. (ICS 2010, STOC 2010), explored random-sampling-based streaming algorithms. We work in the well-studied streaming model of computation, where the database is stored in the form of a matrix and a curator can access the database row-wise or column-wise. Dwork et al. (STOC 2010) gave an impossibility result for any non-trivial query on streamed data with respect to user-level privacy. Therefore, in this paper, we work with event-level privacy. We provide data structures whose space is optimal up to logarithmic factors in the streaming model for three basic linear-algebraic tasks in a differentially private manner: matrix multiplication, linear regression, and low-rank approximation, while incurring significantly less additive error. The mechanisms for matrix multiplication and linear regression can be seen as private analogues of known non-private algorithms, and have superficial similarities with Blocki et al. (FOCS 2012) and Upadhyay (ASIACRYPT 2013), but there are some subtle differences. For example, they perform an affine transformation to convert the private matrix into a set of $\{\sqrt{w/n},\,1\}^n$ vectors for some appropriate $w$, while we perform a perturbation that raises the singular values of the private matrix. In order to get a streaming algorithm for low-rank approximation, we have to reuse the random Gaussian matrix in a specific way; we prove that the resulting distribution also preserves differential privacy. We do not make any assumptions, such as singular-value separation, as made in the earlier works of Hardt and Roth (STOC 2013) and Kapralov and Talwar (SODA 2013). Further, we do not assume normalized rows, as in the work of Dwork et al. (STOC 2014). All our mechanisms, in the form presented, can also be computed in the distributed setting of Beimel, Nissim, and Omri (CRYPTO 2008).
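
    The paper's constructions are sketch-based and more involved; as background only, the snippet below shows the generic Gaussian mechanism applied to a matrix-product query, with noise calibrated under assumed unit-norm rows. It is not the paper's mechanism.

```python
# Generic Gaussian mechanism for privately releasing A^T A (background
# illustration only, not the paper's construction).
import numpy as np

rng = np.random.default_rng(1)

def private_gram_matrix(A: np.ndarray, eps: float, delta: float) -> np.ndarray:
    n, d = A.shape
    # With unit-norm rows and neighbouring databases differing in one row,
    # the L2 sensitivity of A^T A is at most 2 (replacing row x by x'
    # changes A^T A by x'x'^T - xx^T, of Frobenius norm <= 2).
    sensitivity = 2.0
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noise = rng.normal(0.0, sigma, size=(d, d))
    noise = (noise + noise.T) / 2.0  # symmetrize: post-processing, keeps DP
    return A.T @ A + noise

A = rng.normal(size=(100, 5))
A /= np.maximum(np.linalg.norm(A, axis=1, keepdims=True), 1.0)  # clip rows
print(private_gram_matrix(A, eps=1.0, delta=1e-5).shape)        # (5, 5)
```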

    A Machine Learning Approach for Plagiarism Detection

    Plagiarism detection is gaining increasing importance due to requirements for integrity in education. Existing research has investigated the problem of plagiarism detection with varying degrees of success. The literature reveals two main methods for detecting plagiarism, namely extrinsic and intrinsic. This thesis develops a novel approach for each of these methods. Firstly, a novel extrinsic method for detecting plagiarism is proposed. The method is based on four well-known techniques, namely Bag of Words (BOW), Latent Semantic Analysis (LSA), stylometry and Support Vector Machines (SVM). The LSA application was fine-tuned to take in stylometric features (most common words) in order to characterise document authorship, as described in chapter 4. The results revealed that LSA-based stylometry outperformed the traditional LSA application. Support vector machine-based algorithms were used to perform the classification procedure in order to predict which author wrote a particular book under test. The proposed method successfully addressed the limitations of semantic characteristics and identified the document source by assigning the book under test to the right author in most cases. Secondly, the intrinsic detection method relies on the statistical properties of the most common words. LSA was applied in this method to a group of most common words (MCWs) to extract their usage patterns based on the transitivity property of LSA. The feature sets of the intrinsic model were based on the frequency of the most common words, their relative frequencies in series, and the deviation of these frequencies across all books of a particular author. The intrinsic method aims to generate a model of author "style" by revealing a set of characteristic features of authorship. The model-generation procedure focuses on just one author, as an attempt to summarise aspects of an author's style in a definitive and clear-cut manner. The thesis also proposes a novel experimental methodology for testing the performance of both extrinsic and intrinsic methods for plagiarism detection. This methodology relies upon the CEN (Corpus of English Novels) dataset, but divides it into training and test datasets in a novel manner. Both approaches have been evaluated using the well-known leave-one-out cross-validation method. Results indicate that by integrating deep analysis (LSA) and stylometric analysis, hidden changes can be identified whether or not a reference collection exists.
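
    A small sketch of the extrinsic idea, assuming a toy MCW list and corpus: texts are represented by counts of most common words, and an SVM predicts authorship. The thesis's fine-tuned LSA step could sit between the two stages; it is omitted here for brevity.

```python
# Illustrative MCW-stylometry + SVM authorship classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# Assumed list of most common words used as stylometric features.
MCWS = ["the", "of", "and", "to", "a", "in", "that", "it", "was", "her"]

texts = [
    "it was the best of times and the worst of times",
    "the clock struck and she knew that it was time to go",
    "to be or not to be that is the question",
    "all the world is a stage and the men merely players",
]
authors = ["author_a", "author_a", "author_b", "author_b"]

# Restricting the vocabulary to the MCW list yields function-word
# (stylometric) features rather than topical ones.
model = make_pipeline(
    CountVectorizer(vocabulary=MCWS),
    SVC(kernel="linear"),
)
model.fit(texts, authors)
print(model.predict(["the play was the thing that caught it"]))
```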

    Exploring Biomedical Literature Using Latent Semantics

    Master's in Computer and Telematics Engineering. The rapid increase in the amount of data available on the Internet, and the fact that this is mostly in the form of unstructured text, has brought successive challenges in information indexing and retrieval. Besides the Internet, domain-specific literature databases are also faced with these problems. With the amount of information growing so rapidly, traditional methods for indexing and retrieving information become insufficient for the increasingly stringent requirements of users. These issues lead to the need to improve information retrieval systems using more powerful and efficient techniques. One of those methods is Latent Semantic Indexing (LSI), which has been suggested as a good solution for modeling and analyzing unstructured text. LSI allows discovering the semantic structure of a corpus by finding the relations between documents and terms. It is a robust solution for improving information retrieval systems, especially in the identification of relevant documents for a user's query. Besides this, LSI can be useful in other tasks such as document indexing and annotation of terms. The main goal of this project consisted in studying and exploring the LSI process for term annotation and for structuring the documents retrieved by an information retrieval system. Performance results of these algorithms are presented and, in addition, several new forms of visualizing these results are proposed.
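
    As a small illustration of term annotation with LSI, the sketch below compares term vectors in the latent space of a toy corpus; related terms end up close together even without direct co-occurrence. The corpus and parameters are assumptions, not the project's data.

```python
# Illustrative: LSI term-term similarity for annotating related terms.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "gene expression regulated by transcription factors",
    "protein binding and transcription regulation",
    "clinical trial of a new antibiotic drug",
    "drug resistance observed in the antibiotic trial",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(abstracts)
svd = TruncatedSVD(n_components=2).fit(X)

# Rows of components_.T are term vectors in the latent space.
term_vectors = svd.components_.T
vocab = tfidf.get_feature_names_out()
sims = cosine_similarity(term_vectors)

i = list(vocab).index("transcription")
related = sims[i].argsort()[::-1][1:4]
print([vocab[j] for j in related])  # terms latently related to "transcription"
```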

    Towards Collaborative Session-based Semantic Search

    In recent years, the most popular web search engines have excelled at answering short queries that call for clear, localized and personalized answers. For complex exploratory search tasks, however, the main challenge for the searcher remains the same as in the 1990s: trying to formulate a single query that contains all the right keywords to produce at least some relevant results. In this work we investigate new ways to facilitate exploratory search by making use of context information from the user's entire search process. We present the concept of session-based semantic search, with an optional extension to collaborative search scenarios. To improve the relevance of search results, we expand queries with terms from the user's recent query history in the same search context (session-based search). We introduce a novel method for query classification based on statistical topic models, which allows us to track the most important topics in a search session and suggest relevant documents that could not be found through keyword matching. To demonstrate the potential of these concepts, we have built a prototype of a session-based semantic search engine, released as free and open source software. In a qualitative user study, this prototype showed promising results and was well received by the participants.
    Outline:
    1. Introduction
    2. Related Work: Topic Models (Common Traits; Topic Modeling Techniques; Topic Labeling; Topic Graph Visualization); Session-based Search; Query Classification; Collaborative Search (Aspects of Collaborative Search Systems; Collaborative Information Retrieval Systems)
    3. Core Concepts: Session-based Search (Session Data; Query Aggregation); Topic Centroid (Topic Identification; Topic Shift; Relevance Feedback; Topic Graph Visualization); Search Strategy (Prerequisites; Search Algorithms; Query Pipeline); Collaborative Search (Shared Topic Centroid; Group Management; Collaboration); Discussion
    4. Prototype: Document Collection (Selection Criteria; Data Preparation; Search Index); Search Engine (Search Algorithms; Query Pipeline; Session Persistence); User Interface; Performance Review; Discussion
    5. User Study: Methods (Procedure; Implementation; Tasks; Questionnaires); Results (Participants; Task Review; Literature Research Results); Discussion
    6. Conclusion
    Bibliography; Weblinks; Appendix (Prototype Source Code; Survey: Tasks, Document Filter for Google Scholar, Questionnaires, Participants' Answers, Participants' Search Results)
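
    To make the session-based idea concrete, a hedged sketch follows: an LDA "topic centroid" is maintained over the session's queries, and the current query is naively expanded with terms from earlier queries in the session. All details (corpus, model sizes, expansion rule) are illustrative assumptions, not the prototype's implementation.

```python
# Illustrative session-based search: topic centroid plus query expansion.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "neural network training optimization",
    "gradient descent learning rate schedules",
    "protein folding molecular dynamics",
    "convolutional network image classification",
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

session_queries = ["neural network", "gradient descent"]

# Topic centroid: mean topic distribution over the session's queries,
# usable for classifying the session and suggesting related documents.
q_vecs = vectorizer.transform(session_queries)
centroid = lda.transform(q_vecs).mean(axis=0)

# Naive expansion: append terms from recent session queries to the new one.
new_query = "learning rate"
expanded = new_query + " " + " ".join(session_queries)
print("centroid:", np.round(centroid, 2))
print("expanded query:", expanded)
```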