6 research outputs found

    The 3A Interaction Model and Relation-Based Recommender System: Adopting Social Media Paradigms in Designing Personal Learning Environments

    We live in a rapidly changing digital world marked by technological advances, where online information grows constantly thanks to the Internet revolution and online social applications in particular. Formal learning acquired in traditional academic and professional environments is no longer sufficient by itself to keep up with our information-based society. Instead, more and more focus is placed on lifelong, self-directed, and self-paced learning, acquired intentionally or spontaneously, in environments that are not purposely dedicated to learning. The concept of online Personal Learning Environments (PLEs) refers to the development of platforms able to sustain lifelong learning. PLEs require new design paradigms that give learners the opportunity to conduct autonomous activities driven by their interests, and allow them to appropriate, repurpose, and contribute to online content rather than merely consume pre-packaged learning resources. This thesis presents the 3A interaction model, a flexible design model targeting online personal and collaborative environments in general, and PLEs in particular. The model adopts bottom-up social media paradigms by combining social networking with flexible content and activity management features. It targets both formal and informal interactions where learning is not necessarily an explicit aim but may be a byproduct, and identifies three main constructs, namely actors, activities, and assets, that can represent interaction and learning contexts in a flexible way. The applicability of the 3A interaction model for designing actual PLEs and deploying them in different learning modalities is demonstrated through usability studies and use-case scenarios.
This thesis also addresses the challenge of dealing with information overload and helping end-users find relevant information in open environments such as PLEs, where content is not predefined but is constantly added at run time, and differs in subject matter, quality, and intended audience. For that purpose, the 3A personalized, contextual, and relation-based recommender system is proposed and evaluated on two real datasets.
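One natural way to realise a relation-based recommender over the three 3A constructs is to view actors, activities, and assets as nodes of a heterogeneous graph and rank them by proximity to the current user. The sketch below uses a personalized PageRank for this; the node names, edge weights, and the choice of PageRank itself are illustrative assumptions, not necessarily the thesis's exact algorithm.

```python
# Hypothetical sketch: a relation-based recommender as personalized
# PageRank over a heterogeneous 3A graph (actors, activities, assets).
# All names and edges below are invented for illustration.

def personalized_pagerank(edges, seed, damping=0.85, iters=50):
    """Rank graph nodes by proximity to `seed` via power iteration."""
    nodes = {n for e in edges for n in e}
    out = {n: [] for n in nodes}
    for src, dst in edges:
        out[src].append(dst)
        out[dst].append(src)          # treat relations as undirected
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # teleport mass goes back to the seed user, not uniformly
        nxt = {n: (1 - damping) * (1.0 if n == seed else 0.0) for n in nodes}
        for n in nodes:
            share = damping * rank[n] / len(out[n])
            for m in out[n]:
                nxt[m] += share
        rank = nxt
    return rank

# actor "alice" participates in activity "sem-web-course", which holds
# assets; actor "bob" is active in an unrelated activity.
edges = [
    ("alice", "sem-web-course"),
    ("sem-web-course", "rdf-tutorial"),
    ("sem-web-course", "sparql-slides"),
    ("bob", "ml-course"),
    ("ml-course", "ml-notes"),
]
ranks = personalized_pagerank(edges, seed="alice")
# assets reachable from alice should outrank assets that are not
assert ranks["rdf-tutorial"] > ranks["ml-notes"]
```

The personalization comes entirely from the teleport term: restarting at the seed actor biases the stationary distribution toward items connected to that actor through shared activities.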

    DBFIRE: retrieving documents related to database queries.

    Databases and documents are commonly kept separate in organizations, managed by Database Management Systems (DBMSs) and Information Retrieval Systems (IRSs), respectively. This separation reflects the nature of the data handled: structured in the first case, unstructured in the second. While DBMSs process exact database queries, IRSs retrieve documents through keyword searches, which are inherently imprecise. Nevertheless, integrating these systems can bring great benefits to the user, since, within the same organization, databases and documents frequently refer to common entities. One possibility for integration is the retrieval of documents associated with a given database query. For example, given the query "Which clients have contracts above X reais?", how can we retrieve documents that may be associated with this query, such as those clients' contracts or open proposals for new sales, among other documents? The solution proposed in this thesis is based on a special query expansion approach for document retrieval: an initial set of keywords is expanded with potentially useful terms contained in the result of a database query; the resulting keyword set is then submitted to an IRS to retrieve the documents of interest for the query. A new way of ranking the expansion terms is also proposed: assuming that a database query represents the user's information need exactly, term selection is scored by each term's diffusion across the query result. This measure is used not only to select the best terms, but also to set their relative weights in the expansion.
To validate the proposed method, experiments were carried out in two distinct domains, with results showing significant improvements in the retrieval of documents related to the queries when compared with other prominent models from the literature.
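The diffusion-based expansion idea can be sketched in a few lines: score each candidate term by the fraction of result rows it appears in, then append the top-scoring terms, with that fraction as their weight, to the original keywords. The data, tokenization, and exact weighting formula below are illustrative assumptions; the thesis's actual measure may differ.

```python
# Hedged sketch of diffusion-based query expansion: terms from a
# database query result are scored by how widely they spread across
# the result rows, and the best ones are appended to the initial
# keyword query with diffusion-based weights. Toy data throughout.

from collections import Counter

def expand_query(keywords, result_rows, top_k=3):
    """Return (term, weight) pairs: the original keywords plus terms
    that diffuse across many rows of the database query result."""
    n = len(result_rows)
    diffusion = Counter()
    for row in result_rows:
        for term in set(row.lower().split()):   # count each term once per row
            diffusion[term] += 1
    expanded = [(k, 1.0) for k in keywords]     # keep originals at full weight
    seen = set(keywords)
    for term, df in diffusion.most_common():
        if term in seen:
            continue
        expanded.append((term, df / n))         # weight = fraction of rows
        if len(expanded) - len(keywords) == top_k:
            break
    return expanded

rows = [
    "contract 1042 acme consulting renewal",
    "contract 1043 acme hardware supply",
    "contract 1051 globex consulting renewal",
]
expanded = expand_query(["contract", "client"], rows, top_k=2)
# original keywords keep weight 1.0; added terms carry their diffusion
```

Terms confined to a single row (identifiers such as "1042") get low diffusion and are filtered out, while terms spread across the whole result are promoted, which matches the intuition that widely diffused terms characterize the query as a whole.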

    Machine Learning for Information Retrieval

    In this thesis, we explore the use of machine learning techniques for information retrieval. More specifically, we focus on ad-hoc retrieval, which is concerned with searching large corpora to identify the documents relevant to user queries. This identification is performed through a ranking task. Given a user query, an ad-hoc retrieval system ranks the corpus documents so that the documents relevant to the query ideally appear above the others. In a machine learning framework, we are interested in proposing learning algorithms that can benefit from limited training data in order to identify a ranker likely to achieve high retrieval performance over unseen documents and queries. This problem presents novel challenges compared to traditional learning tasks, such as regression or classification. First, our task is a ranking problem, which means that the loss for a given query cannot be measured as a sum of individual losses suffered for each corpus document. Second, most retrieval queries present a highly unbalanced setup, with the set of relevant documents accounting for only a very small fraction of the corpus. Third, ad-hoc retrieval corresponds to a kind of "double" generalization problem, since the learned model should generalize not only to new documents but also to new queries. Finally, our task also presents challenging efficiency constraints, since ad-hoc retrieval is typically applied to large corpora. The main objective of this thesis is to investigate the discriminative learning of ad-hoc retrieval models. For that purpose, we propose different models based on kernel machines or neural networks adapted to different retrieval contexts. The proposed approaches rely on different online learning algorithms that allow efficient learning over large corpora. The first part of the thesis focuses on text retrieval.
In this case, we adopt a classical approach to the retrieval ranking problem, and order the text documents according to their estimated similarity to the text query. The assessment of semantic similarity between text items plays a key role in that setup, and we propose a learning approach to identify an effective measure of text similarity. This identification does not rely on a set of queries with their corresponding relevant document sets, since such data are especially expensive to label and hence rare. Instead, we propose to rely on hyperlink data, since hyperlinks convey semantic proximity information that is relevant to similarity learning. This is hence a transfer learning setup, where we benefit from the proximity information encoded by hyperlinks to improve performance over the ad-hoc retrieval task. We then investigate another retrieval problem, i.e. the retrieval of images from text queries. Our approach introduces a learning procedure optimizing a criterion related to the ranking performance. This criterion adapts our previous objective for learning textual similarity to the image retrieval problem. This yields an image ranking model that addresses the retrieval problem directly, in contrast with previous research that relies on an intermediate image annotation task. Moreover, our learning procedure builds upon recent work on the online learning of kernel-based classifiers. This yields an efficient, scalable algorithm, which can benefit from recent kernels developed for image comparison. In the last part of the thesis, we show that the objective function used in the previous retrieval problems can be applied to the task of keyword spotting, i.e. the detection of given keywords in speech utterances. For that purpose, we formalize this problem as a ranking task: given a keyword, the keyword spotter should order the utterances so that the utterances containing the keyword appear above the others.
Interestingly, this formulation yields an objective directly maximizing the area under the receiver operating characteristic curve, the most common evaluation measure for keyword spotters. This objective is then used to train a model adapted to this intrinsically sequential problem, with a procedure derived from the algorithm previously introduced for the image retrieval task. To conclude, this thesis introduces machine learning approaches for ad-hoc retrieval. We propose learning models for various multi-modal retrieval setups, i.e. the retrieval of text documents from text queries, the retrieval of images from text queries, and the retrieval of speech recordings from written keywords. Our approaches rely on discriminative learning and enjoy efficient training procedures, which yields effective and scalable models. In all cases, links with prior approaches were investigated and experimental comparisons were conducted.
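The pairwise view of ranking that runs through the abstract above can be made concrete in a few lines: the AUC equals the probability that a relevant item is scored above a non-relevant one, so minimizing a hinge loss over (relevant, non-relevant) pairs with an online update pushes the AUC up. The linear model, toy features, and perceptron-style update below are a minimal sketch; the thesis's actual models (kernel machines, neural networks) are more elaborate.

```python
# Minimal sketch of pairwise ranking for AUC: train a linear scorer
# so each relevant example outscores each non-relevant one by a
# margin, using a simple online hinge-loss update. Toy data only.

def train_pairwise(pairs, dim, lr=0.1, epochs=20):
    """pairs: list of (relevant_vec, nonrelevant_vec) feature pairs."""
    w = [0.0] * dim
    for _ in range(epochs):
        for pos, neg in pairs:
            margin = sum(wi * (p - n) for wi, p, n in zip(w, pos, neg))
            if margin < 1.0:                    # hinge: want pos > neg by 1
                for i in range(dim):
                    w[i] += lr * (pos[i] - neg[i])
    return w

def auc(w, relevant, nonrelevant):
    """Fraction of (relevant, non-relevant) pairs ranked correctly."""
    score = lambda x: sum(wi * xi for wi, xi in zip(w, x))
    wins = sum(score(p) > score(n) for p in relevant for n in nonrelevant)
    return wins / (len(relevant) * len(nonrelevant))

relevant = [[1.0, 0.2], [0.9, 0.1]]
nonrelevant = [[0.1, 1.0], [0.2, 0.8]]
pairs = [(p, n) for p in relevant for n in nonrelevant]
w = train_pairwise(pairs, dim=2)
assert auc(w, relevant, nonrelevant) == 1.0     # separable toy data
```

Note how the loss is defined over pairs rather than per document, which is exactly why a query's loss cannot be written as a sum of individual per-document losses, the first challenge listed in the abstract.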

    Detection and management of redundancy for information retrieval

    The growth of the web, authoring software, and electronic publishing has led to the emergence of a new type of document collection that is decentralised, amorphous, dynamic, and anarchic. In such collections, redundancy is a significant issue. Documents can spread and propagate across such collections without any control or moderation. Redundancy can interfere with the information retrieval process, leading to decreased user amenity in accessing information from these collections, and thus must be effectively managed. The precise definition of redundancy varies with the application. We restrict ourselves to documents that are co-derivative: those that share a common heritage, and hence contain passages of common text. We explore document fingerprinting, a well-known technique for the detection of co-derivative document pairs. Our new lossless fingerprinting algorithm improves the effectiveness of a range of document fingerprinting approaches. We empirically show that our algorithm can be highly effective at discovering co-derivative document pairs in large collections. We study the occurrence and management of redundancy in a range of application domains. On the web, we find that document fingerprinting is able to identify widespread redundancy, and that this redundancy has a significant detrimental effect on the quality of search results. Based on user studies, we suggest that redundancy is most appropriately managed as a postprocessing step on the ranked list and explain how and why this should be done. In the genomic area of sequence homology search, we explain why the existing techniques for redundancy discovery are increasingly inefficient, and present a critique of the current approaches to redundancy management. We show how document fingerprinting with a modified version of our algorithm provides significant efficiency improvements, and propose a new approach to redundancy management based on wildcards. 
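Document fingerprinting in the style described above can be illustrated with a minimal chunk-hashing scheme: hash overlapping word k-grams and compare the resulting sets to flag co-derivative pairs. This keep-every-chunk version is an assumption for illustration; the thesis's lossless algorithm concerns how chunk hashes are selected, which is more refined than shown here.

```python
# Minimal illustration of document fingerprinting: hash overlapping
# word 3-grams ("chunks") and compare fingerprint sets to detect
# co-derivative documents. Real systems store only a selection of
# chunk hashes; this sketch keeps them all for simplicity.

import hashlib

def fingerprint(text, k=3):
    """Set of short hashes of the document's overlapping word k-grams."""
    words = text.lower().split()
    chunks = (" ".join(words[i:i + k]) for i in range(len(words) - k + 1))
    return {hashlib.md5(c.encode()).hexdigest()[:8] for c in chunks}

def resemblance(a, b):
    """Fraction of shared chunk hashes (Jaccard over fingerprints)."""
    fa, fb = fingerprint(a), fingerprint(b)
    return len(fa & fb) / len(fa | fb)

original = "the quick brown fox jumps over the lazy dog near the river"
derived = "the quick brown fox jumps over the lazy dog near the sea"
unrelated = "machine learning models require large labelled corpora"
assert resemblance(original, derived) > 0.5     # co-derivative pair
assert resemblance(original, unrelated) == 0.0  # no shared chunks
```

Because a single edited word only disturbs the k chunks that contain it, co-derivative documents keep a large fraction of hashes in common, which is what makes chunk-level fingerprints far more discriminating than whole-document hashes.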
We demonstrate that our scheme provides the benefits of existing techniques but does not have their deficiencies. Redundancy in distributed information retrieval systems - where different parts of the collection are searched by autonomous servers - cannot be effectively managed using traditional fingerprinting techniques. We thus propose a new data structure, the grainy hash vector, for redundancy detection and management in this environment. We show in preliminary tests that the grainy hash vector is able to accurately detect a good proportion of redundant document pairs while maintaining low resource usage.
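A loose sketch of the grainy-hash-vector idea: compress a document into a short fixed-width vector of very small per-chunk hashes, so that autonomous servers can exchange and compare such vectors cheaply instead of full fingerprints. The packing layout, hash width, and matching rule below are illustrative guesses, not the thesis's exact design.

```python
# Hedged sketch of a grainy hash vector: a fixed number of slots, each
# holding a tiny (few-bit) hash derived from the document's chunks.
# Comparing two vectors slot by slot gives a cheap, lossy estimate of
# shared content. Parameters and layout are invented for illustration.

import zlib

def grainy_hash_vector(text, slots=16, bits=4):
    """Map word 3-grams into `slots` buckets of `bits`-bit hashes."""
    words = text.lower().split()
    vec = [0] * slots
    for i in range(max(len(words) - 2, 1)):
        chunk = " ".join(words[i:i + 3])
        h = zlib.crc32(chunk.encode())          # deterministic chunk hash
        vec[h % slots] = (h >> 8) % (1 << bits) or 1   # 0 marks an empty slot
    return vec

def slot_overlap(v1, v2):
    """Fraction of occupied slots whose small hashes agree."""
    hits = sum(1 for a, b in zip(v1, v2) if a and a == b)
    occupied = sum(1 for a, b in zip(v1, v2) if a or b)
    return hits / occupied if occupied else 0.0

doc = "the quick brown fox jumps over the lazy dog near the river"
copy = "the quick brown fox jumps over the lazy dog near the river"
other = "completely different text about database query expansion methods"
assert slot_overlap(grainy_hash_vector(doc), grainy_hash_vector(copy)) == 1.0
assert grainy_hash_vector(doc) != grainy_hash_vector(other)
```

The appeal in a distributed setting is that a vector of sixteen 4-bit hashes fits in a single 64-bit word, so servers can ship and compare one machine word per document; the price is the lossiness inherent in such small hashes, which the thesis's preliminary tests quantify.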