
    Agregação de ranks baseada em grafos

    Advisor: Ricardo da Silva Torres. Thesis (doctorate), Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: In this work, we introduce a robust graph-based rank aggregation approach capable of combining the results of isolated ranker models in retrieval tasks. The method follows an unsupervised scheme that is independent of how the isolated ranks are formulated. Our approach can incorporate heterogeneous models defined in terms of different ranking criteria, such as those based on textual, image, or hybrid content representations. We reformulate the ad-hoc retrieval problem as graph-based retrieval over fusion graphs, which we propose as a new unified representation model capable of merging multiple ranks and automatically expressing the inter-relationships of retrieval results. By doing so, we show that the retrieval system can benefit from learning the manifold structure of datasets, thus leading to more effective results. Unlike existing approaches, our graph-based aggregation formulation encapsulates contextual information encoded across multiple ranks, which can be used directly for ranking. Our experiments demonstrate that the method reaches top performance, yielding better effectiveness scores than state-of-the-art baseline methods and promoting large gains over the rankers being fused. Another contribution is the extension of the fusion graph solution for efficient rank aggregation. Although previous works are promising with respect to effectiveness, they usually overlook efficiency aspects. We propose an innovative rank aggregation function that is unsupervised, intrinsically multimodal, and targeted at fast retrieval and top effectiveness performance. We introduce the concepts of embedding and indexing graph-based rank-aggregation representation models, and their application to search tasks. Embedding formulations are also proposed for graph-based rank representations. We introduce fusion vectors, a late-fusion representation of objects based on ranks, from which an intrinsically rank-aggregation retrieval model is defined. Next, we present an approach for fast retrieval based on fusion vectors, thus promoting an efficient rank aggregation system. Our method delivers top effectiveness among state-of-the-art related work while addressing an efficiency perspective not yet covered; consistent speedups are achieved over recent baselines on all datasets considered. Derived from the fusion graphs and fusion vectors, we propose rank-based representation models for general prediction problems. The concepts of fusion graphs and fusion vectors are extended to prediction scenarios, where they can be used to build an estimator model that determines whether an input (even multimodal) object refers to a class or not. Experiments in the context of multimodal classification tasks, such as flood detection, show that the proposed solution is highly effective for different detection scenarios involving textual, visual, and multimodal features, yielding better detection results than several state-of-the-art methods. Finally, we investigate the adoption of learning approaches to help optimize the creation of rank-based representation models, in order to maximize their discriminative power and efficiency in prediction and search tasks. Doctorate in Computer Science.
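
    As a rough illustration of the rank aggregation idea above, the sketch below builds a fusion-graph-like structure from two ranked lists and scores objects by their weighted degree. The function names and the edge-weighting scheme are illustrative assumptions, not the thesis' exact formulation.

        from collections import defaultdict
        from itertools import combinations

        def build_fusion_graph(ranked_lists, k=10):
            """Merge several ranked lists into one weighted graph.

            Objects co-occurring near the top of a list are connected;
            weights accumulate across lists (illustrative scheme, not
            the thesis' exact formulation).
            """
            graph = defaultdict(float)
            for ranking in ranked_lists:
                top = ranking[:k]
                for (i, a), (j, b) in combinations(enumerate(top), 2):
                    # Reciprocal-rank style weight: near-top pairs count more.
                    graph[frozenset((a, b))] += 1.0 / (1 + i) + 1.0 / (1 + j)
            return graph

        def fused_scores(graph):
            """Score each object by its weighted degree in the fusion graph."""
            scores = defaultdict(float)
            for pair, w in graph.items():
                for obj in pair:
                    scores[obj] += w
            return sorted(scores.items(), key=lambda kv: -kv[1])

        # Example: fuse a text-based and an image-based ranker.
        text_rank  = ["doc3", "doc1", "doc7", "doc2"]
        image_rank = ["doc1", "doc3", "doc5", "doc7"]
        print(fused_scores(build_fusion_graph([text_rank, image_rank], k=4)))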

    Supporting face familiarization using perceptual and engineering frameworks

    The identification of unfamiliar faces is known to be inferior to the recognition of faces with which we are familiar. This can lead to undesirable consequences such as misidentification. However, there is some evidence to suggest that a brief period of familiarisation can dramatically improve our ability to recognise an unfamiliar individual. Chapter 1 outlines the previous research that has aimed to understand the mechanisms of face processing and to improve the recognition of unfamiliar faces. Three areas that require further investigation are identified, and the experimental work reported in the three empirical chapters addresses these issues. Chapter 2 reports five experiments, using photographs of faces as stimuli, which examined whether a short training exposure promoting stimulus comparison can facilitate recognition of unfamiliar faces (cf. Dwyer & Vladeanu, 2009). The results revealed that, contrary to expectation, any beneficial effects of comparison do not extend to improving discrimination between targets and non-exposed stimuli. The results of Chapter 2 required a return to the mechanisms of perceptual learning thought to underpin the comparison effect. Numerous attempts to unpack this process have relied on experiments that examined the content, but not the location, of the unique features of a stimulus (e.g., Hall, 2003; Mitchell, Nash, & Hall, 2008; Mundy, Honey, & Dwyer, 2007). Chapter 3 used checkerboards as stimuli, manipulating the placement of the unique feature as a way of breaking the perfect correlation between content and location and assessing their relative contributions to perceptual learning. The findings indicated that discrimination between similar stimuli on the basis of exposure can be explained entirely by learning where to look, with no independent effect of learning about particular stimulus features. Chapter 4 returned to the issue of potential methods to improve recognition, and examined the possibility that training using synthesised faces created from a single view and presented at multiple yaw rotations can aid face recognition (Liu, Chai, Shan, Honma, & Osada, 2009). The findings of three experiments strengthen the claim that identifying an individual can be improved using multiple synthesised views generated from a single front view of a face, and suggest that this improvement may be affected by the quality of the synthesised material. In summary, while the results reported within this thesis indicate that comparison between similar faces does not produce an effective way of supporting the recognition of unfamiliar faces, they do indicate that experience with a face and/or artificial faces may be a practical means of facilitating identification.

    Learning from Very Few Samples: A Survey

    Few sample learning (FSL) is significant and challenging in the field of machine learning. The capability to learn and generalize successfully from very few samples is a noticeable demarcation separating artificial intelligence from human intelligence, since humans can readily establish their cognition of novelty from just a single or a handful of examples, whereas machine learning algorithms typically entail hundreds or thousands of supervised samples to guarantee generalization ability. Despite a long history dating back to the early 2000s and widespread attention in recent years with booming deep learning technologies, few surveys or reviews of FSL have been available until now. In this context, we extensively review 300+ FSL papers spanning from the 2000s to 2019 and provide a timely and comprehensive survey of FSL. In this survey, we review the evolution history as well as the current progress of FSL, categorize FSL approaches into generative-model-based and discriminative-model-based kinds in principle, and place particular emphasis on meta-learning-based FSL approaches. We also summarize several recently emerging extensions of FSL and review the latest advances on these topics. Furthermore, we highlight important FSL applications covering many research hotspots in computer vision, natural language processing, audio and speech, reinforcement learning and robotics, data analysis, etc. Finally, we conclude the survey with a discussion on promising trends, in the hope of providing guidance and insights for follow-up research. Comment: 30 pages.
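
    To make the metric-based, meta-learning flavor of FSL concrete, here is a minimal nearest-class-prototype classifier in the spirit of prototypical networks; the toy 2-D "embedding space" stands in for a trained feature extractor, which this sketch assumes rather than implements.

        import numpy as np

        def prototypes(support_x, support_y):
            """Mean embedding per class from a few labelled support samples."""
            classes = np.unique(support_y)
            return classes, np.stack([support_x[support_y == c].mean(axis=0)
                                      for c in classes])

        def classify(query_x, classes, protos):
            """Assign each query to the class with the nearest prototype."""
            # Euclidean distance from every query to every prototype.
            d = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
            return classes[d.argmin(axis=1)]

        # Toy 5-way 1-shot episode in a 2-D embedding space.
        rng = np.random.default_rng(0)
        support_x = rng.normal(size=(5, 2)) * 0.1 + np.arange(5)[:, None]
        support_y = np.arange(5)
        query_x = support_x + rng.normal(scale=0.05, size=support_x.shape)
        classes, protos = prototypes(support_x, support_y)
        print(classify(query_x, classes, protos))  # expected: [0 1 2 3 4]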

    Design and development of a comprehensive data management platform for cytomics: cytomicsDB

    In a cytomics environment, scientists must continuously deal with large volumes of structured and unstructured data; this condition in particular makes interoperability a challenge for any platform developed for cytomics. The CytomicsDB approach is an effort to develop a framework that takes care of standardizing the unstructured data, providing a common data model layer for HTS experiments. This model is also suitable for integration with other systems in cytomics, especially other repositories, which allows the validation of key metadata used in the experiments and thus ensures the reliability of the stored data. Other possible solutions for cytomics data management should take special care in the use of data model standards to enhance collaboration and data sharing in the scientific community. BAPE Erasmus Mundus Program. Computer Systems, Imagery and Media.
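
    A hypothetical sketch of the kind of common data model layer described above; the class and field names are invented for illustration and are not CytomicsDB's actual schema.

        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class Plate:
            """One microtiter plate of an HTS screen (hypothetical model)."""
            barcode: str
            rows: int = 16
            cols: int = 24
            well_measurements: Dict[str, float] = field(default_factory=dict)

        @dataclass
        class HTSExperiment:
            """Structured record unifying an experiment's metadata and plates."""
            experiment_id: str
            cell_line: str          # validated against an external repository
            assay_type: str
            plates: List[Plate] = field(default_factory=list)

        # Usage: build a structured record from otherwise unstructured inputs.
        exp = HTSExperiment("EXP-001", cell_line="HeLa", assay_type="migration")
        exp.plates.append(Plate(barcode="PL-0001"))
        exp.plates[0].well_measurements["A01"] = 0.73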

    Semantic Routed Network for Distributed Search Engines

    Searching for textual information has become an important activity on the web. To satisfy the rising demand and user expectations, search systems should be fast, scalable, and deliver relevant results. To decide which objects should be retrieved, search systems should compare the holistic meanings of queries and text document objects as perceived by humans. Existing techniques do not enable correct comparison of composite holistic meanings like "evidences on role of DR2 gene in development of diabetes in Caucasian population", which is composed of multiple elementary meanings: "evidence", "DR2 gene", etc. Thus these techniques cannot discern objects that have a common set of keywords but convey different meanings. Hence we need new methods to compare composite meanings for superior search quality. In distributed search engines, for scalability, speed, and efficiency, index entries should be systematically distributed across multiple index-server nodes based on the meaning of the objects. Furthermore, queries should be selectively sent to those index nodes which have relevant entries. This requires an overlay Semantic Routed Network which will route messages based on meaning. This network will consist of fast-response networking appliances called semantic routers. These appliances need to: (a) carry out sophisticated meaning comparison computations at high speed; and (b) have the right kind of behavior to automatically organize an optimal index system. This dissertation presents the following artifacts that enable the above requirements: (1) An algebraic theory, a design of a data structure, and related techniques to efficiently compare composite meanings. (2) Algorithms and accelerator architectures for high-speed meaning comparisons inside semantic routers and index-server nodes. (3) An overlay network to deliver search queries to the index nodes based on meanings. (4) Algorithms to construct a self-organizing, distributed meaning-based index system. The proposed techniques can compare composite meanings ~10^5 times faster than equivalent software code and existing hardware designs, while the proposed index organization approach can lead to 33% savings in the number of servers and in power consumption in a model search engine having 700,000 servers. Therefore, using all these techniques, it is possible to design a Semantic Routed Network which has the potential to improve search results and response time while saving resources.
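
    A minimal sketch of the meaning-based routing idea, under strong simplifying assumptions of my own: composite meanings are modelled as sets of elementary meanings, similarity is plain Jaccard overlap, and the router forwards a query to the index nodes whose advertised profiles overlap it most. The dissertation's algebraic comparison theory is far richer than this.

        def jaccard(a, b):
            """Set-overlap similarity between two composite meanings."""
            return len(a & b) / len(a | b) if a | b else 0.0

        def route(query_meaning, node_profiles, top_n=2):
            """Forward a query to the index nodes with the closest profiles."""
            ranked = sorted(node_profiles.items(),
                            key=lambda kv: jaccard(query_meaning, kv[1]),
                            reverse=True)
            return [node for node, _ in ranked[:top_n]]

        # Each index node advertises the elementary meanings it covers.
        nodes = {
            "node-A": {"gene", "diabetes", "evidence"},
            "node-B": {"protein", "folding"},
            "node-C": {"diabetes", "population", "Caucasian"},
        }
        query = {"evidence", "DR2 gene", "diabetes", "Caucasian", "population"}
        print(route(query, nodes))  # -> ['node-C', 'node-A']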

    The Second Hungarian Workshop on Image Analysis: Budapest, June 7-9, 1988.


    Learning Algorithm to Automate Fast Author Name Disambiguation

    ABSTRACT: The worldwide scientific production represents a massive amount of records which can be accessed via numerous databases. Because of the presence of ambiguous records, a time-efficient disambiguation process is required as an essential step in extracting correct information and generating publication statistics. However, the disambiguation task is exhaustive and complex due to the large volume of the databases and the missing data they contain. Currently there is no complete automatic method able to produce satisfactory results for the disambiguation process. Previously, an efficient entity disambiguation application was developed: a supervised cascade algorithm that gives promising results on large bibliographic databases. Although the existing work produces high-quality results within a reasonable processing time, it lacks an efficient choice of metrics, and the structure of the classifiers is determined in a heuristic manner by the analysis of precision and recall errors. Clearly, an automated approach that makes the application flexible and adjustable would directly enhance its usability. Such an approach would help in understanding the importance of each feature classification in the disambiguation process and in selecting the most efficient ones. In this research, we propose a learning algorithm for automating the disambiguation process of this application. The aim of this work is to help employ the most appropriate phonetic algorithm and similarity measures, as well as to introduce a desirable automatic approach in place of a heuristic one. To achieve our goals, we conduct three major steps. First, we address the problem of evaluating phonetic encoding algorithms that can be used in blocking. Six commonly used phonetic encoding algorithms were selected, and specific quantitative evaluation metrics were developed in order to assess their limitations and advantages and to select the best one. Second, we test different string similarity measures and analyze the advantages and disadvantages of each technique. In other words, our second goal is to build an efficient disambiguation method by comparing several edit- and token-based algorithms to improve the blocking method. Finally, using bootstrap aggregating (Bagging) and AdaBoost methods, an algorithm has been developed that employs particle swarm and set cover optimization techniques to design a learning framework that enables automatic ordering of the weak classifiers and determination of their thresholds. Performance comparisons were carried out on real data extracted from the Web of Science (WoS) and SCOPUS bibliographic databases. In summary, this work allows us to draw conclusions about the qualities and weaknesses of each phonetic algorithm and similarity measure from the perspective of our application. We have shown that the NYSIIS phonetic algorithm is a better choice for the blocking step of the disambiguation application. In addition, the Weighting Table-based algorithm outperforms some of the commonly used similarity algorithms in terms of time-efficiency, while producing satisfactory results. Moreover, we proposed a learning method to determine the structure of the disambiguation algorithm automatically.
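
    A minimal sketch of the blocking-then-matching pipeline described above. Soundex stands in for NYSIIS and difflib's ratio stands in for the Weighting Table-based similarity, so the sketch stays self-contained; the threshold and names are illustrative.

        from collections import defaultdict
        from difflib import SequenceMatcher

        def soundex(name):
            """Classic Soundex key; a stand-in for NYSIIS used in blocking."""
            codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
                     **dict.fromkeys("DT", "3"), "L": "4",
                     **dict.fromkeys("MN", "5"), "R": "6"}
            name = name.upper()
            key, last = name[0], codes.get(name[0], "")
            for ch in name[1:]:
                code = codes.get(ch, "")
                if code and code != last:
                    key += code
                if ch not in "HW":          # H/W do not reset the previous code
                    last = code
            return (key + "000")[:4]

        def disambiguate(names, threshold=0.85):
            """Block by phonetic key, then compare names only within a block."""
            blocks = defaultdict(list)
            for n in names:
                blocks[soundex(n.split()[-1])].append(n)
            matches = []
            for group in blocks.values():
                for i, a in enumerate(group):
                    for b in group[i + 1:]:
                        if SequenceMatcher(None, a, b).ratio() >= threshold:
                            matches.append((a, b))
            return matches

        authors = ["J. Smith", "J. Smyth", "John Smith", "A. Schmidt"]
        print(disambiguate(authors))  # -> [('J. Smith', 'J. Smyth')]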

    Mathematical Morphology for Quantification in Biological & Medical Image Analysis

    Mathematical morphology is an established field of image processing first introduced as an application of set and lattice theories. Originally used to characterise particle distributions, mathematical morphology has gone on to become a core tool underpinning such important analysis methods as skeletonisation and the watershed transform. In this thesis, I introduce a selection of new image analysis techniques based on mathematical morphology. Utilising assumptions of shape, I propose a new approach for the enhancement of vessel-like objects in images: the bowler-hat transform. Built upon morphological operations, this approach handles challenges such as junctions successfully and is robust against noise. The bowler-hat transform is shown to give better results than competitor methods on challenging data such as retinal/fundus imagery. Building further on morphological operations, I introduce two novel methods for particle and blob detection. The first is developed in the context of colocalisation, a standard biological assay, and the second, based on Hilbert-Edge Detection And Ranging (HEDAR), addresses nuclei detection and counting in fluorescence microscopy. These methods are shown to produce accurate and informative results for sub-pixel and supra-pixel object counting in complex and noisy biological scenarios. I propose a new approach for the automated extraction and measurement of object thickness for intricate and complicated vessels, such as the brain vasculature in medical images. This pipeline depends on two key technologies: semi-automated segmentation by advanced level-set methods and automatic thickness calculation based on morphological operations. This approach is validated, and results are shown that demonstrate the broad range of challenges posed by these images and the possible limitations of the pipeline. This thesis represents a significant contribution to the field of image processing using mathematical morphology, and the methods within are transferable to a range of complex challenges present across biomedical image analysis.
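
    A minimal sketch of morphological vessel enhancement in the spirit of the bowler-hat transform: the best response over a bank of line openings preserves elongated bright structures, while a disc opening suppresses them, so their difference highlights vessel-like pixels. The published transform uses richer element banks over multiple scales; this simplification is an assumption of the sketch.

        import numpy as np
        from scipy.ndimage import grey_opening

        def line_footprints(length):
            """Four line structuring elements: -, |, and both diagonals."""
            h = np.ones((1, length), bool)
            v = np.ones((length, 1), bool)
            d = np.eye(length, dtype=bool)
            return [h, v, d, np.fliplr(d)]

        def disk_footprint(radius):
            """Boolean disc structuring element of the given radius."""
            y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            return x * x + y * y <= radius * radius

        def bowler_hat_like(image, length=9):
            """Enhance elongated bright structures: line openings preserve
            vessels, the disc opening suppresses them, and their difference
            highlights vessel-like pixels."""
            line_resp = np.max([grey_opening(image, footprint=f)
                                for f in line_footprints(length)], axis=0)
            disk_resp = grey_opening(image, footprint=disk_footprint(length // 2))
            return np.clip(line_resp - disk_resp, 0, None)

        # Toy example: a bright horizontal "vessel" on a dark background.
        img = np.zeros((32, 32))
        img[16, 4:28] = 1.0
        enhanced = bowler_hat_like(img)
        print(enhanced.max())  # vessel pixels respond strongly (~1.0)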