5 research outputs found

    Scientific Digital Data Repositories: Needs and Challenges for Cancer Researchers

    The purpose of this study is to understand the varied data needs of cancer researchers who use light, fluorescence, and electron microscopy to study cancer at the molecular level. It explores which data tools a sample of these researchers currently use to preserve their data for future access, and what they would need in order to deposit their digital research data into digital repositories. Responses from the researchers suggest that they understand the need to preserve their raw and compiled data outside their own laboratories, but they have not fully embraced depositing it in a repository, most likely because they do not fully understand what repositories are and what they provide. To increase the use of repositories by this research community, repositories need to promote themselves better and offer additional services tailored to this community's needs.

    The use of the Kullback-Leibler Divergence and the Generalized Divergence as similarity measures in CBIR systems

    Content-based image retrieval is important for many purposes, such as diagnosing disease from computed tomography scans. The social and economic relevance of image retrieval systems has created the need for their improvement. In this context, content-based image retrieval systems consist of two stages: feature extraction and similarity measurement. The similarity stage remains a challenge because of the wide variety of similarity functions, which can be combined with the different techniques used in the retrieval process and do not always return the most satisfactory results. The functions most commonly used to measure similarity are the Euclidean and Cosine distances, but some researchers have noted limitations of these conventional proximity functions in the similarity-search step. For that reason, the Bregman divergences (Kullback-Leibler and Generalized I-divergence) have attracted attention due to their flexibility in similarity analysis. The aim of this research was therefore to conduct a comparative study of the Bregman divergences against the Euclidean and Cosine functions in the similarity stage of content-based image retrieval, examining the advantages and disadvantages of each function. To this end, a content-based image retrieval system was built with an offline stage and an online stage, using the BSM, FISM, BoVW, and BoVW-SPM approaches. With this system, three groups of experiments were run on the Caltech101, Oxford, and UK-bench databases. The performance of the system under the different similarity functions was assessed with the evaluation measures Mean Average Precision, normalized Discounted Cumulative Gain, precision at k, and precision x recall. The study shows that the Bregman divergences (Kullback-Leibler and Generalized) obtain better results than the Euclidean and Cosine measures, with significant gains for content-based image retrieval.
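    The four proximity functions compared in this work have simple closed forms. Below is a minimal sketch in Python, assuming the image features are non-negative histograms (as produced by BoVW-style pipelines) that have been L1-normalized; the function names, the EPS smoothing constant, and the rank helper are illustrative, not taken from the thesis.

        import numpy as np

        EPS = 1e-12  # guards against log(0) and division by zero on sparse histograms

        def euclidean(p, q):
            # Euclidean (L2) distance between two feature vectors.
            return float(np.linalg.norm(p - q))

        def cosine_distance(p, q):
            # 1 minus cosine similarity; 0 when the vectors point the same way.
            return 1.0 - float(np.dot(p, q)) / (np.linalg.norm(p) * np.linalg.norm(q) + EPS)

        def kullback_leibler(p, q):
            # KL divergence D(p || q) for non-negative, L1-normalized histograms.
            p, q = p + EPS, q + EPS
            return float(np.sum(p * np.log(p / q)))

        def generalized_i_divergence(p, q):
            # Generalized I-divergence, the Bregman divergence generated by
            # x*log(x); it reduces to KL when both vectors sum to 1.
            p, q = p + EPS, q + EPS
            return float(np.sum(p * np.log(p / q)) - np.sum(p) + np.sum(q))

        def rank(query, database, measure=kullback_leibler):
            # Order database images by increasing divergence from the query:
            # smaller value means more similar, so index 0 is the best match.
            scores = [measure(query, image) for image in database]
            return np.argsort(scores)

    Note that KL and the I-divergence are asymmetric (D(p||q) differs from D(q||p)), which is one reason their retrieval behavior can diverge from that of the symmetric Euclidean and Cosine measures.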

    Quality evaluation of digitized historical documents

    This PhD thesis deals with quality evaluation of digitized document images. To measure the quality of a document image, we propose new features dedicated to characterizing the most common degradations found in digitized documents. We also propose to use these features to build models that predict the performance of different types of document analysis algorithms. The features are defined by analyzing the impact of a specific degradation on the results of an algorithm, and they are then used to train statistical regressors that serve as prediction models. The relevance of the proposed features and prediction models is validated in several experiments. The first predicts the performance of eleven binarization algorithms. The second builds an automatic procedure that selects the best-performing binarization method for each image. The third builds a prediction model for two commonly used OCRs as a function of the severity of bleed-through (ink from the recto of a document showing through on the verso). This work on performance prediction is also an opportunity to discuss the scientific problems of creating ground truth and evaluating performance.
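    The predict-then-select procedure can be outlined in a few lines. The sketch below is illustrative only, under assumed interfaces: X holds per-image degradation descriptors (e.g., a bleed-through measure, noise statistics), y_per_method holds each binarization method's measured score (e.g., f-measure against ground truth) on the training images, and scikit-learn's RandomForestRegressor stands in for the statistical regressors used in the thesis.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def train_predictors(X, y_per_method):
            # Fit one performance-prediction regressor per binarization method.
            # X: (n_images, n_descriptors) degradation descriptors
            # y_per_method: {method_name: (n_images,) measured scores}
            return {
                method: RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
                for method, y in y_per_method.items()
            }

        def select_best_method(models, descriptors):
            # Predict every method's score on an unseen image and keep the best:
            # this is the automatic selection step from the second experiment.
            x = np.asarray(descriptors).reshape(1, -1)
            predicted = {method: model.predict(x)[0] for method, model in models.items()}
            best = max(predicted, key=predicted.get)
            return best, predicted

    The same construction carries over to the OCR experiment: replace the per-method binarization scores with OCR accuracy and restrict the descriptors to the bleed-through feature.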