Autoencoding the Retrieval Relevance of Medical Images
Content-based image retrieval (CBIR) of medical images is a crucial task that
can contribute to a more reliable diagnosis if applied to big data. Recent
advances in feature extraction and classification have enormously improved CBIR
results for digital images. However, considering the increasing accessibility
of big data in medical imaging, we are still in need of reducing both memory
requirements and computational expenses of image retrieval systems. This work
proposes to exclude the features of image blocks that exhibit a low encoding
error when learned by an autoencoder. We examine the histogram of
autoencoding errors of image blocks for each image class to decide which
image regions, or roughly what percentage of an image, should be declared
relevant for the retrieval task. This reduces feature dimensionality and
speeds up the retrieval process. To
validate the proposed scheme, we employ local binary patterns (LBP) and support
vector machines (SVM), both well-established approaches in the CBIR
research community. We also use the IRMA dataset with 14,410 x-ray images as
test data. The results show that the dimensionality of annotated feature
vectors can be reduced by up to 50%, resulting in speedups greater than 27%
at the expense of less than a 1% decrease in retrieval accuracy when
validating the precision and recall of the top 20 hits.

Comment: To appear in the proceedings of The 5th International Conference on Image
Processing Theory, Tools and Applications (IPTA'15), Nov 10-13, 2015,
Orléans, France
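The block-filtering idea in the abstract might be sketched as follows. This is a minimal illustration, not the authors' implementation: the `reconstruct` callable is a hypothetical stand-in for a trained autoencoder's encode/decode pass, and the block size and keep fraction are assumed values.

```python
import numpy as np

def block_errors(image, reconstruct, block=16):
    """Per-block reconstruction MSE of non-overlapping image blocks.
    `reconstruct` is a hypothetical stand-in for a trained autoencoder."""
    h, w = image.shape
    errs, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            errs.append(float(np.mean((patch - reconstruct(patch)) ** 2)))
            coords.append((y, x))
    return np.array(errs), coords

def relevant_blocks(errs, coords, keep_fraction=0.5):
    """Keep the blocks with the highest encoding error: blocks the
    autoencoder reconstructs easily are treated as less discriminative,
    and their features are excluded from the retrieval index."""
    thresh = np.quantile(errs, 1.0 - keep_fraction)
    return [c for e, c in zip(errs, coords) if e >= thresh]

def lbp_histogram(patch):
    """256-bin histogram of 8-neighbour LBP codes (a plain-NumPy sketch
    of the local binary patterns used as retrieval features)."""
    c = patch[1:-1, 1:-1]
    neighbours = [patch[:-2, :-2], patch[:-2, 1:-1], patch[:-2, 2:],
                  patch[1:-1, 2:], patch[2:, 2:], patch[2:, 1:-1],
                  patch[2:, :-2], patch[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # Set bit if the neighbour is at least as bright as the centre.
        codes |= (n >= c).astype(np.uint8) << bit
    return np.bincount(codes.ravel(), minlength=256)
```

Features such as the LBP histograms would then be computed only for the retained blocks, shrinking the annotated feature vectors fed to the SVM.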
Combining textual and visual ontologies to solve medical multimodal queries
In order to solve medical multimodal queries, we propose to split the queries into different dimensions using an ontology. We extract both textual and visual terms depending on the ontology dimension they belong to. Based on these terms, we build different sub-queries, each corresponding to one query dimension. We then use Boolean expressions on these sub-queries to filter the entire document collection. The filtered document set is ranked using vector space model techniques. We also combine the ranked lists generated from both the text and image indexes to further improve retrieval performance. We achieved the best overall performance for the Medical Image Retrieval Task in CLEF 2005. These experimental results show that while most queries are better handled by text query processing, since most semantic information is contained in the medical text cases, the textual and visual ontology dimensions are complementary in improving the results during media fusion.
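The pipeline described in this abstract (ontology-based query splitting, Boolean filtering of the collection, and fusion of text and image ranked lists) could be sketched roughly as below. This is a minimal illustration under stated assumptions: the ontology entries, dimension names, and fusion weight are invented for the example, and the vector-space ranking step is abstracted to precomputed (doc_id, score) lists.

```python
# Hypothetical ontology mapping query terms to dimensions; terms and
# dimension names are illustrative, not taken from the paper.
ONTOLOGY = {
    "chest": "anatomy", "lung": "anatomy",
    "x-ray": "modality", "ct": "modality",
    "nodule": "pathology",
}

def split_query(terms):
    """Group query terms into sub-queries, one per ontology dimension."""
    dims = {}
    for t in terms:
        dims.setdefault(ONTOLOGY.get(t, "other"), []).append(t)
    return dims

def boolean_filter(docs, sub_queries):
    """Boolean filtering: a document must match at least one term of
    every dimension (OR within a dimension, AND across dimensions)."""
    return [d for d in docs
            if all(any(t in d["terms"] for t in ts)
                   for ts in sub_queries.values())]

def fuse(text_ranked, image_ranked, alpha=0.7):
    """CombSUM-style weighted fusion of text and image ranked lists;
    each list holds (doc_id, score) pairs, and alpha is an assumed
    text-vs-image weight, not a value reported in the paper."""
    scores = {}
    for doc, s in text_ranked:
        scores[doc] = scores.get(doc, 0.0) + alpha * s
    for doc, s in image_ranked:
        scores[doc] = scores.get(doc, 0.0) + (1.0 - alpha) * s
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

Filtering with AND across dimensions mirrors the Boolean sub-query step in the abstract, and the weighted sum is one simple way to realise the media fusion of the two ranked lists.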