258 research outputs found

    A graph-based approach for the retrieval of multi-modality medical images

    Medical imaging has revolutionised modern medicine and is now an integral aspect of diagnosis and patient monitoring. The development of new imaging devices for a wide variety of clinical cases has spurred an increase in the data volume acquired in hospitals. These large data collections offer opportunities for search-based applications in evidence-based diagnosis, education, and biomedical research. However, conventional search methods that operate on manual annotations are not feasible for this data volume. Content-based image retrieval (CBIR) is an image search technique that uses automatically derived visual features as search criteria and has demonstrable clinical benefits. However, very few studies have investigated the CBIR of multi-modality medical images, which are increasingly central to clinical practice, e.g., combined positron emission tomography and computed tomography (PET-CT) for cancer diagnosis. In this thesis, we propose a new graph-based method for the CBIR of multi-modality medical images. We derive a graph representation that emphasises the spatial relationships between modalities by structurally constraining the graph based on image features, e.g., spatial proximity of tumours and organs. We also introduce a graph similarity calculation algorithm that prioritises the relationships between tumours and related organs. To enable effective human interpretation of retrieved multi-modality images, we also present a user interface that displays graph abstractions alongside complex multi-modality images. Our results demonstrated that our method achieves high precision when retrieving images on the basis of tumour location within organs. User surveys evaluating our proposed UI design showed that it improved the ability of users to interpret and understand the similarity between retrieved PET-CT images. The work in this thesis advances the state of the art by enabling a novel approach to the retrieval of multi-modality medical images.
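    The following is a minimal sketch of the kind of graph representation and weighted similarity the abstract describes: nodes stand for segmented structures (tumours from PET, organs from CT), edges for spatial adjacency, and tumour-organ relationships are weighted more heavily during matching. All class names, labels, and weights are illustrative assumptions, not the thesis implementation.

    from dataclasses import dataclass, field

    @dataclass
    class RegionGraph:
        # node id -> node type ("tumour" or "organ:<name>")
        nodes: dict = field(default_factory=dict)
        # set of (node_id_a, node_id_b) pairs for spatially adjacent regions
        edges: set = field(default_factory=set)

    def edge_labels(g: RegionGraph):
        """Reduce each edge to an unordered pair of node-type labels."""
        return [frozenset((g.nodes[a], g.nodes[b])) for a, b in g.edges]

    def graph_similarity(query: RegionGraph, candidate: RegionGraph,
                         tumour_weight: float = 2.0) -> float:
        """Weighted overlap of edge labels; tumour-organ edges count extra."""
        q, c = edge_labels(query), edge_labels(candidate)
        matched, total = 0.0, 0.0
        for lbl in q:
            w = tumour_weight if any(t == "tumour" for t in lbl) else 1.0
            total += w
            if lbl in c:
                c.remove(lbl)          # each candidate edge matches at most once
                matched += w
        return matched / total if total else 0.0

    # Example: a query image with a tumour adjacent to the liver
    query = RegionGraph(nodes={0: "tumour", 1: "organ:liver", 2: "organ:spleen"},
                        edges={(0, 1), (1, 2)})
    candidate = RegionGraph(nodes={0: "organ:liver", 1: "tumour"},
                            edges={(0, 1)})
    print(graph_similarity(query, candidate))   # tumour-liver edge matched: 2.0 / 3.0 ≈ 0.67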

    Bridging the semantic gap in content-based image retrieval.

    Content-Based Image Retrieval (CBIR) emerged as a new research area for managing large image databases. CBIR involves developing automated methods that use visual features for searching and retrieval. Unfortunately, the performance of most CBIR systems is inherently constrained by low-level visual features because they cannot adequately express the user's high-level concepts. This is known as the semantic gap problem. This dissertation introduces a new approach to CBIR that attempts to bridge the semantic gap. Our approach includes four components. The first one learns a multi-modal thesaurus that associates low-level visual profiles with high-level keywords. This is accomplished through image segmentation, feature extraction, and clustering of image regions. The second component uses the thesaurus to annotate images in an unsupervised way. This is accomplished through fuzzy membership functions to label new regions based on their proximity to the profiles in the thesaurus. The third component consists of an efficient and effective method for fusing the retrieval results from the multi-modal features. Our method is based on learning and adapting fuzzy membership functions to the distribution of the features' distances and assigning a degree of worthiness to each feature. The fourth component provides the user with the option to perform hybrid querying and query expansion. This allows the enrichment of a visual query with textual data extracted from the automatically labeled images in the database. The four components are integrated into a complete CBIR system that can run in three different and complementary modes. The first mode allows the user to query using an example image. The second mode allows the user to specify positive and/or negative sample regions that should or should not be included in the retrieved images. The third mode uses a Graphical Text Interface to allow the user to browse the database interactively using a combination of low-level features and high-level concepts. The proposed system and all of its components and modes are implemented and validated using a large data collection for accuracy, performance, and improvement over traditional CBIR techniques.
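    As an illustration of the second component, the sketch below labels a new image region by computing a fuzzy membership of its feature vector with respect to each profile in a learned multi-modal thesaurus. The Cauchy-style membership function, the scale, and the threshold are assumptions made for illustration; the dissertation's exact functions are not reproduced here.

    import math

    def membership(distance: float, scale: float) -> float:
        """Cauchy-style fuzzy membership: 1 at distance 0, decaying with distance."""
        return 1.0 / (1.0 + (distance / scale) ** 2)

    def annotate_region(feature, thesaurus, scale=1.0, threshold=0.5):
        """Return keywords whose profile is 'close enough' in feature space."""
        labels = []
        for keyword, profile in thesaurus.items():
            d = math.dist(feature, profile)        # Euclidean distance
            mu = membership(d, scale)
            if mu >= threshold:
                labels.append((keyword, round(mu, 3)))
        return sorted(labels, key=lambda kv: -kv[1])

    # Toy thesaurus: keyword -> prototype feature vector (e.g. mean colour of a region cluster)
    thesaurus = {"sky": (0.2, 0.6, 0.9), "grass": (0.3, 0.8, 0.2), "sand": (0.8, 0.7, 0.4)}
    print(annotate_region((0.25, 0.65, 0.85), thesaurus, scale=0.3))   # -> [('sky', 0.923)]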

    Content-Based Image Retrieval Using Self-Organizing Maps


    Cloud-Based Benchmarking of Medical Image Analysis

    Medical imaging

    Giving eyes to ICT!, or How does a computer recognize a cow?

    The system developed by Schouten and other researchers at CWI is based on describing images with the aid of fractal geometry. Human perception turns out to be so efficient partly because it relies heavily on similarities, so it is natural to look for mathematical methods that do the same. Schouten therefore investigated image coding using 'fractals'. Fractals are self-similar geometric figures, built up by repeated transformation (iteration) of a simple basic pattern, which thereby branches out at ever smaller scales. At every level of detail a fractal resembles itself (the Droste effect). With fractals one can quite easily create deceptively realistic images of natural scenes. Fractal image coding assumes that the reverse also holds: an image can be stored efficiently as the basic patterns of a small number of fractals, together with the rule for reconstructing the original image from them. The system developed at CWI in collaboration with researchers from Leuven is based in part on this method. ISBN 906196502
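    Below is a minimal, generic sketch of fractal (PIFS-style) block coding in the spirit of the approach described above: each small range block of an image is approximated by a contractively transformed larger domain block from the same image (here only a brightness scale and offset), and decoding iterates those mappings from an arbitrary start image. Block sizes and the transform family are textbook simplifications, not the CWI/Leuven system.

    import numpy as np

    R, D = 4, 8            # range block size, domain block size (domain is 2x larger)

    def downsample(block):
        """Average 2x2 pixels so an 8x8 domain block matches a 4x4 range block."""
        return block.reshape(R, 2, R, 2).mean(axis=(1, 3))

    def encode(img):
        """For every range block store (domain position, scale s, offset o)."""
        h, w = img.shape
        code = {}
        for ry in range(0, h, R):
            for rx in range(0, w, R):
                rblk = img[ry:ry+R, rx:rx+R].ravel()
                best = None
                for dy in range(0, h - D + 1, D):
                    for dx in range(0, w - D + 1, D):
                        dom = downsample(img[dy:dy+D, dx:dx+D]).ravel()
                        # least-squares fit of rblk ~ s*dom + o
                        A = np.vstack([dom, np.ones_like(dom)]).T
                        (s, o), *_ = np.linalg.lstsq(A, rblk, rcond=None)
                        s = float(np.clip(s, -0.9, 0.9))   # keep the mapping contractive
                        err = np.sum((s * dom + o - rblk) ** 2)
                        if best is None or err < best[0]:
                            best = (err, dy, dx, s, o)
                code[(ry, rx)] = best[1:]
        return code

    def decode(code, shape, iterations=8):
        """Iterate the stored mappings starting from a flat grey image."""
        img = np.full(shape, 128.0)
        for _ in range(iterations):
            new = np.empty(shape)
            for (ry, rx), (dy, dx, s, o) in code.items():
                dom = downsample(img[dy:dy+D, dx:dx+D])
                new[ry:ry+R, rx:rx+R] = s * dom + o
            img = new
        return img

    # Tiny demo on a random 16x16 "image"
    gen = np.random.default_rng(0)
    img = gen.integers(0, 256, (16, 16)).astype(float)
    approx = decode(encode(img), img.shape)
    # reconstruction error (rough for random noise; real images compress far better)
    print("mean abs error:", np.abs(approx - img).mean())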

    Visual thesaurus for color image retrieval using SOM.

    Yip King-Fung. Thesis (M.Phil.), Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 84-89). Abstracts in English and Chinese.
    Table of contents:
    1. Introduction: Background; Motivation; Thesis Organization
    2. A Survey of Content-based Image Retrieval: Text-based Image Retrieval; Content-Based Image Retrieval (Systems, Query Methods, Image Features); Summary
    3. Visual Thesaurus using SOM: Algorithm (Image Representation, Self-Organizing Map); Preliminary Experiment (Feature differences, Labeling differences)
    4. Experiment: Subjects; Apparatus (Systems, Test Databases); Procedure (Description, SOM (text), SOM (image), QBE (text), QBE (image), Questionnaire, Experiment Flow); Results; Discussion
    5. Quantizing Color Histogram: Algorithm (Codebook Generation Phase, Histogram Generation Phase); Experiment (Test Database, Evaluation Methods, Results and Discussion, Summary)
    6. Relevance Feedback: in Text Information Retrieval; in Multimedia Information Retrieval; in Visual Thesaurus
    7. Conclusions: Applications; Future Directions (SOM Generation, Hybrid Architecture)
    References
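    The sketch below illustrates the central mechanism of this thesis: a self-organizing map trained on colour features that serves as a visual thesaurus/codebook, so similar colours end up on neighbouring map nodes and an image or region can be indexed by its best-matching unit. Grid size, learning schedule, and the toy features are assumptions, not the configuration evaluated in the thesis.

    import numpy as np

    def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
        """Train a small 2-D SOM; returns weights of shape (rows, cols, dim)."""
        rng = np.random.default_rng(seed)
        rows, cols = grid
        w = rng.random((rows, cols, data.shape[1]))
        # grid coordinates used by the neighbourhood function
        yy, xx = np.mgrid[0:rows, 0:cols]
        n_steps, step = epochs * len(data), 0
        for _ in range(epochs):
            for x in rng.permutation(data):
                frac = step / n_steps
                lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
                # best-matching unit for this sample
                d = np.linalg.norm(w - x, axis=2)
                by, bx = np.unravel_index(np.argmin(d), d.shape)
                # Gaussian neighbourhood pull towards the sample
                g = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma ** 2))
                w += lr * g[..., None] * (x - w)
                step += 1
        return w

    def best_matching_unit(w, x):
        d = np.linalg.norm(w - x, axis=2)
        return np.unravel_index(np.argmin(d), d.shape)

    # Toy "colour features": mean RGB of images, in [0, 1]
    gen = np.random.default_rng(1)
    features = gen.random((200, 3))
    som = train_som(features)
    print("reddish image maps to node", best_matching_unit(som, np.array([0.9, 0.1, 0.1])))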