3,094 research outputs found

    Vertebra Shape Classification using MLP for Content-Based Image Retrieval

    A desirable content-based image retrieval (CBIR) system would classify extracted image features to support some form of semantic retrieval. The Lister Hill National Center for Biomedical Communications, an intramural R&D division of the National Library of Medicine (NLM), maintains an archive of digitized X-rays of the cervical and lumbar spine taken as part of the second National Health and Nutrition Examination Survey (NHANES II). It is our goal to provide shape-based access to digitized X-rays, including retrieval on automatically detected and classified pathology, e.g., anterior osteophytes. This is done using radius-of-curvature analysis along the anterior portion and morphological analysis for quantifying protrusion regions along the vertebra boundary. Experimental results are presented for the classification of 704 cervical spine vertebrae by evaluating the features using a multi-layer perceptron (MLP) based approach. In this paper, we describe the design and current status of the CBIR system and the role of neural networks in the design of an effective multimedia information retrieval system.
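
    A minimal sketch of the classification step this abstract describes, using scikit-learn's MLPClassifier on synthetic stand-in data; the paper's actual radius-of-curvature and protrusion features are not reproduced here, so the feature dimensionality and labels below are assumptions:

        # Synthetic stand-in data: one row per vertebra, columns such as
        # radius-of-curvature statistics along the anterior boundary (assumed).
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.random((704, 12))          # 704 vertebrae, 12 shape features (assumed)
        y = rng.integers(0, 2, size=704)   # 0 = normal, 1 = anterior osteophyte (assumed)

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=0)

        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
        clf.fit(X_train, y_train)
        print("test accuracy:", clf.score(X_test, y_test))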

    Overview of the 2005 cross-language image retrieval track (ImageCLEF)

    The purpose of this paper is to outline efforts from the 2005 CLEF cross-language image retrieval campaign (ImageCLEF). The aim of this CLEF track is to explore the use of both text and content-based retrieval methods for cross-language image retrieval. Four tasks were offered in the ImageCLEF track: ad-hoc retrieval from a historic photographic collection, ad-hoc retrieval from a medical collection, an automatic image annotation task, and a user-centered (interactive) evaluation task that is explained in the iCLEF summary. Twenty-four research groups from a variety of backgrounds and nationalities (14 countries) participated in ImageCLEF. In this paper we describe the ImageCLEF tasks and submissions from participating groups, and summarise the main findings.

    On the Use of XML in Medical Imaging Web-Based Applications

    The rapid growth of digital technology in medical fields over recent years has increased the need for applications able to manage patient medical records, imaging data, and chart information. Web-based applications are implemented to link digital databases, storage and transmission protocols, management of large volumes of data, and security concepts, allowing information to be read, analyzed, and even used for remote diagnosis away from the medical center where it was acquired. The objective of this paper is to analyze the use of the Extensible Markup Language (XML) in web-based applications that aid in the diagnosis or treatment of patients, considering how this format allows indexing and exchanging the huge amount of information associated with each medical case. The purpose of this paper is to point out the main advantages and drawbacks of XML technology in order to provide key ideas for future web-based applications.
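
    As an illustration of the approach the abstract describes, a minimal sketch of the kind of XML exam record such an application might exchange, parsed with Python's standard library; every element and attribute name here is hypothetical:

        import xml.etree.ElementTree as ET

        record = """
        <examRecord>
          <patient id="P-0042"><name>Jane Doe</name></patient>
          <study modality="MR" date="2005-03-14">
            <image uri="images/mr_0042_001.dcm"/>
            <finding code="C71.9">lesion, left temporal lobe</finding>
          </study>
        </examRecord>
        """

        root = ET.fromstring(record.strip())
        for finding in root.iter("finding"):   # pull indexed findings out of the record
            print(finding.get("code"), "-", finding.text)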

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.

    A unified learning framework for content based medical image retrieval using a statistical model

    This paper presents a unified learning framework for heterogeneous medical image retrieval based on a Full Range Autoregressive Model (FRAR) with a Bayesian approach (BA). Using the unified framework, the color autocorrelogram, edge orientation autocorrelogram (EOAC), and micro-texture information of medical images are extracted. The EOAC is constructed in HSV color space to circumvent the loss of edges due to spectral and chromatic variations. The proposed system employs an adaptive binary tree based support vector machine (ABTSVM) for efficient and fast classification of medical images in feature vector space. The Manhattan distance measure of order one is used to perform similarity measurement in the classified and indexed feature vector space. Precision and recall (PR) are used as the performance measures, and a short-term relevance feedback (RF) mechanism is adopted to reduce the semantic gap. Experimental results reveal that the retrieval performance of the proposed system on a heterogeneous medical image database is better than that of existing systems, at low computational and storage cost.
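
    A minimal sketch of the similarity step only, ranking indexed feature vectors by the Manhattan (order-one Minkowski) distance the abstract names; the FRAR/EOAC feature extraction is not reproduced, so random vectors stand in for the indexed feature space:

        import numpy as np

        def manhattan_rank(query, index, top_k=5):
            """Indices of the top_k indexed vectors closest to the query in L1 distance."""
            dists = np.abs(index - query).sum(axis=1)   # order-one Minkowski distance
            return np.argsort(dists)[:top_k]

        index = np.random.rand(1000, 64)   # stand-in for the classified, indexed feature space
        query = np.random.rand(64)
        print(manhattan_rank(query, index))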

    A graph-based approach for the retrieval of multi-modality medical images

    Medical imaging has revolutionised modern medicine and is now an integral aspect of diagnosis and patient monitoring. The development of new imaging devices for a wide variety of clinical cases has spurred an increase in the data volume acquired in hospitals. These large data collections offer opportunities for search-based applications in evidence-based diagnosis, education, and biomedical research. However, conventional search methods that operate upon manual annotations are not feasible for this data volume. Content-based image retrieval (CBIR) is an image search technique that uses automatically derived visual features as search criteria and has demonstrable clinical benefits. However, very few studies have investigated the CBIR of multi-modality medical images, which are making a monumental impact in healthcare, e.g., combined positron emission tomography and computed tomography (PET-CT) for cancer diagnosis. In this thesis, we propose a new graph-based method for the CBIR of multi-modality medical images. We derive a graph representation that emphasises the spatial relationships between modalities by structurally constraining the graph based on image features, e.g., the spatial proximity of tumours and organs. We also introduce a graph similarity calculation algorithm that prioritises the relationships between tumours and related organs. To enable effective human interpretation of retrieved multi-modality images, we also present a user interface that displays graph abstractions alongside complex multi-modality images. Our results demonstrated that our method achieved high precision when retrieving images on the basis of tumour location within organs. The evaluation of our proposed UI design by user surveys revealed that it improved the ability of users to interpret and understand the similarity between retrieved PET-CT images. The work in this thesis advances the state of the art by enabling a novel approach to the retrieval of multi-modality medical images.
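
    A toy sketch of the general idea, assuming image graphs are reduced to sets of (node, node, relation) edges and that tumour-organ edges are weighted more heavily; the weighting scheme and edge encoding here are invented for illustration and are not the thesis's algorithm:

        def graph_similarity(g_a, g_b, tumour_weight=2.0):
            """Weighted Jaccard overlap of two edge sets; tumour edges count extra."""
            def weight(edge):
                node_a, node_b, _relation = edge
                return tumour_weight if "tumour" in (node_a, node_b) else 1.0
            shared = sum(weight(e) for e in g_a & g_b)
            total = sum(weight(e) for e in g_a | g_b)
            return shared / total if total else 0.0

        # Hypothetical graphs derived from two PET-CT studies.
        g_query = {("tumour", "left_lung", "inside"), ("left_lung", "mediastinum", "adjacent")}
        g_match = {("tumour", "left_lung", "inside"), ("left_lung", "heart", "adjacent")}
        print(graph_similarity(g_query, g_match))   # scores agreement on tumour location highly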

    Feedback-Driven Radiology Exam Report Retrieval with Semantics

    Clinical documents are vital resources for radiologists to gain a better understanding of patient history. The use of clinical documents can complement the often brief reasons for exams that are provided by physicians, enabling more informed diagnoses. With the large number of study exams that radiologists have to perform on a daily basis, it is too time-consuming for radiologists to sift through each patient's clinical documents. It is therefore important to provide a capability that can present contextually relevant clinical documents and, at the same time, satisfy the diverse information needs of radiologists from different specialties. In this work, we propose a knowledge-based semantic similarity approach that uses domain-specific relationships such as part-of along with taxonomic relationships such as is-a to identify relevant radiology exam records. Our approach also incorporates explicit relevance feedback to personalize radiologists' information needs. We evaluated our approach on a corpus of 6,265 radiology exam reports through study sessions with radiologists and demonstrated that the retrieval performance of our approach yields an improvement of 5% over the baseline. We further computed intra-class and inter-class similarities using a subset of 2,384 reports spanning 10 exam codes. Our results show that intra-class similarities are always higher than inter-class similarities, and our approach obtained a 6% improvement in intra-class similarity over the baseline. These results suggest that the use of domain-specific relationships together with relevance feedback provides significant value in improving the accuracy of the retrieval of radiology exam reports.
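
    A toy sketch of a knowledge-based similarity of this kind, computed as an inverse shortest-path score over a miniature concept graph mixing is-a and part-of edges; the concepts, edges, and scoring below are illustrative assumptions, not the paper's ontology or measure:

        from collections import deque

        # Miniature concept graph; every concept and edge is hypothetical.
        EDGES = [
            ("cervical vertebra", "vertebra", "is-a"),
            ("lumbar vertebra", "vertebra", "is-a"),
            ("vertebra", "spine", "part-of"),
            ("spine", "musculoskeletal system", "part-of"),
        ]

        def neighbours(node):
            for a, b, _relation in EDGES:   # treat both is-a and part-of as traversable
                if a == node:
                    yield b
                elif b == node:
                    yield a

        def similarity(c1, c2):
            """1 / (1 + shortest path length) between concepts; 0 if unconnected."""
            seen, frontier = {c1}, deque([(c1, 0)])
            while frontier:
                node, dist = frontier.popleft()
                if node == c2:
                    return 1.0 / (1.0 + dist)
                for nxt in neighbours(node):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, dist + 1))
            return 0.0

        print(similarity("cervical vertebra", "lumbar vertebra"))   # 0.33: two hops apart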

    Generating semantically enriched diagnostics for radiological images using machine learning

    Development of Computer Aided Diagnostic (CAD) tools to aid radiologists in pathology detection and decision making relies considerably on manually annotated images. With the advancement of deep learning techniques for CAD development, these expert annotations no longer need to be hand-crafted; however, deep learning algorithms require large amounts of data in order to generalise well. One way to access large volumes of expert-annotated data is through radiological exams consisting of images and reports. Using past radiological exams obtained from hospital archiving systems has many advantages: they are expert annotations available in large quantities, covering a population-representative variety of pathologies, and they provide additional context to pathology diagnoses, such as anatomical location and severity. Learning to auto-generate such reports from images presents many challenges, such as the difficulty of representing and generating long, unstructured textual information, accounting for spelling errors and repetition or redundancy, and the inconsistency across different annotators. In this thesis, the problem of learning to automate disease detection from radiological exams is approached from three directions. Firstly, a report generation model is developed such that it is conditioned on radiological image features. Secondly, a number of approaches are explored aimed at extracting diagnostic information from free-text reports. Finally, an alternative approach to image latent space learning from the current state of the art is developed that can be applied to accelerated image acquisition.