
    Overview of the 2005 cross-language image retrieval track (ImageCLEF)

    The purpose of this paper is to outline efforts from the 2005 CLEF cross-language image retrieval campaign (ImageCLEF). The aim of this CLEF track is to explore the use of both text and content-based retrieval methods for cross-language image retrieval. Four tasks were offered in the ImageCLEF track: ad-hoc retrieval from an historic photographic collection, ad-hoc retrieval from a medical collection, an automatic image annotation task, and a user-centered (interactive) evaluation task that is explained in the iCLEF summary. 24 research groups from a variety of backgrounds and nationalities (14 countries) participated in ImageCLEF. In this paper we describe the ImageCLEF tasks, the submissions from participating groups, and summarise the main findings.

    Vertebra Shape Classification using MLP for Content-Based Image Retrieval

    A desirable content-based image retrieval (CBIR) system would classify extracted image features to support some form of semantic retrieval. The Lister Hill National Center for Biomedical Communications, an intramural R&D division of the National Library of Medicine (NLM), maintains an archive of digitized X-rays of the cervical and lumbar spine taken as part of the second National Health and Nutrition Examination Survey (NHANES II). It is our goal to provide shape-based access to digitized X-rays, including retrieval on automatically detected and classified pathology, e.g., anterior osteophytes. This is done using radius-of-curvature analysis along the anterior portion, and morphological analysis for quantifying protrusion regions, along the vertebra boundary. Experimental results are presented for the classification of 704 cervical spine vertebrae by evaluating the features using a multi-layer perceptron (MLP) based approach. In this paper, we describe the design and current status of the CBIR system and the role of neural networks in the design of an effective multimedia information retrieval system.
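
    The classification step described above lends itself to a short illustration. Below is a minimal sketch using scikit-learn's MLPClassifier on synthetic stand-ins for the vertebra shape features; the feature dimensionality, the labels, and the network size are assumptions for illustration, not the paper's actual setup.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import accuracy_score

        # Synthetic stand-ins for per-vertebra shape features (e.g. radius-of-curvature
        # samples and protrusion measurements); real features would come from the X-rays.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(704, 12))        # 704 vertebrae, 12 hypothetical features
        y = rng.integers(0, 2, size=704)      # illustrative labels: 0 = normal, 1 = anterior osteophyte

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=0, stratify=y)

        # Small multi-layer perceptron in the spirit of the MLP stage described above.
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
        clf.fit(X_train, y_train)
        print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))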

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    A unified learning framework for content based medical image retrieval using a statistical model

    This paper presents a unified learning framework for heterogeneous medical image retrieval based on a Full Range Autoregressive Model (FRAR) with the Bayesian approach (BA). Using the unified framework, the color autocorrelogram, edge orientation autocorrelogram (EOAC) and micro-texture information of medical images are extracted. The EOAC is constructed in HSV color space to circumvent the loss of edges due to spectral and chromatic variations. The proposed system employs an adaptive binary tree based support vector machine (ABTSVM) for efficient and fast classification of medical images in feature vector space. The Manhattan distance measure of order one is used to perform similarity measurement in the classified and indexed feature vector space. Precision and recall (PR) are used to measure the performance of the proposed system. A short-term relevance feedback (RF) mechanism is also adopted to reduce the semantic gap. The experimental results reveal that the retrieval performance of the proposed system for a heterogeneous medical image database is better than that of existing systems, at low computational and storage cost.
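
    As an illustration of the retrieval step above, the sketch below ranks a database of feature vectors by the order-one Manhattan distance to a query and reports precision and recall at k. The feature vectors and class labels are random placeholders, not the FRAR/EOAC features or the ABTSVM index of the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        db = rng.random((500, 64))              # placeholder feature vectors for 500 images
        labels = rng.integers(0, 5, size=500)   # placeholder semantic classes
        query, query_label = db[0], labels[0]

        # Manhattan (L1) distance between the query and every database vector.
        dist = np.abs(db - query).sum(axis=1)
        order = np.argsort(dist)[1:]            # nearest first, skipping the query itself

        k = 10
        retrieved = labels[order[:k]]
        relevant = np.sum(labels == query_label) - 1          # exclude the query itself
        precision_at_k = np.mean(retrieved == query_label)
        recall_at_k = np.sum(retrieved == query_label) / relevant
        print(f"P@{k} = {precision_at_k:.2f}, R@{k} = {recall_at_k:.2f}")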

    Data fusion techniques for biomedical informatics and clinical decision support

    Data fusion can be used to combine multiple data sources or modalities to facilitate enhanced visualization, analysis, detection, estimation, or classification. Data fusion can be applied at the raw-data, feature-based, and decision-based levels. Data fusion applications have been developed in areas such as statistics, computer vision, and other branches of machine learning, and fusion has been employed in a variety of realistic scenarios such as medical diagnosis, clinical decision support, and structural health monitoring. This dissertation includes investigation and development of methods to perform data fusion for cervical intraepithelial neoplasia (CIN) and a clinical decision support system. The general framework for these applications includes image processing followed by feature development and classification of the detected region of interest (ROI). Image processing methods such as k-means clustering based on color information, dilation, erosion, and centroid-locating methods were used for ROI detection. The features extracted include texture, color, nuclei-based, and triangle features. Analysis and classification were performed using feature- and decision-level data fusion techniques such as support vector machines, statistical methods such as logistic regression and linear discriminant analysis, and voting algorithms.
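
    Decision-level fusion as described above can be sketched as a simple majority vote over several classifiers. The example below combines an SVM, logistic regression, and linear discriminant analysis with scikit-learn's VotingClassifier; the feature vectors are synthetic placeholders rather than the texture, color, nuclei-based, or triangle features of the dissertation.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.linear_model import LogisticRegression
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.ensemble import VotingClassifier

        rng = np.random.default_rng(2)
        X = rng.normal(size=(300, 20))                  # placeholder ROI feature vectors
        y = (X[:, :3].sum(axis=1) > 0).astype(int)      # synthetic labels with some signal

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # Decision-level fusion: each classifier casts a vote, the majority wins.
        fusion = VotingClassifier(
            estimators=[
                ("svm", SVC(kernel="rbf", gamma="scale")),
                ("logreg", LogisticRegression(max_iter=1000)),
                ("lda", LinearDiscriminantAnalysis()),
            ],
            voting="hard",
        )
        fusion.fit(X_tr, y_tr)
        print("fused accuracy:", fusion.score(X_te, y_te))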

    Benchmarking Encoder-Decoder Architectures for Biplanar X-ray to 3D Shape Reconstruction

    Various deep learning models have been proposed for 3D bone shape reconstruction from two orthogonal (biplanar) X-ray images. However, it is unclear how these models compare against each other, since they are evaluated on different anatomies, cohorts, and (often privately held) datasets. Moreover, the impact of commonly optimized image-based segmentation metrics such as the Dice score on the estimation of clinical parameters relevant to 2D-3D bone shape reconstruction is not well known. To move closer toward clinical translation, we propose a benchmarking framework that evaluates tasks relevant to real-world clinical scenarios, including reconstruction of fractured bones, bones with implants, robustness to population shift, and error in estimating clinical parameters. Our open-source platform provides reference implementations of 8 models (many of whose implementations were not publicly available), APIs to easily collect and preprocess 6 public datasets, and the implementation of automatic clinical parameter and landmark extraction methods. We present an extensive evaluation of the 8 2D-3D models on equal footing using the 6 public datasets, comprising images for four different anatomies. Our results show that attention-based methods that capture global spatial relationships tend to perform better across all anatomies and datasets; performance on clinically relevant subgroups may be overestimated without disaggregated reporting; ribs are substantially more difficult to reconstruct than the femur, hip, and spine; and a Dice score improvement does not always bring a corresponding improvement in the automatic estimation of clinically relevant parameters. Comment: accepted to NeurIPS 202
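
    Since the benchmark's headline segmentation metric is the Dice score, the sketch below shows how it is commonly computed for binary volumes, together with the kind of disaggregated (per-subgroup) reporting the abstract argues for. The masks and subgroup labels are synthetic, not drawn from the benchmark's datasets.

        import numpy as np

        def dice_score(pred, target, eps=1e-7):
            """Dice coefficient between two binary 3D masks."""
            pred, target = pred.astype(bool), target.astype(bool)
            inter = np.logical_and(pred, target).sum()
            return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

        rng = np.random.default_rng(3)
        # Synthetic predicted / ground-truth volumes for a handful of cases.
        cases = [(rng.random((32, 32, 32)) > 0.5, rng.random((32, 32, 32)) > 0.5)
                 for _ in range(6)]
        subgroup = ["fractured", "implant", "healthy", "healthy", "fractured", "implant"]

        scores = np.array([dice_score(p, t) for p, t in cases])
        print("overall mean Dice:", round(scores.mean(), 3))
        # Disaggregated reporting over clinically relevant subgroups.
        for g in sorted(set(subgroup)):
            idx = [i for i, s in enumerate(subgroup) if s == g]
            print(f"  {g}: {scores[idx].mean():.3f}")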

    3DBGrowth: volumetric vertebrae segmentation and reconstruction in magnetic resonance imaging

    Segmentation of medical images is critical for making several processes of analysis and classification more reliable. With the growing number of people presenting back pain and related problems, semi-automatic segmentation and 3D reconstruction of vertebral bodies has become even more important to support decision making. A 3D reconstruction allows a fast and objective analysis of each vertebra's condition, which may play a major role in surgical planning and in evaluating suitable treatments. In this paper, we propose 3DBGrowth, which extends the efficient Balanced Growth method for 2D images to 3D reconstruction. We also take advantage of the slope coefficient from the annotation time to reduce the total number of annotated slices, reducing the time spent on manual annotation. We show experimental results on a representative dataset with 17 MRI exams demonstrating that our approach significantly outperforms the competitors and that, on average, only 37% of the total slices with vertebral body content must be annotated without losing performance/accuracy. Compared to the state-of-the-art methods, we have achieved a Dice score gain of over 5% with comparable processing time. Moreover, 3DBGrowth works well with imprecise seed points, which reduces the time spent on manual annotation by the specialist. Comment: This is a pre-print of an article published in Computer-Based Medical Systems. The final authenticated version is available online at: https://doi.org/10.1109/CBMS.2019.0009
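
    For readers unfamiliar with growth-based segmentation, the sketch below shows a generic 2D region growing from a single seed point with an intensity tolerance. This is not the Balanced Growth or 3DBGrowth algorithm itself, only a simplified illustration of the general idea of growing a segmentation mask outward from (possibly imprecise) seed points.

        import numpy as np
        from collections import deque

        def region_grow(image, seed, tol=0.1):
            """Grow a binary mask from `seed` over 4-connected pixels whose
            intensity stays within `tol` of the seed intensity."""
            h, w = image.shape
            seed_val = image[seed]
            mask = np.zeros((h, w), dtype=bool)
            mask[seed] = True
            queue = deque([seed])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                            and abs(image[nr, nc] - seed_val) <= tol:
                        mask[nr, nc] = True
                        queue.append((nr, nc))
            return mask

        # Toy image: a bright square on a dark background.
        img = np.zeros((64, 64))
        img[20:40, 20:40] = 1.0
        print(region_grow(img, (30, 30), tol=0.2).sum())   # ~400 pixels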

    Content based retrieval of PET neurological images

    Medical image management has posed challenges to many researchers, especially when the images have to be indexed and retrieved using visual content that is meaningful to clinicians. In this study, an image retrieval system has been developed for 3D brain PET (positron emission tomography) images. It has been found that PET neurological images can be retrieved based upon their diagnostic status using only data pertaining to their content, predominantly the visual content. During the study, PET scans are spatially normalized using existing techniques, and their visual data is quantified. The mid-sagittal plane of each individual 3D PET scan is found and then utilized in the detection of abnormal asymmetries, such as tumours or physical injuries. All the asymmetries detected are referenced to the Talairach and Tournoux anatomical atlas. The Cartesian co-ordinates in Talairach space of a detected lesion are employed, along with the associated anatomical structure(s), as the indices within the content-based image retrieval system. The anatomical atlas is then also utilized to isolate distinct anatomical areas that are related to a number of neurodegenerative disorders. After segmentation of the anatomical regions of interest, algorithms are applied to characterize the texture of brain intensity using Gabor filters and to elucidate the mean index ratio of activation levels. These measurements are combined to produce a single feature vector that is incorporated into the content-based image retrieval system. Experimental results on images with known diagnoses show that physical lesions such as head injuries and tumours can be, to a certain extent, detected correctly. Images with correctly detected and measured lesions are then retrieved from the database when a query pertains to the measured locale. Images with neurodegenerative disorder patterns have been indexed and retrieved via texture-based features. Retrieval accuracy is increased, for images from patients diagnosed with dementia, by combining the texture feature and the mean index ratio value.
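
    The Gabor-filter texture step mentioned above can be illustrated with skimage.filters.gabor over a small bank of orientations. The input here is a random 2D array standing in for a normalized PET slice, and the frequency and orientation choices are arbitrary, not those used in the study.

        import numpy as np
        from skimage.filters import gabor

        rng = np.random.default_rng(4)
        slice_2d = rng.random((128, 128))       # placeholder for a spatially normalized PET slice

        features = []
        for theta in np.linspace(0, np.pi, 4, endpoint=False):   # 4 orientations
            real, imag = gabor(slice_2d, frequency=0.2, theta=theta)
            mag = np.sqrt(real ** 2 + imag ** 2)
            features.extend([mag.mean(), mag.var()])             # mean / variance per orientation

        feature_vector = np.array(features)     # 8-dimensional texture descriptor
        print(feature_vector)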

    Framework for progressive segmentation of chest radiograph for efficient diagnosis of inert regions

    Segmentation is one of the most essential steps required to identify the inert object in the chest x-ray. A review of existing segmentation techniques for the chest x-ray as well as other vital organs was performed. The main objective was to find whether existing systems offer accuracy at the cost of recursive and complex operations. The proposed system introduces a framework that offers a good balance between computational performance and segmentation performance. Given an input chest x-ray, the system offers a progressive search for similar images on the basis of a similarity score with the queried image. A region-based shape descriptor is applied to extract the features used to identify the lung region within the thoracic region, followed by contour adjustment. The final segmentation outcome shows accurate identification and segmentation of the apical and costophrenic regions of the lung. Comparative analysis showed that the proposed system offers better segmentation performance than existing systems.
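
    As an illustration of a region-based shape descriptor of the kind mentioned above, the sketch below computes Hu moment invariants for binary lung-like masks with scikit-image and ranks them by L1 distance to a query. The masks are synthetic ellipses, and Hu moments are one common choice of region descriptor, not necessarily the descriptor used by the authors.

        import numpy as np
        from skimage.draw import ellipse
        from skimage.measure import moments_central, moments_normalized, moments_hu

        def hu_descriptor(mask):
            """Seven Hu moment invariants of a binary region (log-scaled)."""
            mu = moments_central(mask.astype(float))
            hu = moments_hu(moments_normalized(mu))
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

        def make_mask(ry, rx, shape=(128, 128)):
            mask = np.zeros(shape, dtype=np.uint8)
            rr, cc = ellipse(64, 64, ry, rx, shape=shape)
            mask[rr, cc] = 1
            return mask

        query = hu_descriptor(make_mask(40, 20))       # elongated "lung-like" region
        database = [make_mask(40, 22), make_mask(30, 30), make_mask(20, 45)]
        scores = [np.abs(hu_descriptor(m) - query).sum() for m in database]
        print("ranking, most similar first:", np.argsort(scores))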