Consistency and Standardization of Color in Medical Imaging: a Consensus Report
This article summarizes the consensus reached at the Summit on Color in Medical Imaging held at the Food and Drug Administration (FDA) on May 8–9, 2013, co-sponsored by the FDA and ICC (International Color Consortium). The purpose of the meeting was to gather information on how color is currently handled by medical imaging systems, to identify areas where there is a need for improvement, to define objective requirements, and to facilitate consensus development of best practices. Participants were asked to identify areas of concern and unmet needs. This summary documents the topics that were discussed at the meeting and the recommendations that were made by the participants. Key areas identified where improvements in color would provide immediate tangible benefits were digital microscopy, telemedicine, medical photography (particularly ophthalmic and dental photography), and display calibration. Work in these and other related areas has been started within several professional groups, including the creation of the ICC Medical Imaging Working Group.
An Improved Approach for Contrast Enhancement of Spinal Cord Images based on Multiscale Retinex Algorithm
This paper presents a new approach for contrast enhancement of spinal cord medical images based on a multirate scheme incorporated into a multiscale retinex algorithm. The proposed work uses the HSV color space, since HSV separates color details from intensity. Enhancement is achieved by downsampling the original image into five versions, namely tiny, small, medium, fine, and normal scale; each version, when independently enhanced and reconstructed, yields a marked improvement in visual quality. Contrast stretching and MultiScale Retinex (MSR) techniques are then applied to enhance each scaled version of the image, and the final enhanced image is obtained by efficiently combining these scales into a composite result. The efficiency of the proposed algorithm is validated using a wavelet energy metric in the wavelet domain. Images reconstructed with the proposed method highlight details (edges and tissues), reduce image noise (Gaussian and speckle), and improve overall contrast. The algorithm also enhances sharp edges of the tissue surrounding the spinal cord regions, which is useful for diagnosis of spinal cord lesions. Elaborate experiments conducted on several medical images show that the enhanced images are of good quality and compare favorably with other published methods. (13 pages, 6 figures; International Journal of Imaging and Robotics. arXiv admin note: text overlap with arXiv:1406.571)
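The retinex stage described above can be illustrated with a minimal single-channel sketch: MSR is the average, over several Gaussian surround scales, of the log difference between the image and its blur, followed by contrast stretching. This is a simplified stand-in assuming a plain numpy pipeline, not the paper's multirate five-scale scheme; the `sigmas` values are illustrative.

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel, normalized to sum to 1
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur via 1-D convolutions along rows, then columns
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def multiscale_retinex(v, sigmas=(15.0, 80.0, 250.0)):
    """MSR on an intensity channel (e.g. V of HSV):
    average of log(I) - log(blur_sigma(I)), then stretched to [0, 255].
    A minimal sketch; the sigmas are illustrative, not from the paper."""
    v = v.astype(float) + 1.0                 # avoid log(0)
    msr = np.zeros_like(v)
    for s in sigmas:
        msr += np.log(v) - np.log(blur(v, s) + 1e-6)
    msr /= len(sigmas)
    msr = (msr - msr.min()) / (np.ptp(msr) + 1e-12)   # contrast stretch
    return (255 * msr).astype(np.uint8)
```

In a full HSV pipeline, only the V channel would be processed this way and then recombined with the untouched H and S channels.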
Depth mapping of integral images through viewpoint image extraction with a hybrid disparity analysis algorithm
Integral imaging is a technique capable of displaying 3-D images with continuous parallax in full natural color, and it is one of the most promising methods for producing smooth 3-D images. Extracting depth information from integral images has applications ranging from remote inspection, robotic vision, medical imaging, and virtual reality to content-based image coding and manipulation for integral-imaging-based 3-D TV. This paper presents a method of generating a depth map from unidirectional integral images through viewpoint image extraction, using a hybrid disparity analysis algorithm that combines multi-baseline, neighbourhood-constraint, and relaxation strategies. It is shown that a depth map with few areas of uncertainty can be obtained from both computer-generated and photographically generated integral images using this approach. Acceptable depth maps can also be achieved from photographically captured integral images containing complicated object scenes.
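The core of any disparity analysis stage is block matching between extracted viewpoint images. The toy sketch below shows the basic idea on a single scanline pair using a sum-of-absolute-differences (SAD) cost; it is an illustrative stand-in, not the paper's hybrid multi-baseline/relaxation algorithm, and the window and disparity limits are assumptions.

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, win=3):
    """Toy 1-D block matcher: for each column x in `left`, find the shift d
    in [0, max_disp] minimizing the sum of absolute differences between a
    window around left[x] and a window around right[x - d]."""
    n = len(left)
    half = win // 2
    disp = np.zeros(n, dtype=int)
    for x in range(half, n - half):
        best, best_d = float("inf"), 0
        for d in range(max_disp + 1):
            if x - d - half < 0:
                break                              # window would fall off the image
            cost = np.abs(left[x - half:x + half + 1].astype(float)
                          - right[x - d - half:x - d + half + 1].astype(float)).sum()
            if cost < best:
                best, best_d = cost, d
        disp[x] = best_d
    return disp
```

A multi-baseline scheme would aggregate such costs over several viewpoint pairs before selecting the disparity, which is what makes the resulting depth map robust in weakly textured regions.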
Automatic Classification of Bright Retinal Lesions via Deep Network Features
Diabetic retinopathy is diagnosed in a timely manner from color eye fundus images by experienced ophthalmologists, in order to recognize potential retinal features and identify early-blindness cases. In this paper, it is proposed to extract deep features from the last fully connected layer of four different pre-trained convolutional neural networks. These features are then fed into a non-linear classifier to discriminate three classes of diabetic cases, i.e., normal, exudates, and drusen. Averaged across 1113 color retinal images collected from six publicly available annotated datasets, the deep-features approach performs better than the classical bag-of-words approach. The proposed approaches have an average accuracy between 91.23% and 92.00%, more than 13% above traditional state-of-the-art methods. (Preprint submitted to the Journal of Medical Imaging, SPIE, July 28, 2017.)
Three-dimensional block matching using orthonormal tree-structured Haar transform for multichannel images
Multichannel images, i.e., images of the same object or scene taken in different spectral bands or with different imaging modalities/settings, are common in many applications. For example, multispectral images contain several wavelength bands and hence carry richer information than color images. Multichannel magnetic resonance imaging and multichannel computed tomography images are common in medical imaging diagnostics, and multimodal images are also routinely used in art investigation. All methods for grayscale images can be applied to multichannel images by processing each channel/band separately; however, this requires substantial computation time, especially for the task of searching for overlapping patches similar to a given query patch. To address this problem, we propose a three-dimensional orthonormal tree-structured Haar transform (3D-OTSHT) targeting fast full-search-equivalent three-dimensional block matching in multichannel images. The use of a three-dimensional integral image significantly reduces the time needed to obtain the 3D-OTSHT coefficients. We demonstrate superior performance of the proposed block matching.
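The three-dimensional integral image mentioned above is what makes the coefficient computation cheap: after one cumulative-sum pass, the sum over any axis-aligned 3-D block costs a constant eight lookups via inclusion-exclusion. A minimal sketch (generic technique, not tied to the paper's OTSHT specifics):

```python
import numpy as np

def integral_image_3d(vol):
    """S[z, y, x] = sum of vol[:z, :y, :x], with a zero border so that
    block sums need no boundary special-casing."""
    S = np.zeros(tuple(s + 1 for s in vol.shape))
    S[1:, 1:, 1:] = vol.cumsum(0).cumsum(1).cumsum(2)
    return S

def block_sum(S, z0, y0, x0, z1, y1, x1):
    # Sum over vol[z0:z1, y0:y1, x0:x1] via 8-term inclusion-exclusion
    return (S[z1, y1, x1] - S[z0, y1, x1] - S[z1, y0, x1] - S[z1, y1, x0]
            + S[z0, y0, x1] + S[z0, y1, x0] + S[z1, y0, x0] - S[z0, y0, x0])
```

Every Haar-type coefficient is a signed combination of such block sums, so an exhaustive (full-search-equivalent) patch comparison over a multichannel volume becomes affordable.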
Gaussian mixture model based probabilistic modeling of images for medical image segmentation
In this paper, we propose a novel image segmentation algorithm based on the probability distributions of the object and background. It uses the variational level-set formulation with a novel region-based term in addition to the edge-based term, giving a complementary functional that can potentially result in robust segmentation of the images. The main theme of the method is that in most medical imaging scenarios, objects are characterized by typical characteristics such as color, texture, etc. Consequently, an image can be modeled as a Gaussian mixture of distributions corresponding to the object and background. During curve evolution, a novel term is incorporated in the segmentation framework, based on maximizing the distance between the GMMs corresponding to the object and background; maximizing this distance using differential calculus leads to the desired segmentation results. The proposed method has been used to segment images from three distinct imaging modalities, i.e., magnetic resonance imaging (MRI), dermoscopy, and chromoendoscopy. Experiments show the effectiveness of the proposed method, which gives better qualitative and quantitative results than the current state of the art. Index terms: Gaussian mixture model, level sets, active contours, biomedical engineering.
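The Gaussian-mixture modeling underlying this kind of method is usually fitted with expectation-maximization. Below is a minimal 1-D, two-component EM sketch (object vs. background intensities); it illustrates only the mixture-fitting step, not the paper's level-set evolution or distance-maximization term, and the initialization is an assumption.

```python
import numpy as np

def fit_gmm_2(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture
    (e.g. object vs. background pixel intensities)."""
    x = np.asarray(x, dtype=float)
    # Crude initialization: place the means at the 25th/75th percentiles
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = (pi / np.sqrt(2 * np.pi * var)) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = pdf / pdf.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(0)
        mu = (r * x[:, None]).sum(0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(0) / nk + 1e-6
        pi = nk / len(x)
    return mu, var, pi
```

In the segmentation framework, the two fitted distributions would parameterize the region-based term of the functional, and the evolving contour would separate pixels according to which component explains them better.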
Predicting optical coherence tomography-derived diabetic macular edema grades from fundus photographs using deep learning
Diabetic eye disease is one of the fastest growing causes of preventable blindness. With the advent of anti-VEGF (vascular endothelial growth factor) therapies, it has become increasingly important to detect center-involved diabetic macular edema (ci-DME). However, center-involved diabetic macular edema is diagnosed using optical coherence tomography (OCT), which is not generally available at screening sites because of cost and workflow constraints. Instead, screening programs rely on the detection of hard exudates in color fundus photographs as a proxy for DME, often resulting in high false positive or false negative calls. To improve the accuracy of DME screening, we trained a deep learning model to use color fundus photographs to predict ci-DME. Our model had an ROC-AUC of 0.89 (95% CI: 0.87-0.91), which corresponds to a sensitivity of 85% at a specificity of 80%. In comparison, three retinal specialists had similar sensitivities (82-85%), but only half the specificity (45-50%, p<0.001 for each comparison with model). The positive predictive value (PPV) of the model was 61% (95% CI: 56-66%), approximately double the 36-38% by the retinal specialists. In addition to predicting ci-DME, our model was able to detect the presence of intraretinal fluid with an AUC of 0.81 (95% CI: 0.81-0.86) and subretinal fluid with an AUC of 0.88 (95% CI: 0.85-0.91). The ability of deep learning algorithms to make clinically relevant predictions that generally require sophisticated 3D-imaging equipment from simple 2D images has broad relevance to many other applications in medical imaging.
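The screening metrics quoted above (sensitivity, specificity, PPV) all derive from the same four confusion-matrix counts. A small sketch of the standard definitions, with hypothetical counts chosen only to make the arithmetic visible (not the study's actual data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and PPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of diseased eyes flagged
    specificity = tn / (tn + fp)   # fraction of healthy eyes cleared
    ppv = tp / (tp + fp)           # fraction of positive calls that are correct
    return sensitivity, specificity, ppv

# Hypothetical example: 100 diseased and 100 healthy eyes
sens, spec, ppv = screening_metrics(tp=85, fp=20, tn=80, fn=15)
```

Note how PPV depends on both error types: at a fixed sensitivity, halving the specificity roughly halves PPV in a balanced cohort, which is exactly the gap the abstract reports between the model and the retinal specialists.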
A proposal of a standard rainbow false color scale for thermal medical images
Medical thermal imaging offers the opportunity to monitor human body physiology. False color scales are frequently applied to these images as a visual aid for human-eye interpretation. However, several different scales are in use, which may lead to differing subjective interpretations. The objective of this study is to raise the need for uniform adoption of an internationally accepted standard false color scale, and for that purpose a scale is proposed.
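A rainbow false color scale is, at its core, a function from a normalized temperature to an RGB triple. The sketch below is a generic piecewise-linear blue-to-red rainbow, shown only to make the mapping concrete; it is not the specific standard scale the paper proposes.

```python
def rainbow(t):
    """Map a normalized temperature t in [0, 1] to an (r, g, b) triple in
    [0, 1], sweeping blue -> cyan -> green -> yellow -> red.
    A generic rainbow mapping, not the paper's proposed standard scale."""
    t = min(max(t, 0.0), 1.0)                       # clamp out-of-range input
    if t < 0.25:
        return (0.0, 4 * t, 1.0)                    # blue -> cyan
    if t < 0.5:
        return (0.0, 1.0, 1.0 - 4 * (t - 0.25))     # cyan -> green
    if t < 0.75:
        return (4 * (t - 0.5), 1.0, 0.0)            # green -> yellow
    return (1.0, 1.0 - 4 * (t - 0.75), 0.0)         # yellow -> red
```

Standardizing such a scale amounts to fixing these breakpoints and the temperature range they span, so that the same color always denotes the same temperature across devices and studies.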