Computing contrast ratio in medical images using local content information
Rationale
Image quality assessment in medical applications is often based on quantifying the visibility of a structure of interest, such as a vessel, termed the foreground (F), against its surrounding anatomical background (B), i.e., the contrast ratio. A high-quality image is one that makes diagnostically relevant details distinguishable from the background. The computation of the contrast ratio is therefore an important task in automatic medical image quality assessment.
Methods
We estimate the contrast ratio by applying Weber's law in local image patches. A small image patch can contain a flat area, a textured area, or an edge. Patches containing edges are characterized by bimodal histograms representing B and F, and the local contrast ratio can be estimated as the ratio between the mean intensity values of the two modes. B and F are identified by computing the mid-value between the modes using the ISODATA algorithm. This process is performed over the entire image with a sliding window, resulting in a contrast ratio per pixel.
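The per-patch computation described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' code; the patch size, convergence tolerance, and synthetic test patch are assumptions.

```python
import numpy as np

def isodata_threshold(patch, tol=0.5):
    """ISODATA (intermeans): iteratively place the threshold halfway
    between the means of the two classes it separates."""
    t = patch.mean()
    while True:
        lo, hi = patch[patch <= t], patch[patch > t]
        if lo.size == 0 or hi.size == 0:
            return t  # unimodal patch: no mid-value between modes
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def weber_contrast_ratio(patch):
    """Ratio of mean foreground (F) to mean background (B) intensity,
    with F and B separated by the ISODATA mid-value."""
    t = isodata_threshold(patch)
    background = patch[patch <= t]
    foreground = patch[patch > t]
    if background.size == 0 or foreground.size == 0:
        return 1.0  # flat patch: no measurable contrast
    return foreground.mean() / background.mean()

# Synthetic edge patch: dark background (~50) and a bright vessel (~150).
patch = np.full((8, 8), 50.0)
patch[:, 4:] = 150.0
print(round(weber_contrast_ratio(patch), 2))  # 3.0
```

Sliding this function over every pixel-centered window of the image yields the per-pixel contrast map used in the Results.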
Results
We have tested our measure on two general-purpose databases (TID2013 [1] and CSIQ [2]) to demonstrate that the proposed measure agrees with human quality preferences. Since our measure is specifically designed to quantify contrast, only images exhibiting contrast changes were used. The difference between the maxima of the contrast ratios of the reference and processed images is used as a quality predictor. Human quality scores and the proposed measure are compared using the Pearson correlation coefficient. Our experimental results show that our method accurately predicts changes of perceived quality due to contrast decrements (Pearson correlations higher than 90%). Additionally, the method can detect changes in contrast level in interventional X-ray images acquired at varying dose [3]. For instance, the resulting contrast maps show reduced contrast ratios for vessel edges in X-ray images acquired at lower dose settings, i.e., lower distinguishability from the background, compared to higher-dose acquisitions.
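The evaluation step described above reduces to two small computations, sketched here; the arrays and score values below are illustrative, not data from the paper.

```python
import numpy as np

def quality_predictor(cr_map_ref, cr_map_proc):
    """Difference between the maximum contrast ratios of the reference
    and processed images; larger means more contrast was lost."""
    return np.max(cr_map_ref) - np.max(cr_map_proc)

def pearson(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Toy data: predicted contrast loss for three processed images vs.
# human degradation ratings for the same images.
predicted = [0.1, 0.4, 0.9]
human = [0.2, 0.5, 1.0]
print(round(pearson(predicted, human), 2))  # 1.0 for this linear toy data
```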
Conclusions
We propose a measure that computes the contrast ratio by applying Weber's law in local image patches. While the proposed contrast ratio is computationally simple, this approximation of local content has been shown to be useful in measuring quality differences due to contrast decrements in images. In particular, changes in structures of interest caused by a low contrast ratio can be detected from the contrast map, making our method potentially useful for X-ray imaging dose control.
References
[1] Ponomarenko, N. et al., "A New Color Image Database TID2013: Innovations and Results," Proceedings of ACIVS, 402-413 (2013).
[2] Larson, E. and Chandler, D., "Most apparent distortion: full-reference image quality assessment and the role of strategy," Journal of Electronic Imaging, 19(1), 2010.
[3] Kumcu, A. et al., "Interventional x-ray image quality measure based on a psychovisual detectability model," MIPS XVI, Ghent, Belgium, 2015.
Influence of acquisition time on MR image quality estimated with nonparametric measures based on texture features
Correlating parametrized image texture feature (ITF) analyses conducted in different regions of interest (ROIs) overcomes the limitations of individual analyses and reliably reflects image quality. The aim of this study is to propose a nonparametric method to classify the quality of a magnetic resonance (MR) image that has undergone controlled degradation, using textural features in the image. Images of 41 patients, 17 women and 24 men, aged between 23 and 56 years, were analyzed. T2-weighted sagittal sequences of the lumbar spine, cervical spine, and knee, and T2-weighted coronal sequences of the shoulder and wrist, were generated. The implementation of parallel imaging with GRAPPA2, GRAPPA3, and GRAPPA4 led to a substantial reduction in scanning time but also degraded image quality. The number of degraded image texture features was correlated with the scanning time: longer scan times correlated with markedly higher ITF persistence than reduced scan times. Higher ITF preservation was observed in images of bone in the spine and femur than in images of soft tissues, i.e., tendons and muscles. Finally, a nonparametric image-quality assessment based on an analysis of the ITFs computed for different tissues, correlating with changes in the acquisition time of the MR images, was successfully developed. The correlation between acquisition time and the number of reproducible features present in an MR image provides the basis for calculating the quality index.
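As a rough illustration of the reproducibility count described above, the sketch below computes a few simple first-order texture features for a reference-time image and a reduced-scan-time image, and counts those whose relative change stays within a tolerance. The feature set and the 10% tolerance are assumptions for illustration, not values from the study.

```python
import numpy as np

def texture_features(roi):
    """A few simple first-order texture features of an ROI."""
    roi = np.asarray(roi, float)
    return {
        "mean": roi.mean(),
        "variance": roi.var(),
        "energy": np.mean(roi ** 2),
        "range": roi.max() - roi.min(),
    }

def count_preserved(ref_roi, fast_roi, rel_tol=0.10):
    """Number of features whose relative change between the reference-time
    and reduced-time ROI is within rel_tol (a 'reproducible' feature)."""
    ref, fast = texture_features(ref_roi), texture_features(fast_roi)
    preserved = 0
    for name, v_ref in ref.items():
        denom = abs(v_ref) if v_ref != 0 else 1.0
        if abs(fast[name] - v_ref) / denom <= rel_tol:
            preserved += 1
    return preserved

# A mild 2% global intensity change preserves all four features.
ref = np.arange(16, dtype=float).reshape(4, 4)
print(count_preserved(ref, ref * 1.02))  # 4
```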
Objective assessment of region of interest-aware adaptive multimedia streaming quality
Adaptive multimedia streaming relies on controlled adjustment of the content bitrate, and the consequent variation in video quality, in order to meet the bandwidth constraints of the communication link used for content delivery to the end-user. The values of easy-to-measure, network-related Quality of Service metrics have no direct relationship with the way moving images are perceived by the human viewer. Consequently, variations in the video stream bitrate are not clearly linked to similar variations in user-perceived quality. This is especially true if human visual system-based adaptation techniques are employed. As research has shown, there are certain image regions in each frame of a video sequence in which users are more interested than in others. This paper presents the Region of Interest-based Adaptive Scheme (ROIAS), which adjusts the regions within each frame of the streamed multimedia content differently, based on the user's interest in them. ROIAS is presented and discussed in terms of the adjustment algorithms employed and their impact on human-perceived video quality. Comparisons with existing approaches, including a constant-quality adaptation scheme across the whole frame area, are performed employing two objective metrics that estimate user-perceived video quality.
A Comparison of Wavelet, Curvelet and Contourlet based Texture Classification Algorithms for Characterization of Bone Quality in Dental CT
Abstract: The objective of this paper is to design and implement a classifier framework to assist the surgeon in preoperative assessment of bone quality from dental computed tomography (CT) images. This article focuses on comparing the discriminating power of several multiresolution texture analysis methods (wavelet, curvelet, and contourlet) for evaluating bone quality based on the texture variations of images obtained from the implant site. The approach consists of three steps: automatic extraction of the most discriminative texture features from regions of interest, creation of a classifier that automatically grades the bone according to its quality, and, since this is the medical domain, validation against human experts. The results indicate that combining statistical and multiscale representations of the bone image gives adequate information to classify the different bone groups, compared to gray-level features at a single scale.
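As an illustration of the wavelet branch of such a comparison, the sketch below computes subband energies from a single-level 2-D Haar transform written directly in NumPy (the curvelet and contourlet transforms require dedicated toolboxes). Subband energies are a common multiscale texture feature that could feed such a classifier; this is a generic sketch, not the paper's feature set.

```python
import numpy as np

def haar_subband_energies(img):
    """Single-level 2-D Haar transform; returns the mean energy of the
    LL, LH, HL, HH subbands as a 4-element texture descriptor."""
    img = np.asarray(img, float)
    a = (img[0::2] + img[1::2]) / 2      # average over row pairs
    d = (img[0::2] - img[1::2]) / 2      # detail over row pairs
    ll = (a[:, 0::2] + a[:, 1::2]) / 2   # low-low (coarse approximation)
    lh = (a[:, 0::2] - a[:, 1::2]) / 2   # column detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2   # row detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2   # diagonal detail
    return [float(np.mean(b ** 2)) for b in (ll, lh, hl, hh)]

# Horizontal stripes put their energy into the row-detail band.
stripes = np.zeros((4, 4))
stripes[1::2] = 1.0
print(haar_subband_energies(stripes))  # [0.25, 0.0, 0.25, 0.0]
```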
Objective Assessment of Image Quality: Extension of Numerical Observer Models to Multidimensional Medical Imaging Studies
Spanning the fields of engineering and medical image quality, this dissertation proposes a novel framework for diagnostic performance evaluation based on objective image-quality assessment, an important step in the development of new imaging devices, acquisitions, and image-processing techniques used by clinicians and researchers. The objective of this dissertation is to develop computational modeling tools that allow comprehensive task-based assessment, including clinical interpretation of images, regardless of image dimensionality.
Because of advances in medical imaging devices, several techniques have improved image quality, and the domain of the resulting images has become multidimensional (e.g., 3D+time, or 4D). To evaluate the performance of new imaging devices, or to optimize various design parameters and algorithms, quality should be measured with an appropriate image-quality figure-of-merit (FOM). Classical FOMs, such as bias and variance or mean-square error, have been broadly used in the past. Unfortunately, they do not reflect the fact that the principal agent in medical decision-making is frequently a human observer, nor do they account for the specific diagnostic task.
The standard goal for image-quality assessment is a task-based approach in which one evaluates human-observer performance on a specified diagnostic task (e.g., detection of lesions). However, having a human observer perform the task is costly and time-consuming. To make task-based assessment of image quality practical, a numerical observer is required as a surrogate for human observers. Numerical observers for detection tasks have been studied in both research and industry; however, little effort has been devoted to developing observers for multidimensional imaging studies (e.g., 4D). Without numerical-observer tools that accommodate all the information embedded in a series of images, performance assessment of a new technique that generates multidimensional data is complex and limited. Consequently, key questions remain unanswered about how much image quality improves when these new multidimensional images are used for a specific clinical task.
To address this gap, this dissertation proposes a new numerical-observer methodology to assess the improvement achieved by newly developed imaging technologies. This numerical-observer approach can be generalized to exploit pertinent statistical information in multidimensional images and to accurately predict the performance of a human observer, whatever the complexity of the image domain. Part I of this dissertation develops a numerical observer that accommodates multidimensional images, processing correlated signal components and appropriately incorporating them into an absolute FOM. Part II applies the model developed in Part I to selected clinical applications with multidimensional images, including: 1) respiratory-gated positron emission tomography (PET) in lung cancer (3D+t), 2) kinetic parametric PET in head-and-neck cancer (3D+k), and 3) spectral computed tomography (CT) of atherosclerotic plaque (3D+e).
The author compares the task-based performance of the proposed approach with that of conventional methods, evaluated under the widely used signal-known-exactly / background-known-exactly paradigm, in which the properties of a target object (e.g., a lesion) are specified on highly realistic clinical backgrounds. A realistic target object is generated with specific properties and inserted into a set of images to create pathological scenarios for the performance evaluation, e.g., lesions in the lungs or plaques in an artery. Regions of interest (ROIs) around the target objects are formed over an ensemble of data measurements acquired under identical conditions and evaluated for the inclusion of useful information from the different complex domains (i.e., 3D+t, 3D+k, 3D+e). This work provides an image-quality assessment metric with no dimensional limitation that could substantially improve the assessment of performance achieved by new imaging developments that make use of high-dimensional data.
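The simplest concrete instance of such a numerical observer is a linear (Hotelling-style) observer for a signal-known-exactly / background-known-exactly detection task with Gaussian statistics. The sketch below uses synthetic 1-D "images" to illustrate the general idea only; it is not the dissertation's multidimensional model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_img = 64, 5000

signal = np.zeros(n_pix)           # known lesion profile (SKE)
signal[28:36] = 1.0
cov = np.eye(n_pix)                # known background covariance (BKE)

# Hotelling template: w = K^-1 s
w = np.linalg.solve(cov, signal)

# Simulated signal-absent and signal-present images.
absent = rng.normal(size=(n_img, n_pix))
present = absent + signal

# Observer test statistics and the detectability index (observer SNR).
t_abs, t_pres = absent @ w, present @ w
d = (t_pres.mean() - t_abs.mean()) / np.sqrt(
    0.5 * (t_pres.var() + t_abs.var()))
print(round(d, 2))  # theory predicts sqrt(s^T K^-1 s) = sqrt(8) ≈ 2.83
```

Extending the template `w` to act on concatenated frames (3D+t, 3D+k, 3D+e) is what allows correlated components across the extra dimension to be incorporated into a single FOM.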
Region of interest-based adaptive multimedia streaming scheme
Adaptive multimedia streaming aims to adjust the transmitted content based on the available bandwidth, such that losses, which often severely affect end-user perceived quality, are minimized and consequently transmission quality increases. Current solutions affect the whole viewing area of the multimedia frames equally, despite research showing that there are regions in which viewers are more interested than in others. This paper presents a novel region of interest-based adaptive scheme (ROIAS) for multimedia streaming which, when performing transmission-related quality adjustments, selectively affects the quality of those regions of the image the viewers are least interested in. As the quality of the regions the viewers are most interested in does not change (or changes little), the proposed scheme provides higher overall end-user perceived quality than existing adaptive solutions.
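The region-selective idea behind ROIAS can be sketched with a toy allocation model; this is an illustrative model of the principle (cut quality inversely to interest), not the paper's algorithm, and the weights and budget below are made up.

```python
def allocate_quality(base_quality, roi_weights, budget_fraction):
    """base_quality: per-region quality at full bandwidth.
    roi_weights: per-region user-interest weight (higher = more interest).
    budget_fraction: fraction of the full quality budget still available.
    The total quality cut is shared in proportion to lack of interest, so
    the most interesting region is left essentially untouched."""
    total_cut = (1.0 - budget_fraction) * sum(base_quality)
    # Share of the cut grows as interest shrinks (epsilon keeps it nonzero).
    inv = [max(roi_weights) - w + 1e-9 for w in roi_weights]
    scale = sum(inv)
    return [q - total_cut * i / scale for q, i in zip(base_quality, inv)]

# Three regions: face (high interest), body, background (low interest);
# only 80% of the full-bandwidth quality budget is available.
print(allocate_quality([100.0, 100.0, 100.0], [1.0, 0.5, 0.1], 0.8))
```

The face region keeps (almost exactly) its full quality, while the background absorbs most of the reduction, matching the behavior the abstract describes.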
Semantic Perceptual Image Compression using Deep Convolution Networks
It has long been considered a significant problem to improve the visual quality of lossy image and video compression. Recent advances in computing power, together with the availability of large training data sets, have increased interest in applying deep convolutional neural networks (CNNs) to image recognition and image processing tasks. Here, we present a powerful CNN tailored to the specific task of semantic image understanding to achieve higher visual quality in lossy compression. A modest increase in complexity is incorporated into the encoder, which still allows a standard, off-the-shelf JPEG decoder to be used. While JPEG encoding may be optimized for generic images, the process is ultimately unaware of the specific content of the image to be compressed. Our technique makes JPEG content-aware by designing and training a model to identify multiple semantic regions in a given image. Unlike object detection techniques, our model does not require labeling of object positions and is able to identify objects in a single pass. We present a new CNN architecture directed specifically at image compression: by adding a complete set of features for every class and then taking a threshold over the sum of all feature activations, we generate a map that highlights semantically salient regions so that they can be encoded at higher quality than background regions. Experiments are presented on the Kodak PhotoCD dataset and the MIT Saliency Benchmark dataset, in which our algorithm achieves higher visual quality for the same compressed size.
Comment: Accepted to Data Compression Conference, 11 pages, 5 figures
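The map-generation step (sum the per-class feature activations, then threshold) can be sketched in a few lines. The activation maps and class names below are synthetic placeholders; a real system would take the activations from the CNN's final feature layer.

```python
import numpy as np

def saliency_mask(class_activation_maps, threshold):
    """Sum activation maps over all classes, then threshold the total
    to mark the pixels to encode at higher JPEG quality."""
    total = np.sum(class_activation_maps, axis=0)
    return total > threshold

# Two fake 6x6 class maps, each activating a different region.
maps = np.zeros((2, 6, 6))
maps[0, 1:3, 1:3] = 1.0   # activations for a hypothetical class A
maps[1, 3:5, 3:5] = 0.6   # activations for a hypothetical class B
mask = saliency_mask(maps, threshold=0.5)
print(int(mask.sum()))  # 8 pixels flagged for higher-quality encoding
```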
On-the-fly Data Assessment for High Throughput X-ray Diffraction Measurement
Investment in brighter sources and larger and faster detectors has
accelerated the speed of data acquisition at national user facilities. The
accelerated data acquisition offers many opportunities for discovery of new
materials, but it also presents a daunting challenge. The rate of data
acquisition far exceeds the current speed of data quality assessment, resulting
in less than optimal data and data coverage, which in extreme cases forces
recollection of data. Herein, we show how this challenge can be addressed through the development of an approach that makes routine data assessment automatic and instantaneous. By extracting and visualizing customized attributes in real time, data quality and coverage, as well as other scientifically relevant information contained in large datasets, are highlighted. Deploying such an approach not only improves the quality of the data but also helps optimize the usage of expensive characterization resources by prioritizing the measurements of highest scientific impact. We anticipate that our approach will become the starting point for a sophisticated decision tree that optimizes data quality and maximizes scientific content in real time through automation. With these efforts to integrate more automation into data collection and analysis, we can truly take advantage of the accelerating speed of data acquisition.
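The on-the-fly idea amounts to extracting a few cheap attributes from each frame as it arrives, so low-quality frames can be flagged before the next measurement. The sketch below is illustrative: the attribute choices (total counts, saturation fraction) and thresholds are assumptions, not the facility's actual criteria.

```python
import numpy as np

def assess_frame(frame, min_counts=1e4, max_saturated=0.01,
                 full_scale=65535):
    """Instant per-frame quality attributes for a detector image:
    flag frames with too little signal or too many saturated pixels."""
    counts = frame.sum()
    saturated_frac = np.mean(frame >= full_scale)
    flags = []
    if counts < min_counts:
        flags.append("low-signal")
    if saturated_frac > max_saturated:
        flags.append("saturated")
    return flags or ["ok"]

good = np.full((10, 10), 500)   # plenty of signal, nothing saturated
weak = np.full((10, 10), 10)    # exposure too short: too few counts
print(assess_frame(good), assess_frame(weak))
```

Hooking such a check into the acquisition loop is what turns post-hoc data triage into the real-time prioritization the abstract describes.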