Modeling human observer detection in undersampled magnetic resonance imaging (MRI) reconstruction with total variation and wavelet sparsity regularization
Purpose: Task-based assessment of image quality in undersampled magnetic
resonance imaging provides a way of evaluating the impact of regularization on
task performance. In this work, we evaluated the effect of total variation (TV)
and wavelet regularization on human detection of signals with a varying
background and validated a model observer in predicting human performance.
Approach: Human observer studies used two-alternative forced choice (2-AFC)
trials with a small signal-known-exactly task but with varying backgrounds for
fluid-attenuated inversion recovery images reconstructed from undersampled
multi-coil data. We used an undersampling factor of 3.48 with TV and wavelet
sparsity constraints. The sparse difference-of-Gaussians (S-DOG) observer with
internal noise was used to model human observer detection.
Results: We observed a trend that the human observer detection performance
remained fairly constant for a broad range of values in the regularization
parameter before decreasing at large values. A similar result was found for the
normalized ensemble root mean squared error. Without changing the internal
noise, the model observer tracked the performance of the human observers as the
regularization was increased, but overestimated the percent correct (PC) for
large amounts of TV and wavelet sparsity regularization, as well as for the
combination of both parameters.
Conclusions: For the task we studied, the S-DOG observer was able to
reasonably predict human performance with both TV and wavelet sparsity
regularizers over a broad range of regularization parameters. We observed a
trend that task performance remained fairly constant for a range of
regularization parameters before decreasing for large amounts of
regularization.
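The S-DOG observer with internal noise used in the study above can be illustrated compactly. Note that the S-DOG channels in the literature are usually defined in the spatial-frequency domain; the sketch below uses a simplified spatial-domain analogue, and all parameter values (channel widths, internal-noise level, template weighting) are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def dog_channels(size, sigmas=(1.0, 2.0, 4.0), ratio=1.66):
    """Normalized difference-of-Gaussians channel templates.
    sigmas and ratio are illustrative, not the paper's values."""
    y, x = np.indices((size, size)) - size // 2
    r2 = x**2 + y**2
    chans = []
    for s in sigmas:
        g1 = np.exp(-r2 / (2 * s**2)) / (2 * np.pi * s**2)
        g2 = np.exp(-r2 / (2 * (ratio * s)**2)) / (2 * np.pi * (ratio * s)**2)
        chans.append((g1 - g2).ravel())
    return np.stack(chans)  # shape (n_channels, size*size)

def afc_percent_correct(signal_imgs, noise_imgs, channels,
                        internal_noise_sd=0.5, seed=0):
    """2-AFC percent correct for a fixed channel template with additive
    internal noise: the observer picks the interval with the larger
    template response."""
    rng = np.random.default_rng(seed)
    w = channels.mean(axis=0)  # simple fixed template (illustrative)
    ts = signal_imgs.reshape(len(signal_imgs), -1) @ w
    tn = noise_imgs.reshape(len(noise_imgs), -1) @ w
    sd = internal_noise_sd * np.concatenate([ts, tn]).std()
    ts = ts + rng.normal(0.0, sd, ts.shape)
    tn = tn + rng.normal(0.0, sd, tn.shape)
    return float(np.mean(ts > tn))
```

Raising `internal_noise_sd` lowers the predicted PC toward chance (0.5), which is how such models are typically tuned to match measured human performance.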
Numerical stability issues of the channelized Hotelling observer under different background assumptions
This paper addresses a numerical stability issue in the channelized Hotelling observer (CHO), a well-known approach in the medical image quality assessment domain. Many researchers have found that, contrary to expectation, the detection performance of the CHO does not increase with the number of channels, and to our knowledge the reason had not previously been identified. We show that this is due to the ill-posed nature of the scatter matrix and propose a solution based on Tikhonov regularization. Although Tikhonov regularization has been used in many other domains, we demonstrate here another important application of it. This enables researchers to continue investigating the CHO (and other channelized model observers) with reliable detection performance calculations.
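The fix described above amounts to adding a small multiple of the identity to the scatter matrix before solving for the Hotelling template. A minimal sketch, assuming channel-domain feature vectors and an illustrative regularization strength `lam` (not the paper's settings):

```python
import numpy as np

def cho_snr(sig_feats, bkg_feats, lam=1e-3):
    """CHO detectability (SNR) with a Tikhonov-regularized scatter matrix.
    sig_feats / bkg_feats: (n_images, n_channels) channel outputs.
    lam: regularization strength (assumed; tune per application)."""
    dm = sig_feats.mean(axis=0) - bkg_feats.mean(axis=0)
    S = 0.5 * (np.cov(sig_feats, rowvar=False) + np.cov(bkg_feats, rowvar=False))
    S = np.atleast_2d(S)
    # (S + lam*I) stays well conditioned even when S itself is nearly singular,
    # which is what happens as the number of channels grows
    w = np.linalg.solve(S + lam * np.eye(S.shape[0]), dm)
    return float((w @ dm) / np.sqrt(w @ S @ w))
```

Without the `lam * np.eye(...)` term, the solve becomes unstable once the number of channels approaches the number of training images, which is consistent with the plateau in detection performance the paper reports.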
Optimization of GPU-Accelerated Iterative CT Reconstruction Algorithm for Clinical Use
In order to transition the GPU-accelerated CT reconstruction algorithm to a more clinical environment, a graphical user interface is implemented. Several optimizations of the implementation are presented. We use the alternating minimization (AM) algorithm as the update algorithm and the branchless distance-driven method for the system forward operator. We introduce a version of the Feldkamp-Davis-Kress algorithm to generate the initial image for the AM algorithm and compare it to a constant initial image. To improve the rate of convergence, we introduce the ordered-subsets method, find the optimal number of ordered subsets, and discuss the possibility of a hybrid ordered-subsets method. Based on run-time analysis, we implement a GPU-accelerated combination and accumulation process using a Hillis-Steele scan and shared memory. We then analyze some code-related problems, which indicate that our implementation of the AM algorithm may reach the limit of single precision after approximately 3,500 iterations. The Hotelling observer, as a surrogate for the human observer, is introduced to assess the quality of the reconstructed images. This estimate of human observer performance may enable us to optimize the algorithm parameters with respect to clinical use.
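The Hillis-Steele scan mentioned above is a step-doubling parallel prefix sum: in each of log2(n) passes, every element adds the value d positions to its left, with d doubling each pass. A serial NumPy sketch of the access pattern (on the GPU, each element would be one thread per pass, with the array in shared memory):

```python
import numpy as np

def hillis_steele_scan(a):
    """Inclusive prefix sum via the Hillis-Steele step-doubling pattern.
    The vectorized shift-and-add here stands in for the per-thread
    update performed in each GPU pass."""
    a = np.array(a, dtype=float)
    d = 1
    while d < len(a):
        # element i receives a[i - d] (zero when i - d < 0)
        shifted = np.concatenate([np.zeros(d), a[:-d]])
        a = a + shifted
        d *= 2
    return a
```

The pattern does O(n log n) additions rather than the O(n) of a work-efficient (Blelloch) scan, but needs only log2(n) synchronization steps, which is why it is attractive for accumulation on GPUs.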
DEMIST: A deep-learning-based task-specific denoising approach for myocardial perfusion SPECT
There is an important need for methods to process myocardial perfusion
imaging (MPI) SPECT images acquired at lower radiation dose and/or acquisition
time such that the processed images improve observer performance on the
clinical task of detecting perfusion defects. To address this need, we build
upon concepts from model-observer theory and our understanding of the human
visual system to propose a detection-task-specific deep-learning-based approach
for denoising MPI SPECT images (DEMIST). The approach, while performing
denoising, is designed to preserve features that influence observer performance
on detection tasks. We objectively evaluated DEMIST on the task of detecting
perfusion defects using a retrospective study with anonymized clinical data
from patients who underwent MPI studies across two scanners (N = 338). The
evaluation was performed at low-dose levels of 6.25%, 12.5% and 25% and using
an anthropomorphic channelized Hotelling observer. Performance was quantified
using the area under the receiver operating characteristic curve (AUC). Images
denoised with DEMIST yielded significantly higher AUC compared to corresponding
low-dose images and images denoised with a commonly used task-agnostic DL-based
denoising method. Similar results were observed with stratified analysis based
on patient sex and defect type. Additionally, DEMIST improved visual fidelity
of the low-dose images as quantified using root mean squared error and
structural similarity index metric. A mathematical analysis revealed that
DEMIST preserved features that assist in detection tasks while improving the
noise properties, resulting in improved observer performance. The results
provide strong evidence for further clinical evaluation of DEMIST to denoise
low-count images in MPI SPECT.
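The AUC figure of merit used in the evaluation above can be computed nonparametrically from the observer's test statistics via the Mann-Whitney statistic; a minimal sketch (the anthropomorphic CHO that produces the statistics is not reproduced here):

```python
import numpy as np

def auc_mann_whitney(t_present, t_absent):
    """Nonparametric AUC: the probability that a defect-present test
    statistic exceeds a defect-absent one (ties count one half)."""
    tp = np.asarray(t_present, dtype=float)[:, None]
    ta = np.asarray(t_absent, dtype=float)[None, :]
    return float((tp > ta).mean() + 0.5 * (tp == ta).mean())
```

An AUC of 0.5 corresponds to chance-level detection and 1.0 to perfect separation of defect-present and defect-absent cases.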
On the impact of incorporating task-information in learning-based image denoising
A variety of deep neural network (DNN)-based image denoising methods have
been proposed for use with medical images. These methods are typically trained
by minimizing loss functions that quantify a distance between the denoised
image, or a transformed version of it, and the defined target image (e.g., a
noise-free or low-noise image). They have demonstrated high performance in
terms of traditional image quality metrics such as root mean square error
(RMSE), structural similarity index measure (SSIM), or peak signal-to-noise
ratio (PSNR). However, it has been reported recently that such denoising
methods may not always improve objective measures of image quality. In this
work, a task-informed DNN-based image denoising method was established and
systematically evaluated. A transfer learning approach was employed, in which
the DNN is first pre-trained by use of a conventional (non-task-informed) loss
function and subsequently fine-tuned by use of the hybrid loss that includes a
task-component. The task-component was designed to measure the performance of a
numerical observer (NO) on a signal detection task. The impact of network depth
and constraining the fine-tuning to specific layers of the DNN was explored.
The task-informed training method was investigated in a stylized low-dose X-ray
computed tomography (CT) denoising study for which binary signal detection
tasks under signal-known-statistically (SKS) with
background-known-statistically (BKS) conditions were considered. The impact of
changing the specified task at inference time to be different from that
employed for model training, a phenomenon we refer to as "task-shift", was also
investigated. The presented results indicate that the task-informed training
method can improve observer performance while providing control over the
trade-off between traditional and task-based measures of image quality.
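The hybrid loss described above combines a conventional distance term with a task component weighted against it. One plausible shape is sketched below with NumPy, assuming a fixed linear numerical observer template `template`; the weighting `beta` and the form of the task term are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def hybrid_loss(denoised, target, template, beta=0.1):
    """Hybrid objective: (1 - beta) * pixel MSE + beta * task term.
    The task term keeps a linear numerical observer's test statistic
    w^T f on the denoised image close to its value on the target.
    `template` (w) and `beta` are illustrative assumptions."""
    denoised = np.asarray(denoised, dtype=float)
    target = np.asarray(target, dtype=float)
    mse = np.mean((denoised - target) ** 2)
    t_den = denoised.reshape(len(denoised), -1) @ template
    t_tgt = target.reshape(len(target), -1) @ template
    task = np.mean((t_den - t_tgt) ** 2)
    return (1 - beta) * mse + beta * task
```

Sweeping `beta` from 0 to 1 traces out the trade-off between traditional fidelity and task-based performance that the abstract refers to; in a training framework the same expression would be written with differentiable tensor operations.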
Terahertz Security Image Quality Assessment by No-reference Model Observers
To enable the development of objective image quality assessment
(IQA) algorithms for THz security images, we constructed the THz security image
database (THSID), including a total of 181 THz security images with a
resolution of 127×380. The main distortion types in THz security images were
first analyzed for the design of subjective evaluation criteria to acquire the
mean opinion scores. Subsequently, the existing no-reference IQA algorithms,
namely 5 opinion-aware approaches (NFERM, GMLF, DIIVINE, BRISQUE and
BLIINDS2) and 8 opinion-unaware approaches (QAC, SISBLIM, NIQE, FISBLIM,
CPBD, S3 and Fish_bb), were applied to evaluate THz security
image quality. The statistical results demonstrated the superiority of Fish_bb
over the other tested IQA approaches for assessing THz image quality, with
PLCC (SROCC) values of 0.8925 (−0.8706) and an RMSE of 0.3993. The
linear regression analysis and Bland-Altman plot further verified that the
Fish_bb could substitute for subjective IQA. Nonetheless, for the
classification of THz security images, we tended to use S3 as a criterion for
ranking THz security image grades because of its relatively low false-positive
rate in classifying bad THz image quality into the acceptable category (24.69%).
Interestingly, due to the specific properties of THz images, the average pixel
intensity outperformed the above more sophisticated IQA algorithms,
with PLCC, SROCC and RMSE values of 0.9001, −0.8800 and 0.3857, respectively. This
study will help users such as researchers or security staff to obtain
THz security images of good quality. Currently, our research group is
attempting to make this research more comprehensive.
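The three agreement figures reported above (PLCC, SROCC and RMSE) can be computed without any IQA-specific machinery. A self-contained sketch follows; note that in practice the objective scores are often passed through a fitted nonlinear regression before computing PLCC, which is omitted here:

```python
import numpy as np

def rankdata(x):
    """Average ranks (1-based), with ties sharing their mean rank."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average ranks within tied groups
        m = x == v
        ranks[m] = ranks[m].mean()
    return ranks

def iqa_correlations(pred, mos):
    """PLCC, SROCC and RMSE between objective scores and mean opinion scores."""
    pred = np.asarray(pred, dtype=float)
    mos = np.asarray(mos, dtype=float)
    plcc = np.corrcoef(pred, mos)[0, 1]                    # linear correlation
    srocc = np.corrcoef(rankdata(pred), rankdata(mos))[0, 1]  # rank correlation
    rmse = np.sqrt(np.mean((pred - mos) ** 2))
    return plcc, srocc, rmse
```

A strongly negative SROCC, as reported for Fish_bb and the average pixel intensity, still indicates a useful metric: it simply means the objective score decreases as perceived quality increases.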
UNet and MobileNet CNN-based model observers for CT protocol optimization: comparative performance evaluation by means of phantom CT images
Purpose: The aim of this work is the development and characterization of a model observer (MO) based on convolutional neural networks (CNNs), trained to mimic human observers in image evaluation in terms of detection and localization of low-contrast objects in CT scans acquired on a reference phantom. The final goal is automatic image quality evaluation and CT protocol optimization to fulfill the ALARA principle. Approach: Preliminary work was carried out to collect localization confidence ratings of human observers for signal presence/absence from a dataset of 30,000 CT images acquired on a polymethyl methacrylate (PMMA) phantom containing inserts filled with iodinated contrast media at different concentrations. The collected data were used to generate the labels for training the artificial neural networks. We developed and compared two CNN architectures, based respectively on UNet and MobileNetV2, specifically adapted to achieve the dual tasks of classification and localization. The CNN evaluation was performed by computing the area under the localization-ROC curve (LAUC) and accuracy metrics on the test dataset. Results: The mean absolute percentage error between the LAUC of the human observer and the MO was found to be below 5% for the most significant test data subsets. High inter-rater agreement was achieved in terms of S-statistics and other common statistical indices. Conclusions: Very good agreement was measured between the human observer and the MO, as well as between the performance of the two algorithms. Therefore, this work strongly supports the feasibility of employing a CNN-MO combined with a specifically designed phantom for CT protocol optimization programs.
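The LAUC figure of merit used above extends the ordinary AUC by crediting a signal-present case only when its reported location is also correct. A minimal sketch of that scoring rule (the CNN ratings and the localization-correctness test itself are not modeled here):

```python
import numpy as np

def lauc(scores_absent, scores_present, loc_correct):
    """Area under the localization-ROC curve (illustrative scoring rule):
    a signal-present case can only become a true positive when its
    localization is correct; otherwise it is a miss at every threshold."""
    sa = np.asarray(scores_absent, dtype=float)
    sp = np.asarray(scores_present, dtype=float)
    ok = np.asarray(loc_correct, dtype=bool)
    sp_eff = np.where(ok, sp, -np.inf)      # wrong location -> never a TP
    tp = sp_eff[:, None]
    gt = (tp > sa[None, :]).mean()
    tie = (tp == sa[None, :]).mean()
    return float(gt + 0.5 * tie)
```

Because mislocalized cases are scored as misses at every threshold, LAUC is bounded above by the ordinary detection AUC and penalizes an observer that detects but cannot localize.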