Qualitative grading of aortic regurgitation: a pilot study comparing CMR 4D flow and echocardiography.
Over the past 10 years there has been intense research into the volumetric visualization of intracardiac flow by cardiac magnetic resonance (CMR). This volumetric, time-resolved technique, called CMR 4D flow imaging, has several advantages over standard CMR: it offers anatomical, functional, and flow information in a single free-breathing, ten-minute acquisition. However, the data obtained are large, and their processing requires dedicated software. We evaluated a cloud-based application package that combines volumetric data correction and visualization of CMR 4D flow data, and assessed its accuracy for the detection and grading of aortic valve regurgitation using transthoracic echocardiography as the reference. Between June 2014 and January 2015, patients scheduled for clinical CMR were consecutively approached to undergo the supplementary CMR 4D flow acquisition. Fifty-four patients (median age 39 years, 32 males) were included. Detection and grading of aortic valve regurgitation using CMR 4D flow imaging were evaluated against transthoracic echocardiography. The agreement between CMR 4D flow and transthoracic echocardiography for grading aortic valve regurgitation was good (κ = 0.73). For identifying relevant, more-than-mild aortic valve regurgitation, CMR 4D flow imaging had a sensitivity of 100% and a specificity of 98%. Aortic regurgitation can be well visualized with CMR 4D flow imaging, in a manner similar to transthoracic echocardiography.
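The agreement and screening statistics reported above (κ, sensitivity, specificity) can be computed from a 2×2 confusion matrix. The sketch below shows the standard formulas; the counts used are illustrative placeholders, not the study's actual data.

```python
# Sketch: sensitivity, specificity, and Cohen's kappa for a binary
# screening decision (more-than-mild regurgitation: yes/no).
# The counts used below are illustrative, not the study's data.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return sensitivity, specificity

def cohens_kappa(tp, fn, tn, fp):
    """Chance-corrected agreement between two binary raters."""
    n = tp + fn + tn + fp
    p_observed = (tp + tn) / n
    # Expected agreement if the two raters were independent:
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((tn + fn) / n) * ((tn + fp) / n)
    p_expected = p_yes + p_no
    return (p_observed - p_expected) / (1 - p_expected)

sens, spec = sensitivity_specificity(tp=6, fn=0, tn=47, fp=1)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Note that κ corrects raw percent agreement for the agreement expected by chance, which matters here because most patients have no relevant regurgitation.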
Rule-Based combination of video quality metrics
In recent years, several algorithms have been proposed to automatically estimate the quality of video sequences, and some have even been included in international standards. However, most provide high performance only under particular conditions and with certain types of degradation. Therefore, some proposals combine various quality metrics to improve performance and widen the range of application. In this paper, a rule-based combination of standardized metrics is presented, in contrast to most approaches of this type, which rely on combinational models. The proposed system consists of a first stage that identifies whether the degradation affecting video quality is caused by coding impairments or by transmission errors; the most appropriate metric for that distortion is then applied. Specifically, VQM and VQuad are used for coding and transmission distortions, respectively. The results show that the overall performance is better than using either quality metric individually.
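The two-stage scheme described above (classify the degradation type, then dispatch to the matching metric) can be sketched as follows. `classify_degradation`, `vqm_score`, and `vquad_score` are hypothetical placeholders for illustration, not the paper's actual implementations.

```python
# Sketch of rule-based metric dispatch: first classify the dominant
# degradation (coding vs. transmission), then apply the metric best
# suited to it. All functions here are illustrative stand-ins.

def classify_degradation(video) -> str:
    # Placeholder rule: a real system would inspect blockiness,
    # frame freezes, slice losses, etc. to decide the type.
    return "coding" if video.get("blockiness", 0) > video.get("freezes", 0) else "transmission"

def vqm_score(video) -> float:
    return video.get("vqm", 0.0)      # stand-in for the VQM metric

def vquad_score(video) -> float:
    return video.get("vquad", 0.0)    # stand-in for the VQuad metric

def rule_based_quality(video) -> float:
    """Route the sequence to the metric matched to its degradation type."""
    if classify_degradation(video) == "coding":
        return vqm_score(video)
    return vquad_score(video)

clip = {"blockiness": 0.8, "freezes": 0.1, "vqm": 3.7, "vquad": 2.9}
print(rule_based_quality(clip))  # coding-dominated clip -> VQM score
```

The design point is that the rule replaces a learned fusion model: each metric only ever scores the distortion class it was designed for.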
No-reference image quality assessment through the von Mises distribution
An innovative way of calculating the von Mises distribution (VMD) of image entropy is introduced in this paper. The VMD's concentration parameter, together with a fitness parameter defined later in the paper, is analyzed experimentally to determine its suitability as an image quality assessment measure under particular distortions such as Gaussian blur or additive Gaussian noise. To obtain this measure, the local Rényi entropy is calculated in four equally spaced orientations and used to determine the parameters of the von Mises distribution of the image entropy. For contextual images, experimental results obtained with this model show that the best-in-focus, noise-free images are associated with the highest values of the von Mises concentration parameter and the closest fit of the image data to the von Mises distribution model. The proposed von Mises fitness parameter also proves experimentally to be a suitable no-reference image quality assessment indicator for non-contextual images.
Visibility metrics and their applications in visually lossless image compression
Visibility metrics are image metrics that predict the probability that a human observer can detect differences between a pair of images. These metrics can provide localized information in the form of visibility maps, in which each value represents a probability of detection. An important application of visibility metrics is visually lossless image compression, which aims to compress a given image to the lowest number of bits per pixel while keeping the compression artifacts invisible.
In previous works, most visibility metrics were built on largely simplified assumptions and mathematical models of the human visual system. This approach generally fits experimental data measured with simple stimuli, such as Gabor patches, well. However, it cannot accurately predict complex non-linear effects, such as contrast masking in natural images. To predict the visibility of image differences accurately, we collected the largest visibility dataset under fixed viewing conditions for calibrating existing visibility metrics, and proposed a deep neural network-based visibility metric. We demonstrated in our experiments that the deep neural network-based visibility metric significantly outperformed existing visibility metrics.
However, the deep neural network-based visibility metric cannot predict visibility under varying viewing conditions, such as display brightness and viewing distance, which strongly affect the visibility of distortions. To extend the metric to varying viewing conditions, we collected the largest visibility dataset under varying display brightness and viewing distances. We proposed incorporating white-box modules, namely luminance masking and viewing distance adaptation, into the black-box deep neural network, and found that this combination of white-box modules and black-box deep neural networks generalizes our proposed visibility metric to varying viewing conditions.
To demonstrate the application of our proposed deep neural network-based visibility metric to visually lossless image compression, we collected a visually lossless image compression dataset under fixed viewing conditions and significantly improved the metric's accuracy in predicting the visually lossless compression threshold by pre-training it with a synthetic dataset generated by the state-of-the-art white-box visibility metric, HDR-VDP \cite{Mantiuk2011}. In a large-scale study of 1000 images, we found that with our improved visibility metric, we can save around 60\% to 70\% of the bits for visually lossless image compression compared to the default visually lossless quality level of 90.
Because predicting image visibility and predicting image quality are closely related research topics, we also proposed a trained perceptually uniform transform for high dynamic range image and video quality assessment by training a perceptual encoding function on a set of subjective quality assessment datasets. We have shown that combining the trained perceptual encoding function with standard dynamic range image quality metrics, such as peak signal-to-noise ratio (PSNR), achieves better performance than the untrained version.
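The scheme described above applies an SDR metric to perceptually encoded HDR values. The sketch below illustrates this with a simple log-based encoding as a stand-in for the trained perceptually uniform transform (the actual transform is learned from subjective data, not a fixed logarithm).

```python
# Sketch: map HDR luminance through a perceptual encoding function,
# then apply an SDR metric such as PSNR to the encoded code values.
# The log encoding is an illustrative stand-in for the trained one.
import math

def perceptual_encode(luminance_cd_m2):
    """Placeholder encoding: compress luminance (assumed ~0.001 to
    10000 cd/m^2) into roughly perceptually uniform code values."""
    return [math.log10(max(l, 1e-3)) / 4.0 for l in luminance_cd_m2]

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB over encoded code values."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

ref = [100.0, 500.0, 1000.0, 4000.0]   # HDR luminances (cd/m^2)
dist = [110.0, 480.0, 950.0, 4200.0]   # distorted version
print(round(psnr(perceptual_encode(ref), perceptual_encode(dist)), 1))
```

The point of the encoding step is that equal code-value errors then correspond to roughly equal perceived differences, so a plain SDR metric such as PSNR becomes meaningful on HDR content.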