Exploration of PET and MRI radiomic features for decoding breast cancer phenotypes and prognosis.
Radiomics is an emerging technology for imaging biomarker discovery and disease-specific personalized treatment management. This paper aims to determine the benefit of using multi-modality radiomics data from PET and MR images in the characterization of breast cancer phenotype and prognosis. Eighty-four features were extracted from PET and MR images of 113 breast cancer patients. Unsupervised clustering based on PET and MRI radiomic features created three subgroups. These derived subgroups were statistically significantly associated with tumor grade (p = 2.0 × 10⁻⁶), tumor overall stage (p = 0.037), breast cancer subtypes (p = 0.0085), and disease recurrence status (p = 0.0053). The PET-derived first-order statistics and gray level co-occurrence matrix (GLCM) textural features were discriminative of breast cancer tumor grade, which was confirmed by the results of L2-regularized logistic regression (with repeated nested cross-validation) with an estimated area under the receiver operating characteristic curve (AUC) of 0.76 (95% confidence interval (CI) = [0.62, 0.83]). The results of ElasticNet logistic regression indicated that PET and MR radiomics distinguished recurrence-free survival, with a mean AUC of 0.75 (95% CI = [0.62, 0.88]) and 0.68 (95% CI = [0.58, 0.81]) for 1 and 2 years, respectively. The MRI-derived GLCM inverse difference moment normalized (IDMN) and the PET-derived GLCM cluster prominence were among the key features in the predictive models for recurrence-free survival. In conclusion, radiomic features from PET and MR images could be helpful in deciphering breast cancer phenotypes and may have potential as imaging biomarkers for prediction of breast cancer recurrence-free survival.
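The pipeline the abstract describes — unsupervised clustering of radiomic feature vectors, followed by L2-regularized logistic regression scored by cross-validated AUC — can be sketched roughly as below. All data here are synthetic, the cross-validation is plain rather than repeated and nested, and every name is illustrative, not the paper's implementation.

```python
# Hypothetical sketch: cluster radiomic features into subgroups, then
# fit an L2-regularized logistic classifier with cross-validated AUC.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(113, 84))        # 113 patients x 84 radiomic features
y = rng.integers(0, 2, size=113)      # e.g., high vs. low tumor grade

Xs = StandardScaler().fit_transform(X)

# Unsupervised subgrouping (the paper derived three subgroups).
subgroups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xs)

# L2-regularized logistic regression, scored by cross-validated AUC.
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
aucs = cross_val_score(clf, Xs, y, cv=5, scoring="roc_auc")
print(subgroups.shape, aucs.mean())
```

On random data the AUC hovers near chance; the point is only the shape of the workflow, not the reported numbers.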
Learning Convolutional Networks for Content-weighted Image Compression
Lossy image compression is generally formulated as a joint rate-distortion
optimization to learn the encoder, quantizer, and decoder. However, the
quantizer is non-differentiable, and discrete entropy estimation is usually
required for rate control, which makes it very challenging to develop a
convolutional network (CNN)-based image compression system. In this paper,
motivated by the observation that the local information content of an image is
spatially variant, we suggest that the bit rate of different parts of the image
should be adapted to the local content, with the content-aware bit rate
allocated under the guidance of a content-weighted importance map. The sum of
the importance map can then serve as a continuous alternative to discrete
entropy estimation for controlling the compression rate. A binarizer is adopted
to quantize the output of the encoder, since the binarization scheme is also
directly defined by the importance map. Furthermore, a proxy function is
introduced for the binary operation in backward propagation to make it
differentiable. As a result, the encoder, decoder, binarizer, and importance
map can be jointly optimized in an end-to-end manner using a subset of the
ImageNet database. Experiments show that, at low bit rates, our system
significantly outperforms JPEG and JPEG 2000 in terms of the structural
similarity (SSIM) index, and produces much better visual results with sharp
edges, rich textures, and fewer artifacts.
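The two ideas in this abstract — an importance map whose sum stands in for the discrete rate, and a hard binarizer whose backward pass is replaced by a differentiable proxy — can be illustrated with a toy numpy sketch. All shapes, the channel-keeping rule, and the identity proxy are assumptions for illustration, not the paper's architecture.

```python
# Toy sketch: (1) a content-weighted importance map whose sum acts as a
# continuous rate surrogate, and (2) a {0,1} binarizer whose backward
# pass would conceptually be replaced by an identity proxy.
import numpy as np

def binarize(x):
    """Forward: hard 0/1 quantization. In training, the backward pass
    would use a smooth proxy (e.g., identity) instead of this step."""
    return np.where(x >= 0, 1.0, 0.0)

rng = np.random.default_rng(0)
features = rng.normal(size=(8, 8, 16))    # encoder output: H x W x channels
importance = rng.uniform(size=(8, 8))     # per-location importance in [0, 1)

n_levels = 16
# Each location keeps a number of channels proportional to its importance.
keep = np.ceil(importance * n_levels).astype(int)

mask = np.arange(n_levels)[None, None, :] < keep[:, :, None]  # keep mask
bits = binarize(features) * mask          # binarized, importance-masked code

rate_proxy = importance.sum()             # differentiable rate surrogate
print(bits.shape, rate_proxy)
```

The mask ties the code length at each spatial location to the importance value there, which is why summing the map approximates the total rate.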
Neutron scattering measurements of phonons in nickel at elevated temperatures
Measurements of elastic and inelastic neutron scattering from elemental nickel were made at 10, 300, 575, 875, and 1275 K. The phonon densities of states (DOSs) were calculated from the inelastic scattering and were fit with Born–von Kármán models of the lattice dynamics. With ancillary data on thermal expansion and elastic moduli, we found a small, negative anharmonic contribution to the phonon entropy at high temperature. We used this to place bounds on the magnetic entropy of nickel. A significant broadening of the phonon DOS at elevated temperatures, another indication of anharmonicity, was also measured and quantified.
Non-local Attention Optimized Deep Image Compression
This paper proposes a novel Non-Local Attention Optimized Deep Image
Compression (NLAIC) framework, which is built on top of the popular variational
auto-encoder (VAE) structure. Our NLAIC framework embeds non-local operations
in the encoders and decoders for both the image and the latent-feature
probability information (known as the hyperprior) to capture both local and
global correlations, and applies an attention mechanism to generate masks that
weigh the features of the image and hyperprior, implicitly adapting the bit
allocation across features according to their importance. Furthermore, both
the hyperpriors and the spatial-channel neighbors of the latent features are
used to improve entropy coding. The proposed model outperforms existing methods
on the Kodak dataset, including learned (e.g., Balle2019, Balle2018) and
conventional (e.g., BPG, JPEG 2000, JPEG) image compression methods, under both
the PSNR and MS-SSIM distortion metrics.
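The non-local operations the abstract refers to let every spatial position attend to every other position, which is how global correlations are captured. A minimal numpy sketch of one such block (embedded-Gaussian form, with a residual connection, as is typical for non-local blocks) follows; the shapes and projection matrices are illustrative, not NLAIC's actual layers.

```python
# Minimal sketch of a non-local (self-attention) block: pairwise
# affinities between all positions, softmax-normalized, then a
# weighted sum of projected features, plus a residual connection.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 32))             # 16 spatial positions, 32 channels

# Learned projections in a real network; random here for illustration.
theta, phi, g = (rng.normal(size=(32, 32)) / np.sqrt(32) for _ in range(3))

attn = softmax((x @ theta) @ (x @ phi).T)  # (16, 16): each row sums to 1
y = attn @ (x @ g)                         # every position sees all others
out = x + y                                # residual connection
print(out.shape)
```

Because `attn` is dense over all position pairs, the receptive field is global in a single layer, unlike a stack of local convolutions.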
A note on the evaluation of generative models
Probabilistic generative models can be used for compression, denoising,
inpainting, texture synthesis, semi-supervised learning, unsupervised feature
learning, and other tasks. Given this wide range of applications, it is not
surprising that a lot of heterogeneity exists in the way these models are
formulated, trained, and evaluated. As a consequence, direct comparison between
models is often difficult. This article reviews mostly known but often
underappreciated properties relating to the evaluation and interpretation of
generative models with a focus on image models. In particular, we show that
three of the currently most commonly used criteria---average log-likelihood,
Parzen window estimates, and visual fidelity of samples---are largely
independent of each other when the data is high-dimensional. Good performance
with respect to one criterion therefore need not imply good performance with
respect to the other criteria. Our results show that extrapolation from one
criterion to another is not warranted and generative models need to be
evaluated directly with respect to the application(s) they were intended for.
In addition, we provide examples demonstrating that Parzen window estimates
should generally be avoided.
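The Parzen window estimate the article cautions against can be sketched concretely: fit an isotropic Gaussian kernel density to model samples, then score held-out data. In high dimensions the score is dominated by the bandwidth and the curse of dimensionality rather than sample quality, which is part of why the criterion is unreliable. The data and bandwidth below are synthetic and illustrative.

```python
# Sketch of a Parzen window (Gaussian KDE) log-likelihood estimate.
import numpy as np

def parzen_log_likelihood(samples, data, sigma):
    """Mean log-likelihood of `data` under a Gaussian KDE on `samples`."""
    d = samples.shape[1]
    # Pairwise squared distances: (n_data, n_samples).
    sq = ((data[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    log_kernel = -0.5 * sq / sigma**2
    log_norm = -0.5 * d * np.log(2 * np.pi * sigma**2) - np.log(len(samples))
    # log-sum-exp over samples for numerical stability.
    m = log_kernel.max(axis=1, keepdims=True)
    log_p = m[:, 0] + np.log(np.exp(log_kernel - m).sum(axis=1)) + log_norm
    return log_p.mean()

rng = np.random.default_rng(0)
model_samples = rng.normal(size=(500, 50))   # "generated" samples, 50-D
test_data = rng.normal(size=(100, 50))
ll = parzen_log_likelihood(model_samples, test_data, sigma=1.0)
print(ll)
```

Even with model samples drawn from the true distribution, the 50-dimensional estimate is far below the true log-likelihood, illustrating the bias the article discusses.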
Weighted universal image compression
We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
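The two-stage idea can be made concrete with a toy sketch: instead of one source code, keep a small family of quantizers and, per block, send a first-stage index selecting the member that fits that block best. The scalar codebooks, block size, and distortion measure below are illustrative assumptions, not the paper's actual design.

```python
# Toy two-stage code: a family of equal-rate scalar codebooks, each
# tuned to a different source scale; the encoder picks per block.
import numpy as np

rng = np.random.default_rng(0)

# Three 8-level codebooks (same 3 bits/sample) covering different scales.
codebooks = [np.linspace(-a, a, 8) for a in (0.5, 2.0, 10.0)]

def encode_block(block, codebooks):
    """First stage: index of the minimum-distortion codebook;
    second stage: nearest-codeword symbols under that codebook."""
    best = None
    for i, cb in enumerate(codebooks):
        syms = np.abs(block[:, None] - cb[None, :]).argmin(axis=1)
        err = ((block - cb[syms]) ** 2).sum()
        if best is None or err < best[2]:
            best = (i, syms, err)
    return best[0], best[1]

def decode_block(idx, syms, codebooks):
    return codebooks[idx][syms]

# Blocks drawn from sources whose statistics differ widely in scale:
# the selected codebook tracks the local statistics, as intended.
chosen = []
for scale in (0.3, 1.5, 8.0):
    block = rng.normal(scale=scale, size=64)
    idx, syms = encode_block(block, codebooks)
    chosen.append(idx)
    recon = decode_block(idx, syms, codebooks)
print("chosen codebook per block:", chosen)
```

The first-stage index costs a few extra bits per block, but it lets each block be coded by a quantizer matched to its local statistics, which is the source of the gains the abstract reports.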