SAGE: Sequential Attribute Generator for Analyzing Glioblastomas using Limited Dataset
While deep learning approaches have shown remarkable performance in many
imaging tasks, most of these methods rely on availability of large quantities
of data. Medical image data, however, is scarce and fragmented. Generative
Adversarial Networks (GANs) have recently been very effective in handling such
datasets by generating more data. If the datasets are very small, however, GANs
cannot learn the data distribution properly, resulting in less diverse or
low-quality results. One such limited dataset is that for the concurrent gain
of chromosomes 19 and 20 (19/20 co-gain), a mutation with positive prognostic
value in Glioblastomas (GBM). In this paper, we detect imaging biomarkers for
the mutation to streamline the extensive and invasive prognosis pipeline. Since
this mutation is relatively rare and the resulting dataset is small, we propose
a novel generative framework, the Sequential Attribute GEnerator (SAGE), which
generates detailed tumor imaging features while learning from a limited
dataset. Experiments show that not only does SAGE generate high-quality tumors
compared to the standard Deep Convolutional GAN (DC-GAN) and the Wasserstein
GAN with Gradient Penalty (WGAN-GP), it also captures the imaging biomarkers
accurately.
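The WGAN-GP baseline mentioned above penalizes the critic's input-gradient norm for deviating from 1 on interpolates between real and generated samples. A minimal numpy sketch of that penalty term, using a toy linear critic whose input gradient is known in closed form (the weight vector), is shown below; the names (`critic_weights`, `LAMBDA`) and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch of the WGAN-GP objective using a toy linear critic
# f(x) = w . x. For a linear critic the input gradient at any point is
# simply w, so the gradient penalty can be computed without autograd.
# All names and the synthetic data are assumptions for illustration.

rng = np.random.default_rng(0)
LAMBDA = 10.0                      # gradient-penalty coefficient from WGAN-GP
critic_weights = rng.normal(size=4)

real = rng.normal(loc=1.0, size=(8, 4))    # toy "real" samples
fake = rng.normal(loc=-1.0, size=(8, 4))   # toy "generated" samples

def critic(x):
    return x @ critic_weights

# Random interpolates between real and fake samples, as in WGAN-GP.
eps = rng.uniform(size=(8, 1))
x_hat = eps * real + (1.0 - eps) * fake

# For this linear critic the gradient at every x_hat equals critic_weights,
# so the penalty (||grad|| - 1)^2 is identical across the batch.
grad_norm = np.linalg.norm(critic_weights)
penalty = np.mean((grad_norm - 1.0) ** 2)

# Critic loss: Wasserstein estimate plus the gradient penalty.
loss = critic(fake).mean() - critic(real).mean() + LAMBDA * penalty
```

In a real setting the critic is a neural network and the gradient at `x_hat` must be obtained by automatic differentiation; the linear critic here only makes the penalty term concrete.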
Deep learning in urban analysis for health
The application of deep learning to urban health analysis is in its early stages, but it offers new and promising capabilities for using large image-based datasets to better understand the built environment and its effects on human health. This chapter will introduce and explore some of these capabilities, providing the allied design fields with a roadmap of this emerging area of research, its potentials, and its current challenges. The chapter begins with a brief overview of existing research related to urban morphology and health, in which precedent work using traditional methods as well as deep learning is introduced. Next, research is presented demonstrating methods for the use of discriminative and generative deep learning processes for both urban health estimation and analysis. The chapter then concludes with a discussion of key challenges and directions for future work in this emerging field of research.
The Unreasonable Effectiveness of Deep Features as a Perceptual Metric
While it is nearly effortless for humans to quickly assess the perceptual
similarity between two images, the underlying processes are thought to be quite
complex. Despite this, the most widely used perceptual metrics today, such as
PSNR and SSIM, are simple, shallow functions, and fail to account for many
nuances of human perception. Recently, the deep learning community has found
that features of the VGG network trained on ImageNet classification are
remarkably useful as a training loss for image synthesis. But how perceptual
are these so-called "perceptual losses"? What elements are critical for their
success? To answer these questions, we introduce a new dataset of human
perceptual similarity judgments. We systematically evaluate deep features
across different architectures and tasks and compare them with classic metrics.
We find that deep features outperform all previous metrics by large margins on
our dataset. More surprisingly, this result is not restricted to
ImageNet-trained VGG features, but holds across different deep architectures
and levels of supervision (supervised, self-supervised, or even unsupervised).
Our results suggest that perceptual similarity is an emergent property shared
across deep visual representations.
Comment: Accepted to CVPR 2018; Code and data available at
https://www.github.com/richzhang/PerceptualSimilarit
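The distance the abstract describes can be sketched in a few lines: unit-normalize each layer's activations along the channel dimension, take squared differences, and average over spatial positions and layers. The numpy sketch below uses random arrays in place of real VGG feature maps; the function name, layer shapes, and unweighted averaging are simplifying assumptions, not the paper's exact formulation (which learns per-channel weights).

```python
import numpy as np

# Minimal sketch of a deep-feature perceptual distance: channel-normalize
# activations, square the differences, average over space and layers.
# Random arrays stand in for real network features; shapes are assumptions.

def feature_distance(feats_a, feats_b, eps=1e-10):
    """Average channel-normalized squared distance over a list of layers.

    Each element of feats_a / feats_b is a (channels, height, width) array.
    """
    total = 0.0
    for fa, fb in zip(feats_a, feats_b):
        # Unit-normalize along the channel axis.
        na = fa / (np.linalg.norm(fa, axis=0, keepdims=True) + eps)
        nb = fb / (np.linalg.norm(fb, axis=0, keepdims=True) + eps)
        # Mean squared difference over channels and spatial positions.
        total += np.mean((na - nb) ** 2)
    return total / len(feats_a)

rng = np.random.default_rng(0)
# Two fake feature stacks, two "layers" each: (channels, height, width).
layers_a = [rng.normal(size=(64, 8, 8)), rng.normal(size=(128, 4, 4))]
layers_b = [rng.normal(size=(64, 8, 8)), rng.normal(size=(128, 4, 4))]

d_same = feature_distance(layers_a, layers_a)   # identical inputs give 0
d_diff = feature_distance(layers_a, layers_b)   # differing inputs give > 0
```

Channel normalization matters here: it makes the comparison depend on the direction of the activation vector at each spatial position rather than its magnitude, which varies widely across layers.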