PathologyGAN: Learning deep representations of cancer tissue
We apply Generative Adversarial Networks (GANs) to the domain of digital
pathology. Current machine learning research for digital pathology focuses on
diagnosis, but we suggest a different approach and advocate that generative
models could drive forward the understanding of morphological characteristics
of cancer tissue. In this paper, we develop a framework which allows GANs to
capture key tissue features and uses these characteristics to give structure to
its latent space. To this end, we trained our model on 249K H&E breast cancer
tissue images, extracted from 576 TMA images of patients from the Netherlands
Cancer Institute (NKI) and Vancouver General Hospital (VGH) cohorts. We show
that our model generates high-quality images, with a Fréchet Inception Distance
(FID) of 16.65. We further assess image quality using cancer tissue
characteristics (e.g. counts of cancer cells, lymphocytes, or stromal cells),
computing the FID over these quantitative features and obtaining a consistent
score of 9.86. Additionally, the latent space of our model shows an
interpretable structure and allows semantic vector operations that translate
into tissue feature transformations. Furthermore, ratings from two expert
pathologists found no significant difference between our generated tissue
images and real ones. The code, generated images, and pretrained model are
available at https://github.com/AdalbertoCq/Pathology-GAN
Comment: MIDL 2020 final version
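The FID figures quoted above compress the comparison between real and generated tissue into a single number: fit a Gaussian to the feature vectors extracted from each image set and measure the Fréchet distance between the two Gaussians. Below is a minimal sketch of that metric, not the paper's evaluation code; the feature arrays, their dimensionality, and the random stand-in data are assumptions made only to keep the example runnable.

import numpy as np
from scipy import linalg

def frechet_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    # Gaussian statistics of each feature set: mean vector and covariance.
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    sigma_r = np.cov(real_feats, rowvar=False)
    sigma_f = np.cov(fake_feats, rowvar=False)
    # Matrix square root of the covariance product; drop the tiny imaginary
    # component that numerical error can introduce.
    covmean = linalg.sqrtm(sigma_r @ sigma_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(sigma_r + sigma_f - 2.0 * covmean))

# Illustrative usage with random stand-in features (in the paper these would be
# Inception activations, or the hand-crafted cell-count statistics).
rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 64))
fake = rng.normal(loc=0.1, size=(1000, 64))
print(frechet_distance(real, fake))

The "semantic vector operations" on the latent space likewise reduce, in sketch form, to plain arithmetic on latent codes before decoding; the generator call and the latent dimensionality below are placeholders, not the released model's actual API.

z_a, z_b, z_c = (rng.normal(size=(1, 200)) for _ in range(3))
z_shifted = z_a + (z_b - z_c)   # move z_a along a tissue-feature direction
# image = generator(z_shifted)  # decode with the trained GAN generator (placeholder)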
Deep Learning as a Parton Shower
We make the connection between certain deep learning architectures and the
renormalisation group explicit in the context of QCD by using a deep learning
network to construct a toy parton shower model. The model aims to describe
proton-proton collisions at the Large Hadron Collider. A convolutional
autoencoder learns a set of kernels that efficiently encode the behaviour of
fully showered QCD collision events. The network is structured recursively so
as to ensure self-similarity, and the number of trained network parameters is
low. Randomness is introduced via a novel custom masking layer, which also
preserves existing parton splittings by using layer-skipping connections. By
applying a shower merging procedure, the network can be evaluated on unshowered
events produced by a matrix element calculation. The trained network behaves as
a parton shower that qualitatively reproduces jet-based observables.
Comment: 26 pages, 13 figures
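As a rough illustration of the recursive, self-similar structure described above, the sketch below reuses one small convolutional kernel at every recursion depth, injects randomness through a binary mask, and lets masked-out positions keep their previous values through a skip connection. It is a toy reading of the abstract rather than the authors' architecture; the layer sizes, the masking rule, and the event tensor shape are all assumptions.

import torch
import torch.nn as nn

class RecursiveSplittingBlock(nn.Module):
    def __init__(self, channels: int = 8, split_prob: float = 0.5):
        super().__init__()
        # A single shared kernel: reusing it at every depth keeps the trained
        # parameter count low and enforces self-similarity across scales.
        self.kernel = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.split_prob = split_prob

    def forward(self, x: torch.Tensor, depth: int) -> torch.Tensor:
        for _ in range(depth):
            candidate = torch.relu(self.kernel(x))
            # Random binary mask: each position either adopts the new
            # "splitting" or keeps its existing value via the skip path.
            mask = (torch.rand_like(candidate) < self.split_prob).float()
            x = mask * candidate + (1.0 - mask) * x
        return x

# Example: 4 events, 8 feature channels, 64 bins, 5 recursive shower steps.
shower = RecursiveSplittingBlock()
events = torch.randn(4, 8, 64)
showered = shower(events, depth=5)
print(showered.shape)  # torch.Size([4, 8, 64])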