Histopathology relies on the analysis of microscopic tissue images to
diagnose disease. A crucial part of tissue preparation is staining, whereby a
dye is used to make the salient tissue components more distinguishable.
However, differences in laboratory protocols and scanning devices result in
significant confounding appearance variation in the corresponding images. This
variation increases both human error and inter-rater variability, and it
hinders the performance of automatic or semi-automatic methods. In the
present paper we introduce an unsupervised adversarial network to translate
(and hence normalize) whole slide images across multiple data acquisition
domains. Our key contributions are: (i) an adversarial architecture which
learns across multiple domains with a single generator-discriminator network
using an information flow branch which optimizes for perceptual loss, and (ii)
the inclusion of an additional feature extraction network during training which
guides the transformation network to keep all the structural features in the
tissue image intact. We (i) demonstrate the effectiveness of the proposed
method on H\&E slides from 120 cases of kidney cancer, and (ii) show the
benefits of the approach on more general problems, such as flexible
illumination-based natural image enhancement and light source adaptation.