10,663 research outputs found
Improving Sampling from Generative Autoencoders with Markov Chains
We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly learn a generative model alongside an inference model. We define generative autoencoders as autoencoders which are trained to softly enforce a prior on the latent distribution learned by the model. However, the model does not necessarily learn to match the prior. We formulate a Markov chain Monte Carlo (MCMC) sampling process, equivalent to iteratively encoding and decoding, which allows us to sample from the learned latent distribution. Using this, we can improve the quality of samples drawn from the model, especially when the learned distribution is far from the prior. Using MCMC sampling, we also reveal previously unseen differences between generative autoencoders trained either with or without the denoising criterion.
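The iterative encode/decode chain described in this abstract can be sketched with toy linear maps standing in for the trained encoder and decoder. Everything below is illustrative, not the paper's implementation: a real generative autoencoder uses trained neural networks, so each chain step genuinely moves latent samples toward the learned latent distribution, whereas this linear toy pair makes the chain an exact fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a trained generative autoencoder's networks.
# decode maps latent vectors to data space; encode maps data back to latents.
W = rng.standard_normal((8, 2)) * 0.5

def decode(z):
    return z @ W.T                    # toy deterministic decoder

def encode(x):
    return x @ np.linalg.pinv(W.T)    # toy encoder (pseudo-inverse of decoder)

def mcmc_sample(z0, n_steps=5):
    """One MCMC chain as in the abstract: repeatedly decode a latent sample
    and re-encode the result."""
    z = z0
    for _ in range(n_steps):
        z = encode(decode(z))
    return z

z_prior = rng.standard_normal((4, 2))  # initial draw from the chosen prior
z_chain = mcmc_sample(z_prior)         # latents after running the chain
x_samples = decode(z_chain)            # samples drawn via the chain
print(x_samples.shape)                 # (4, 8)
```

In the trained-network setting, the gap between `z_prior` and `z_chain` after a few steps is exactly the prior/learned-distribution mismatch the paper discusses.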
Denoising Adversarial Autoencoders: Classifying Skin Lesions Using Limited Labelled Training Data
We propose a novel deep learning model for classifying medical images in the setting where there is a large amount of unlabelled medical data available, but labelled data is in limited supply. We consider the specific case of classifying skin lesions as either malignant or benign. In this setting, the proposed approach -- the semi-supervised, denoising adversarial autoencoder -- is able to utilise vast amounts of unlabelled data to learn a representation for skin lesions, and small amounts of labelled data to assign class labels based on the learned representation. We analyse the contributions of both the adversarial and denoising components of the model and find that the combination yields superior classification performance in the setting of limited labelled training data.
Comment: Under consideration for the IET Computer Vision Journal special issue on "Computer Vision in Cancer Data Analysis"
Inverting the generator of a generative adversarial network
Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an "inverse model", a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN. Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model, and to quantify GAN performance based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide code for all of our experiments at https://github.com/ToniCreswell/InvertingGAN.
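Inversion by minimising a reconstruction loss over the latent vector can be sketched as follows. This is a minimal toy, not the paper's method: a linear map stands in for the trained generator, which makes the analytic gradient easy; with a real GAN one would use autodiff over the network instead.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear "generator" standing in for a pretrained GAN generator.
A = rng.standard_normal((16, 4))

def G(z):
    return z @ A.T  # maps 4-d latent vectors to 16-d "images"

# A target sample we want to invert (here generated, so a perfect latent exists).
z_true = rng.standard_normal(4)
x = G(z_true)

# Inversion: gradient descent on the reconstruction loss ||G(z) - x||^2,
# starting from the prior mean.
z = np.zeros(4)
lr = 0.01
for _ in range(2000):
    grad = 2.0 * (G(z) - x) @ A  # analytic gradient of the squared error
    z -= lr * grad

recon_loss = np.sum((G(z) - x) ** 2)
print(recon_loss)  # close to zero: z has converged to a valid preimage of x
```

The final reconstruction loss is the quantity the abstract proposes for quantifying which attributes of a data set the GAN can model: samples the generator cannot reproduce retain a large residual loss after optimisation.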
Adversarial training for sketch retrieval
Generative Adversarial Networks (GANs) are able to learn excellent representations for unlabelled data which can be applied to image generation and scene classification. Representations learned by GANs have not yet been applied to retrieval. In this paper, we show that the representations learned by GANs can indeed be used for retrieval. We consider heritage documents that contain unlabelled Merchant Marks, sketch-like symbols that are similar to hieroglyphs. We introduce a novel GAN architecture with design features that make it suitable for sketch retrieval. The performance of this sketch-GAN is compared to a modified version of the original GAN architecture with respect to simple invariance properties. Experiments suggest that sketch-GANs learn representations that are suitable for retrieval and which also have increased stability to rotation, scale and translation compared to the standard GAN architecture.
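Retrieval with learned representations, as described in this abstract, follows a generic pattern: embed query and database items with the learned feature extractor, then rank by similarity. The sketch below assumes a stand-in feature extractor (a fixed random projection plus ReLU, purely illustrative); in the paper's setting the features would come from the trained sketch-GAN, and the similarity measure is this sketch's choice, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for a GAN-learned feature extractor on flattened images.
P = rng.standard_normal((64, 32))

def features(images):
    return np.maximum(images @ P, 0.0)  # random projection + ReLU

def retrieve(query, database, k=3):
    """Rank database items by cosine similarity to the query in feature space."""
    q = features(query[None])[0]
    D = features(database)
    q = q / np.linalg.norm(q)
    D = D / np.linalg.norm(D, axis=1, keepdims=True)
    scores = D @ q                      # cosine similarities
    return np.argsort(-scores)[:k]      # indices of the k best matches

database = rng.random((10, 64))                        # 10 toy "sketches"
query = database[7] + 0.01 * rng.standard_normal(64)   # noisy copy of item 7
print(retrieve(query, database))                       # item 7 ranks first
```

The invariance properties the abstract measures (rotation, scale, translation) matter precisely because retrieval quality depends on transformed versions of a sketch landing near each other in this feature space.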
The Unfulfilled Potential of Data-Driven Decision Making in Agile Software Development
With the general trend towards data-driven decision making (DDDM), organizations are looking for ways to use DDDM to improve their decisions. However, few studies have looked into practitioners' views of DDDM, in particular for agile organizations. In this paper we investigated the experiences of using DDDM, and how data can improve decision making. A questionnaire was emailed to 124 industry practitioners at agile software development companies, of whom 84 answered. The results show that few practitioners indicated a widespread use of DDDM in their current decision-making practices. The practitioners were more positive about its future use for higher-level and more general decision making, fairly positive about its use for requirements elicitation and prioritization decisions, and less positive about its future use at the team level. The practitioners do see a lot of potential for DDDM in an agile context; however, that potential is currently unfulfilled.
Research into anxiety of childhood: playing catch-up (to Olympic standard)
This special issue is the culmination of an ESRC seminar series grant awarded to the authors of this editorial. We named the seminar series CATTS (Child Anxiety, Theory and Treatment Seminars) and it took the form of six highly stimulating, one-day seminars on the subject of child anxiety, with participants from clinical and academic backgrounds and from Great Britain, Europe, the USA and Australia. Most of the authors in this publication, and a sister special issue in Cognition and Emotion (2008), participated in the CATTS series.
Achieving Integration in Mixed Methods Designs—Principles and Practices
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed methods designs -- exploratory sequential, explanatory sequential, and convergent -- and through four advanced frameworks -- multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent to which the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods.