
    Recurrent Spatial Transformer Networks

    We integrate the recently proposed spatial transformer network (SPN) [Jaderberg et al. 2015] into a recurrent neural network (RNN) to form an RNN-SPN model. We use the RNN-SPN to classify digits in cluttered MNIST sequences. The proposed model achieves a single-digit error of 1.5%, compared to 2.9% for a convolutional network and 2.0% for a convolutional network with SPN layers. The SPN outputs a zoomed, rotated, and skewed version of the input image. We investigate different down-sampling factors (the ratio of pixels in input to output) for the SPN and show that the RNN-SPN model is able to down-sample the input images without deteriorating performance. The down-sampling in the RNN-SPN can be thought of as adaptive down-sampling that minimizes the information loss in the regions of interest. We attribute the superior performance of the RNN-SPN to the fact that it can attend to a sequence of regions of interest.
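    The core SPN operation the abstract relies on can be sketched as follows: an affine transform maps a (possibly smaller) output grid of normalized coordinates back into the input image, which is then bilinearly sampled. This is a minimal NumPy sketch, not the authors' implementation; the function names and the 2x3 matrix convention are illustrative assumptions. With a zoom factor below one and a small output grid, the network down-samples globally while keeping full resolution inside the attended region.

```python
import numpy as np

def affine_grid(theta, out_h, out_w):
    # theta: hypothetical 2x3 affine matrix mapping output coordinates
    # (normalized to [-1, 1]) to input coordinates, as in an SPN localization net.
    ys, xs = np.meshgrid(np.linspace(-1, 1, out_h),
                         np.linspace(-1, 1, out_w), indexing="ij")
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (H, W, 3)
    return coords @ theta.T                                 # (H, W, 2): (x, y)

def bilinear_sample(img, grid):
    # Differentiable-in-principle bilinear sampling of img at grid locations.
    h, w = img.shape
    x = (grid[..., 0] + 1) * (w - 1) / 2   # map [-1, 1] -> pixel coords
    y = (grid[..., 1] + 1) * (h - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    dx, dy = x - x0, y - y0
    top = img[y0, x0] * (1 - dx) + img[y0, x0 + 1] * dx
    bot = img[y0 + 1, x0] * (1 - dx) + img[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy

# A 0.5 "zoom" transform attends to the central half of the image; sampling it
# onto a 2x2 grid is an adaptive 2x down-sampling of a 4x4 input.
img = np.arange(16.0).reshape(4, 4)
theta = np.array([[0.5, 0.0, 0.0],
                  [0.0, 0.5, 0.0]])
patch = bilinear_sample(img, affine_grid(theta, 2, 2))
```

    In the paper's recurrent setting, a new `theta` would be predicted at each RNN step, letting the model attend to one digit of the sequence at a time.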

    Auxiliary Deep Generative Models

    Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables, which improve the variational approximation. The auxiliary variables leave the generative model unchanged but make the variational distribution more expressive. Inspired by the structure of the auxiliary variable, we also propose a model with two stochastic layers and skip connections. Our findings suggest that more expressive and properly specified deep generative models converge faster with better results. We show state-of-the-art performance within semi-supervised learning on the MNIST, SVHN, and NORB datasets. Comment: Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016, JMLR: Workshop and Conference Proceedings volume 48.
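    The auxiliary-variable construction the abstract describes can be written out directly; this is a standard sketch of the factorization, under the usual notation for this family of models. The generative model is augmented with an auxiliary variable $a$ so that marginalizing over $a$ recovers the original model, while the variational posterior over $z$ becomes a continuous mixture:

```latex
% Augmented generative model; p(x, z) is unchanged after marginalizing a:
p(x, z, a) = p(a \mid x, z)\, p(x \mid z)\, p(z)

% Variational posterior with auxiliary variable:
q(a, z \mid x) = q(a \mid x)\, q(z \mid a, x)
% implied marginal: q(z \mid x) = \int q(a \mid x)\, q(z \mid a, x)\, da,
% a mixture over a, hence more expressive than a single factorized Gaussian.

% Evidence lower bound:
\log p(x) \;\ge\; \mathcal{L}
  = \mathbb{E}_{q(a, z \mid x)}\!\left[
      \log \frac{p(a \mid x, z)\, p(x \mid z)\, p(z)}
                {q(a \mid x)\, q(z \mid a, x)}
    \right]
```

    Because $a$ enters both numerator and denominator, the bound stays tight on $p(x, z)$ while $q(z \mid x)$ gains flexibility.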

    Life Cycle Savings, Bequest, and the Diminishing Impact of Scale on Growth

    There appears to be ample evidence that the size of the population acted as a stimulus to growth in historical times; scale mattered. In the post-World War II era, however, there is little evidence of such scale effects on growth. Where did the scale effect go? The present paper shows that the savings motive critically affects the size and sign of scale effects in standard endogenous growth models. If the bequest motive dominates, the scale effect is positive. If the life cycle motive dominates, the scale effect is ambiguous and may be negative. A declining importance of bequest in capital accumulation could therefore be one reason why scale seems to matter less today than in historical times.
    Keywords: overlapping generations; endogenous growth; scale effects

    Reconstructing Centrality and Peripherality in the North Denmark Region: A Question of Scale and Typology


    Robust Comparative Statics in Large Dynamic Economies

    We consider infinite horizon economies populated by a continuum of agents who are subject to idiosyncratic shocks. This framework contains models of saving and capital accumulation with incomplete markets in the spirit of works by Bewley, Aiyagari, and Huggett, and models of entry, exit, and industry dynamics in the spirit of Hopenhayn's work as special cases. Robust and easy-to-apply comparative statics results are established with respect to exogenous parameters as well as various kinds of changes in the Markov processes governing the law of motion of the idiosyncratic shocks. These results complement the existing literature, which uses simulations and numerical analysis to study this class of models, and are illustrated using a number of examples.

    Autoencoding beyond pixels using a learned similarity metric

    We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network, we can use learned feature representations in the GAN discriminator as the basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance to, e.g., translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
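    The swap the abstract describes, replacing a pixel-space error with an error measured in discriminator feature space, can be sketched minimally. This is an illustrative NumPy sketch, not the paper's model: the fixed random projection `disc_features` is a hypothetical stand-in for an intermediate discriminator layer, which in the actual method is learned jointly with the GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an intermediate discriminator layer Dis_l(x);
# in the paper these features are learned, here a fixed random projection.
W = rng.normal(size=(32, 64)) / 8.0

def disc_features(x):
    return np.maximum(W @ x, 0.0)  # linear map + ReLU

def elementwise_err(x, x_rec):
    # Standard VAE reconstruction error, computed pixel by pixel.
    return np.mean((x - x_rec) ** 2)

def featurewise_err(x, x_rec):
    # VAE/GAN-style objective: compare images in feature space instead.
    return np.mean((disc_features(x) - disc_features(x_rec)) ** 2)

x = rng.normal(size=64)                   # a flattened "image"
x_rec = x + 0.05 * rng.normal(size=64)    # an imperfect reconstruction
```

    Training then minimizes `featurewise_err` for the VAE decoder while the discriminator (and hence the feature map) is trained adversarially, so the similarity metric itself is learned.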