    Learning to Convolve: A Generalized Weight-Tying Approach

    Recent work (Cohen & Welling, 2016) has shown that generalizations of convolutions, based on group theory, provide powerful inductive biases for learning. In these generalizations, filters are not only translated but can also be rotated, flipped, etc. However, coming up with exact models of how to rotate a 3 x 3 filter on a square pixel grid is difficult. In this paper, we learn how to transform filters for use in the group convolution, focusing on roto-translation. For this, we learn a filter basis and all rotated versions of that filter basis. Filters are then encoded by a set of rotation-invariant coefficients. To rotate a filter, we switch the basis. We demonstrate that we can produce feature maps with low sensitivity to input rotations while achieving high performance on MNIST and CIFAR-10. Comment: Accepted to ICML 2019.
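
    To make the basis-switching idea concrete, here is a minimal PyTorch sketch; the shapes, names, and random initialization are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

# Hedged sketch: filters live as rotation-invariant coefficients over a
# learned basis; one rotated copy of the basis is kept per orientation,
# so "rotating" a filter is just a switch of basis.
n_basis, k, n_rot = 8, 3, 4            # basis size, filter width, orientations
n_in, n_out = 16, 32                   # channel counts

# Learned basis for each orientation: (n_rot, n_basis, k, k).
rotated_bases = torch.randn(n_rot, n_basis, k, k, requires_grad=True)
# Rotation-invariant filter coefficients: (n_out, n_in, n_basis).
coeffs = torch.randn(n_out, n_in, n_basis, requires_grad=True)

def filters_at(rotation: int) -> torch.Tensor:
    """Synthesize (n_out, n_in, k, k) filters for one orientation."""
    return torch.einsum('oib,bxy->oixy', coeffs, rotated_bases[rotation])

# A roto-translation group convolution stacks the responses of all rotated
# filter copies along a new orientation axis.
x = torch.randn(1, n_in, 28, 28)
out = torch.stack([F.conv2d(x, filters_at(r), padding=1)
                   for r in range(n_rot)], dim=2)   # (1, n_out, n_rot, 28, 28)
```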

    Reversible GANs for Memory-efficient Image-to-Image Translation

    The Pix2pix and CycleGAN losses have vastly improved the qualitative and quantitative visual quality of results in image-to-image translation tasks. We extend this framework by exploring approximately invertible architectures, which are well suited to these losses. These architectures are approximately invertible by design and thus partially satisfy cycle-consistency before training even begins. Furthermore, since invertible architectures have constant memory complexity in depth, these models can be built arbitrarily deep. We demonstrate superior quantitative output on the Cityscapes and Maps datasets at a near-constant memory budget.
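
    The constant-memory-in-depth property comes from the fact that an invertible block can recompute its input from its output, so activations need not be stored for backpropagation. Below is a minimal sketch of a standard additive-coupling (RevNet-style) block of this kind; the layer choices inside f and g are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Additive coupling: invertible by construction."""
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.f = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(half, half, 3, padding=1))
        self.g = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(half, half, 3, padding=1))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y):
        # Exact inversion: activations can be recomputed instead of stored,
        # which is why memory stays constant as depth grows.
        y1, y2 = y.chunk(2, dim=1)
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return torch.cat([x1, x2], dim=1)

block = ReversibleBlock(16)
x = torch.randn(2, 16, 32, 32)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-5)
```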

    Interpretable Transformations with Encoder-Decoder Networks

    Deep feature spaces have the capacity to encode complex transformations of their input data. However, understanding the relative feature-space relationship between two transformed encoded images is difficult. For instance, what is the relative feature-space relationship between two rotated images? What is decoded when we interpolate in feature space? Ideally, we want to disentangle confounding factors, such as pose, appearance, and illumination, from object identity. Disentangling these is difficult because they interact in very nonlinear ways. We propose a simple method to construct a deep feature space with explicitly disentangled representations of several known transformations. A person or algorithm can then manipulate the disentangled representation, for example, to re-render an image with explicit control over parameterized degrees of freedom. The feature space is constructed using a transforming encoder-decoder network with a custom feature transform layer acting on the hidden representations. We demonstrate the advantages of explicit disentangling on a variety of datasets and transformations, and as an aid for traditional tasks such as classification. Comment: Accepted at ICCV 2017.
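
    As a rough sketch of what such a feature transform layer can look like: the latent code is split into a pose part and an identity part, and the pose part is treated as a stack of planar 2-vectors, each rotated by the known transformation angle. The split sizes and pairing scheme below are assumptions, not the paper's exact layer.

```python
import torch

def rotate_features(z_pose: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """z_pose: (batch, 2*m) read as m planar 2-vectors; theta: (batch,)."""
    b = z_pose.shape[0]
    z = z_pose.view(b, -1, 2)                                  # (batch, m, 2)
    c, s = torch.cos(theta), torch.sin(theta)
    rot = torch.stack([torch.stack([c, -s], dim=-1),
                       torch.stack([s,  c], dim=-1)], dim=-2)  # (batch, 2, 2)
    return torch.einsum('bij,bmj->bmi', rot, z).reshape(b, -1)

z_pose, z_id = torch.randn(4, 32), torch.randn(4, 96)  # disentangled halves
theta = torch.full((4,), 0.5)                          # rotation angle (rad)
z_out = torch.cat([rotate_features(z_pose, theta), z_id], dim=1)
# Feeding z_out to the decoder would then re-render the rotated image
# while the identity part of the code is left untouched.
```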

    Affine Self Convolution

    Attention mechanisms, most prominently self-attention, are a powerful building block for processing not only text but also images, providing a parameter-efficient method for aggregating inputs. We focus on self-attention in vision models and combine it with convolution, which, as far as we know, we are the first to do. What emerges is a convolution with data-dependent filters. We call this an Affine Self Convolution. Although the filter differs at each spatial location, we show that the operation is translation equivariant. We also modify the Squeeze and Excitation variant of attention, extending both variants of attention to the roto-translation group. We evaluate these new models on CIFAR-10 and CIFAR-100 and show a reduction in the number of parameters while reaching comparable or higher test accuracy than self-trained baselines.
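
    A loose illustration of a "convolution with data-dependent filters" is sketched below: a shared kernel is reweighted per location by attention computed from the local patch itself. The learned query and the softmax form are assumptions; this does not reproduce the paper's exact affine construction.

```python
import torch
import torch.nn.functional as F

def data_dependent_conv(x, weight, query):
    """x: (B, C, H, W); weight: (C_out, C, k, k); query: (C,)."""
    b, c, h, w = x.shape
    c_out, _, k, _ = weight.shape
    patches = F.unfold(x, k, padding=k // 2)           # (B, C*k*k, H*W)
    taps = patches.view(b, c, k * k, h * w)
    # Attention over the k*k filter taps, computed from the input itself.
    attn = torch.softmax(torch.einsum('c,bcpn->bpn', query, taps), dim=1)
    weighted = (taps * attn.unsqueeze(1)).reshape(b, c * k * k, h * w)
    out = torch.einsum('op,bpn->bon', weight.view(c_out, -1), weighted)
    return out.view(b, c_out, h, w)

y = data_dependent_conv(torch.randn(2, 8, 16, 16),
                        torch.randn(12, 8, 3, 3),
                        torch.randn(8))                # (2, 12, 16, 16)
```

    Because the same parameters and the same local rule are applied at every position, the operation commutes with translation, which is consistent with the equivariance claim above.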

    Learning Likelihoods with Conditional Normalizing Flows

    Normalizing Flows (NFs) are able to model complicated distributions p(y) with strong inter-dimensional correlations and high multimodality by transforming a simple base density p(z) through an invertible neural network under the change-of-variables formula. Such behavior is desirable in multivariate structured prediction tasks, where handcrafted per-pixel loss-based methods inadequately capture strong correlations between output dimensions. We present a study of conditional normalizing flows (CNFs), a class of NFs in which the mapping from base density to output space is conditioned on an input x, to model conditional densities p(y|x). CNFs are efficient in sampling and inference, can be trained with a likelihood-based objective, and, being generative flows, do not suffer from mode collapse or training instabilities. We provide an effective method to train continuous CNFs for binary problems; in particular, we apply these CNFs to super-resolution and vessel segmentation tasks, demonstrating competitive performance on standard benchmark datasets in terms of likelihood and conventional metrics. Comment: 18 pages, 8 tables, 9 figures. Preprint.
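
    For reference, the standard conditional change-of-variables objective underlying such models can be written as follows, where z = f_phi(y; x) is the invertible, x-conditioned map to the base space (the notation f_phi is ours, not necessarily the paper's):

```latex
\log p(y \mid x)
  = \log p_Z\big(f_\phi(y; x)\big)
  + \log \left| \det \frac{\partial f_\phi(y; x)}{\partial y} \right|
```

    Sampling then amounts to drawing z from p_Z and computing y = f_phi^{-1}(z; x), which is why such flows are efficient in both inference and sampling.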

    Nutrition Education in Vermont Public Schools

    Introduction. Despite positive changes, childhood obesity and food insecurity remain prevalent across the country, and Vermont is not immune to these issues. We set out to research the level of nutrition education Vermont elementary schools provide their students, understand teacher perceptions of these programs, and recommend ways to fill identified gaps. Methods. Our study is a cross-sectional survey of Vermont educators about nutrition education. The 17-question survey, administered via LimeSurvey, included demographic and nutrition education items and was distributed statewide through newsletters and listservs. Results. 64 responses met the inclusion criteria. Vermont elementary school (K-6) teachers report a mean satisfaction score of 2.51 out of 5.0 for their schools' current nutrition education programs. School nurses reported a score of 2.5 out of 5.0. The highest satisfaction scores came from school administrators and health and wellness coordinators (3.3 out of 5.0). Comparing teachers with non-classroom educators (administrators and nutrition educators) showed a significant difference in the distribution of high (3-5) versus low (1-2) satisfaction scores (Fisher's exact test, p = 0.009). Overall, Vermont elementary school teachers report a high level of nutrition knowledge themselves (4.1/5.0) but perceive a lower level of understanding among their students (2.5/5.0). Conclusions. Given teacher perceptions of current school nutrition education programs, development and implementation of a statewide nutrition education curriculum with dedicated teaching time may be warranted. Programs recommended by the CDC include Eat Well & Get Moving and Planet Health, designed by the Harvard School of Public Health; these could be adapted as a framework for Vermont.

    Supervised Uncertainty Quantification for Segmentation with Multiple Annotations

    Accurate estimation of predictive uncertainty is important in medical scenarios such as lung nodule segmentation. Unfortunately, most existing work on predictive uncertainty does not return calibrated uncertainty estimates that could be used in practice. In this work we exploit multi-grader annotation variability as a source of 'groundtruth' aleatoric uncertainty, which can be treated as a target in a supervised learning problem. We combine this groundtruth uncertainty with a Probabilistic U-Net and test on the LIDC-IDRI lung nodule CT dataset and the MICCAI 2012 prostate MRI dataset. We find that we are able to improve predictive uncertainty estimates, as well as sample accuracy and sample diversity. In real-world applications, our method could inform doctors about the confidence of segmentation results. Comment: MICCAI 2019. Fixed a few typos.
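
    A minimal sketch of the idea of turning multi-grader variability into a supervised target: per-pixel disagreement across annotators serves as the 'groundtruth' aleatoric label. The entropy form below is an assumption for illustration; the paper may define the target differently.

```python
import numpy as np

def aleatoric_target(masks: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """masks: (n_graders, H, W) binary annotations -> (H, W) entropy map."""
    p = masks.mean(axis=0)                   # per-pixel foreground rate
    return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

graders = np.stack([np.random.randint(0, 2, (64, 64)) for _ in range(4)])
u = aleatoric_target(graders)    # high wherever the four graders disagree
# A network head can then regress u alongside the usual segmentation loss,
# giving a calibrated, supervised estimate of aleatoric uncertainty.
```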