
    On gradient regularizers for MMD GANs

    We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD). We show that controlling the gradient of the critic is vital to having a sensible loss function, and devise a method to enforce exact, analytical gradient constraints at no additional cost compared to existing approximate techniques based on additive regularizers. The new loss function is provably continuous, and experiments show that it stabilizes and accelerates training, giving image generation models that outperform state-of-the-art methods on 160 × 160 CelebA and 64 × 64 unconditional ImageNet.
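
    As a point of reference for the MMD critic itself (not the exact gradient-constraint construction the paper introduces), the sketch below computes an unbiased estimate of the squared MMD between two sample sets with a Gaussian kernel; the kernel choice, bandwidth, and toy data are illustrative assumptions.

```python
# A minimal sketch (NumPy), assuming a Gaussian-kernel MMD between two sample sets;
# the paper's gradient-constraint machinery is not reproduced here.
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), evaluated for all pairs.
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2_unbiased(X, Y, sigma=1.0):
    # Unbiased estimator of MMD^2: diagonal (i == j) terms are excluded
    # from the within-sample kernel sums.
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_xx + term_yy - 2.0 * Kxy.mean()

# Toy usage: samples from two Gaussians with different means.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(0.5, 1.0, size=(500, 2))
print(mmd2_unbiased(X, Y))
```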

    Learning deep kernels for exponential family densities

    The kernel exponential family is a rich class of distributions, which can be fit efficiently and with statistical guarantees by score matching. The need to choose a simple kernel, such as the Gaussian, a priori, however, limits its practical applicability. We provide a scheme for learning a kernel parameterized by a deep network, which can find complex location-dependent features of the local data geometry. This gives a very rich class of density models, capable of fitting complex structures on moderate-dimensional problems. Compared to deep density models fit via maximum likelihood, our approach provides a complementary set of strengths and tradeoffs: in empirical studies, deep maximum-likelihood models can yield higher likelihoods, while our approach gives better estimates of the gradient of the log density (the score), which describes the distribution's shape.
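
    To illustrate the score-matching objective the abstract refers to, here is a minimal sketch that fits a one-dimensional Gaussian unnormalized model by minimizing Hyvärinen's objective; the model, optimizer, and data are illustrative assumptions and do not reproduce the deep-kernel construction.

```python
# A minimal sketch, assuming a 1-D unnormalized model
# log p(x) = -(x - mu)^2 / (2 * sigma^2) + const, fit by score matching.
import numpy as np
from scipy.optimize import minimize

def score_matching_objective(params, x):
    # Hyvärinen's objective: E[ 0.5 * s(x)^2 + s'(x) ], where
    # s(x) = d/dx log p(x) = -(x - mu) / sigma^2 and s'(x) = -1 / sigma^2.
    mu, log_sigma = params
    sigma2 = np.exp(2.0 * log_sigma)
    score = -(x - mu) / sigma2
    score_grad = -1.0 / sigma2
    return np.mean(0.5 * score**2 + score_grad)

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=2000)

result = minimize(score_matching_objective, x0=np.array([0.0, 0.0]), args=(data,))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)  # should be close to 2.0 and 1.5
```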

    Infection of wheat tissues by Fusarium pseudograminearum


    Proglacial Lakes Control Glacier Geometry and Behavior During Recession

    Ice‐contact proglacial lakes are generally absent from numerical model simulations of glacier evolution, and their effects on ice dynamics and on rates of deglaciation remain poorly quantified. Using the BISICLES ice flow model, we analyzed the effects of an ice‐contact lake on the Pukaki Glacier, New Zealand, during recession from the Last Glacial Maximum. At its maximum effect, the ice‐contact lake drove grounding line recession >4 times further and ice velocities up to 8 times faster than in simulations of a land‐terminating glacier forced by the same climate. The lake contributed up to 82% of cumulative grounding line recession and 87% of ice velocity during the first 300 years of the simulations, but those values decreased to just 6% and 37%, respectively, after 5,000 years. Numerical models that ignore lake interactions will therefore misrepresent the rate of recession, especially during the transition from a land‐terminating to a lake‐terminating environment.

    Free energy basin-hopping

    A global optimisation scheme is presented using basin-hopping with an acceptance criterion based on approximate free energies of the corresponding local minima of the potential energy. The method is illustrated for atomic and colloidal clusters and peptides to examine how the predicted global free energy minimum changes with temperature. Using estimates for the local free energies based on harmonic vibrational densities of states provides a computationally effective framework for predicting trends in structure at finite temperature. The resulting scheme represents a powerful tool for exploration of energy landscapes throughout molecular science. We are grateful to the EPSRC and the ERC for financial support under grants EP/1001352/1 and 267369, respectively.
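
    The sketch below illustrates the general idea on a toy one-dimensional potential: basin-hopping in which moves are accepted on a harmonic estimate of the local free energy rather than the potential energy alone. The potential, step size, and temperature are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming a 1-D toy potential and a classical harmonic
# approximation to the local free energy, F ~ V(x_min) + (kT/2) * ln V''(x_min)
# (additive constants dropped).
import numpy as np
from scipy.optimize import minimize

def potential(x):
    # Asymmetric double well: two minima of different depth and curvature.
    return x**4 - 4.0 * x**2 + 0.5 * x

def curvature(x):
    # Second derivative V''(x), used for the harmonic free-energy estimate.
    return 12.0 * x**2 - 8.0

def harmonic_free_energy(x_min, kT):
    return potential(x_min) + 0.5 * kT * np.log(curvature(x_min))

def local_minimum(x_start):
    # Local quench: minimise V from x_start and return the minimiser (scalar).
    return minimize(lambda v: potential(v[0]), x0=[x_start]).x[0]

def free_energy_basin_hopping(x0, kT=0.5, step=1.0, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    x_min = local_minimum(x0)
    f_min = harmonic_free_energy(x_min, kT)
    best = (x_min, f_min)
    for _ in range(n_steps):
        # Perturb, re-quench, then accept/reject on the free-energy difference.
        trial = local_minimum(x_min + rng.normal(scale=step))
        f_trial = harmonic_free_energy(trial, kT)
        if f_trial < f_min or rng.random() < np.exp(-(f_trial - f_min) / kT):
            x_min, f_min = trial, f_trial
            if f_min < best[1]:
                best = (x_min, f_min)
    return best

print(free_energy_basin_hopping(x0=2.0))
```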

    Consciousness: the last 50 years (and the next)

    The mind and brain sciences began with consciousness as a central concern. But for much of the 20th century, ideological and methodological concerns relegated its empirical study to the margins. Since the 1990s, studying consciousness has regained a legitimacy and momentum befitting its status as the primary feature of our mental lives. Nowadays, consciousness science encompasses a rich interdisciplinary mixture drawing together philosophical, theoretical, computational, experimental, and clinical perspectives, with neuroscience its central discipline. Researchers have learned a great deal about the neural mechanisms underlying global states of consciousness, distinctions between conscious and unconscious perception, and self-consciousness. Further progress will depend on specifying closer explanatory mappings between (first-person subjective) phenomenological descriptions and (third-person objective) descriptions of (embodied and embedded) neuronal mechanisms. Such progress will help reframe our understanding of our place in nature and accelerate clinical approaches to a wide range of psychiatric and neurological disorders.

    Demystifying MMD GANs

    We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramer GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.
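
    As an illustration of the proposed Kernel Inception Distance, the sketch below computes an unbiased squared-MMD estimate with the cubic polynomial kernel on pre-extracted feature vectors; the feature-extraction step (Inception activations) is not shown, and the toy data are assumptions for demonstration only.

```python
# A minimal sketch, assuming pre-extracted feature vectors (e.g. Inception
# activations) for real and generated images; the polynomial kernel
# k(x, y) = (x . y / d + 1)^3 is the one used in the KID definition.
import numpy as np

def polynomial_kernel(X, Y):
    d = X.shape[1]
    return (X @ Y.T / d + 1.0) ** 3

def kid(features_real, features_fake):
    # Unbiased estimate of squared MMD with the cubic polynomial kernel.
    m, n = len(features_real), len(features_fake)
    Kxx = polynomial_kernel(features_real, features_real)
    Kyy = polynomial_kernel(features_fake, features_fake)
    Kxy = polynomial_kernel(features_real, features_fake)
    return (
        (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
        + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
        - 2.0 * Kxy.mean()
    )

# Toy usage with random "features" standing in for Inception activations.
rng = np.random.default_rng(0)
real = rng.normal(size=(256, 64))
fake = rng.normal(loc=0.1, size=(256, 64))
print(kid(real, fake))
```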