
    Stochastic Sensitivity Analysis and Kernel Inference via Distributional Data

    Cellular processes are noisy due to the stochastic nature of biochemical reactions. As such, it is impossible to predict the exact quantity of a molecule or other attributes at the single-cell level. However, the distribution of a molecule over a population is often deterministic and is governed by the underlying regulatory networks relevant to the cellular functionality of interest. Recent studies have started to exploit this property to infer network states. To facilitate the analysis of distributional data in a general experimental setting, we introduce a computational framework to efficiently characterize the sensitivity of distributional output to changes in external stimuli. Further, we establish a probability-divergence-based kernel regression model to accurately infer signal level based on distribution measurements. Our methodology is applicable to any biological system subject to stochastic dynamics and can be used to elucidate how population-based information processing may contribute to organism-level functionality. It also lays the foundation for engineering synthetic biological systems that exploit population decoding to more robustly perform various biocomputation tasks, such as disease diagnostics and environmental-pollutant sensing.
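    To make the inference step concrete, below is a minimal sketch of divergence-based kernel regression on distributional data. It assumes a squared-Hellinger divergence, a Gaussian-type kernel, synthetic Poisson count data, and hand-picked bandwidth and regularisation; the paper's actual divergence, kernel and experimental readout may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def hist_of_counts(counts, bins):
    """Normalised histogram (empirical distribution) of single-cell counts."""
    h, _ = np.histogram(counts, bins=bins)
    return h / h.sum()

def hellinger_sq(p, q):
    """Squared Hellinger divergence between two normalised histograms."""
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

def gram(hists_a, hists_b, sigma=0.3):
    """Divergence-based kernel matrix: k(P, Q) = exp(-H^2(P, Q) / sigma^2)."""
    return np.exp(-np.array([[hellinger_sq(p, q) for q in hists_b]
                             for p in hists_a]) / sigma ** 2)

# Toy data: population snapshots of a molecule count whose mean grows with the signal.
bins = np.arange(0, 101)
signals = rng.uniform(0.5, 5.0, size=80)                 # hidden stimulus levels
hists = [hist_of_counts(rng.poisson(10 * s, size=500), bins) for s in signals]

train, test = slice(0, 60), slice(60, 80)
K = gram(hists[train], hists[train])
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(K)), signals[train])   # kernel ridge fit

pred = gram(hists[test], hists[train]) @ alpha
print("mean absolute error on held-out signal levels:", np.mean(np.abs(pred - signals[test])))
```

    The Hellinger-based exponential kernel is positive definite (squared Hellinger distance is a squared Euclidean distance between root-densities), which keeps the ridge system well behaved; other divergences would need similar care or heavier regularisation.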

    Probabilistic learning and computation in brains and machines

    Humans and animals are able to solve a wide variety of perceptual, decision making and motor tasks with great flexibility. Moreover, behavioural evidence shows that this flexibility extends to situations where accuracy requires the correct treatment of uncertainty induced by noise and ambiguity in the available sensory information as well as noise internal to the brain. It has been suggested that this adequate handling of uncertainty is based on a learned internal model, e.g. in the case of perception, a generative model of sensory observations. Learning latent variable models and performing inference in them is a key challenge for both biological and artificial learning systems. Here, we introduce a new approach to learning in hierarchical latent variable models called the Distributed Distributional Code Helmholtz Machine (DDC-HM), which emphasises flexibility and accuracy in the inferential process. The approximate posterior over unobserved variables is represented implicitly as a set of expectations, corresponding to mean parameters of an exponential family distribution. To train the generative and recognition models we develop an extended wake-sleep algorithm inspired by the original Helmholtz Machine. As a result, the DDC-HM is able to learn hierarchical latent models without having to propagate gradients across different stochastic layers, making our approach biologically appealing. In the second part of the thesis, we review existing proposals for neural representations of uncertainty with a focus on representational and computational flexibility as well as experimental support. Finally, we consider inference and learning in dynamical environment models using Distributed Distributional Codes to represent both the stochastic latent transition model and the inferred posterior distributions. We show that this model makes it possible to generalise successor representations to biologically more realistic, partially observed settings.
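    As a rough illustration of the DDC representation itself (not the DDC-HM or its wake-sleep training), the sketch below encodes a posterior distribution by the expectations of a fixed set of nonlinear features and reads out an arbitrary expectation linearly from that code. The Gaussian-bump features, sample sizes and query function are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed bank of nonlinear encoding features psi: R -> R^K (Gaussian bumps here;
# the feature family itself is an assumption made for this illustration).
K = 50
centres = rng.uniform(-3, 3, size=K)
width = 0.5

def psi(z):
    """Feature activations for latent value(s) z."""
    return np.exp(-(np.asarray(z)[..., None] - centres) ** 2 / (2 * width ** 2))

def ddc(samples):
    """DDC of a distribution: the expectations of the features under it."""
    return psi(samples).mean(axis=0)

# Two different posteriors over the latent map to two different DDC vectors.
post_a = rng.normal(0.0, 0.5, size=5000)
post_b = rng.normal(1.0, 0.8, size=5000)
r_a, r_b = ddc(post_a), ddc(post_b)

# A smooth expectation E[g(z)] can be read out (approximately) linearly from the DDC:
# if g(z) ~ w . psi(z), then E[g(z)] ~ w . E[psi(z)].  Fit w once by least squares.
g = np.tanh                                      # example query function
z_train = rng.uniform(-3, 3, size=20000)
w, *_ = np.linalg.lstsq(psi(z_train), g(z_train), rcond=None)

print("E[g] under posterior A:", g(post_a).mean(), " DDC readout:", r_a @ w)
print("E[g] under posterior B:", g(post_b).mean(), " DDC readout:", r_b @ w)
```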

    Discriminating Natural Image Statistics from Neuronal Population Codes

    The power law provides an efficient description of amplitude spectra of natural scenes. Psychophysical studies have shown that the forms of the amplitude spectra are clearly related to human visual performance, indicating that the statistical parameters in natural scenes are represented in the nervous system. However, the underlying neuronal computation that accounts for the perception of the natural image statistics has not been thoroughly studied. We propose a theoretical framework for neuronal encoding and decoding of the image statistics, hypothesizing the elicited population activities of spatial-frequency-selective neurons observed in the early visual cortex. The model predicts that frequency-tuned neurons have asymmetric tuning curves as functions of the amplitude spectra falloffs. To investigate the ability of this neural population to encode the statistical parameters of the input images, we analyze the Fisher information of the stochastic population code, relating it to the psychophysically measured human ability to discriminate natural image statistics. The nature of discrimination thresholds suggested by the computational model is consistent with experimental data from previous studies. Of particular interest, a reported qualitative disparity between performance in fovea and parafovea can be explained based on the distributional difference over preferred frequencies of neurons in the current model. The threshold shows a peak at a small falloff parameter when the neuronal preferred spatial frequencies are narrowly distributed, whereas the threshold peak vanishes for a neural population with a more broadly distributed frequency preference. These results demonstrate that the distributional property of neuronal stimulus preference can play a crucial role in linking microscopic neurophysiological phenomena and macroscopic human behaviors.
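    The Fisher-information argument can be illustrated with a generic independent-Poisson population. The symmetric Gaussian tuning curves, firing rates and preference distributions below are placeholders rather than the asymmetric tuning curves of the actual model, so only the mechanics carry over: Fisher information is summed over neurons and the discrimination threshold is taken from the Cramér-Rao bound.

```python
import numpy as np

def fisher_info(alpha, prefs, width=0.3, rmax=20.0, baseline=0.5):
    """Fisher information of an independent-Poisson population about the spectral
    falloff alpha: I(alpha) = sum_i f_i'(alpha)^2 / f_i(alpha).  Symmetric Gaussian
    tuning curves are a placeholder for the model's asymmetric ones."""
    f = baseline + rmax * np.exp(-(alpha - prefs) ** 2 / (2 * width ** 2))
    df = -(alpha - prefs) / width ** 2 * (f - baseline)    # tuning-curve derivative
    return np.sum(df ** 2 / f)

alphas = np.linspace(0.5, 2.0, 200)

# Narrowly vs. broadly distributed neuronal preferences for the falloff parameter.
rng = np.random.default_rng(2)
populations = {"narrow": rng.normal(1.2, 0.1, size=200),
               "broad": rng.normal(1.2, 0.5, size=200)}

for label, prefs in populations.items():
    info = np.array([fisher_info(a, prefs) for a in alphas])
    threshold = 1.0 / np.sqrt(info)      # Cramer-Rao bound on the discrimination threshold
    peak = alphas[np.argmax(threshold)]
    print(f"{label}: largest threshold {threshold.max():.3f} at alpha = {peak:.2f}")
```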

    Renewable Energy Subsidies: Second-Best Policy or Fatal Aberration for Mitigation?

    This paper evaluates the consequences of renewable energy policies on welfare, resource rents and energy costs in a world where carbon pricing is imperfect and the regulator seeks to limit emissions to a (cumulative) target. We use a global general equilibrium model with an intertemporal fossil resource sector. We calculate the optimal second-best renewable energy subsidy and compare the resulting welfare level with an efficient first-best carbon pricing policy. If carbon pricing is permanently missing, mitigation costs increase by a multiple (compared to the optimal carbon pricing policy) for a wide range of parameters describing extraction costs, renewable energy costs, substitution possibilities and normative attitudes. Furthermore, we show that small deviations from the second-best subsidy can lead to strong increases in emissions and consumption losses. This confirms the rising concerns about the occurrence of unintended side effects of climate policy: a new version of the green paradox. We extend our second-best analysis by considering two further types of policy instruments: (1) temporary subsidies that are displaced by carbon pricing in the long run and (2) revenue-neutral instruments like a carbon trust and a feed-in-tariff scheme. Although these instruments cause small welfare losses, they have the potential to ease distributional conflicts as they lead to lower energy prices and higher fossil resource rents than the optimal carbon pricing policy.
    Keywords: Feed-in-Tariff, Carbon Trust, Carbon Pricing, Supply-Side Dynamics, Green Paradox, Climate Policy

    Evidence against the Detectability of a Hippocampal Place Code Using Functional Magnetic Resonance Imaging

    Individual hippocampal neurons selectively increase their firing rates in specific spatial locations. As a population, these neurons provide a decodable representation of space that is robust against changes to sensory- and path-related cues. This neural code is sparse and distributed, theoretically rendering it undetectable with population recording methods such as functional magnetic resonance imaging (fMRI). Existing studies nonetheless report decoding spatial codes in the human hippocampus using such techniques. Here we present results from a virtual navigation experiment in humans in which we eliminated visual- and path-related confounds and statistical limitations present in existing studies, ensuring that any positive decoding results would represent a voxel-place code. Consistent with theoretical arguments derived from electrophysiological data and contrary to existing fMRI studies, our results show that although participants were fully oriented during the navigation task, there was no statistical evidence for a place code.
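    For readers unfamiliar with this kind of analysis, the sketch below shows the general shape of a cross-validated decoding test with a permutation-based null, using scikit-learn on synthetic noise data (so the decoder should stay at chance). The classifier, cross-validation scheme and toy data are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, permutation_test_score

rng = np.random.default_rng(3)

# Toy stand-in for the real data: trial-wise hippocampal voxel patterns labelled by
# the location occupied on each trial (pure noise here, so decoding should be at chance).
n_trials, n_voxels, n_locations = 120, 300, 4
X = rng.normal(size=(n_trials, n_voxels))          # voxel patterns, one row per trial
y = rng.integers(0, n_locations, size=n_trials)    # visited location per trial

decoder = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
accuracy, perm_scores, p_value = permutation_test_score(
    decoder, X, y,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    n_permutations=200, scoring="accuracy",
)

print(f"decoding accuracy = {accuracy:.3f} (chance ~ {1 / n_locations:.2f}), p = {p_value:.3f}")
```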

    Nonparametric enrichment in computational and biological representations of distributions

    This thesis proposes nonparametric techniques to enhance unsupervised learning methods in computational or biological contexts. Representations of intractable distributions and their relevant statistics are enhanced by nonparametric components trained to handle challenging estimation problems. The first part introduces a generic algorithm for learning generative latent variable models. In contrast to traditional variational learning, no representation of the intractable posterior distributions is computed, making the approach agnostic to the model structure and the support of latent variables. Kernel ridge regression is used to consistently estimate the gradient for learning. In many unsupervised tasks, this approach outperforms advanced alternatives based on the expectation-maximisation algorithm and variational approximate inference. In the second part, I train a density model known as the kernel exponential family. The kernel, used to describe smooth functions, is augmented by a parametric component trained using an efficient meta-learning procedure; meta-learning prevents overfitting as would occur using conventional routines. After training, the contours of the kernel become adaptive to the local geometry of the underlying density. Compared to maximum-likelihood learning, our method better captures the shape of the density, which is the desired quantity in many downstream applications. The final part examines how nonparametric ideas contribute to understanding uncertainty computation in the brain. First, I show that neural networks can learn to represent uncertainty using the distributed distributional code (DDC), a representation similar to the nonparametric kernel mean embedding. I then derive several DDC-based message-passing algorithms, including computations of filtering and real-time smoothing. The latter is a common neural computation embodied in many postdictive phenomena of perception in multiple modalities. The main idea behind these algorithms is least-squares regression, where the training data are simulated from an internal model. The internal model can be concurrently updated to follow the statistics in sensory stimuli, enabling adaptive inference.
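    Below is a minimal sketch of the recurring idea that inference can be cast as least-squares regression on data simulated from the internal model, here for a linear-Gaussian toy model with hand-picked observation features. The thesis applies the same principle to far richer DDC-based filtering and smoothing computations, so everything model-specific in this sketch is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Internal model (assumed for illustration): latent z ~ N(0, 1), observation x = z + noise.
def simulate(n):
    z = rng.normal(0.0, 1.0, size=n)
    x = z + rng.normal(0.0, 0.5, size=n)
    return z, x

# Features of the observation; posterior expectations are read out linearly from them.
def phi(x):
    return np.stack([np.ones_like(x), x, x ** 2, np.tanh(x)], axis=1)

# "Sleep"-style training: simulate (z, x) pairs from the internal model and fit, by
# least squares, a readout from phi(x) to target functions of the latent.
z_sim, x_sim = simulate(50000)
W, *_ = np.linalg.lstsq(phi(x_sim), np.stack([z_sim, z_sim ** 2], axis=1), rcond=None)

# At run time the same readout approximates E[z | x] and E[z^2 | x] for new inputs.
x_new = np.array([-1.0, 0.0, 2.0])
est = phi(x_new) @ W

# Exact posterior for this linear-Gaussian model, for comparison.
post_var = 1.0 / (1.0 + 1.0 / 0.25)        # (prior precision + likelihood precision)^-1
post_mean = x_new * (1.0 / 0.25) * post_var
print("estimated E[z|x]:  ", est[:, 0], "  exact:", post_mean)
print("estimated E[z^2|x]:", est[:, 1], "  exact:", post_mean ** 2 + post_var)
```

    Least squares works here because the minimiser of the squared error over functions of x is the conditional expectation; restricting the regression to a feature span yields the best approximation of that conditional expectation within the span.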