
    Bayesian Inference for Generalized Linear Models for Spiking Neurons

    Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean, as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated data as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and of the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate.
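
    To make the ingredients concrete, here is a minimal Python sketch of a Gaussian approximation to a spiking-GLM posterior. It is not the paper's method: a Laplace (quadratic) approximation stands in for Expectation Propagation, a Gaussian prior stands in for the Laplace prior, and the data are simulated.

```python
# Sketch: Gaussian approximation to a Bernoulli-GLM posterior.
# The paper uses Expectation Propagation; here a simpler Laplace
# (quadratic) approximation stands in, and a Gaussian prior replaces
# the Laplace prior. All values are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, D = 2000, 20                       # time bins, stimulus dimensions
X = rng.standard_normal((T, D))       # stimulus design matrix
w_true = rng.standard_normal(D) * 0.5
p = 1.0 / (1.0 + np.exp(-X @ w_true))
y = rng.binomial(1, p)                # spike / no-spike per bin

tau = 1.0                             # prior precision (regularization)

def neg_log_posterior(w):
    eta = X @ w
    # Bernoulli log-likelihood with logistic link, plus Gaussian prior
    ll = y @ eta - np.logaddexp(0.0, eta).sum()
    return -ll + 0.5 * tau * w @ w

w_map = minimize(neg_log_posterior, np.zeros(D), method="L-BFGS-B").x

# Laplace approximation: posterior covariance = inverse Hessian at the MAP
p_map = 1.0 / (1.0 + np.exp(-X @ w_map))
H = X.T @ (X * (p_map * (1 - p_map))[:, None]) + tau * np.eye(D)
cov = np.linalg.inv(H)

# Bayesian confidence intervals from the posterior standard deviations
sd = np.sqrt(np.diag(cov))
lo, hi = w_map - 1.96 * sd, w_map + 1.96 * sd
print("fraction of true weights inside 95% intervals:",
      np.mean((w_true >= lo) & (w_true <= hi)))
```

    Note that under this quadratic approximation the posterior mean coincides with the MAP estimate; it is the combination of EP with a non-Gaussian prior that makes the two point estimates differ, which is what the comparison described in the abstract is about.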

    Inferring decoding strategy from choice probabilities in the presence of noise correlations

    The activity of cortical neurons in sensory areas covaries with perceptual decisions, a relationship often quantified by choice probabilities. While choice probabilities have been measured extensively, their interpretation has remained fraught with difficulty. Here, we derive the mathematical relationship between choice probabilities, read-out weights, and noise correlations within the standard neural decision-making model. Our solution allows us to prove and generalize earlier observations based on numerical simulations, and to derive novel predictions. Importantly, we show how the read-out weight profile, or decoding strategy, can be inferred from experimentally measurable quantities. Furthermore, we present a test to decide whether the decoding weights of individual neurons are optimal, even without knowing the underlying noise correlations. We confirm the practical feasibility of our approach using simulated data from a realistic population model. Our work thus provides the theoretical foundation for a growing body of experimental results on choice probabilities and correlations.
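
    The link between read-out weights, noise correlations, and choice probabilities can be illustrated with a small Monte-Carlo sketch of the standard linear decision model. The covariance structure, weights, and population size below are arbitrary illustrations, and CPs are measured empirically by ROC analysis rather than through the paper's analytic formula.

```python
# Sketch: simulate a linear decision model and measure choice
# probabilities (CPs) empirically via ROC analysis. The correlation
# structure, weights, and population size are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, trials = 50, 20000

# Noise covariance with limited-range correlations (an assumed choice)
idx = np.arange(N)
Sigma = 0.3 ** np.abs(idx[:, None] - idx[None, :])

L = np.linalg.cholesky(Sigma)
r = rng.standard_normal((trials, N)) @ L.T      # zero-signal responses
w = rng.standard_normal(N)                      # read-out weights
choice = (r @ w > 0)                            # binary decision

def auc(x, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    ranks = x.argsort().argsort() + 1
    n1, n0 = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

cp = np.array([auc(r[:, k], choice) for k in range(N)])

# The response-to-decision correlation that CPs track monotonically
rho = (Sigma @ w) / np.sqrt(np.diag(Sigma) * (w @ Sigma @ w))
print("corr(CP, rho) =", np.corrcoef(cp, rho)[0, 1])
```

    Inverting this forward mapping, from measured CPs and correlations back to the weights, is the direction the abstract is concerned with.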

    Reconstructing Stimuli from the Spike Times of Leaky Integrate and Fire Neurons

    Reconstructing stimuli from the spike trains of neurons is an important approach for understanding the neural code. One of the difficulties associated with this task is that signals that vary continuously in time are encoded into sequences of discrete events, or spikes. An important problem is to determine how much information about the continuously varying stimulus can be extracted from the time points at which spikes were observed, especially if these time points are subject to some sort of randomness. For the special case of spike trains generated by leaky integrate and fire neurons, noise can be introduced by allowing the threshold to vary every time a spike is emitted. A simple decoding algorithm previously derived for the noiseless case can be extended to the stochastic case, but turns out to be biased. Here, we review a solution to this problem by presenting a simple yet efficient algorithm which greatly reduces the bias, and therefore leads to better decoding performance in the stochastic case.
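
    A rough sketch of the noiseless setting the abstract starts from, assuming illustrative parameter values and a Gaussian-bump stimulus basis: each interspike interval of a leaky integrate-and-fire neuron yields one linear constraint on the stimulus, so decoding reduces to a linear solve. The paper's bias-reduction step for stochastic thresholds is not reproduced here.

```python
# Sketch: encode a stimulus with a leaky integrate-and-fire (LIF)
# neuron and decode it from the spike times alone. Between a reset at
# t_i and the next spike at t_{i+1}, the membrane equation gives
#   theta = integral_{t_i}^{t_{i+1}} exp(-(t_{i+1}-s)/tau) x(s) ds,
# so expanding x in a basis turns decoding into a linear solve.
# All parameter values and the basis choice are illustrative.
import numpy as np

dt, T = 1e-3, 1.0
t = np.arange(0, T, dt)
x = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)

tau, theta = 0.02, 0.01               # membrane time constant, threshold

# Simulate the LIF encoder (Euler integration, reset to 0 after a spike)
V, spikes = 0.0, []
for j in range(len(t)):
    V += dt * (-V / tau + x[j])
    if V >= theta:
        spikes.append(j)
        V = 0.0

# Stimulus basis: Gaussian bumps on a regular grid (an assumed choice)
centers = np.linspace(0, T, 30)
Phi = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.04) ** 2)

# One linear constraint per interspike interval
rows, prev = [], 0
for s_idx in spikes:
    kern = np.exp(-(t[s_idx] - t[prev:s_idx + 1]) / tau) * dt
    rows.append(kern @ Phi[prev:s_idx + 1])
    prev = s_idx
A = np.array(rows)

# Least-squares decode of the basis coefficients, then reconstruct
c, *_ = np.linalg.lstsq(A, np.full(len(rows), theta), rcond=None)
x_hat = Phi @ c
print("spikes:", len(spikes),
      " relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

    In the stochastic case the abstract describes, theta differs from interval to interval, so the right-hand side of the linear system is only known on average, and naive least squares becomes biased.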

    Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

    Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplace prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded from the model. This feature selection mechanism facilitates both the interpretation of the neuron model and its predictive performance. The posterior distribution can be used to obtain confidence intervals, which make it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited, whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore, we use the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response.
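
    The sparsity mechanism can be illustrated in isolation: the MAP estimate under a Laplace prior is an L1-penalized GLM fit, which sets the weights of uninformative stimulus features exactly to zero. The sketch below uses simulated data and scikit-learn's L1-penalized logistic regression as a stand-in; it does not reproduce the full EP posterior of the paper.

```python
# Sketch: MAP estimation under a Laplace (L1) prior drives
# uninformative weights to exactly zero. Simulated data; the full
# Bayesian posterior of the paper is not computed here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
T, D = 1500, 40
X = rng.standard_normal((T, D))
w_true = np.zeros(D)
w_true[:5] = rng.standard_normal(5) * 2.0     # only 5 features matter
y = rng.binomial(1, 1 / (1 + np.exp(-X @ w_true)))

# C is the inverse penalty strength, i.e. it scales the prior width
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)

w_map = model.coef_.ravel()
print("nonzero weights:", np.flatnonzero(w_map))
```

    With a suitable penalty strength, the recovered support concentrates on the truly informative features, which is the feature-selection behaviour the abstract describes.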

    The HD(CP)² Observational Prototype Experiment (HOPE) – an overview

    The HD(CP)² Observational Prototype Experiment (HOPE) was performed as a major 2-month field experiment in Jülich, Germany, in April and May 2013, followed by a smaller campaign in Melpitz, Germany, in September 2013. HOPE was designed to provide an observational dataset for a critical evaluation of the new German community atmospheric icosahedral non-hydrostatic (ICON) model at the scale of the model simulations, and further to provide information on land-surface–atmospheric boundary layer exchange, cloud and precipitation processes, as well as sub-grid variability and microphysical properties that are subject to parameterizations. HOPE focuses on the onset of clouds and precipitation in the convective atmospheric boundary layer. This paper summarizes the instrument set-ups, the intensive observation periods, and example results from both campaigns. HOPE-Jülich instrumentation included a radio sounding station, 4 Doppler lidars, 4 Raman lidars (3 of them providing temperature, 3 water vapour, and all of them particle backscatter data), 1 water vapour differential absorption lidar, 3 cloud radars, 5 microwave radiometers, 3 rain radars, 6 sky imagers, 99 pyranometers, and 5 sun photometers operated at different sites, some of them in synergy. The HOPE-Melpitz campaign combined ground-based remote sensing of aerosols and clouds with helicopter- and balloon-based in situ observations in the atmospheric column and at the surface. HOPE provided an unprecedented collection of atmospheric dynamical, thermodynamical, and micro- and macrophysical properties of aerosols, clouds, and precipitation with high spatial and temporal resolution within a cube of approximately 10 × 10 × 10 km³. HOPE data will significantly contribute to our understanding of boundary layer dynamics and the formation of clouds and precipitation. The datasets have been made available through a dedicated data portal. First applications of HOPE data for model evaluation have shown general agreement between observed and modelled boundary layer height, turbulence characteristics, and cloud coverage, but they also point to significant differences that deserve further investigation from both the observational and the modelling perspective.

    Orbital reflectometry

    The occupation of d-orbitals controls the magnitude and anisotropy of the inter-atomic electron transfer in transition metal oxides and hence exerts a key influence on their chemical bonding and physical properties. Atomic-scale modulations of the orbital occupation at surfaces and interfaces are believed to be responsible for massive variations of the magnetic and transport properties, but have thus far not been probed in a quantitative manner. Here we show that it is possible to derive quantitative, spatially resolved orbital polarization profiles from soft X-ray reflectivity data, without resorting to model calculations. We demonstrate that the method is sensitive enough to resolve differences of 3% in the occupation of Ni e_g orbitals in adjacent atomic layers of a LaNiO3-LaAlO3 superlattice, in good agreement with ab initio electronic-structure calculations. The possibility of quantitatively correlating theory and experiment on the atomic scale opens up many new perspectives for orbital physics in d-electron materials.

    The Added Value of Large-Eddy and Storm-Resolving Models for Simulating Clouds and Precipitation

    More than one hundred days were simulated over very large domains with fine (0.156 km to 2.5 km) grid spacing for realistic conditions to test the hypothesis that storm (kilometer) and large-eddy (hectometer) resolving simulations would provide an improved representation of clouds and precipitation in atmospheric simulations. At scales that resolve convective storms (storm-resolving for short), the vertical velocity variance becomes resolved and a better physical basis is achieved for representing clouds and precipitation. Consistent with past studies, we found an improved representation of precipitation at kilometer scales, as compared to models with parameterized convection. The main precipitation features (location, diurnal cycle, and spatial propagation) are already well captured at kilometer scales, and refining the resolution to hectometer scales does not substantially change the simulations in these respects. It does, however, lead to a reduction in precipitation on the time scales considered, most notably over the ocean in the tropics. Changes in the distribution of precipitation, with less frequent extremes, are also found in simulations incorporating hectometer scales. Hectometer scales appear to be more important for the representation of clouds, and make it possible to capture many important aspects of the cloud field, from the vertical distribution of cloud cover to the distribution of cloud sizes and the diel (daily) cycle. Qualitative improvements, particularly in the ability to differentiate cumulus from stratiform clouds, are seen when the grid spacing is reduced from kilometer to hectometer scales. At the hectometer scale new challenges arise, but the similarity of observed and simulated scales and the more direct connection between the circulation and the unconstrained degrees of freedom make these challenges less daunting. This quality, combined with the already improved simulations compared to more parameterized models, underpins our conviction that the use and further development of storm-resolving models offers exciting opportunities for advancing the understanding of climate and climate change.