
    Effect of generation on the electronic properties of light-emitting dendrimers

    We have compared the optical and electronic properties of a series of porphyrin-centred dendrimers containing stilbene dendrons. The first- and second-generation dendrimers could be spin-coated from solution to form good-quality thin films. Incorporation into single-layer light-emitting diodes gave red-light emission with maximum external quantum efficiencies of 0.02% and 0.04% for the first- and second-generation dendrimers, respectively. Photoluminescence studies show that energy is transferred efficiently from the stilbene dendrons to the porphyrin core and that the PL emission originates from the core. Cyclic voltammetry studies on the dendrimers show that the reductions are porphyrin-centred, with the dendrons affecting only the rate of heterogeneous electron transfer between the electrode and the dendrimers. This suggests that charge mobility within a dendrimer film in an LED will be governed by the porphyrin-edge-to-porphyrin-edge distance. Gel permeation chromatography measurements of the hydrodynamic radii of the dendrimers show, as expected, that the average porphyrin-edge-to-dendron-edge distance increases with generation. This is consistent with the slowing of heterogeneous electron transfer observed in cyclic voltammetry with increasing generation number, and suggests that the dendrons are interleaved in the solid state to facilitate charge transport.

    Incremental Mutual Information: A New Method for Characterizing the Strength and Dynamics of Connections in Neuronal Circuits

    Understanding the computations performed by neuronal circuits requires characterizing the strength and dynamics of the connections between individual neurons. This characterization is typically achieved by measuring the correlation in the activity of two neurons. We have developed a new measure for studying connectivity in neuronal circuits based on information theory, the incremental mutual information (IMI). By conditioning out the temporal dependencies in the responses of individual neurons before measuring the dependency between them, IMI improves on standard correlation-based measures in several important ways: (1) it has the potential to disambiguate statistical dependencies that reflect the connection between neurons from those caused by other sources (e.g., shared inputs or intrinsic cellular or network mechanisms), provided that the dependencies have appropriate timescales; (2) for the study of early sensory systems, it does not require responses to repeated trials of identical stimulation; and (3) it does not assume that the connection between neurons is linear. We describe the theory and implementation of IMI in detail and demonstrate its utility on experimental recordings from the primate visual system.
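The key step above is estimating the dependency between two cells after conditioning out each cell's own temporal structure. The sketch below is not the authors' implementation; it is a minimal plug-in estimator of a conditional mutual information for binary-binned spike trains, conditioning on only a single lag of the target neuron's own past (the full method conditions on longer histories of both cells). The bin sizes, delay, and toy data are assumptions chosen purely for illustration.

```python
import numpy as np

def plugin_cmi(x, y, z):
    """Plug-in estimate of I(X; Y | Z) for discrete (e.g. binary) arrays."""
    states = np.stack([x, y, z], axis=1)
    vals, counts = np.unique(states, axis=0, return_counts=True)
    p_xyz = counts / counts.sum()
    cmi = 0.0
    for (xv, yv, zv), p in zip(vals, p_xyz):
        p_z = np.mean(z == zv)
        p_xz = np.mean((x == xv) & (z == zv))
        p_yz = np.mean((y == yv) & (z == zv))
        cmi += p * np.log2(p * p_z / (p_xz * p_yz))
    return cmi

def incremental_mi(x, y, delay=1):
    """Dependency between x[t] and y[t+delay], conditioning out one bin of
    y's own past (a crude stand-in for conditioning out temporal structure)."""
    xt = x[:-delay - 1]
    yt = y[delay:-1]
    y_past = y[delay - 1:-2]          # the bin just before the target bin of y
    return plugin_cmi(xt, yt, y_past)

# Toy usage: y partly copies x with a one-bin lag, plus independent noise.
rng = np.random.default_rng(0)
x = (rng.random(20000) < 0.1).astype(int)
y = np.maximum(np.roll(x, 1), (rng.random(20000) < 0.05).astype(int))
print(incremental_mi(x, y, delay=1))
```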

    Shared Representational Geometry Across Neural Networks

    Different neural networks trained on the same dataset often learn similar input-output mappings with very different weights. Is there some correspondence between these neural network solutions? For linear networks, it has been shown that different instances of the same network architecture encode the same representational similarity matrix, and that their neural activity patterns are connected by orthogonal transformations. However, it is unclear whether this holds for non-linear networks. Using a shared response model, we show that different neural networks encode the same input examples as different orthogonal transformations of an underlying shared representation. We test this claim using both standard convolutional neural networks and residual networks on CIFAR10 and CIFAR100. (Comment: Integration of Deep Learning Theories workshop, NeurIPS 2018.)
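A quick way to see what "connected by orthogonal transformations" means in practice is to align two activation matrices with an orthogonal Procrustes fit and check the residual. The sketch below only illustrates that idea; it is not the shared response model used in the paper, and the matrix shapes, names, and toy data are assumptions.

```python
import numpy as np

def orthogonal_map(A, B):
    """Orthogonal Q minimizing ||A @ Q - B||_F (orthogonal Procrustes), for
    activation matrices A, B of shape (n_examples, n_units) collected from
    two networks on the same inputs."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

def alignment_error(A, B):
    """Relative residual after the best orthogonal alignment of A onto B."""
    Q = orthogonal_map(A, B)
    return np.linalg.norm(A @ Q - B) / np.linalg.norm(B)

# Toy usage: B is an exact rotation of A, so it aligns almost perfectly,
# while an unrelated random matrix does not.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 64))
Q_true, _ = np.linalg.qr(rng.standard_normal((64, 64)))
print(alignment_error(A, A @ Q_true))                      # ~0
print(alignment_error(A, rng.standard_normal((500, 64))))  # much larger
```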

    A bio-inspired image coder with temporal scalability

    We present a novel bio-inspired and dynamic coding scheme for static images. Our coder aims at reproducing the main steps of the visual stimulus processing in the mammalian retina, taking into account its time behavior. The main novelty of this work is to show how to exploit the time behavior of the retina cells to ensure, in a simple way, scalability and bit allocation. To do so, our main source of inspiration is the biologically plausible retina model called Virtual Retina. Following a similar structure, our model has two stages. The first stage is an image transform, performed by the outer layers in the retina; here it is modelled by filtering the image with a bank of difference-of-Gaussians filters with time-delays. The second stage is a time-dependent analog-to-digital conversion, performed by the inner layers in the retina. Thanks to its conception, our coder enables scalability and bit allocation across time. Also, our decoded images do not show annoying artefacts such as ringing and block effects. As a whole, this article shows how to capture the main properties of a biological system, here the retina, in order to design a new efficient coder. (Comment: 12 pages; Advanced Concepts for Intelligent Vision Systems, ACIVS 2011.)
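As a rough illustration of the first stage, the sketch below builds a bank of difference-of-Gaussians subbands and attaches a release time to each one, so coarser information can be made available before finer detail. It is only a sketch of the general idea under assumed parameters (sigmas, surround ratio, delays), not the Virtual Retina model or the coder described in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bank(image, sigmas=(1, 2, 4, 8), surround_ratio=1.6):
    """Decompose an image with a bank of difference-of-Gaussians filters,
    one subband per scale (center sigma, surround sigma = ratio * center)."""
    return [gaussian_filter(image, s) - gaussian_filter(image, surround_ratio * s)
            for s in sigmas]

def time_delayed_stream(subbands, delays_ms):
    """Attach a release time to each subband and order the stream by it,
    mimicking a coarse-before-fine, time-scalable transmission."""
    return sorted(zip(delays_ms, subbands), key=lambda pair: pair[0])

# Toy usage on a random "image"; the delays are illustrative values only,
# chosen so the coarsest scale (largest sigma) is released first.
img = np.random.default_rng(2).random((64, 64))
stream = time_delayed_stream(dog_bank(img), delays_ms=[40, 30, 20, 10])
```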

    Size and emotion or depth and emotion? Evidence, using Matryoshka (Russian) dolls, of children using physical depth as a proxy for emotional charge

    Background: The size and emotion effect is the tendency for children to draw people and other objects with a positive emotional charge larger than those with a negative or neutral charge. Here we explored the novel idea that drawing size might be acting as a proxy for depth (proximity). Methods: Forty-two children (aged 3-11 years) chose, from two sets of Matryoshka (Russian) dolls, a doll to represent a person with positive, negative or neutral charge, which they placed in front of themselves on a sheet of A3 paper. Results: We found that the children used proximity and doll size to indicate emotional charge. Conclusions: These findings are consistent with the notion that, in drawings, children use size as a proxy for physical closeness (proximity), as they attempt with varying success to place positively charged items closer to, and negatively or neutrally charged items further away from, themselves.

    Receptive Field Inference with Localized Priors

    The linear receptive field describes a mapping from sensory stimuli to a one-dimensional variable governing a neuron's spike response. However, traditional receptive field estimators such as the spike-triggered average converge slowly and often require large amounts of data. Bayesian methods seek to overcome this problem by biasing estimates towards solutions that are more likely a priori, typically those with small, smooth, or sparse coefficients. Here we introduce a novel Bayesian receptive field estimator designed to incorporate locality, a powerful form of prior information about receptive field structure. The key to our approach is a hierarchical receptive field model that flexibly adapts to localized structure in both space-time and spatiotemporal frequency, using an inference method known as empirical Bayes. We refer to our method as automatic locality determination (ALD), and show that it can accurately recover various types of smooth, sparse, and localized receptive fields. We apply ALD to neural data from retinal ganglion cells and V1 simple cells, and find it achieves error rates several times lower than standard estimators. Thus, estimates of comparable accuracy can be achieved with substantially less data. Finally, we introduce a computationally efficient Markov chain Monte Carlo (MCMC) algorithm for fully Bayesian inference under the ALD prior, yielding accurate Bayesian confidence intervals for small or noisy datasets.
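To make the "localized prior" idea concrete, the sketch below computes a MAP receptive-field estimate under a Gaussian prior whose variance is concentrated around an assumed location. This is only a stand-in for the actual ALD model: here the envelope's center and width are fixed by hand, whereas ALD fits such hyperparameters (in both space-time and frequency) by evidence optimization; all names and toy values are assumptions.

```python
import numpy as np

def localized_prior_cov(n_pix, center, width, scale=1.0):
    """Diagonal prior covariance whose variance falls off away from `center`,
    a crude stand-in for a localized receptive-field envelope."""
    idx = np.arange(n_pix)
    return np.diag(scale * np.exp(-0.5 * ((idx - center) / width) ** 2) + 1e-6)

def map_receptive_field(X, y, C_prior, noise_var=1.0):
    """Posterior mean under a Gaussian likelihood and Gaussian prior:
    w = (X'X / s^2 + C^-1)^-1 X'y / s^2."""
    A = X.T @ X / noise_var + np.linalg.inv(C_prior)
    return np.linalg.solve(A, X.T @ y / noise_var)

# Toy usage: white-noise stimuli and a localized true filter.
rng = np.random.default_rng(3)
n_samples, n_pix = 2000, 40
X = rng.standard_normal((n_samples, n_pix))
w_true = np.exp(-0.5 * ((np.arange(n_pix) - 20) / 3.0) ** 2)
y = X @ w_true + rng.standard_normal(n_samples)
w_hat = map_receptive_field(X, y, localized_prior_cov(n_pix, center=20, width=5))
```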

    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. For the case in which the underlying system is fixed, we derive relationships between the change of gain with respect to both mean and variance and the receptive fields obtained by reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity. (Comment: 24 pages, 4 figures, 1 supporting information.)
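The "empirical linear/nonlinear model obtained by sampling" can be illustrated with a standard reverse-correlation recipe: estimate the filter as the spike-triggered average and the gain curve as the spike probability given the filtered stimulus. The sketch below is that generic recipe, not the paper's conductance-based models or derivations; the lag window, binning, and toy neuron are assumptions, but repeating the fit at different stimulus variances is one way to probe the gain changes discussed above.

```python
import numpy as np

def empirical_ln_model(stimulus, spikes, n_lags=20, n_bins=15):
    """Sampled linear/nonlinear model from a white-noise experiment:
    filter = spike-triggered average (STA), gain curve = P(spike | filter output),
    estimated by a histogram ratio over quantile bins."""
    X = np.stack([stimulus[t - n_lags:t] for t in range(n_lags, len(stimulus))])
    sp = spikes[n_lags:]
    sta = X[sp > 0].mean(axis=0) - X.mean(axis=0)
    proj = X @ sta
    edges = np.quantile(proj, np.linspace(0, 1, n_bins + 1))
    which = np.digitize(proj, edges[1:-1])          # bin index 0 .. n_bins-1
    gain = np.array([sp[which == b].mean() for b in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return sta, centers, gain

# Toy usage: a threshold "neuron" driven by exponentially filtered white noise.
rng = np.random.default_rng(4)
stim = rng.standard_normal(50_000)
drive = np.convolve(stim, np.exp(-np.arange(20) / 5.0))[:len(stim)]
spk = (drive + 0.5 * rng.standard_normal(len(stim)) > 2.0).astype(int)
sta, x, gain = empirical_ln_model(stim, spk)        # gain rises with filter output
```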

    Identification of linear and nonlinear sensory processing circuits from spiking neuron data

    Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
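To show the flavour of a filter-plus-spiking-neuron identification problem, the sketch below assumes an ideal integrate-and-fire neuron with known threshold and bias in series with an unknown FIR filter: each interspike interval contributes one linear equation in the filter taps (the integral of the filtered input over the interval equals the threshold), and the taps are recovered by least squares. This is a simplified illustration under those stated assumptions, not the paper's algorithms, which also identify the neuron's parameters and handle nonlinear filters and leaky neurons.

```python
import numpy as np

def identify_filter(u, spike_idx, dt, threshold=0.25, bias=2.0, n_taps=30):
    """Recover an FIR filter h feeding an ideal integrate-and-fire neuron from
    its spike times: between consecutive spikes, sum_t (h*u)[t]*dt + bias*T_k
    = threshold, giving one linear equation in h per interspike interval."""
    # Column j of U is the input delayed by j samples, so (U @ h)[t] = (h*u)[t].
    U = np.stack([np.concatenate([np.zeros(j), u[:len(u) - j]])
                  for j in range(n_taps)], axis=1)
    rows, rhs = [], []
    for a, b in zip(spike_idx[:-1], spike_idx[1:]):
        rows.append(U[a + 1:b + 1].sum(axis=0) * dt)   # integral of (h*u) over interval
        rhs.append(threshold - bias * (b - a) * dt)
    h_hat, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return h_hat

# Toy usage: simulate the ideal IAF on a known filtered input, then recover h.
rng = np.random.default_rng(5)
dt, T = 1e-3, 20_000
u = rng.standard_normal(T)
h_true = np.exp(-np.arange(30) * 0.1)
v = np.convolve(u, h_true)[:T] + 2.0            # filtered input plus bias current
integ, spikes = 0.0, []
for t in range(T):
    integ += v[t] * dt
    if integ >= 0.25:                            # threshold crossed: spike and reset
        spikes.append(t)
        integ = 0.0
h_hat = identify_filter(u, spikes, dt)           # approximates h_true
```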

    Stimulus-dependent maximum entropy models of neural population codes

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To be able to infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. The model captures the single-cell response properties as well as the correlations in neural spiking due to shared stimulus and due to effective neuron-to-neuron connections. Here we show that, in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single-cell responses and, in particular, outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population. (Comment: 11 pages, 7 figures.)
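The form of such a model is a conditional Gibbs distribution: P(r | s) is proportional to exp(sum_i h_i(s) r_i + sum_{i&lt;j} J_ij r_i r_j), where the stimulus enters through per-cell fields h_i(s) and the couplings J_ij are stimulus-independent. The sketch below simply evaluates this distribution by brute-force enumeration for a handful of cells with linear stimulus fields; the filters, couplings, and sizes are assumptions, and it is not the fitting procedure used in the paper.

```python
import numpy as np
from itertools import product

def sdme_logweight(r, s, K, J):
    """Unnormalized log-probability of binary codeword r given stimulus s:
    stimulus-driven fields h_i = K[i] . s plus pairwise couplings J (symmetric,
    zero diagonal); 0.5 * r @ J @ r counts each pair once."""
    return (K @ s) @ r + 0.5 * r @ J @ r

def sdme_distribution(s, K, J):
    """Exact conditional distribution over all 2^N codewords (small N only)."""
    n_cells = K.shape[0]
    codewords = np.array(list(product([0, 1], repeat=n_cells)))
    logw = np.array([sdme_logweight(r, s, K, J) for r in codewords])
    w = np.exp(logw - logw.max())
    return codewords, w / w.sum()

# Toy usage: 5 cells, a 10-dimensional stimulus, random filters and couplings.
rng = np.random.default_rng(6)
n_cells, stim_dim = 5, 10
K = rng.standard_normal((n_cells, stim_dim))
J = np.triu(rng.standard_normal((n_cells, n_cells)) * 0.3, 1)
J = J + J.T
codes, probs = sdme_distribution(rng.standard_normal(stim_dim), K, J)
```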

    Functional Clustering Drives Encoding Improvement in a Developing Brain Network during Awake Visual Learning

    Sensory experience drives dramatic structural and functional plasticity in developing neurons. However, for single-neuron plasticity to optimally improve whole-network encoding of sensory information, changes must be coordinated between neurons to ensure that a full range of stimuli is efficiently represented. Using two-photon calcium imaging to monitor evoked activity in over 100 neurons simultaneously, we investigate network-level changes in the developing Xenopus laevis tectum during visual training with motion stimuli. Training causes stimulus-specific changes in neuronal responses and interactions, resulting in improved population encoding. This plasticity is spatially structured, increasing tuning-curve similarity and interactions among nearby neurons, and decreasing interactions among distant neurons. Training does not improve encoding by single clusters of similarly responding neurons, but improves encoding across clusters, indicating coordinated plasticity across the network. NMDA receptor blockade prevents coordinated plasticity, reduces clustering, and abolishes whole-network encoding improvement. We conclude that NMDA receptors support experience-dependent network self-organization, allowing efficient population coding of a diverse range of stimuli. (Funding: Canadian Institutes of Health Research.)
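One generic way to quantify "population encoding" and to compare a single cluster of neurons against the whole population is cross-validated stimulus decoding from the trial-by-neuron response matrix. The sketch below uses a leave-one-out nearest-centroid decoder as that kind of stand-in; it is not the analysis pipeline of the study, and the cluster split, stimulus classes, and toy data are assumptions.

```python
import numpy as np

def decode_accuracy(responses, labels):
    """Leave-one-out nearest-centroid decoding of the stimulus class from a
    (trials x neurons) response matrix; a simple proxy for population encoding."""
    n_trials = len(labels)
    correct = 0
    for i in range(n_trials):
        train = np.arange(n_trials) != i
        classes = np.unique(labels[train])
        centroids = np.stack([responses[train & (labels == c)].mean(axis=0)
                              for c in classes])
        pred = classes[np.argmin(np.linalg.norm(centroids - responses[i], axis=1))]
        correct += int(pred == labels[i])
    return correct / n_trials

# Toy usage: 4 motion directions, 120 trials, 100 neurons; compare decoding from
# one 25-cell "cluster" against the full population.
rng = np.random.default_rng(7)
labels = rng.integers(0, 4, size=120)
tuning = rng.standard_normal((4, 100))
responses = tuning[labels] + rng.standard_normal((120, 100))
print(decode_accuracy(responses[:, :25], labels))   # single cluster
print(decode_accuracy(responses, labels))           # whole network
```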