
    Numerical simulation of a binary communication channel: Comparison between a replica calculation and an exact solution

    The mutual information of a single-layer perceptron with N Gaussian inputs and P deterministic binary outputs is studied by numerical simulations. The relevant parameters of the problem are the ratio between the number of output and input units, α = P/N, and those describing the two-point correlations between inputs. The main motivation of this work is the comparison between the replica computation of the mutual information and an analytical solution valid up to α ∼ O(1). The most relevant results are: (1) the simulation supports the validity of the analytical prediction, and (2) it also verifies a previously proposed conjecture that the replica solution interpolates well between large and small values of α. (Comment: 6 pages, 8 figures, LaTeX file)
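As a rough numerical illustration of the quantity being simulated, the sketch below (with hypothetical weights, uncorrelated inputs, and small N and P chosen for convenience, none of which come from the paper) estimates the mutual information between Gaussian inputs and deterministic binary perceptron outputs by counting output patterns. Since the outputs are deterministic given the inputs, I(x; σ) = H(σ).

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, n_samples = 8, 3, 200_000   # small P so all 2**P output patterns are well sampled
W = rng.standard_normal((P, N))   # hypothetical fixed perceptron weights

x = rng.standard_normal((n_samples, N))   # Gaussian inputs (here: uncorrelated)
sigma = (x @ W.T > 0).astype(int)         # deterministic binary outputs

# For deterministic outputs, I(x; sigma) = H(sigma), so the mutual information
# can be estimated from the empirical frequencies of the 2**P output patterns.
codes = sigma @ (1 << np.arange(P))       # encode each output pattern as an integer
freq = np.bincount(codes, minlength=2**P) / n_samples
freq = freq[freq > 0]
H_bits = float(-(freq * np.log2(freq)).sum())
```

The pattern-counting estimator is only feasible for small α = P/N; the replica calculation discussed in the abstract is precisely what replaces it at large P.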

    Coordinated population activity underlying texture discrimination in rat barrel cortex

    Rodents can robustly distinguish fine differences in texture using their whiskers, a capacity that depends on neuronal activity in primary somatosensory “barrel” cortex. Here we explore how texture was collectively encoded by populations of three to seven neuronal clusters simultaneously recorded from barrel cortex while a rat performed a discrimination task. Each cluster corresponded to the single-unit or multiunit activity recorded at an individual electrode. To learn how the firing of different clusters combines to represent texture, we computed population activity vectors across moving time windows and extracted the signal available in the optimal linear combination of clusters. We quantified this signal using receiver operating characteristic analysis and compared it to that available in single clusters. Texture encoding was heterogeneous across neuronal clusters, and only a minority of clusters carried signals strong enough to support stimulus discrimination on their own. However, jointly recorded groups of clusters were always able to support texture discrimination at a statistically significant level, even in sessions where no individual cluster represented the stimulus. The discriminative capacity of neuronal activity was degraded when error trials were included in the data, compared to only correct trials, suggesting a link between the neuronal activity and the animal's performance. These analyses indicate that small groups of barrel cortex neurons can robustly represent texture identity through synergistic interactions, and suggest that neurons downstream of barrel cortex could extract texture identity on single trials through a simple linear combination of barrel cortex responses.
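A minimal sketch of the population analysis described above, on simulated data (Poisson firing counts and weak single-cluster differences invented for illustration, not the paper's recordings): a Fisher linear discriminant supplies an optimal linear combination of clusters, and the ROC area quantifies the texture signal in both the combined and the single-cluster responses.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_trials = 5, 300

# hypothetical trial-by-trial firing counts for two textures;
# single-cluster mean differences are deliberately weak
mu_a = rng.uniform(2.0, 6.0, n_clusters)
mu_b = mu_a + rng.uniform(-0.5, 0.5, n_clusters)
A = rng.poisson(mu_a, (n_trials, n_clusters)).astype(float)
B = rng.poisson(mu_b, (n_trials, n_clusters)).astype(float)

def auc(pos, neg):
    """ROC area via the rank-sum statistic (ties broken by order; fine for a sketch)."""
    ranks = np.concatenate([pos, neg]).argsort().argsort() + 1
    n_p, n_n = len(pos), len(neg)
    return (ranks[:n_p].sum() - n_p * (n_p + 1) / 2) / (n_p * n_n)

# optimal linear combination of clusters: Fisher discriminant direction
pooled_cov = 0.5 * (np.cov(A.T) + np.cov(B.T))
w = np.linalg.solve(pooled_cov, A.mean(0) - B.mean(0))

auc_population = auc(A @ w, B @ w)
auc_single = [max(auc(A[:, i], B[:, i]), auc(B[:, i], A[:, i]))
              for i in range(n_clusters)]
```

On data like these the population AUC typically exceeds the best single-cluster AUC, which is the qualitative effect the abstract reports.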

    Constructing seasonally adjusted data with time-varying confidence intervals

    Seasonal adjustment methods transform observed time series data into estimated data, where these estimated data are constructed such that they show no or almost no seasonal variation. An advantage of model-based methods is that these can provide confidence intervals around the seasonally adjusted data. One particularly useful time series model for seasonal adjustment is the basic structural time series (BSM) model. The usual premise of the BSM is that the variance of each of the components is constant. In this paper we address the possibility that the variance of the trend component in a macro-economic time series in some way depends on the business cycle. One reason for doing so is that one can expect that there is more uncertainty in recession periods. We extend the BSM by allowing for a business-cycle dependent variance in the level equation. Next we show how this affects the confidence intervals of seasonally adjusted data. We apply our extended BSM to monthly US unemployment and we show that the estimated confidence intervals for seasonally adjusted unemployment change with past changes in the oil price.
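The business-cycle-dependent level variance can be sketched with a local-level model (a stripped-down BSM without the seasonal component) run through a Kalman filter; the variances and the recession indicator below are invented for illustration, not estimated from unemployment data.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 120
recession = np.zeros(T, dtype=bool)
recession[40:60] = True                  # hypothetical business-cycle indicator

# local-level model: y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t,
# with a larger level-disturbance variance during recessions
sig_eps = 1.0
sig_eta = np.where(recession, 2.0, 0.5)

mu = np.cumsum(rng.normal(0.0, sig_eta))     # simulated trend
y = mu + rng.normal(0.0, sig_eps, T)         # simulated observations

# Kalman filter with the time-varying state variance
a, p = 0.0, 10.0
level, level_var = np.empty(T), np.empty(T)
for t in range(T):
    p = p + sig_eta[t] ** 2                  # predict
    k = p / (p + sig_eps ** 2)               # Kalman gain
    a = a + k * (y[t] - a)                   # update
    p = (1.0 - k) * p
    level[t], level_var[t] = a, p

half_width = 1.96 * np.sqrt(level_var)       # time-varying 95% confidence band
```

The filtered variance, and hence the confidence band around the estimated level, widens inside the recession window and tightens again outside it, which is the mechanism the paper exploits.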

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708, and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276, and Fundación BBVA.
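As a concrete example of the kind of dynamical model this review surveys, the classic FitzHugh-Nagumo system reproduces rhythmic spiking with just two variables; the sketch below uses standard textbook parameters and simple Euler integration, and is not taken from the review itself.

```python
import numpy as np

# FitzHugh-Nagumo equations, a two-variable caricature of neuronal rhythm:
#   dv/dt = v - v^3/3 - w + I
#   dw/dt = eps * (v + a - b*w)
a, b, eps, I = 0.7, 0.8, 0.08, 0.5   # standard textbook parameters
dt, steps = 0.05, 40_000

v, w = -1.0, 1.0
vs = np.empty(steps)
for t in range(steps):
    dv = v - v**3 / 3.0 - w + I
    dw = eps * (v + a - b * w)
    v += dt * dv
    w += dt * dw
    vs[t] = v                         # record the voltage-like variable
```

With this input current the fixed point sits on the unstable middle branch of the cubic nullcline, so the trajectory settles onto a limit cycle: a sustained rhythm emerging from a deterministic two-dimensional system.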

    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive expressions relating the change in gain with respect to both mean and variance to the receptive fields derived from reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity. (Comment: 24 pages, 4 figures, 1 supporting information file)
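The reverse-correlation step referred to above can be sketched as follows: a simulated linear/nonlinear neuron (filter shape and gain curve invented for illustration, not the paper's conductance-based models) is driven with white noise, and the spike-triggered average recovers its linear filter up to scale.

```python
import numpy as np

rng = np.random.default_rng(3)
T, L = 200_000, 20

# hypothetical LN neuron: one linear filter followed by a sigmoidal gain curve
true_filter = np.exp(-np.arange(L) / 5.0) * np.sin(np.arange(L) / 2.0)
true_filter /= np.linalg.norm(true_filter)

stim = rng.standard_normal(T)                    # white-noise stimulus
g = np.convolve(stim, true_filter)[:T]           # filtered stimulus
rate = 1.0 / (1.0 + np.exp(-3.0 * (g - 1.0)))    # sigmoidal gain curve
spikes = rng.random(T) < 0.2 * rate              # Bernoulli spiking

# reverse correlation: for Gaussian white noise, the spike-triggered average
# is proportional to the linear filter of the LN model
idx = np.nonzero(spikes)[0]
idx = idx[idx >= L]
sta = np.mean([stim[i - L + 1:i + 1][::-1] for i in idx], axis=0)
sta /= np.linalg.norm(sta)
similarity = float(np.dot(sta, true_filter))     # cosine similarity to truth
```

Repeating this sampling procedure under different input means and variances yields the "changing empirical linear/nonlinear model" whose gain the paper relates to f-I curve slopes.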

    Stimulus Dependence of Barrel Cortex Directional Selectivity

    Neurons throughout the rat vibrissa somatosensory pathway are sensitive to the angular direction of whisker movement. Could this sensitivity help rats discriminate stimuli? Here we use a simple computational model of cortical neurons to analyze the robustness of directional selectivity. In the model, directional preference emerges from tuning of synaptic conductance amplitude and latency, as in recent experimental findings. We find that directional selectivity during stimulation with random deflection sequences is strongly dependent on the mean deflection frequency: Selectivity is weakened at high frequencies even when each individual deflection evokes strong directional tuning. This variability of directional selectivity is due to generic properties of synaptic integration by the neuronal membrane, and is therefore likely to hold under very general physiological conditions. Our results suggest that directional selectivity depends on stimulus context. It may participate in tasks involving brief whisker contact, such as detection of object position, but is likely to be weakened in tasks involving sustained whisker exploration (e.g., texture discrimination).
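The frequency dependence of directional selectivity can be illustrated with a toy leaky membrane (amplitudes, time constant, and threshold below are all invented, not the model's fitted values): at high deflection rates, residual depolarization left by preceding deflections lets non-preferred deflections reach threshold, weakening selectivity even though each isolated deflection is strongly tuned.

```python
import numpy as np

# leaky membrane receiving direction-tuned EPSPs; each spike is attributed
# to the deflection that triggered it (all numbers illustrative)
def direction_selectivity(freq_hz, T=300.0, seed=0):
    rng = np.random.default_rng(seed)
    dt, tau_m, theta = 0.001, 0.010, 0.9
    amp = (1.0, 0.5)                      # EPSP size: preferred vs non-preferred
    v = 0.0
    n_defl = [0, 0]
    n_spk = [0, 0]
    for _ in range(int(T / dt)):
        v *= 1.0 - dt / tau_m             # passive membrane decay
        if rng.random() < freq_hz * dt:   # random whisker deflection
            d = int(rng.integers(2))
            n_defl[d] += 1
            v += amp[d]
            if v > theta:                 # residual summation can push
                n_spk[d] += 1             # non-preferred deflections over threshold
                v = 0.0
    r = [n_spk[i] / max(n_defl[i], 1) for i in range(2)]
    return (r[0] - r[1]) / (r[0] + r[1] + 1e-12)

ds_low = direction_selectivity(2.0, seed=1)    # sparse deflections
ds_high = direction_selectivity(40.0, seed=2)  # sustained high-frequency input
```

At 2 Hz the membrane decays fully between deflections and only preferred deflections spike; at 40 Hz residual depolarization accumulates and the selectivity index drops, mirroring the context dependence described above.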

    Forecasting binary longitudinal data by a functional PC-ARIMA model

    In order to forecast the time evolution of a binary response variable from a related continuous time series, a functional logit model is proposed. The estimation of this model from discrete time observations of the predictor is solved by using functional principal component analysis and ARIMA modelling of the associated discrete time series of principal components. The proposed model is applied to forecast the risk of drought from the El Niño phenomenon. (Projects MTM2007-63793 from Dirección General de Investigación, Ministerio de Educación y Ciencia, Spain, and P06-FQM-01470 from Consejería de Innovación Ciencia y Empresa, Junta de Andalucía, Spain.)
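A compressed sketch of the functional-PCA-plus-logit pipeline on synthetic curves (all data, dimensions, and rates below are invented; the ARIMA forecasting of the principal-component series is omitted, so this shows only the estimation half of the model):

```python
import numpy as np

rng = np.random.default_rng(5)
n, T_grid = 300, 50
t = np.linspace(0.0, 1.0, T_grid)

# hypothetical functional predictor curves driven by two latent components
scores_true = rng.standard_normal((n, 2))
X = (scores_true[:, :1] * np.sin(2 * np.pi * t)
     + scores_true[:, 1:] * np.cos(2 * np.pi * t)
     + 0.1 * rng.standard_normal((n, T_grid)))

# binary response (e.g. drought / no drought) tied to the first component
y = (scores_true[:, 0] + 0.3 * rng.standard_normal(n) > 0).astype(float)

# step 1: functional PCA via SVD of the centred curves
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc_scores = Xc @ Vt[:2].T             # scores on the first two components

# step 2: functional logit -- logistic regression on the PC scores
Z = np.hstack([np.ones((n, 1)), pc_scores])
beta = np.zeros(3)
for _ in range(2000):
    prob = 1.0 / (1.0 + np.exp(-Z @ beta))
    beta += 0.1 * Z.T @ (y - prob) / n    # gradient ascent on the log-likelihood

accuracy = float(np.mean((1.0 / (1.0 + np.exp(-Z @ beta)) > 0.5) == y))
```

In the paper, ARIMA models fitted to the time series of PC scores supply forecasts of future scores, which the logit then converts into a forecast probability for the binary response.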

    A Dynamic Model of Interactions of Ca^(2+), Calmodulin, and Catalytic Subunits of Ca^(2+)/Calmodulin-Dependent Protein Kinase II

    During the acquisition of memories, influx of Ca^(2+) into the postsynaptic spine through the pores of activated N-methyl-D-aspartate-type glutamate receptors triggers processes that change the strength of excitatory synapses. The pattern of Ca^(2+) influx during the first few seconds of activity is interpreted within the Ca^(2+)-dependent signaling network such that synaptic strength is eventually either potentiated or depressed. Many of the critical signaling enzymes that control synaptic plasticity, including Ca^(2+)/calmodulin-dependent protein kinase II (CaMKII), are regulated by calmodulin, a small protein that can bind up to 4 Ca^(2+) ions. As a first step toward clarifying how the Ca^(2+)-signaling network decides between potentiation and depression, we have created a kinetic model of the interactions of Ca^(2+), calmodulin, and CaMKII that represents our best understanding of the dynamics of these interactions under conditions that resemble those in a postsynaptic spine. We constrained parameters of the model from data in the literature, or from our own measurements, and then predicted time courses of activation and autophosphorylation of CaMKII under a variety of conditions. Simulations showed that species of calmodulin with fewer than four bound Ca^(2+) play a significant role in activation of CaMKII in the physiological regime, supporting the notion that processing of Ca^(2+) signals in a spine involves competition among target enzymes for binding to unsaturated species of CaM in an environment in which the concentration of Ca^(2+) is fluctuating rapidly. Indeed, we showed that dependence of activation on the frequency of Ca^(2+) transients arises from the kinetics of interaction of fluctuating Ca^(2+) with calmodulin/CaMKII complexes. We used parameter sensitivity analysis to identify which parameters will be most beneficial to measure more carefully to improve the accuracy of predictions.
This model provides a quantitative base from which to build more complex dynamic models of postsynaptic signal transduction during learning.
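A heavily simplified version of such a kinetic scheme, with sequential mass-action Ca^(2+) binding to calmodulin and forward-Euler integration; the rate constants below are illustrative placeholders, not the paper's fitted values, and CaMKII itself is left out.

```python
import numpy as np

# sequential mass-action binding of up to four Ca2+ ions to calmodulin (CaM):
#   C0 <-> C1 <-> C2 <-> C3 <-> C4   (Ci = CaM with i bound Ca2+)
# rate constants are illustrative placeholders, NOT the paper's fitted values
kon = np.array([10.0, 10.0, 10.0, 10.0])    # uM^-1 s^-1, one per binding step
koff = np.array([50.0, 50.0, 5.0, 5.0])     # s^-1

def simulate(ca_of_t, T=2.0, dt=1e-4):
    c = np.array([1.0, 0.0, 0.0, 0.0, 0.0])     # all CaM starts Ca-free
    for step in range(int(T / dt)):
        ca = ca_of_t(step * dt)                  # free Ca2+ concentration (uM)
        flux = kon * ca * c[:4] - koff * c[1:]   # net forward flux per step
        dc = np.zeros(5)
        dc[:4] -= flux
        dc[1:] += flux
        c = c + dt * dc                           # forward Euler
    return c

# compare sustained low Ca2+ with brief high-Ca2+ pulses (same average exposure)
c_sustained = simulate(lambda t: 1.0)                              # constant 1 uM
c_pulsed = simulate(lambda t: 10.0 if (t % 0.2) < 0.02 else 0.0)   # 10 uM, 10% duty
```

Comparing the occupancy vectors for the two stimulation patterns illustrates the abstract's point: partially loaded species (C1-C3) can dominate under fluctuating Ca^(2+), so activation need not wait for fully saturated CaM.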

    Predicting Spike Occurrence and Neuronal Responsiveness from LFPs in Primary Somatosensory Cortex

    Local Field Potentials (LFPs) integrate multiple neuronal events such as synaptic inputs and intracellular potentials. LFP spatiotemporal features are particularly relevant in view of their applications both in research (e.g. for understanding brain rhythms, inter-areal neural communication and neuronal coding) and in the clinic (e.g. for improving invasive Brain-Machine Interface devices). However, the relation between LFPs and spikes is complex and not fully understood. As spikes represent the fundamental currency of neuronal communication, this gap in knowledge strongly limits our comprehension of the neuronal phenomena underlying LFPs. We investigated the LFP-spike relation during tactile stimulation in primary somatosensory (S-I) cortex in the rat. First we quantified how reliably LFPs and spikes code for a stimulus occurrence. Then we used the information obtained from our analyses to design a predictive model for spike occurrence based on LFP inputs. The model was endowed with a flexible meta-structure whose exact form, both in parameters and structure, was estimated by using a multi-objective optimization strategy. Our method provided a set of simple nonlinear equations that maximized the match between models and true neurons in terms of spike timings and peri-stimulus time histograms. We found that both LFPs and spikes can code for stimulus occurrence with millisecond precision, showing, however, high variability. Spike patterns were predicted significantly above chance for 75% of the neurons analysed. Crucially, the level of prediction accuracy depended on the reliability in coding for the stimulus occurrence. The best predictions were obtained when both spikes and LFPs were highly responsive to the stimuli. Spike reliability is known to depend on neuron intrinsic properties (i.e. on channel noise) and on spontaneous local network fluctuations. Our results suggest that the latter, measured through the LFP response variability, play a dominant role.
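A toy version of the LFP-to-spike prediction problem on surrogate data (the shared slow signal, noise level, and simple threshold rule are all invented; the paper's optimized nonlinear models are far richer): because spikes and the LFP are driven by a common network fluctuation, even a thresholded LFP predicts spike occurrence well above a rate-matched chance level.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 50_000   # 1 ms bins

# toy surrogate: spikes and LFP both driven by a shared slow network signal
slow = np.convolve(rng.standard_normal(T), np.ones(20) / 20.0, mode="same")
lfp = -slow + 0.1 * rng.standard_normal(T)     # LFP deflects downward with excitation
spikes = (rng.random(T) < 0.01 * np.exp(4.0 * slow)).astype(int)

# predictor: a spike is predicted whenever the sign-flipped LFP is high enough,
# with the threshold chosen so predicted and observed spike rates match
score = -lfp
thresh = np.quantile(score, 1.0 - spikes.mean())
pred = (score > thresh).astype(int)

hit_rate = (pred & spikes).sum() / spikes.sum()   # fraction of spikes predicted
chance = spikes.mean()                            # rate-matched random predictor
```

The margin between hit rate and chance shrinks as the private noise term grows relative to the shared signal, echoing the paper's finding that prediction accuracy tracks how strongly the LFP reflects the fluctuations driving the neuron.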