23 research outputs found

    Non-perturbative renormalization group analysis of nonlinear spiking networks

    The critical brain hypothesis posits that neural circuits may operate close to critical points of a phase transition, which has been argued to have functional benefits for neural computation. Theoretical and computational studies arguing for or against criticality in neural dynamics largely rely on establishing power laws or scaling functions of statistical quantities, while a proper understanding of critical phenomena requires a renormalization group (RG) analysis. However, neural activity is typically non-Gaussian, nonlinear, and non-local, rendering models that capture all of these features difficult to study using standard statistical physics techniques. Here, we overcome these issues by adapting the non-perturbative renormalization group (NPRG) to work on (symmetric) network models of stochastic spiking neurons. By deriving a pair of Ward-Takahashi identities and making a "local potential approximation," we are able to calculate non-universal quantities such as the effective firing rate nonlinearity of the network, allowing improved quantitative estimates of network statistics. We also derive the dimensionless flow equation that admits universal critical points in the renormalization group flow of the model, and identify two important types of critical points: in networks with an absorbing state there is a Directed Percolation (DP) fixed point corresponding to a non-equilibrium phase transition between sustained activity and extinction of activity, and in spontaneously active networks there is a complex-valued critical point, corresponding to a spinodal transition observed, e.g., in the Lee-Yang $\phi^3$ model of Ising magnets with explicitly broken symmetry. Our Ward-Takahashi identities imply trivial dynamical exponents $z_\ast = 2$ in both cases, rendering it unclear whether these critical points fall into the known DP or Ising universality classes.
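    The absorbing-state transition at the DP fixed point can be illustrated with a minimal mean-field sketch (a toy construction for intuition, not the paper's NPRG calculation): a contact process in which the quiescent state is absorbing and a branching ratio plays the role of the control parameter. Below the critical branching ratio activity goes extinct; above it, activity is sustained.

```python
import numpy as np

rng = np.random.default_rng(0)

def contact_process(branching_ratio, n=1000, steps=200, seed_active=50):
    """Mean-field contact process: each unit receives Poisson-like input
    from the active pool; the all-quiet state is absorbing, as in DP."""
    active = seed_active
    trace = [active]
    for _ in range(steps):
        p = 1.0 - np.exp(-branching_ratio * active / n)
        active = rng.binomial(n, p)
        trace.append(active)
        if active == 0:      # absorbing state reached
            break
    return trace

sub = contact_process(0.8)   # subcritical: activity goes extinct
sup = contact_process(1.5)   # supercritical: sustained activity
print(sub[-1], sup[-1])
```

    In this caricature the critical point sits at branching ratio 1; the paper's contribution is the RG treatment of the full nonlinear, non-Gaussian spiking model, which this sketch does not capture.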

    Temperature Dependence of Interlayer Magnetoresistance in Anisotropic Layered Metals

    Studies of interlayer transport in layered metals have generally made use of zero temperature conductivity expressions to analyze angle-dependent magnetoresistance oscillations (AMRO). However, recent high temperature AMRO experiments have been performed in a regime where the inclusion of finite temperature effects may be required for a quantitative description of the resistivity. We calculate the interlayer conductivity in a layered metal with anisotropic Fermi surface properties allowing for finite temperature effects. We find that resistance maxima are modified by thermal effects much more strongly than resistance minima. We also use our expressions to calculate the interlayer resistivity appropriate to recent AMRO experiments in an overdoped cuprate, which led to the conclusion that there is an anisotropic, linear-in-temperature contribution to the scattering rate, and find that this conclusion is robust.
    Comment: 8 pages, 4 figures
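    The generic mechanism behind such finite-temperature corrections can be sketched numerically (an illustration of Fermi-window smearing under assumed toy inputs, not the paper's anisotropic AMRO calculation): a finite-temperature conductivity follows from the energy-resolved zero-temperature one via $\sigma(T) = \int dE\, (-\partial f/\partial E)\, \sigma_0(E)$, so sharp features in $\sigma_0$ are washed out as $T$ grows.

```python
import numpy as np

def thermal_average(sigma0, energies, T):
    """Fermi-window smearing of a zero-temperature, energy-resolved
    conductivity. Energies and k_B*T share the same units; E = 0 is the
    Fermi level."""
    x = np.clip(energies / T, -500.0, 500.0)
    f = 1.0 / (1.0 + np.exp(x))          # Fermi function
    w = f * (1.0 - f) / T                # -df/dE
    dE = energies[1] - energies[0]
    w /= w.sum() * dE                    # normalize on the truncated grid
    return float(np.sum(w * sigma0) * dE)

E = np.linspace(-5.0, 5.0, 20001)
sigma0 = 1.0 + 0.5 * np.cos(3.0 * E)       # toy energy-dependent conductivity
cold = thermal_average(sigma0, E, T=0.01)  # essentially sigma0 at E = 0
warm = thermal_average(sigma0, E, T=2.0)   # oscillation averages out
print(cold, warm)
```

    The toy $\sigma_0(E)$ is arbitrary; the point is only that thermal averaging pulls the conductivity from its Fermi-level value toward a window average, which is why zero-temperature AMRO formulas can fail quantitatively at high temperature.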

    Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes

    Slowly compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the Earth all deform via intermittent slips or “quakes”. We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied by the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tunability of the cutoff with stress reflects “tuned critical” behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean-field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
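    The distribution described here, a power law multiplied by an exponential cutoff that grows with stress, is easy to sample and inspect (an illustrative sketch with an assumed mean-field exponent $\tau = 1.5$, not the authors' data analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_slip_sizes(tau, s_cutoff, n=100_000, s_max=1e6):
    """Draw slip sizes from P(S) ~ S**(-tau) * exp(-S/s_cutoff) on [1, s_max]:
    inverse-CDF sample the pure power law, then reject against the
    exponential cutoff."""
    u = rng.random(n)
    s = (1.0 - u * (1.0 - s_max ** (1.0 - tau))) ** (1.0 / (1.0 - tau))
    keep = rng.random(n) < np.exp(-s / s_cutoff)
    return s[keep]

# the cutoff S* grows with applied stress in the tuned-critical picture,
# so higher stress admits larger quakes
low_stress = sample_slip_sizes(tau=1.5, s_cutoff=1e2)
high_stress = sample_slip_sizes(tau=1.5, s_cutoff=1e4)
print(low_stress.max(), high_stress.max())
```

    Under SOC the cutoff would be stress-independent; the stress-dependent cutoff is exactly what distinguishes the tuned-critical interpretation in the abstract.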

    Spintronics: Fundamentals and applications

    Spintronics, or spin electronics, involves the study of active control and manipulation of spin degrees of freedom in solid-state systems. This article reviews the current status of this subject, including both recent advances and well-established results. The primary focus is on the basic physical principles underlying the generation of carrier spin polarization, spin dynamics, and spin-polarized transport in semiconductors and metals. Spin transport differs from charge transport in that spin is a nonconserved quantity in solids due to spin-orbit and hyperfine coupling. The authors discuss in detail spin decoherence mechanisms in metals and semiconductors. Various theories of spin injection and spin-polarized transport are applied to hybrid structures relevant to spin-based devices and fundamental studies of materials properties. Experimental work is reviewed with the emphasis on projected applications, in which external electric and magnetic fields and illumination by light will be used to control spin and charge dynamics to create new functionalities that are not feasible or are ineffective with conventional electronics.
    Comment: invited review, 36 figures, 900+ references; minor stylistic changes from the published version

    Strong-coupling Superconductivity in the Cuprate Oxide

    Superconductivity in the cuprate oxide is studied by Kondo-lattice theory based on the t-J model with the electron-phonon (el-ph) interaction arising from the modulation of the superexchange interaction by phonons. The self-energy of electrons is decomposed into the single-site and multisite ones. By mapping the single-site self-energy in the t-J model to its counterpart in the Anderson model, it is proved that the single-site self-energy is that of a normal Fermi liquid, even if a superconducting (SC) order parameter appears or the multisite self-energy is anomalous. The electron liquid characterized by the single-site self-energy is a normal Fermi liquid. The Fermi liquid is further stabilized by the RVB mechanism. The stabilized Fermi liquid is a relevant unperturbed state that can be used to study superconductivity and anomalous Fermi-liquid behaviors. The so-called spin-fluctuation-mediated exchange interaction, which includes the superexchange interaction as a part, is the attractive interaction that binds d-wave Cooper pairs. An analysis of the spin susceptibility implies that, because of the el-ph interaction, the imaginary part of the exchange interaction has a sharp peak or dip at $\pm\omega^*$, where $\omega^* \simeq \omega_{ph}$ in the normal state and $\epsilon_G/2 \lesssim \omega^* \lesssim \epsilon_G/2 + \omega_{ph}$ in the SC state, where $\omega_{ph}$ is the energy of relevant phonons and $\epsilon_G$ is the SC gap. If the imaginary part has a sharp peak or dip at $\pm\omega^*$, the dispersion relation of quasi-particles has kink structures near $\pm\omega^*$ above and below the chemical potential, the density of states has dip-and-hump structures near $\pm\omega^*$ outside the coherence peaks in the SC state, and the anisotropy of the gap deviates from the simple d-wave anisotropy.
    Comment: 19 pages, 12 figures
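    For reference, the "simple d-wave anisotropy" that the gap is said to deviate from is the standard $d_{x^2-y^2}$ form $\Delta(\mathbf{k}) = (\Delta_0/2)(\cos k_x - \cos k_y)$, which a quick numerical check confirms is maximal at the antinodes and vanishes on the zone diagonals (this baseline form is standard; the deviations themselves are the paper's result and are not reproduced here):

```python
import numpy as np

def d_wave_gap(kx, ky, delta0=1.0):
    """Simple d_{x^2-y^2} gap: Delta(k) = (Delta0/2)(cos kx - cos ky)."""
    return 0.5 * delta0 * (np.cos(kx) - np.cos(ky))

print(abs(d_wave_gap(np.pi, 0.0)))        # antinodal maximum: |Delta| = Delta0
print(d_wave_gap(np.pi / 4, np.pi / 4))   # node on the zone diagonal: 0
```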

    25th annual computational neuroscience meeting: CNS-2016

    The same neuron may play different functional roles in the neural circuits to which it belongs. For example, neurons in the Tritonia pedal ganglia may participate in variable phases of the swim motor rhythms [1]. While such neuronal functional variability is likely to play a major role in delivering the functionality of neural systems, it is difficult to study in most nervous systems. We work on the pyloric rhythm network of the crustacean stomatogastric ganglion (STG) [2]. Typically, network models of the STG treat neurons of the same functional type as a single model neuron (e.g. PD neurons), assuming the same conductance parameters for these neurons and implying their synchronous firing [3, 4]. However, simultaneous recording of PD neurons shows differences between the timings of spikes of these neurons. This may indicate functional variability of these neurons. Here we modelled the two PD neurons of the STG separately in a multi-neuron model of the pyloric network. Our neuron models comply with known correlations between conductance parameters of ionic currents. Our results reproduce the experimental finding of an increasing time distance between spikes originating from the two model PD neurons during their synchronised burst phase. The PD neuron with the larger calcium conductance generates its spikes before the other PD neuron. Larger potassium conductance values in the follower neuron imply longer delays between spikes, see Fig. 17.

    Neuromodulators change the conductance parameters of neurons and maintain the ratios of these parameters [5]. Our results show that such changes may shift the individual contribution of two PD neurons to the PD-phase of the pyloric rhythm, altering their functionality within this rhythm. Our work paves the way towards an accessible experimental and computational framework for the analysis of the mechanisms and impact of functional variability of neurons within the neural circuits to which they belong.

    Dysregulation of excitatory neural firing replicates physiological and functional changes in aging visual cortex.

    The mammalian visual system has been the focus of countless experimental and theoretical studies designed to elucidate principles of neural computation and sensory coding. Most theoretical work has focused on networks intended to reflect developing or mature neural circuitry, in both health and disease. Few computational studies have attempted to model changes that occur in neural circuitry as an organism ages non-pathologically. In this work we contribute to closing this gap, studying how physiological changes correlated with advanced age impact the computational performance of a spiking network model of primary visual cortex (V1). Our results demonstrate that deterioration of homeostatic regulation of excitatory firing, coupled with long-term synaptic plasticity, is a sufficient mechanism to reproduce features of observed physiological and functional changes in neural activity data, specifically declines in inhibition and in selectivity to oriented stimuli. This suggests a potential causality between dysregulation of neuron firing and age-induced changes in brain physiology and functional performance. While this does not rule out deeper underlying causes or other mechanisms that could give rise to these changes, our approach opens new avenues for exploring these underlying mechanisms in greater depth and making predictions for future experiments.
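    The core mechanism, homeostatic regulation of excitatory firing and what happens when it deteriorates, can be caricatured in a few lines (a toy rate-model illustration of the general principle, not the paper's spiking V1 model): an adaptive threshold holds a unit's firing rate at a set point, and weakening that adaptation lets the rate drift with the excitatory drive.

```python
def steady_rate(drive, target=5.0, eta=0.1, steps=5000):
    """Relax a threshold-linear rate unit with homeostatic threshold
    adaptation and return the final firing rate."""
    theta = 0.0
    rate = 0.0
    for _ in range(steps):
        rate = max(drive - theta, 0.0)    # threshold-linear firing rate
        theta += eta * (rate - target)    # homeostatic threshold update
    return rate

healthy = steady_rate(drive=20.0, eta=0.1)   # regulated: rate -> set point
impaired = steady_rate(drive=20.0, eta=0.0)  # dysregulated: rate tracks drive
print(healthy, impaired)
```

    In the paper this kind of dysregulation interacts with long-term synaptic plasticity in a full spiking network; the sketch only shows why losing the homeostatic set point changes excitatory activity at all.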

    Predicting how and when hidden neurons skew measured synaptic interactions.

    A major obstacle to understanding neural coding and computation is the fact that experimental recordings typically sample only a small fraction of the neurons in a circuit. Measured neural properties are skewed by interactions between recorded neurons and the "hidden" portion of the network. To properly interpret neural data and determine how biological structure gives rise to neural circuit function, we thus need a better understanding of the relationships between measured effective neural properties and the true underlying physiological properties. Here, we focus on how the effective spatiotemporal dynamics of the synaptic interactions between neurons are reshaped by coupling to unobserved neurons. We find that the effective interactions from a pre-synaptic neuron r' to a post-synaptic neuron r can be decomposed into a sum of the true interaction from r' to r plus corrections from every directed path from r' to r through unobserved neurons. Importantly, the resulting formula reveals when the hidden units have, or do not have, major effects on reshaping the interactions among observed neurons. As a particular example of interest, we derive a formula for the impact of hidden units in random networks with "strong" coupling, connection weights that scale with [Formula: see text], where N is the network size, precisely the scaling observed in recent experiments. With this quantitative relationship between measured and true interactions, we can study how network properties shape effective interactions, which properties are relevant for neural computations, and how to manipulate effective interactions.
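    The path-sum structure described here has a well-known closed form in the linear (or linearized) setting, which can serve as a sketch of the idea (an illustration under a linear-network assumption, not the paper's derivation for spiking dynamics): integrating out the hidden block of a weight matrix gives effective observed couplings J_oo + J_oh (I - J_hh)^{-1} J_ho, and expanding the matrix inverse as a geometric series in J_hh recovers the sum over all directed paths through hidden units.

```python
import numpy as np

rng = np.random.default_rng(2)

n, n_obs = 20, 8                        # total and observed neurons
g = 0.5                                 # gain < 1 so the path sum converges
J = rng.normal(0.0, g / np.sqrt(n), size=(n, n))  # "strong" 1/sqrt(N) scaling
np.fill_diagonal(J, 0.0)

obs, hid = slice(0, n_obs), slice(n_obs, n)
J_oo, J_oh = J[obs, obs], J[obs, hid]
J_ho, J_hh = J[hid, obs], J[hid, hid]

# closed form: true couplings plus all corrections from hidden units
I_h = np.eye(n - n_obs)
J_eff = J_oo + J_oh @ np.linalg.inv(I_h - J_hh) @ J_ho

# the same correction as an explicit path expansion: paths of length
# k + 2 pass through k hidden units (geometric series in J_hh)
series = sum(np.linalg.matrix_power(J_hh, k) for k in range(60))
J_eff_series = J_oo + J_oh @ series @ J_ho
print(np.allclose(J_eff, J_eff_series))
```

    When the hidden-to-hidden coupling is weak, the series truncates quickly and measured interactions stay close to the true ones; strong hidden recurrence is what makes the corrections large, which is the regime the abstract highlights.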

    How Do Efficient Coding Strategies Depend on Origins of Noise in Neural Circuits?

    Neural circuits reliably encode and transmit signals despite the presence of noise at multiple stages of processing. The efficient coding hypothesis, a guiding principle in computational neuroscience, suggests that a neuron or population of neurons allocates its limited range of responses as efficiently as possible to best encode inputs while mitigating the effects of noise. Previous work on this question relies on specific assumptions about where noise enters a circuit, limiting the generality of the resulting conclusions. Here we systematically investigate how noise introduced at different stages of neural processing impacts optimal coding strategies. Using simulations and a flexible analytical approach, we show how these strategies depend on the strength of each noise source, revealing under what conditions the different noise sources have competing or complementary effects. We draw two primary conclusions: (1) differences in encoding strategies between sensory systems—or even adaptational changes in encoding properties within a given system—may be produced by changes in the structure or location of neural noise, and (2) characterization of both circuit nonlinearities as well as noise are necessary to evaluate whether a circuit is performing efficiently.

    Responses produced by optimal (left column) and suboptimal (right column) nonlinearities.

    Each row shows a different set of noise conditions in which a single source of noise is dominant (i.e., upstream noise dominates in panels A and B, Poisson noise in C and D, and downstream noise in E and F). Markers show 1,000 points randomly selected from the stimulus distribution (bottom subpanels) and the corresponding responses that are produced by the nonlinearity (solid line). Different nonlinearities produce very different response distributions (left subpanels). These particular suboptimal nonlinearities are chosen for illustrative purposes, to highlight qualitative features of the optimal nonlinearities.