
    Phase transitions in biological membranes

    Native membranes of biological cells display melting transitions of their lipids at temperatures 10-20 degrees below body temperature. Such transitions can be observed in various bacterial cells, in nerves and in cancer cells, but also in lung surfactant. The presence of transitions slightly below physiological temperature seems to be a generic property of most cells. These transitions are important because they influence many physical properties of the membranes. At the transition temperature, membranes display a larger permeability that is accompanied by ion-channel-like phenomena even in the complete absence of proteins. Membranes are also softer, which implies that phenomena such as endocytosis and exocytosis are facilitated. Mechanical signal propagation phenomena related to nerve pulses are strongly enhanced. The position of transitions can be shifted by changes in temperature, pressure, pH and salt concentration, or by the presence of anesthetics. Thus, even at physiological temperature, these transitions are relevant: their position, and thereby the physical properties of the membrane, can be controlled by changes in the intensive thermodynamic variables. Here, we review some of the experimental findings and the thermodynamics that describes the control of membrane function. Comment: 23 pages, 15 figures
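    The thermodynamic picture above can be illustrated with a standard two-state (van't Hoff) model of a lipid melting transition. This is a generic textbook sketch, not taken from the review; the enthalpy and melting temperature below are assumed, illustrative values:

```python
import numpy as np

R = 8.314      # gas constant, J/(mol K)
DH = 300000.0  # van't Hoff enthalpy of the cooperative unit, J/mol (assumed)
Tm = 310.0     # melting temperature, K (assumed, slightly above a cell's
               # growth temperature in the scenario described above)

def fluid_fraction(T):
    """Fraction of lipids in the fluid state in a two-state model."""
    K = np.exp(-DH / R * (1.0 / T - 1.0 / Tm))  # equilibrium constant
    return K / (1.0 + K)

def excess_heat_capacity(T):
    """Excess heat capacity c_p = DH * d(fluid fraction)/dT."""
    f = fluid_fraction(T)
    return DH**2 / (R * T**2) * f * (1.0 - f)

T = np.linspace(305.0, 315.0, 2001)
cp = excess_heat_capacity(T)
T_peak = T[np.argmax(cp)]  # the c_p peak sits at (essentially) Tm
```

    At the heat-capacity maximum, state fluctuations are largest; this is the same regime in which the review locates the enhanced permeability and the channel-like events.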

    Homeostatic Scaling of Excitability in Recurrent Neural Networks

    Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of the two neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range while the network undergoes several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity.
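    A minimal sketch of the mean-field scenario described above, assuming a two-population rate model with sigmoidal activation and slowly adapting firing thresholds as the stand-in for intrinsic excitability. All parameter values and the specific adaptation rule are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def f(x):
    """Sigmoidal population activation; rate normalised to [0, 1]."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate(tau_hE=200.0, tau_hI=200.0, T=40000.0, dt=0.5):
    tau = 10.0                               # fast rate dynamics (ms)
    target = 0.3                             # homeostatic set point
    wEE, wEI, wIE, wII = 2.0, 2.5, 2.0, 1.0  # assumed coupling strengths
    IE, II = 0.5, 0.2                        # external drive
    rE = rI = 0.1
    thE = thI = 0.0                          # adapting thresholds
    for _ in range(int(T / dt)):
        # fast excitatory/inhibitory rate dynamics
        drE = (-rE + f(wEE * rE - wEI * rI + IE - thE)) / tau
        drI = (-rI + f(wIE * rE - wII * rI + II - thI)) / tau
        # slow homeostatic scaling of excitability: raise the threshold
        # when the rate sits above the set point, lower it when below
        dthE = (rE - target) / tau_hE
        dthI = (rI - target) / tau_hI
        rE += dt * drE
        rI += dt * drI
        thE += dt * dthE
        thI += dt * dthI
    return rE, rI, target

# With comparable adaptation time scales for the two populations, both
# settle at the homeostatic set point.
rE, rI, target = simulate()
```

    Making one population's adaptation much faster or slower than the other's changes the eigenvalues of the slow subsystem, which is one way the dependence on relative adaptation time scales can be explored in this toy setting.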

    When Two Become One: The Limits of Causality Analysis of Brain Dynamics

    Biological systems often consist of multiple interacting subsystems, the brain being a prominent example. To understand the functions of such systems it is important to analyze if and how the subsystems interact and to describe the effect of these interactions. In this work we investigate the extent to which the cause-and-effect framework is applicable to such interacting subsystems. We base our work on a standard notion of causal effects and define a new concept called natural causal effect. This new concept takes into account that when studying interactions in biological systems, one is often not interested in the effect of perturbations that alter the dynamics. The interest is instead in how the causal connections participate in the generation of the observed natural dynamics. We identify the constraints on the structure of the causal connections that determine the existence of natural causal effects. In particular, we show that the influence of the causal connections on the natural dynamics of the system often cannot be analyzed in terms of the causal effect of one subsystem on another. Only when the causing subsystem is autonomous with respect to the rest can this interpretation be made. We note that subsystems in the brain are often bidirectionally connected, which means that interactions should rarely be quantified in terms of cause-and-effect. We furthermore introduce a framework for how natural causal effects can be characterized when they exist. Our work also has important consequences for the interpretation of other approaches commonly applied to study causality in the brain. Specifically, we discuss how the notion of natural causal effects can be combined with Granger causality and Dynamic Causal Modeling (DCM). Our results are generic and the concept of natural causal effects is relevant in all areas where the effects of interactions between subsystems are of interest.
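    The bidirectional-coupling point can be made concrete with a small numerical sketch (coupling values assumed for illustration; not from the paper): in a bidirectionally coupled linear system, a lag-1 Granger statistic is large in both directions, which is precisely the situation where neither subsystem is autonomous and a one-way causal reading breaks down:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bidirectionally coupled VAR(1): each subsystem drives the other.
n = 20000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def granger(a, b):
    """Lag-1 Granger statistic: log ratio of residual variances when
    predicting a from its own past alone vs. adding b's past."""
    Y = a[1:]
    ones = np.ones_like(Y)
    Xr = np.column_stack([ones, a[:-1]])          # restricted model
    Xf = np.column_stack([ones, a[:-1], b[:-1]])  # full model
    rr = Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]
    rf = Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0]
    return np.log(np.var(rr) / np.var(rf))

gc_x_to_y = granger(y, x)  # does x's past improve prediction of y?
gc_y_to_x = granger(x, y)  # does y's past improve prediction of x?
# Both statistics are clearly positive: each subsystem "Granger-causes"
# the other, so no subsystem is autonomous with respect to the rest.
```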

    A Structured Model of Video Reproduces Primary Visual Cortical Organisation

    The visual system must learn to infer the presence of objects and features in the world from the images it encounters, and as such it must, either implicitly or explicitly, model the way these elements interact to create the image. Do the response properties of cells in the mammalian visual system reflect this constraint? To address this question, we constructed a probabilistic model in which the identity and attributes of simple visual elements were represented explicitly and learnt the parameters of this model from unparsed, natural video sequences. After learning, the behaviour and grouping of variables in the probabilistic model corresponded closely to functional and anatomical properties of simple and complex cells in the primary visual cortex (V1). In particular, feature identity variables were activated in a way that resembled the activity of complex cells, while feature attribute variables responded much like simple cells. Furthermore, the grouping of the attributes within the model closely parallelled the reported anatomical grouping of simple cells in cat V1. Thus, this generative model makes explicit an interpretation of complex and simple cells as elements in the segmentation of a visual scene into basic independent features, along with a parametrisation of their moment-by-moment appearances. We speculate that such a segmentation may form the initial stage of a hierarchical system that progressively separates the identity and appearance of more articulated visual elements, culminating in view-invariant object recognition.
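    The simple/complex distinction that the abstract's identity and attribute variables map onto can be illustrated with the classical energy model of complex cells. This is a textbook sketch, not the paper's probabilistic model, and the filter parameters are arbitrary: a simple cell (a Gabor filter) is phase-sensitive, while a complex cell, modelled as the energy of a quadrature filter pair, responds to feature identity regardless of phase:

```python
import numpy as np

def gabor(sz, f, theta, phase, sigma):
    """Gabor filter: Gaussian envelope times an oriented sinusoid."""
    ax = np.arange(sz) - sz // 2
    x, y = np.meshgrid(ax, ax)
    u = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * f * u + phase)

def grating(sz, f, theta, phase):
    """Full-field sinusoidal grating stimulus."""
    ax = np.arange(sz) - sz // 2
    x, y = np.meshgrid(ax, ax)
    u = x * np.cos(theta) + y * np.sin(theta)
    return np.cos(2 * np.pi * f * u + phase)

sz, f, theta, sigma = 33, 0.1, 0.3, 5.0      # arbitrary filter parameters
even = gabor(sz, f, theta, 0.0, sigma)       # "simple cell", even phase
odd = gabor(sz, f, theta, np.pi / 2, sigma)  # quadrature partner

phases = np.linspace(0, 2 * np.pi, 16, endpoint=False)
simple, energy = [], []
for p in phases:
    stim = grating(sz, f, theta, p)
    re, ro = np.sum(even * stim), np.sum(odd * stim)
    simple.append(re)                # simple cell: phase-sensitive
    energy.append(np.hypot(re, ro))  # complex cell: phase-invariant energy
simple = np.array(simple)
energy = np.array(energy)
```

    As the stimulus phase sweeps through a full cycle, the simple-cell response modulates strongly while the energy response stays nearly constant.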

    On the Origin of the Functional Architecture of the Cortex

    The basic structure of receptive fields and functional maps in primary visual cortex is established without exposure to normal sensory experience and before the onset of the critical period. How the brain wires these circuits in the early stages of development remains unknown. Possible explanations include activity-dependent mechanisms driven by spontaneous activity in the retina and thalamus, and molecular guidance orchestrating thalamo-cortical connections on a fine spatial scale. Here I propose an alternative hypothesis: the blueprint for receptive fields, feature maps, and their inter-relationships may reside in the layout of the retinal ganglion cell mosaics along with a simple statistical connectivity scheme dictating the wiring between thalamus and cortex. The model is shown to account for a number of experimental findings, including the relationship between retinotopy, orientation maps, spatial frequency maps and cytochrome oxidase patches. The theory's simplicity and its explanatory and predictive power make it a serious candidate for the origin of the functional architecture of primary visual cortex.
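    A toy version of the mosaic idea (illustrative assumptions throughout; not the paper's model): ON- and OFF-center retinal ganglion cell mosaics are modelled as jittered lattices, and each cortical site inherits an orientation preference from the axis joining its nearest ON and nearest OFF cell, i.e. from a local ON/OFF dipole:

```python
import numpy as np

rng = np.random.default_rng(1)

def jittered_lattice(n, spacing, jitter, offset):
    """n-by-n square lattice with positional jitter (a crude RGC mosaic)."""
    g = np.arange(n) * spacing
    pts = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2).astype(float)
    return pts + offset + rng.normal(0.0, jitter, pts.shape)

on = jittered_lattice(12, 10.0, 1.5, offset=0.0)
off = jittered_lattice(12, 10.0, 1.5, offset=5.0)  # interdigitated mosaic

def orientation_map(grid):
    """Preferred orientation (radians, mod pi) at each cortical grid point,
    set by the axis of the local nearest ON / nearest OFF pair."""
    out = np.empty(len(grid))
    for i, p in enumerate(grid):
        a = on[np.argmin(np.sum((on - p) ** 2, axis=1))]
        b = off[np.argmin(np.sum((off - p) ** 2, axis=1))]
        d = a - b
        out[i] = np.arctan2(d[1], d[0]) % np.pi  # dipole axis, mod pi
    return out

xs = np.linspace(20.0, 90.0, 30)              # interior cortical grid
grid = np.stack(np.meshgrid(xs, xs), -1).reshape(-1, 2)
omap = orientation_map(grid)                  # smoothly varying patches
```

    Because nearby cortical sites usually share the same nearest ON/OFF pair, the resulting map is locally smooth with occasional jumps, which is the qualitative flavour of the mosaic-based account.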

    A Functional Architecture of Optic Flow in the Inferior Parietal Lobule of the Behaving Monkey

    The representation of navigational optic flow across the inferior parietal lobule was assessed using optical imaging of intrinsic signals in behaving monkeys. The exposed cortex, corresponding to the dorsal-most portion of area 7a and the dorsal prelunate area (DP), was imaged in two hemispheres of two rhesus monkeys. The monkeys actively attended to changes in motion stimuli while fixating. Radial expansion and contraction, and clockwise and counter-clockwise rotation optic flow stimuli were presented concentric to the fixation point at two angles of gaze to assess the interrelationship between eye position and the optic flow signal. The cortical response depended upon the type of flow and was modulated by eye position. The optic flow selectivity was embedded in a patchy architecture within the gain field architecture. All four optic flow stimuli tested were represented in areas 7a and DP. The location of the patches varied across days. However, the spatial periodicity of the patches remained constant across days at ∼950 and 1100 µm for the two animals examined. These optical recordings agree with previous electrophysiological studies of area 7a, and provide new evidence for flow selectivity in DP and a fine-scale description of its cortical topography. That the functional architectures for optic flow can change over time was unexpected. These and earlier results, also from the inferior parietal lobule, support the inclusion of both static and dynamic functional architectures in the definition of association cortical areas, which ultimately support complex cognitive function.

    Audiotactile interactions in temporal perception
